Thanks to everyone who is joining us today, and welcome to today's CNCF webinar. We are introducing the Kubernetes Universal Declarative Operator. My name is Ihor Voritsky. I'm a developer advocate at the Cloud Native Computing Foundation, and I'll be moderating today's webinar. I'd like to welcome our presenter today, Jarod Dylan, a staff engineer at D2iQ.

First, a few housekeeping items. During the webinar, you're not able to talk as an attendee. There is a Q&A box at the bottom of the screen; feel free to drop your questions there. We'll get to the questions at the end, or Jarod can also take some of them during the webinar. This is an official webinar of the Cloud Native Computing Foundation and is subject to the CNCF Code of Conduct, so please do not add anything to the chat or questions that would violate the Code of Conduct, and please be respectful to all participants and presenters. Now I'd like to hand it over to Jarod to kick off the presentation.

Thanks, Ihor. Hello, everyone. My name is Jarod Dylan, and as Ihor said, I'm a staff engineer and the KUDO product owner over at D2iQ. These links will be up again, but since KUDO is a community project, I wanted to give people the resources they need to come in, participate, and get more information on their own. We have a website at kudo.dev, our Git repo and org is at kudobuilder/kudo, and we have a Slack channel inside the Kubernetes Slack. So if you're interested in participating in our community and you're not already in the Kubernetes Slack, I'll have that link at the end so you can join, ask questions, and get more information on actually using KUDO.

I want to start out by talking a little bit about operators and operator development. For those who aren't aware of operators in the Kubernetes space, an operator is the concept of combining a Kubernetes custom resource definition with a custom controller. There's a question about the YouTube link being invalid; I will forward that to Ihor as a comment to maybe add to the YouTube description, or we'll post it in the KUDO Slack.

Going back to operators: an operator is a combination of a Kubernetes custom resource definition and a controller that provides functionality on top of that resource. For example, you may want to stand up etcd clusters inside of Kubernetes, and you want to represent that as something more than just Kubernetes primitives. Operators were envisioned to also cover the day-2 lifecycle of these more complicated pieces of software.

Over time, a couple of different frameworks for developing operators have begun to develop out in the community, as people started to ask: okay, how do I build an operator? How do I build these custom controllers? And it turns out that's pretty hard. To build an operator right now in Kubernetes, you need pretty advanced Kubernetes and distributed systems knowledge, and you have to do most of it in Go. There are some other SDKs out there, but Go is closest to the Kubernetes source code, a lot of it is generated anyway, and it's often the most mature client for doing that.
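To make the CRD half of that pairing concrete, here is a minimal, hypothetical CustomResourceDefinition of the kind an etcd operator might register alongside its controller — the group and kind names here are invented for illustration:

```yaml
# Hypothetical example: the CRD an etcd operator might pair with its controller.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: etcdclusters.example.com
spec:
  group: example.com           # invented API group
  names:
    kind: EtcdCluster
    plural: etcdclusters
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
```

The controller is the hard part: it watches EtcdCluster objects and reconciles the cluster toward the declared state, which is where the Go and distributed-systems expertise comes in.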
And if you're a vendor building operators in the space — building operators for your software to distribute to your customers — it can be really challenging to hire for these skills if that's not part of your core competency. If your stateful services are written in Java, but now you have to do software development in Go, that's a whole separate team apart from your actual operational team. And building a really good operator takes a lot of code and a lot of time in its current state.

Furthermore, you're probably going to want multiple of these operators inside of your cluster. If you're running various databases and you want those to be operators, you're going to have a lot of duplication between the different operators and different controllers running. And if you're maintaining those in house, you're generating lots of code that you need to keep up to date. There's no good integration with the CNCF ecosystem tools yet — there's work ongoing in that space — but it's really a pretty raw experience right now with a pretty high maintenance burden. Here's a quick look at the Elastic Cloud operator: it's right now 53,000 lines of code. A lot of that is generated, but that's just a benefit of the Go code generation and client generation tools being really good at this time.

Now, if you're actually trying to use these operators, running these stateful workloads can be pretty complicated. Every operator has its own workflows, its own APIs, its own debugging tools, and when you have five or six of these controllers all running in your cluster, it can get pretty tough to manage. Right now, running stateful workloads on Kubernetes is actually pretty difficult, and a lot of the people who are developing these stateful databases don't have as much Kubernetes experience as the end users. It's still a very, very early field.

So I'd like to introduce the Kubernetes Universal Declarative Operator: what it is and how it can help solve some of these challenges. KUDO is a toolkit and a runtime for building operators, optimized for complex, stateful, usually distributed applications. We're talking about databases like Cassandra, etcd, Vitess — databases that aren't as simple as spinning up a StatefulSet and calling it a day. It's also a tool to increase developer productivity when actually building these operators.

The idea behind KUDO — and we'll talk about this a lot today — is being able to ship your software with its runbook. If you're shipping a MySQL operator and you want to be opinionated about how backups and restores are done, KUDO is a tool for doing that at a very high level, in a very productive way. And for the end users and end operators of these operators — app developers is what we call them — the intent is to provide increased productivity when operating these services, by providing a common control plane for performing a lot of these tasks and reducing the surface area people need to know about when they go to operate these services.

We're going to go over the landscape in a little bit, but I want to lead with why KUDO is a good choice for building these operators. And it really comes down to the creation and maintenance burden of all of these.
So when you're building these operators, like I said before, you have to have deep knowledge of Kubernetes: how the API in Kubernetes works, how building controllers in Kubernetes works. And if you want to be productive at all, you're really going to be working with the client libraries inside of Kubernetes, which usually means writing something in Go. The Java client is good, and there's a good JavaScript client, but really the most robust tooling is still written in Go.

Maintaining these operators is hard, too, when you consider that Kubernetes does releases every 12 weeks. Even if there are no new features to take advantage of, there are deprecations you have to deal with. For example, Kubernetes 1.16 deprecated apps/v1beta1 and apps/v1beta2, and for operators to stay up to date with that, their maintainers had to go update all that code or risk versions of their software being broken. That can be a large maintenance burden every single 12 weeks, just keeping abreast of it.

And then operating services is hard. A lot of these tools are really concerned with deployment and upgrades, and a lot of the operator frameworks out there aren't as concerned with — or leave it to you as the developer — making things application-aware for your particular operator. There are no standards yet around things like backups and restores; there are Kubernetes primitives for those, but the landscape is wide open on what the best practice is when you go to build in these day-2 operations. And so we believe there's a lot of value in making this particular set of tasks a lot easier.

KUDO helps developers in a couple of ways. We provide abstractions for sequencing lifecycle operations — we'll talk about this — using a series of Kubernetes objects, both native and custom, and allow you to perform sequencing around them using a concept called plans. You can think of these as the manual runbooks you might keep in your organization or in your tooling, describing how to actually perform some sort of task: be it deploy, be it upgrade, be it add a Kafka topic, be it add a database index. All of these things are intended to be representable as plans within KUDO.

It also reduces boilerplate and code duplication between operators — we'll talk a little more about that in a moment — because KUDO runs as a single controller manager inside of your cluster, rather than having one for every single operator. So really what you're doing is taking a single universal operator and configuring it using a bunch of KUDO primitives, while still being able to extend it and break through the leaky abstraction where it doesn't quite work for you. In doing that, we're trying to reduce the number of controllers in the cluster and reduce the resource sprawl that comes from maintaining all these controllers that are in turn maintaining your applications.

KUDO also provides an extension mechanism. One thing we wanted to solve was the idea that you would have to fork the entire code base of an operator or of a chart in order to add your specific tooling to it. So we have an extension mechanism, based on Kustomize, that allows you to tailor an operator to your environment. The intent there is for organizations that have their own special method of backup or their own special operational needs: they can tailor existing operators without having to go off on their own and merge updates back in down the road.
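Purely as an illustration of the Kustomize-based approach described here — KUDO's exact wiring for this wasn't shown in the talk — an environment-specific overlay might look something like this:

```yaml
# Illustrative only: an overlay that layers an organization's own backup
# tooling onto an upstream operator's manifests without forking them.
# Both file names below are hypothetical.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - upstream-operator.yaml      # the unmodified upstream manifests
patchesStrategicMerge:
  - add-backup-sidecar.yaml     # the local, environment-specific patch
```

The point is that the upstream base stays intact, so pulling in upstream updates doesn't require re-merging a fork.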
And then it provides ISVs with a tool to just ship those best practices and runbooks alongside their software.

We also ship with a testing tool, and we'll probably split that out as its own tool because it's a really nice one: it enables TDD of Kubernetes resources on their own, letting you make assertions about what should happen in your cluster when you create Kubernetes resources.

From a user perspective, we provide a kubectl KUDO plugin for deploying and managing these workloads, with the same API and CLI workflow experience across all of them. These are all Kubernetes custom resource definitions and custom resources under the hood. So while we have a plugin, everything is intended to work straight through kubectl and through GitOps, if you want to continue using your existing workflows while taking advantage of the KUDO controller manager.

We're working on tooling for this to get better, but we're also working on allowing existing operators to be managed by KUDO. We'll talk a little later about how one of the upcoming features for KUDO is dependencies; we want KUDO to fit into the larger ecosystem by being able to depend on the other great operators that are out there. And just to repeat what Taylor said in the chat: if anyone has any questions, please drop them into the Q&A box at the bottom of Zoom, and at various points I will stop and answer them.

Upcoming are features around centralized support bundles, metrics and alerting, and RBAC. Really, we want this to scale easily from individual installs to multiple installs and to support users along the way. This is about enabling both centralized teams and the upstream database developers to provide an opinionated way to deploy onto Kubernetes, and to give users confidence in what should be monitored and which alerts matter and which don't. Like I said, this can all be extended, but the defaults are good out of the box — and a lot of the Kubernetes primitives alone aren't quite enough to get you across the line if you're shipping software out to other people's clusters.

So let's walk through, a little technically, the lifecycle orchestration in KUDO. We are settling into this YAML — or at least versioning it — so that we can continue to experiment with it. We have a concept of plans. Plans are your top-level lifecycle operations: deploy, backup, restore, add topic, various other things. Phases are really a grouping mechanism for tasks, to handle complicated use cases such as HDFS or other software that can't be deployed by just throwing all the manifests at the cluster at once. So a phase is a very coarse-grained grouping of tasks. The slide says "strategy: serial" there, but phases can only be run serially as of 0.8.0, so that's slightly incorrect: it's at the step level where you can decide between parallel and serial execution. Within phases we have groups of steps, and steps are groupings of Kubernetes resources, or tasks, that do something to your cluster. And this is where you can really start to tweak.
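As a rough sketch of how those pieces nest in an operator.yaml — treat the field names as approximate, since this format was still being versioned at the time of the talk:

```yaml
# Approximate shape of a KUDO operator.yaml around 0.8.0.
name: "my-operator"
version: "0.1.0"
tasks:
  - name: deploy-nodes
    kind: Apply                # Apply tasks take a list of resource templates
    spec:
      resources:
        - statefulset.yaml
        - service.yaml
plans:
  deploy:                      # a top-level lifecycle operation
    strategy: serial
    phases:
      - name: main             # a coarse-grained grouping of steps
        strategy: serial
        steps:
          - name: nodes        # steps group tasks that act on the cluster
            tasks:
              - deploy-nodes
```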
So for example, you may be able to deploy something like ZooKeeper in parallel, but you really want to deploy something like Kafka serially. This lets you tweak how that all happens and how you run it during, say, an upgrade cycle. It's particularly important with something like ZooKeeper or etcd, where if you're going to scale, or if you're going to perform an upgrade, you really want to do that in a rolling way, potentially with some API awareness.

I had a great question come in that I was going to address after the next slide — it's correct, and it's on our roadmap — but I'll address it now. The question is: what permissions does the one KUDO operator require to manage multiple different custom applications? Will KUDO be a single large surface that becomes a security concern? The answer is that we have two features on our roadmap for that. One we've discussed is dynamic RBAC control: when you go to install an operator with KUDO, the operator or cluster user doing the install will use their own service account, through the KUDO CLI tool, to adjust the role bindings for KUDO so that it can work with that particular operator. That reduces automatability a little, because it requires the user who's adjusting those permissions to be able to do so. We're also working, as part of our GA process, on shrinking KUDO's surface so that you can apply a custom service account to it, so it operates on exactly the namespaces and resources you want. That's a static way of achieving the same thing as the dynamic approach, but we want to push some of that back under the hood where possible, to make the individual operator experience a little better. Hopefully that answers your question; if not, we can continue over in the KUDO Slack.

All right, on to lifecycle orchestration coming in KUDO. One thing we're trying to do is push a lot of this into a much more declarative environment and do a lot more declaratively. I talked about GitOps, but it only goes to a certain level with where we're at. So for KUDO next, what we're working on is becoming more declarative about components. You would declare a set of components — say a backup, a broker, a topic — and KUDO would manage those CRDs for you in your cluster. You would write create, update, upgrade, and delete plans, plus various custom plans for each of these components, and various CLI extensions to go along with them, and KUDO converts that into a set of CRDs that you don't have to manage. So all of your application components and your topology become CRDs, and they can be treated like any other Kubernetes resource: they're subject to Velero backups, whatever GitOps tools you're using, any Kubernetes-native or webhook-based RBAC, and so on. You might want to represent a Kafka topic as a CRD and set the number of partitions; now both your cluster components and your topology can be backed up and dropped onto another cluster, and it's all declarative.
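A purely hypothetical illustration of that end state — KUDO hadn't shipped these generated component CRDs yet, so every name here is invented:

```yaml
# Invented example of a declaratively managed Kafka topic as a custom resource.
apiVersion: kudo.dev/v1alpha1   # hypothetical group/version
kind: KafkaTopic                # hypothetical generated component kind
metadata:
  name: orders
spec:
  partitions: 12
  replicationFactor: 3
```

Because it's just another Kubernetes object, it rides along with whatever backup, GitOps, and RBAC machinery already handles your other resources.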
Now, we're still working this out, and the reason it hasn't fully landed is that leaky abstractions can still be very imperative. Restores, for example, are very hard to make declarative; they're really an imperative thing that you want to do at a point in time. These are areas where we'd love contributions and love more people thinking and talking about it, because this is the direction we're trying to go: making everything very, very declarative.

Here's a quick breakdown of the KUDO architecture — we actually just updated this today because one thing was out of date. There's a KUDO controller manager running inside of your Kubernetes cluster, working on a set of CRDs, and those CRDs today are Operator, OperatorVersion, and Instance. We store everything local to your cluster; there's no external package store, and all the operator definitions exist inside of your cluster. So you may have a Kafka operator with multiple versions, and then an instance of it that actually runs those Kafka images. The kubectl KUDO plugin is the opinionated way to interact with all of that: it fetches these operators from the file system or from a KUDO repository with an index.yaml, gets the definition, converts it into the right set of CRDs, and puts that onto the cluster.

The reason we do it that way is that the Operator and OperatorVersion CRDs are really internal definitions for us and not meant for you, even though you can back them up and restore them. The operator format that's in the repository is stable, whereas these CRDs are not yet public APIs — we're continuing to iterate on Operator and OperatorVersion. Instance will stabilize in our coming version, and then operator.yaml and params.yaml inside the repo will stabilize as well.

Another great question: how are tasks defined? When I get to the live demo I'll show that — KUDO 0.8.0 is actually changing it a little — but you have a tasks section in operator.yaml that lists off your various possible tasks, and the Kubernetes templates are the resources that go into those. In 0.8.0 we will have an Apply task, a Delete task, a Helm chart task, and a Pipe task for piping between resources. They all take slightly different fields: for example, Apply and Delete take a list of resources, whereas Pipe takes a container spec and an ID to tag the output with, for use in future templating, and the Helm chart task takes the Helm chart, a version, and a values.yaml. If that doesn't quite answer your question, I'll show a bit in the demo, and you can also ask further in Slack. Thanks for the question — that's a great one.
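To make that answer a bit more concrete, here's a sketch of what a 0.8.0-style tasks section might look like. Treat the exact field names as approximate — this format was still landing at the time, and the HelmChart task in particular was behind a feature flag, so its spelling here is a guess:

```yaml
# Approximate 0.8.0-style task definitions (field names illustrative).
tasks:
  - name: deploy-broker
    kind: Apply              # Apply/Delete: a list of resource templates
    spec:
      resources:
        - broker.yaml
  - name: dump
    kind: Pipe               # Pipe: a container spec plus a key to tag output with
    spec:
      pod: dump-pod.yaml
      pipe:
        - file: /backup/dump.sql
          kind: Secret
          key: BackupDump    # referenced later during templating
  - name: base-chart
    kind: HelmChart          # hypothetical spelling; behind a feature flag in 0.8.0
    spec:
      chart: stable/mysql
      version: "1.0.0"
      values: values.yaml
```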
Talking a little bit about the community: we've been very CNCF-aligned from the beginning, and we've been talking about KUDO and the CNCF sandbox, as well as with Kubernetes SIGs. We have an open governance model now that's based around Kubernetes Enhancement Proposals. We do something slightly different — we've tweaked it because we don't have special interest groups, we're just a holistic project — so it's a bit of a slimmed-down KEP process, but we still follow it. And we're focused heavily on ease of contribution to our core code base and our operator code base. Right now we have four reference operators and a couple of community operators in progress: Elastic, Redis, and MySQL. We're at 0.7.5, and we do minor releases every two to four weeks. We have a bunch of GitHub stars, and we're pretty happy with the growth there for the age of the project.

We've been talking with a lot of people in various organizations interested in building operators, even purely internal ones, using KUDO. So we're not just thinking about database companies; we're thinking about centralized IT teams who want to do some internal management of cluster-level services. And we have fifty-some contributors, with six of them working full time on KUDO right now. So we're growing, and we'd love more participation.

Let me do a quick comparison of KUDO to some other tools out there. I use "comparison" a little unfairly, because we believe KUDO sits within an ecosystem, and there are reasons to use KUDO and reasons not to, compared to these other tools. We have much more robust comparisons on the site; this is just a quick mental model of KUDO against these other tools, which are all great — KUDO just fits in a little differently.

Comparing first to Operator SDK: KUDO is a polymorphic controller, one controller serving multiple types of operators. Whereas with Kubebuilder you generate out your one holistic operator and work in that code base — be it a Helm one, an Ansible one, or one built in Go — KUDO is configured entirely via CRDs. We're adding some webhook extensibility with that tasks system I mentioned, coming soon, but as of right now you don't extend KUDO itself with your own code. Operator SDK and Kubebuilder are generative: you generate out your operator and then work from that project, which has implications for how you upgrade your project, among other things. With KUDO, you upgrade the controller manager. We may have a deprecation policy around our CRDs, but if you're within that policy, you should generally expect things to keep working — anything else is a bug.

We're also oriented towards using existing clients and tooling for your software, rather than rebuilding that functionality in Go, or having to add those binaries to your container and shell out to them with os/exec or something like that. If you have something like your mysqldump tool, we want you to use it, because it's typically going to be application-aware. That is to say, with a lot of software you can't just take a snapshot of the volume and dump it somewhere else; you actually have to do it the way the software prefers. That may be the case with Redis and grabbing the append-only log, which is different from MySQL, which is different from Elastic and Kafka and everything else. So we really focus on enabling that native tooling rather than trying to rebuild it. And like I said, we lean towards building operators using these Kubernetes primitives rather than optimizing for software development. Neither is right or wrong; it's just what KUDO is optimizing for.

We get a lot of natural comparisons to Metacontroller, because both are polymorphic controllers for operators. A bit of the difference, though, is that KUDO is also trying to be a CRD operator and manage these CRDs for you dynamically. If you look at some of the operator implementations in Metacontroller that use other operators, there's no real management there; for example, they may expect you to already have the etcd CRDs and the etcd operator running in your cluster.
So there's no notion of an ecosystem of operators; everything just sits individually. Metacontroller also doesn't give you much for sequencing — KUDO has much more robust sequencing, and if you need that, KUDO is a great choice. And KUDO also thinks about: okay, I've deployed and I want to start performing operations — now what do I do? Metacontroller has facilities for upgrades and other tooling, and it's a very robust tool, just optimized slightly differently.

Another comparison we get often is: what's KUDO versus Helm? And we're specifically talking here about Helm v2 with Tiller. If you look at Helm and Tiller, Helm is really about creating releases from a set of Kubernetes manifests, and then Helm steps out of the way and is done. You don't get any of the operator benefits of something monitoring your application for change — things like drift detection, or any repair or alerting you want to do. Helm will also just generate out a bunch of resources and apply them all at once. So if you do need sequencing for these really complicated applications, and you find yourself with Helm and Tiller writing a lot of init containers to try to get things to happen in the right order, KUDO might be something to look at. And like I said, KUDO is also looking at higher-level features for supportability. This bullet could apply to several of these comparisons, but one upcoming KUDO feature is automatic sandboxing, which should help answer the security question as well — getting that dynamic RBAC in. I think it ended up on the wrong slide.

Looking at KUDO a little in the wild — and just doing a time check; okay, great — we have a bunch of community operators: ZooKeeper, Kafka, MySQL, Elastic. There are a few more coming soon that aren't announced yet.

That's a fantastic question about Tekton; since we're on the topic, I'll address it. The question is KUDO versus Tekton. Tekton is a set of primitives and runners for CI and CD that's being donated to the CD Foundation, I believe, and the two are really solving different use cases. There's no reason, for example, you couldn't use Tekton to deploy a KUDO operator as part of a pipeline. Tekton is really concerning itself with continuous integration and continuous deployment pipelines — how do I ship my software — versus what happens now that I've actually shipped that software. In the Tekton world, the full KUDO operator would be the singular unit of your continuous deployment.

So we have some more operators coming soon — I think we're going to announce those very shortly, at the end of the month — and a couple in progress that should complete some data processing stacks.

KUDO builds on top of Kubebuilder and controller-runtime, so we sit on the same tools as the other tooling, standing on the shoulders of giants, and we work with those teams very, very closely. Going back to that comparison with a tool like Kubebuilder or Operator SDK: those build out 50 or 60 percent of good practices for you; KUDO gets you 70 or 80 percent of the way there, if you're able to follow KUDO's opinions about development. Look at it like a Ruby on Rails tool: very productive, a bit omakase. You can open up the hatch if you need to.
But really, KUDO tries to finish out those controller implementations for a certain subset of projects — while standing on the same shoulders as these other tools.

Users can, like I said, progressively enhance other operators, but also existing Helm charts and CNAB bundles, with orchestration provided by KUDO. That's currently under development for release in 0.8.0: the code is done, and it will launch behind a feature flag. You can actually pick up your Helm chart, deploy it with KUDO, and then start adding plans for backup and restore. Our preference is still that you get to building it fully as a Kubernetes operator, but as of this next release it's a great way to start kicking the tires with your existing Helm charts and the databases you've already developed.

We're not aiming to solve the application definition problem. Really, our focus is on lifecycle and day-2 operations — on "I have my application deployed; what do I do with it now?" For databases, that's a real problem, because you need to care for and feed them. Future versions of KUDO are really focused on application-aware, declarative ops, and on workflows around these stateful applications. That's not to say we aren't interested in other types of applications, but that's where our focus is right now, and it's where we really welcome community input. We have people coming in and asking, can I use this to orchestrate my Rails app and run migrations and such? And the answer is yes — right now that's a bit of taking a hammer to a glass house, but if there are things we can do to make it easier, especially for larger-scale deployments of these, we're all ears and would love to understand how to make that better.

That's an unfinished bullet there, but: a package ecosystem that enables users to deploy from multiple repositories, so they can have internal repositories or use our community one. Our packaging system is pretty modular at this point. And as we talked about before, we have a community governance model, and KUDO itself was built from the start to be very vendor-neutral.

So, a little bit about our roadmap, and then I'll show a quick demo from the terminal. What's upcoming over a couple of different versions: we have the dynamic CRDs we talked about, to get towards really declarative pipelines and enable a lot more expressiveness, as well as getting better and better at application-aware operations. This is really about not just snapshotting your volume and hoping it ends up somewhere else, but doing scaling and day-2 operations in a way the existing software knows about. We're evaluating CUE and Starlark as more opinionated alternatives for writing operators, and we're collecting data on the need for somewhat better scripting, to enable better abstractions around building operators. YAML- and CRD-based definitions won't go away; they just might be the least powerful, or the most raw, of those options.

Operator dependencies are in progress now. What this is really about is: if I have Kafka, I naturally depend on ZooKeeper. So how does KUDO fulfill the need for that ZooKeeper? We've identified a couple of different personas around that, and we're currently solving the output-variables problem. Even though this works now, we want to automate it and make it a lot easier to do. And operator extensions.
So we've talked a bit about extensions, but we want to allow people to add plans without having to fork the entire operator and maintain that code base separately. This enables any operator to be enhanced for specific environments or specific monitoring software — or if you're using Istio or Linkerd — while you keep depending on the upstream base tech rather than diverging from it. We'll have air-gap support soon; that's on our roadmap. And supportability features are on the roadmap as well, so that opinionated monitoring and alerting can be defined by operator developers.

All right, I have a few questions here. The first one that came in: if we're starting with Helm or an operator, in your experience, which one should we consider for our app lifecycle, with CNCF apps where the code of the CNF is not available to us? Would you mind breaking out the CNF acronym while I answer the other question? Then I'm happy to answer that.

The other question, before I come back to that one, is: how might things like pod disruption budgets fit into KUDO? That's a great question. If you look at some of our operators, we're actually using pod disruption budgets to help with some of that workflow and lifecycle. For deploy, we may set a pod disruption budget one way when we want things to be parallel, and another way when we want to handle things serially during an upgrade, so we don't upgrade all the instances at once. If you go look, I believe the Kafka or the ZooKeeper operators have them right now. You can include any Kubernetes resource you want, and really we've been hesitant to decorate resources in for users. That may change as we get into things like CUE and Starlark and people are able to start building abstractions, but right now, if you want a PDB, you have to add it in yourself.

CNF means cloud-native function, in the telco ecosystem. What I'd say to that question — and I'll repeat it and hopefully get the nuance right; if not, we can discuss it more in the KUDO Slack — if we're starting with Helm or an operator, which one should we consider for our app lifecycle? That's entirely dependent on how complex your applications are. And when I say complex, I mean: how many braids, how many strands, is this particular application tying together? When we're talking about something like cloud-native functions or Knative, that can be fairly complicated, because you're bringing many concepts together. I find that when I'm personally developing an application on Kubernetes, I'll likely start with a Helm chart just to deploy it, and I'll start looking at an operator as soon as I need things to start happening that I don't want to do by hand, or when I start needing higher-level orchestration of multiple components. Like I said, if I start finding myself writing a lot of init containers, I'm probably starting to consider adding in some orchestration and generating something out with Kubebuilder or even KUDO. Let me know if that answers your question; if not, we can discuss it more.

And just checking on time — okay, great, we've got a few minutes left. We'd love for everyone to get involved. Go ahead and try it out at kudo.dev; I'll show you how to get started with it. It's pretty easy: we have a Homebrew package and a krew package, so it's super easy to get running with it.
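Collecting the install steps mentioned in this talk into one place (commands as described; verify the exact formula and plugin names against the current docs):

```sh
# Via Homebrew (tap the kudobuilder tap, then install the CLI):
brew tap kudobuilder/tap
brew install kudo-cli

# Or via krew:
kubectl krew install kudo

# Then install the CRDs, default service account, and controller manager
# into the kudo-system namespace:
kubectl kudo init
```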
Feel free to give feedback and open issues. We're participating in Hacktoberfest, and we've got a bunch of issues labeled both Hacktoberfest and Good First Issue. If you're interested in contributing, Good First Issue means you'll have a KUDO core team member there to help you get used to the code base and understand what's going on. We also have our operators repo, and a WWW repo for documentation. And we're in the #kudo channel in the Kubernetes Slack — if you aren't in the Kubernetes Slack today, that's where you can go to get an invite and jump in.

So with that, I'll maybe stop for any more questions real quick, then show a quick demo and turn it back over to Ihor.

Here I have my terminal, with a Kubernetes cluster running a bunch of nodes up on EC2. This is using one of our distributions of Kubernetes, called Konvoy. What I'm going to show off is the kubectl KUDO plugin. It's installable in a couple of different ways — I actually didn't include some of my demo; it's in our documentation — but if you do a brew install of kudo-cli, it's there. You do need to tap our tap for this, though: it's under kudobuilder/tap/kudo-cli. So if you do a brew tap of kudobuilder/tap and then install kudo-cli, it will set you up — I think even kudobuilder/tap alone works. Once you have that, you'll have the CLI. You can also get it via krew, with kubectl krew install kudo, I believe — yep, that'll do it — and you get the plugin immediately.

KUDO depends on its controller manager. If you want to install that controller manager, all you need to do is run kubectl kudo init, and it will install the CRDs, the default service account, and the controller manager, all in the kudo-system namespace. It deploys the manager as a StatefulSet, because it depends on being the only one running. It does have leader election, but that needs more testing before we GA HA KUDO. Fortunately, it's standalone: its state lives entirely in Kubernetes, so it's pretty compatible with a wide range of Kubernetes clusters. We even test on kind, which is Kubernetes in Docker, and we test on minikube. So it works in about every environment the Kubernetes scheduler will — if not every one, we just haven't tested enough to fully say that.

Now, if I want to go and install, say, ZooKeeper — I don't want to go through all the CLI commands today, but what I will show you is: if I do a kubectl kudo install zookeeper and give it an instance name here, it's going to start spinning up these different pods for ZooKeeper. It also created PVCs, and it's going to help manage those PVCs for you. So it's going to stand all of that up. If I do a kubectl kudo plan status for the instance, we'll get the plan, and we can debug the different steps in the plan if something goes wrong, to find out exactly where KUDO thinks the installation failed. But this should work, so we'll get this all running.
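The commands in this part of the demo boil down to roughly the following (the instance name is invented here, and flag spellings are approximate):

```sh
# Install the ZooKeeper operator and name the instance:
kubectl kudo install zookeeper --instance=zk

# Check how the deploy plan is progressing, step by step:
kubectl kudo plan status --instance=zk
```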
And as I mentioned before, ZooKeeper has a parallel deploy plan, although in future versions of the ZooKeeper operator we'll likely do a serial upgrade plan — so we can upgrade one node at a time — and serial scale plans.

Okay, we have all three of those running. Now we'll do a kubectl kudo install kafka, and we'll pass in the Kafka ZooKeeper URI parameter. There are a bunch of parameters — I think our Kafka has 80 or 90 — but for now we're only setting this one. This particular parameter will actually go away once we have our dependency system in place; it will become a reference to another KUDO operator. But for now, you just have to specify it as a connection string, and this is pretty well documented at this point. So we'll install Kafka, and Kafka is deployed serially: we'll see Kafka stand up, create the first container, and then roll through and create the other two replicas as each one becomes ready.

So in the meantime, are there any questions while that deploys? I'm sure they'll take some time to type out, so in the meantime I'll show the plan status for this one too. We can see here we have a weird custom plan — I'd ignore that right now — but we're deploying with a serial strategy, and it's showing in progress. As soon as we have kafka-2, we'll know we're done, and it should show the plan is complete. Right, we'll get the status: no active plan exists, so that's done. In the next version we'll persist that state a little better so you can see it's complete, but for now there's nothing actively going on, so we're just done.

And with that, thank you everyone for your time. That is KUDO, and we would love to see you in the community. I'll put up those slides again so everyone can see them, and I'll turn it back over to you. Oh, I did have one more question there: can KUDO do rollbacks? Rolling back complicated services can be very, very complicated. While I'd like that feature — and I think our team would like it a lot — I think we need to know a little more about the use cases. It's one thing to roll back, say, a bad release of a Sinatra application; it's another thing entirely to roll back a database upgrade where schemas have changed on the underlying disk. We'd love for you to join and talk a little more about your specific use case, and we can find ways to address it in the existing system — because the answer might be yes, KUDO can already do it. And in the cases where it can't, we should talk about where we can add features for those.

The kubectl KUDO plugin just wraps together a bunch of opinionated ways of doing things to make them more productive; it's not intended to replace kubectl. Every action you do should be doable with kubectl. So if I were to look at these CRDs — if I look at the ZooKeeper CRD here — it's a plain Kubernetes object. If I go look at the OperatorVersion — I believe it's kafka 0.2.0 — and open that up, again, it's just a plain custom resource. The kubectl kudo command makes this easier to work with, but it's our position right now that everything should be able to be done some way with kubectl.
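Roughly what that inspection looks like with plain kubectl — resource names follow KUDO's CRDs, though the exact object name below is approximate:

```sh
# KUDO's own objects are ordinary custom resources:
kubectl get operators
kubectl get operatorversions
kubectl get instances

# Open one up like any other Kubernetes object:
kubectl get operatorversion kafka-0.2.0 -o yaml
```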
And one more question: can we integrate KUDO with Helm? By default, that will be turned on in 0.8.0. We have a Helm chart task now — and we're also working on a Helm chart for the KUDO controller itself — but to answer your question about bringing in a Helm chart: yes. What that will do is create a default deploy plan for that chart, and then you can add plans on top of it. It plugs into that task system: you'll have a task of the Helm chart kind, and you'll specify the chart, its parameters, and the version. We intend to do the same for CNAB bundles and any other prevailing format that either gets market share or is integrated into the CNCF as an application definition format.

Okay, any more questions? Then we'll turn it back over to you.

All right, thank you, Jarod — thank you for your presentation, and thanks everybody for asking questions. We don't have any more questions today, so we can give you back ten minutes. Thank you for joining us today. The webinar recording and slides will be online later today. We're looking forward to seeing you at future CNCF webinars. Have a great day. Thank you.