All right, we're going to go ahead and get started. I'd like to thank everyone who joined us today. Welcome to today's CNCF webinar, KUTTL: the KUbernetes Test TooL. My name is Orlin. I'm a software infrastructure engineer at VMware, and I'll be moderating today. I'd like to thank and welcome our presenters, Gerred Dillen, who is a principal engineer at D2IQ, and Ken Sipe, who is a distributed application engineer at D2IQ. Before we start, some housekeeping items. During the webinar, attendees are muted. There is a Q&A box at the bottom of your screen; please feel free to drop in any questions, and they'll be answered during the presentation or at the end in the Q&A session. Also, keep in mind that this is an official CNCF webinar, and as such it's subject to the CNCF code of conduct. Please do not add anything to the chat or the questions that would violate the code of conduct. Basically, be respectful to everyone, attendees and presenters alike. With that, I'll hand over to Gerred and Ken to kick off today's presentation. Go ahead, guys.

Hey, thank you, Orlin. Thank you, everyone, for attending, and welcome to our talk about KUTTL, or as we titled it, Who Needs a KUTTL? We'll talk a little bit more about what KUTTL is. I'm Gerred Dillen, and joining me is Ken Sipe. Like I said, I'm a member of technical staff at D2IQ, where I work heavily on upstream Kubernetes, KUDO (the Kubernetes Universal Declarative Operator, which we've covered in another CNCF webinar), as well as KUTTL and other tooling related to the Kubernetes workload ecosystem. On top of that, I'm a developer working in Clojure and Go, and there's my Twitter and such. And then, Ken Sipe, I'll turn it over to you to introduce yourself.

Yeah, thanks, Gerred. I am an application engineer with D2IQ. I've been working with distributed orchestration systems for five years now, almost six.
On top of that, I was lucky enough to join Gerred on KUDO and then to break KUTTL out of it as part of that. So, why don't we get started?

Sounds great. So, what is KUTTL? KUTTL stands for the KUbernetes Test TooL, and it has its origins in KUDO, the Kubernetes Universal Declarative Operator. This is very important, because what KUDO is, is a tool for orchestrating Kubernetes workloads and building operators using an entirely declarative language, entirely with CRDs. And we needed a way to test these before putting them in production. So we wrote a declarative framework for testing, and quickly we realized the utility of that and decided to break it out into its own tool called KUTTL. That name came about partially because we wanted to be able to say kube-cuddle: kubectl kuttl. There we go, I've made the joke. But really because we saw utility beyond just operators; we'll talk about that a little bit more in a bit. Now, we say this is a test tool. If you go to the next slide: what does that exactly mean? If you think about all your levels of testing, you have unit tests, which exercise a single isolated piece of functionality, a single unit like a function. You have integration testing, which starts to look at the larger system around a certain component, as well as end-to-end testing, which verifies that the entire system in a live environment behaves as expected. KUTTL is a test tool really for your integration and end-to-end testing, because it runs against actual live clusters. So, here we have various testing harnesses that we may want to declaratively test. We started off KUTTL trying to test operators, specifically KUDO operators. But the utility quickly broadened to Helm charts, other operators, things built using Kustomize, and then really any other applications or controllers, right?
Anywhere you may want to test that the shape of your resource comes back in the way that's expected, that Kubernetes is operating on it correctly — that's where you might want to use KUTTL. Next slide, please. So, our goal was to end up with being able to write portable end-to-end, integration, and conformance tests for Kubernetes without needing to write any code: without having to jump into Go tests, without having to write a separate Bash testing framework that does a whole bunch of applies and checks and sorts on them. We really wanted it to feel native to Kubernetes. That was very important, and you'll see that in the API decisions we made. Next slide. So, how do you get started, you know, kuttling? We have a Homebrew and a Linuxbrew formula, so you can just brew install kuttl-cli and it will work. I think — Ken, are we doing distributions for Windows at this time, with GoReleaser? I'm not sure if we're on Windows. Anyway, we'll make sure we check on that; that should be an open issue. But really, you can either do that, you can get it via Krew, which is the kubectl plugin package manager, or you can get it from our releases page directly. And then if you want to integrate with it as an API, you can just go get it. KUTTL is both a CLI tool and an API for testing, and we want to enable people to use this declarative tool no matter how they want to integrate it. So you can use this to integrate KUTTL into your own tooling, and that's what we're doing in the KUDO project now. Next slide. And Ken, I'll turn it over to you for this one.

I'm joking a bit here. But if you've ever worked on Kubernetes, making deployments, with the fast rate of releases of Kubernetes itself and trying to keep up with it, it can cause some blood pressure challenges.
So if you look up cuddling, you might find that it's scientifically proven to release a cocktail of hormones that lower blood pressure and heart rate. So I made up this fake Wikipedia entry for what KUTTL will help you do with testing on Kubernetes. It's only fake until you edit it into Wikipedia. Yeah, right. So we just want to make sure we call out that we are an openly governed community. From the beginning — KUDO, KUTTL, everything else — we've followed open governance. Yes — John, would you mind asking that in the Q&A section, and then I'll make sure that it gets answered? Not everyone's seeing the chat; I just happen to have it open. But I just wanted to give a shout-out to the current KUTTL team and contributors; a lot of work went into building this. Anyone's welcome to join our team. It's open governance, and we follow a very similar process to operating within the Kubernetes space. And we have our first question. From Jane — or John, I'm sorry, one way or another I butchered your name there — the question is: isn't it kubectl krew install kuttl instead of kuttl-cli? We'll have to check and send out an update. If that's the case, we'll make sure that's corrected in the slides before we send them out; I just forget what our Krew entry is. Yeah, I think so. Yeah. And done. Perfect. Next slide, please. Perfect. So, a quick agenda of what we're going to talk about today: why would I even do this, right? Why would I even use KUTTL? What does the first experience with KUTTL look like? Ways you might want to interact with KUTTL, and where you're going to want to do this. Then the anatomy of a KUTTL operator test — how do we run KUTTL against an operator? A little bit of KUTTL in action: we'll do a demo, and we'll talk about the release and what we're going to do next. Next slide, please.
So first off, why would we even want to do this? It really comes back down to this notion of building a declarative testing framework and running our tests declaratively. We talk a lot in Kubernetes about wanting to be very declarative in everything we do, and we believe that should apply to testing as well. Next slide, please. So what does that in fact mean? Well, in the simplest case, we declare a resource in YAML, apply it, and then declare an assertion in YAML, right? Because here we're actually asserting that the same exact thing came out on the other end. And we can assert on any partial piece of YAML, or any object, that you want to assert on. This might be a status — we'll see some examples of that later. This might be an event. If I'm running an operator, my operator is creating other objects, and I want to maybe partially assert that the expected thing happened, right? I spun up my Prometheus operator and I got a pod out at the very end of this. So this is where KUTTL becomes very useful as a partial assertion framework. So let's go over a couple of different terms and how this all comes together. We have a test suite, and the test suite you can think of in a couple of ways. One, it's the folder all your tests are in, but it's also an object where you can actually configure how your test suites run, right? What cluster are you going to run against? Am I going to start kind? Am I just going to start a control plane? How do I want to do that? What are my timeouts? A whole bunch of different flags are in there. So test suite has kind of an overloaded meaning there: we have a test suite object, but everything you're running is really in a suite. And then we have a test, which is just a collection of test steps, right? And that's your atomic unit of the various objects that you're going to declare and create in your cluster.
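The suite-level configuration described here lives in a TestSuite object, conventionally in a kuttl-test.yaml file. A minimal sketch, with illustrative directory names and values:

```yaml
# kuttl-test.yaml — suite-level configuration
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
testDirs:
  - ./tests/e2e/        # each subfolder here is one test
startControlPlane: true # mocked control plane: kube-apiserver + etcd only
parallel: 4             # how many tests run concurrently
timeout: 30             # default per-step timeout, in seconds
```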
And this usually is going to have some sort of assertion or error file defined alongside it. And then we have a test assert, so that we can start to do those partial matches, those partial assertions, on conditions that we want to either pass or fail on inside of a test. Next slide, please. So again, test suite: we have two concepts here that define that — it's really just your test folder, but it's also that configuration file. And you can see here, we're saying, okay, we only want to start a control plane, and we want some parallelism there. And this becomes very useful. We'll talk about the control plane in a little bit, but there are many ways to actually use KUTTL, depending on what your speed requirements are and how real you want the results to be at the end of the day. Next slide, please. So our test is just a collection of those test steps. You have a test that's in a folder, and this happens to be the list-pods test, right? It's pretty straightforward how that works. Next slide. Now, in here we have the actual steps of our test. So we have a test step that is going to run a command, and it's your gateway into running things fairly raw, right? You can use this to spin up an operator before you actually do anything, or anything else that you might want to do. This is your side-effect mechanism into KUTTL, so that you can bring it home to your environment without it just being a bunch of YAML going back and forth. Next slide, please. And now we have an assertion on that command we just ran. We've set on our test assert that we're going to time out after 20 seconds, and we're going to do a partial assertion on the pod that we expect to have been created by that step. So here we want to make sure we have a pod named test2, and we want to make sure that it has some reasonable status we can assert on. Next slide, please.
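An assert file like the one described — a 20-second timeout plus a partial match on a pod named test2 — might look like the following sketch; the status value shown is illustrative:

```yaml
# 00-assert.yaml — a TestAssert object can share the file with the
# partial resource definitions to match against
apiVersion: kuttl.dev/v1beta1
kind: TestAssert
timeout: 20
---
apiVersion: v1
kind: Pod
metadata:
  name: test2
status:
  phase: Running   # only the fields listed here are compared
```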
So once we have that, we just run it: kubectl kuttl test, plus the folder you want to test against — the suite you want to test against — and it runs that whole suite. If you've used Go test, this looks really familiar, right? We follow the same output format. Do we have any plans right now, Ken, for JUnit or any other formatting? We do have, in our own CI, a JUnit reporter output. So we do get that as an output, and look forward to a blog post or some documentation on how we're doing that. Yep. And this is all open: you can go into our CircleCI and see that reported output, so it looks nice inside of our CI system. So okay, great. What does your first KUTTL test look like? Let's walk through this. First, we're going to set up a test case: you're going to set up your end-to-end test directory. This might be in your Helm chart, this might be in your operator repository, this might just sit in your random folder of Kustomize stuff — it doesn't really matter where it sits. It could even be in its own repository, but you do need a place where that suite sits. Once we have our folder, we can start to create our first step. Here, instead of having a test step object, we're just going to have a pod, and this is going to implicitly create that pod. We're saying we want an nginx 1.7.9 pod with one container in it. And then, okay, great, you've written this — now let's assert against it. And we're going to write out the exact same thing this time. Once we're done with that, we just need to make sure that we have a test suite that's going to be run from your working directory. So you have a test suite object here, and that's going to list out all the directories you want included when KUTTL goes to look for tests to run. After that, you just run kubectl kuttl test. And we'll talk about this flag in a moment — this start-control-plane flag.
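The walkthrough above can be sketched as a few files on disk. The names follow kuttl's NN-name.yaml convention; the paths are illustrative, and — as the speakers note later — asserting the exact same spec as the setup step is a deliberately simplified first example:

```shell
mkdir -p tests/e2e/example

# 00-pod.yaml: the step — kuttl will implicitly create this pod
cat > tests/e2e/example/00-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9
EOF

# 00-assert.yaml: for the first test, the exact same thing again
cp tests/e2e/example/00-pod.yaml tests/e2e/example/00-assert.yaml
```

With a TestSuite pointing at tests/e2e/, `kubectl kuttl test` would pick up this folder as the "example" test.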
But you'll see here, we run that test and we're good to go. Now, what's that start-control-plane flag? You may have seen it in a prior screenshot. This is a way to indicate to KUTTL: I only want to start the control plane, and I want to operate against a mocked control plane. Now, that's a real kube-apiserver and a real etcd that get stood up, but it's only those components — the minimum amount needed to actually run API commands against the API server. Very great for really fast sanity tests, but not so great if you're testing for side effects, because for those you need a controller running, right? So in those instances, you can start up kind, or point at a real cluster, in order to test against more real environments. Next slide, please. Actually, Ken, I think with that, I'm going to turn it over to you to go through ways to KUTTL.

Okay, yeah. On this slide, the thing not to be missed is that usually a test suite can have any number of tests — in other words, folders that represent tests — but sometimes you might want to verify just one. This is pointing at that same directory, but it's saying: within that directory, just run this one test, which is the example test. There are no controls over the number of steps; it will run all the steps defined within that test. But there are ways to reduce the scope, which is a common interest for lots of people, including us.

Before we move on, we do have another question: do the assert resources, like a pod, have to use the same pod spec, or can they have additional properties? And the answer is no, they do not have to be the same. This is really important, and Ken will go through a lot of examples of this. Right now you're doing an exact match, but if we go back up to the slide with the status — here we go, slide 21. Yeah, perfect.
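The invocations being described look roughly like this; the suite path and test name are illustrative, and flag spellings should be checked against `kubectl kuttl test --help`:

```sh
# whole suite against a mocked control plane (kube-apiserver + etcd only)
kubectl kuttl test ./tests/e2e/ --start-control-plane

# reduce scope: run just one test (folder) from the suite
kubectl kuttl test ./tests/e2e/ --test example

# against a real cluster via kubeconfig, or against a kind cluster
kubectl kuttl test ./tests/e2e/
kubectl kuttl test ./tests/e2e/ --start-kind
```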
So if you look here, our assertion is actually on lines five through ten, and here we're asserting against the status. Now, when we created that pod, it would not have had that property, right? Because the only things you can control going to the cluster are the API version, kind, metadata, and spec. Status is something that would have been set by the controller manager or the scheduler once that pod got scheduled out to a kubelet. So the great thing here for an operator, for example, is that if I'm going to create an instance of an etcd operator or a Kafka operator, I can still have a pod assertion down here, because at the end of that chain I would have expected my Kafka operator to have created a pod that hopefully has this label, or hopefully has a certain status. Or I want to assert that a service of type LoadBalancer actually ended up with an endpoint, depending on what sort of environment testing I'm doing. So the example is nice just so you can see everything in one group — okay, this all hangs together — when we're doing our initial demos. But the intent with KUTTL is that you test potentially other properties that would not show up in your initial pod spec, just to make sure that the world around you is working and your controllers and your operators are behaving as expected. Yeah. In fact, if we went back one more slide, to where we talked about the test step itself — well, that one is different because it's a command. But when we're talking about a test step, the guarantee that KUTTL makes is that if the object does not exist, it will be created; but if the object does exist, it will do a strategic merge patch into it. So you can actually be very terse in a lot of your secondary steps — steps after setup — where you just modify the one field, the one property, that you're interested in, and it will be patched. So that's very common.
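That create-or-patch behavior means a later step can be as small as the one field it changes. A sketch, with an illustrative resource name:

```yaml
# 01-scale.yaml — the deployment already exists from step 00, so this
# step is applied as a strategic merge patch, changing only replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
```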
Also, on the assert: after it makes the selection on the group/version/kind, it is just asserting the things that are defined in the YAML. It's not guaranteeing anything else. And if you took a look at the sample we had for our first KUTTL test, it's a little bit of cheating, right? The pod in the setup looks just like the assertion, and there are no other guarantees: as long as that pod is in etcd, this would pass. So this would probably work fine under a mocked control plane with no side effects. But as Gerred pointed out, that's probably not the assertion you would really want. You'd probably want an assertion against the status, in which case you're now confirming, or asserting, that the side effect was exactly what you expected. Does that make sense? All right, I'm going to assume it makes sense. By the way, if you have other questions about this, there are contributing guidelines at the end. So if you feel any of these questions aren't fully answered to your liking, there will be a link to our Slack channel — it's in the Kubernetes Slack — at the very end, and we'll highlight that. And Gerred, as you already know, feel free to interrupt at the end of a slide for questions; if you're monitoring, that'd be great.

We have one more question. Actually, you know what, I'll answer real quick and then we'll hold some questions for a bit: how do you find out what the properties are that can be asserted on? That would be in the CRD spec if it's a CRD, or the Kubernetes OpenAPI spec for anything else. And for that, you can actually use the kubectl explain command to dive into all the fields on any object. But you'll find that in the actual Kubernetes API or your CRD's API documentation. Perfect.

Ways to KUTTL. There are a couple of thoughts here. One is we can use what we call the test harness. The test harness is invoked by using the kuttl CLI.
And you can see the limited commands that we have: really, you can see what version you have and then run tests. We can also connect in through the API, and our open-source project KUDO uses this as well — it's a good example of this. The core of what you would need is broken into three blocks here. First are the imports that you need, or are likely to need. Then you have this thing called the test suite, which we're going to look at in just a few minutes. And then we have this run command, where we actually do a run on the harness. And you can control certain aspects of the setup of this; or, if you have some different way of either outputting or inputting values, you have control over that. So kuttl is also a library.

What is that? Hey, Ken, if you're talking, we lost you. Oh, I'm sorry, I got interrupted with something, and that's unusual; I apologize. So yeah, if we have different inputs or outputs, there are ways to have control over that. The test suite itself is worth looking at. It does explain some of the details of setup. If you look at it, we have a CRD directory, where you can point at your CRDs, as well as a collection of manifests. The strong difference between those two is that the manifests are just manifests — Kubernetes objects that you want created, declared in YAMLs — and we iterate through them and make sure they're applied prior to starting tests. So you can think of it more as test setup, or test suite setup. The difference between that and the CRD directory is that when the CRDs are applied, we actually do a wait to ensure the CRDs are available; otherwise, they're just like manifest files. And then you've probably already seen the collection of test directories. In our CI environment, we'll have a link out to many directories, or operators, that we test against.
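The three blocks described — imports, test suite, run — might be sketched in Go roughly as follows. This is a sketch only: the package paths and field names below are assumptions based on the talk's description, so consult the kuttl repository for the actual API.

```go
package e2e

// Sketch: embed the kuttl test harness in a standard Go test.
// Package paths and struct fields are assumptions — verify against
// the kuttl source before use.
import (
	"testing"

	harness "github.com/kudobuilder/kuttl/pkg/apis/testharness/v1beta1"
	"github.com/kudobuilder/kuttl/pkg/test"
)

func TestE2E(t *testing.T) {
	// Configure the suite programmatically instead of via kuttl-test.yaml.
	h := test.Harness{
		TestSuite: harness.TestSuite{
			TestDirs:          []string{"./tests/e2e/"},
			StartControlPlane: true,
		},
		T: t,
	}
	h.Run() // run every test in the configured directories
}
```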
So we'll have a large number of test directories that we're using. The rest of this, again, is all test suite configuration; the next slide will have a bit more detail. We have a fairly tight connection, a collaboration, with kind. Because of that, you can say: do you want to start kind? And it will actually start up an instance of it. Do you have a special kind configuration? For most of the documentation you'd want around kind, you should look at the kind documentation itself, but we've got hooks to provide that connectivity through KUTTL. Probably the most useful thing on here, though, is the kind containers: when you want to preload the nodes with a bunch of images, this can be useful. That way you're not incurring the fetch time for the image during the testing process itself; it's already preloaded onto the nodes. And then, lastly, within the test suite configuration we have the ability to skip a delete. KUTTL can be used in a number of different ways. We've talked at length about E2E, end-to-end testing, and even portable, conformance-style testing, which is super great. But if you have, I don't know, a mixed workload test like we do, or some environment that you want to establish and then assert that it meets certain criteria, you could use KUTTL for that, and then say: go ahead and skip the deletion. In other words, when the test is complete, don't delete those artifacts — just leave them in place. Just verify that what I've asserted to be true is true, and then leave them alone. Same thing with leaving the cluster behind. In this particular case, we don't delete external clusters anyway, but sometimes if you're debugging and having a challenge with your kind cluster, you don't want your kind cluster to go away before you inspect it. So that's generally the purpose of that. A highly used configuration is the timeout.
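The kind-related suite options being described might be sketched like this; the image names and paths are illustrative, and the field spellings should be checked against the kuttl TestSuite reference:

```yaml
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
testDirs:
  - ./tests/e2e/
startKIND: true                 # create a kind cluster for this run
kindConfig: ./kind-config.yaml  # optional custom kind configuration
kindContext: kuttl-demo         # kind context name to use
kindContainers:
  - myrepo/my-operator:dev      # preload images onto the kind nodes
skipDelete: true                # leave test artifacts in place after the run
skipClusterDelete: true         # keep the kind cluster around for debugging
```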
So if you have a need for long-running tests, you're going to want to make modifications in that space. Parallel is, as you might expect, how many tests we want to run in parallel, and then we can change the output directory for the artifacts. And the last thing is commands. Now, we've recently added a new feature to commands. Generally speaking, these are commands that you'd want run as a pre-step to testing, but we've added the ability to have background commands. Usually if you're running a controller, you're running something that's already been released and is probably on Docker Hub or somewhere like that. But occasionally you actually want to run a controller that is in development and not yet released. So being able to run a command in the background — that is, running the controller alongside your test — can be very useful.

I see there's an open question. Oh, sure, yeah, I can read it out. Can we assert that metadata is not present? Okay, great question; there are two pieces to it. For that part — can we assert that metadata is not present? — yes. We have the concept of a file which is the assert; we also have the kind called TestAssert. Those assert that something is present. We also have a file called errors, and errors is essentially things that should not be present — it's the opposite. It validates, or only asserts to be true, in the absence of some value. So we do have a mechanism for that. For regex, I'm not sure; I'm going to have to pass on that for now. I do know that we do regex testing with KUTTL in our environment, so the short answer is yes, but I'm almost positive it's necessary to call out to a command right now to actually do that regex comparison. And we have it in mind that that might be a good feature to add.
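A suite that runs a released CLI as a setup command and an in-development controller in the background might be sketched like this; the binary path is illustrative:

```yaml
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
testDirs:
  - ./tests/e2e/
commands:
  # runs to completion before any tests start
  - command: kubectl kudo init --wait
  # started and left running for the duration of the suite
  - command: ./bin/manager
    background: true
```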
And in fact, sometimes there is a need to do a regex comparison — there are at least a couple of use cases that have been presented to us where you want a regex comparison against pod log output. In the pod log there's some confirmation of a process being complete, or, in the case of Cassandra, of nodetool having done some activity; there's a log line that indicates that has been accomplished. So look forward to some features being added to help support that. And hey, Ken, will you just repeat the questions for the recording?

So, you have programmatic access to the test suite, but we also have configuration capabilities as well, as you've already seen a number of times. It's very common for me, when I'm building out a KUTTL suite, especially a new one, to leave out things like start kind or start control plane. I can just specify that as a flag; the flag will override the file, so it is possible to leave it in here and override it with, say, a false. So I tend to make this file minimalistic if possible, but it should adhere to the constraints we just reviewed. And then we have the KUTTL CLI. What I'm showing here is actually the help output, but it shows a number of different options. If your test suite is already fully documented — if it's complete — then you'd be able to just say kuttl test. If you want to change the test suite, you could say that the configuration file is somewhere else, and there's a variety of different configurations you can see here. You can be fully declarative on the command line as well.

Oh, am I not coming across? One moment, everyone — we have some technical difficulties. Can someone confirm that they can hear me? Oh, you can't hear. Okay. Ken, I can't hear you. Okay, I can hear you, but you can't hear me. Sorry — could be on my end. Okay, so yeah, I'm not sure what to do.
I might have lost some sound on my end. I apologize, but it sounds like you guys can hear me. Could you continue to text me when you want to interject? Okay, thanks, Gerred. Sorry about that, folks. Great. Okay, so back to where I left off: where do you want to KUTTL? This becomes super important, I imagine, for lots of people. The first thing is that this is designed to work against live clusters. So as long as you can provide us with a kubeconfig — and we also have a kubeconfig flag — it will test against, presumably, any version that we match with client-go, against a live cluster. We also have, as we indicated, an integration with kind: you can say go ahead and start a kind cluster, and it will be automatically created for you. And then we have the ability to have a mocked control plane, which I think Gerred went over a little bit. This is the concept where we just have, essentially, the API server and etcd, and that's it. There are limited uses there, and I would consider that to be more of an integration test, not end-to-end. But the other two are very good end-to-end tests. And just to give you some figures that came across my desk this week: we are running tests against our flavor of Kubernetes, as an end-to-end test in a production environment, and the creation of that, the testing, and the teardown take an hour and six minutes — and of course take cloud time, meaning cloud dollars. That same sort of test on kind is three minutes and forty seconds, I believe. So we believe there's great value in having these run really fast, testing using kind and getting the value out of that. Of course we're still going to test against a live cluster, but we can reduce our cost and the context-switching time for developers. So kind of a win-win, I believe. When we're working with a kind cluster, we can make the configurations.
Again, I would advertise that the kind documentation is the place to go for the detail on what can be configured, but we do support it. We can set the kind context, we can set the kind config, and again we can also say, hey, don't delete that cluster. Next, let's look at a breakdown of KUTTL in a little more detail.

The question was: we have an integration with kind — is it possible to use any other type of cluster? Yes, as long as the configuration, or the creation of that cluster, happens ahead of using KUTTL. So I would say we do support that, but the nice integration of having the cluster created for you is only supported with kind at this point. And of course it's open source, so pull requests are welcome, and we'll evaluate based on demand. We do support it; it's just a matter of automating the creation process.

All right, so steps. This is really just a breakdown of the things Gerred went through, in a little finer detail. We support YAML under a couple of different extension names. We also support files in those test directories that are not YAML; they'll basically be ignored, and that's a great place for documentation to explain what a test is doing — any preconditions or expectations, things of that nature. There is a nomenclature associated with the file name, which is the index — so 00 is probably what you've seen in examples; there's one on the screen — then a dash and some name, and one would expect it's a name that's meaningful to you. It is arbitrary, but it's just like naming a test, and it does show up in the output of the test runs. So you'll see a test starting, and it'll be the directory name that's output, and then you'll see each step; and if you have a failure, or an assertion failure, you'll see in which step it failed. And we have a couple of examples here: 00-pod, 00-example, 01-staging, things of that nature.
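Putting the naming convention together, a test directory might look like this; the names are illustrative — the numeric index orders the steps, and non-YAML files such as a README are ignored:

```text
tests/e2e/
└── example/              # one folder = one test, reported by folder name
    ├── README.md         # ignored by kuttl; a good place for docs
    ├── 00-pod.yaml       # step 00: resources to create or update
    ├── 00-assert.yaml    # assertion for step 00
    ├── 01-staging.yaml   # step 01 runs after step 00 succeeds
    └── 01-errors.yaml    # states that must NOT exist for step 01
```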
It's also possible to put as many YAML documents as you like in a single file. It's also common to see YAML assets that are used across tests, in which case today we'd commonly use a command to do an apply; one of the next features we'll probably be adding to KUTTL is the ability to specify that from a test step declaration, as opposed to forcing anyone to use a command. It's just one of those things we've realized. It's worth pointing out as well: KUTTL came out of KUDO and has been around for about a year. We just had a release in March — our first standalone KUTTL release. It could feel like these are fresh bits, and in some cases that's true, but this has been around for roughly a year.

Steps. Oh, so I mentioned this earlier, but these are create-or-updates. If the object already exists, you can express just the minimum amount of updates. If you want to delete, then you need to actually declare that in a test step, and then it's possible to delete things — we have a number of good examples of doing that as well. Oh, right here, in fact. So the one difference is that we can't just assume a manifest means delete; we assume it's an update or create. However, if you want to take control over a delete, it's possible to just provide a list of things you want deleted in a given test step. And then commands: as mentioned, when I want to reuse something — or, in this case, use something from the web as opposed to a relative path — it's possible to provide that information within a test step. So, you know, one way you might install a ZooKeeper operator using KUTTL is to actually invoke kubectl kudo install zookeeper. In this case it skips the instance, but it does get the CRDs and the operator in place. It's worth pointing out that, of course, when commands are run there is a requirement that the paths be resolvable, which hopefully is reasonable.
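A test step that takes control of a delete, or shells out to a command, might be sketched like this; the resource and operator names are illustrative:

```yaml
# 02-cleanup.yaml — a TestStep can declare deletions and commands explicitly
apiVersion: kuttl.dev/v1beta1
kind: TestStep
delete:
  - apiVersion: v1
    kind: Pod
    name: test2
commands:
  # install an operator's CRDs but skip creating an instance
  - command: kubectl kudo install zookeeper --skip-instance
```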
And then, again answering an earlier question: we have the ability to assert, but we also have errors files. An errors file is basically an assertion that a state does not exist — if it does exist, that's an error, after a given timeout. Everything is eventual consistency, so there's no expectation that something is done within the next second or whatever; it's within a certain period of time. That time is controllable as part of the test step configuration: the default is 30 seconds, but it can be whatever you've configured it to be. And here's another example, one that includes a status — the status phase is Succeeded. In this case it's slightly different, right? This is confirming that we did have a side effect, and that the side effect is a certain thing. Okay, the next section is some tips. Cuddle tips include things like: we work with everything associated with Kubernetes, so Kubernetes events are objects too, and they are also assertable — we can assert that they exist or don't exist. Another tip is that we can wait for CRDs. We wait for CRDs automatically within the test suite, but if you create your own CRD within a certain step, you might want to wait for that. One way to do that, as you see on the right: we want to create an instance of a CRD, so in the 00 setup step you would create the CRD and then assert that it's established. In this example we see in the status that it's available, and that the kind is our CRD. And we have this pretty well documented on the site.
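A sketch of what an assert file with an overridden timeout might look like — the pod name is illustrative:

```yaml
# 00-assert.yaml — state that must eventually exist
apiVersion: kuttl.dev/v1beta1
kind: TestAssert
timeout: 120        # override the 30-second default for this step
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
status:
  phase: Succeeded
```

An `00-errors.yaml` file has the same shape but inverts the check: if any listed state exists within the timeout, the step fails.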
We also work with Helm. When working with Helm, you can see examples of, again, basically using a command — so while there's no native integration with Helm, we work pretty successfully using Helm commands to do inits or deletes or any other operations, and then the next step might be asserting that something is available or true. Okay, so Cuddle and the operator. When looking at an operator, one would expect that we have a CRD, so we're going to define a CRD to load and then wait for it — that's the one difference, mentioned before, from just having manifest files: we have something we want to wait for that's being created. Here's an example of what we've done within KUDO. In KUDO we have the ability to have CRDs created on init, so we run a KUDO command and say --wait, and the --wait in this particular example will block the test from moving forward until the command is complete; the command is complete when KUDO is ready. This is a simple example, and it works great for a released version. We also have the case where we're going to install the CRDs but we're actually running in development — in our CI environment — and within CI we want to run what was built, not necessarily what was released on Docker. In this example we have background: true set, so at that particular point, for commands like the one on line six we wait until they're complete, but things that are marked background are started and we don't wait for them; you would have to have some other mechanism after starting the manager to indicate that the manager is ready. Okay, I'm having some sound difficulties. I'd like to move on to looking at some Cuddle in action — if I can get confirmation that you can see the page, that would be great.
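The two KUDO patterns just described might look roughly like this as a test step — `./bin/manager` is a made-up path for a locally built controller binary:

```yaml
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  # released version: init KUDO and block until it is ready
  - command: kubectl kudo init --wait
  # CI / development: start the locally built manager without waiting;
  # some later assert must verify the manager actually came up
  - command: ./bin/manager
    background: true
```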
Okay, great. A couple of things to note. The first is that I have this test directory, and the test directory has a test — so this is the test suite, and this is the test. If we look at list-pods, you can see that I have my pod and then I have my assert. This came right out of our examples; it doesn't really test against status, because it's designed to run against a mocked control plane. The CRD test is rather different: in the step with the CRD, I have a CRD that I'm creating, I'm asserting that it exists with a status indicating it's established, then I'm actually using it, and then confirming that that occurred. So this is basically an apply — and I know it looks the same, but this is confirming that I can apply it. Then, as a quick example, here's our test run — I want to bring it to the top. So, kuttl test in the directory, and in this particular case we're using a mocked control plane, which starts up very quickly. But I could just as easily switch this around to startControlPlane: false, in which case I would need a cluster to connect to — I'm essentially overwriting the configuration that's there, and in that particular case it won't work because I don't have a running cluster. I would also add that we have, I think, reasonable documentation — obviously open to pull requests, and we'd love feedback — with details on running your first Cuddle test and on the CLI and how to use it, but probably more important are the deeper explanations of asserts and errors and what's possible, along with details on steps and the modifications you can make there. So that's the core of Cuddle. Let's switch back, and now I think we have an open discussion on the future of Cuddle. As far as what to expect: we had our first release in March, and we have a roadmap in place to do
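For reference, running the demo from a shell looks something like this — the flag name follows the KUTTL CLI as I recall it, so verify against the CLI docs:

```shell
# run the suites under ./tests against a mocked control plane
kubectl kuttl test --start-control-plane ./tests

# with no mocked control plane, the same suites run against
# whatever cluster the current kubeconfig points at
kubectl kuttl test ./tests
```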
some pretty active development over the next several months to add features that we specifically are interested in, but we're also very interested in what the community might want or need. As a comparison — I don't think I have this available yet; maybe I can hunt for it on the fly, let's see here. By comparison, if we look at what we have in Kubernetes proper, and what people have been doing to accomplish similar things, like just a simple get: it's all essentially bash scripts. I'll let everybody else be the judge of what's easier to read, whether it's a layout of YAML or this, but to me this feels like the target: hey, perhaps we can make this easier to read and cheaper to own. So that's where I'm going with the future of Cuddle. I also envision potentially making a test suite folder an artifact, like a tarball, so those conformance tests are portable and usable on any given cluster — potentially handing them to end users to validate that things are as expected, or maybe even providing asserts for what I expected and getting diffs back explaining why it was different from my expectation. So with that: we did have a release in March, as indicated, though it's been a year in development, and I think we can open it up for some Q&A. Cuddle is available online; it's open source; we are active. We haven't had enough community activity with Cuddle specifically to warrant its own Slack channel — we've been managing that within the KUDO channel, and so far that's worked out very well, but if we see an uptick in demand we're obviously going to make sure we provide the best experience for connecting to the team. We do have a KEP process — I will add that it's more of a KUDO enhancement process, and all
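To illustrate the bash-versus-YAML comparison: the imperative approach is typically a loop of `kubectl get`, sleep, and retry, while the Cuddle equivalent is a single declarative assert file that the tool polls for you. The deployment name and replica count here are invented:

```yaml
# 00-assert.yaml — replaces a bash poll-and-retry loop;
# Cuddle waits until this state is reached or the step times out
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
status:
  readyReplicas: 3
```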
that means is it's a Kubernetes enhancement process, minus some things — a reduced version — but if you're already familiar with the Kubernetes KEP process, I think you'd find it very familiar. And why don't we open up to questions, Gerred? All right, awesome. Thank you, Ken; thank you, Gerred. Everyone in the audience, if you have any questions, feel free to drop them in the Q&A. Just one remark from my side: believe it or not, your release day, the 26th of March, is my birthday, so I don't know how it came about that I'd moderate this webinar for you guys, but it's really nice. So everyone, please drop your questions in the Q&A; we can wait a few minutes. One thing I'll add in the meantime, while we're waiting for questions: there are a couple of groups we've been talking to. We've introduced this to Kubernetes testing as a potential option to help make things more robust. We've also been talking to Red Hat a lot about ways that OperatorHub and Cuddle can work together, and they are very active participants in our project as well. Orlin — yes, we can hear you. And then D2IQ itself is also using it to power everything from conformance of operators to other tooling, and we have a lot of other users coming out and using it for very simple things as well. One thing I'll actually volunteer is that there's another great tool out there called conftest, and conftest and Cuddle differ in a very significant way. It's one of the things I ask myself: what tool should I use? Conftest is really about static asserts, whereas Cuddle is really about testing how this actually runs in a real, live environment. So if you're looking for a unit-testing-style framework along those lines, conftest is a great option, and Cuddle is more on the integration and end-to-end side of that equation. And we have plenty of KEPs
open around porting some of those things, like Rego, over into Cuddle, so that these assertions can happen in multiple ways down the road — we've got a large roadmap of things we want to do with Cuddle. Yeah, that's great. And I apologize, I can't believe I forgot to mention the Red Hat integration: we're looking at the Operator SDK creating a scorecard for maturity, which is mostly their effort, but with integration with Cuddle for conformance — so it's super exciting. All right, it looks like there are no questions submitted, so with that, thank you again, Gerred and Ken, for a great presentation. That's all for today. Thanks, everyone, for joining us. The webinar is recorded, and the slides will be online later today. We're looking forward to seeing you at future CNCF webinars. Have a great day, evening, or afternoon, depending on which part of the globe you're in. Thank you, everyone, and see you next time. Bye-bye.