Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Paulo Simões, a CNCF ambassador. Every week we bring a new set of presenters to showcase how to work with cloud-native technologies. They will build things, they will break things, I hope so, and they will answer your questions. Please join us every Wednesday at 3 p.m. Eastern. This week we have Andi and Jürgen here with us to talk about Keptn. I think that's how you say it; please correct me. Also, KubeCon + CloudNativeCon Europe comes to you virtually May 4th to 7th, so join to learn the latest from the cloud-native community. This is an official live stream of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct. Basically, please be respectful of all of your fellow participants and presenters. So, with that, I'll hand this over to my friends Jürgen and Andi to kick off today's presentation. Jürgen, Andi, thank you so much for joining us today. Please correct me, how do I say it, Kepting? Kepting? I'm so sorry. I want to learn with you. Let's break the code. Yeah, that's good. Jürgen, do you want to teach him German or not? Yeah, so first of all, thanks for having us. It's really great to be here and to share our story in this live stream. So it's actually pronounced "Keptn". We are from Austria, we're German speaking, so to us it sounds like the captain of a ship. Two years ago, when we came up with this, we thought it was really cool, because our initial goal was to ship applications, and nobody ships applications better than the captain. That was our idea, and we have a lot of those nautical terms in the project and in the software. So we are keeping up these terms, let's say. Great, Keptn. Oh my God, why can I not remember this. Great, great, great.
Could you please introduce yourselves? Talk a little bit about you, whether you are working on this project as contributors or maintainers, and a little bit more about the project itself, please. Sure. So I got involved in the project from its very beginning; we started about two years ago. I was one of the core contributors and really wrote some of the first lines of code. But then I transitioned a little bit more into the role of leading the ecosystem around Keptn. We have a lot of tool integrations. We are heavily involved with all the CNCF tooling: we have the integration with Helm, with Litmus, and we're heavily using CloudEvents. So we are really picking up all these cloud-native technologies, and this is kind of my background. I still know a lot of the source code of Keptn and I'm still maintaining the project, but now I'm really taking care of the ecosystem: the CNCF ecosystem and the open-source ecosystem around it. Very good. And then I think Jürgen and I started at the same time, because we were both there at the inception of the product. My focus, however, is on adoption of Keptn, so really making sure that our users are successful with adopting Keptn. There's still a way to go. I mean, we have great adoption already, but we also learn something new every day about what we're missing and what is complicated. As you said, sometimes things still break, and I think that's also expected with a 0.x release. But yeah, my goal is to really make sure we get Keptn out there in all of its different use cases. My background is performance engineering. I've been working in performance engineering for the past 22 years now and have been trying to get a lot of performance engineering use cases prominently featured in Keptn. Great, great. We should schedule another day to talk about performance engineering. It's amazing, very interesting.
And you're from an observability company; I like that. Great, great. So thank you. Let's go on. Let's see Keptn. And I think, Jürgen, I'll just kick it off with a little overview. So I will try to share my screen, and I believe I was told I need to give Libby a heads up that she actually makes this possible. Or is this the right one? Ah, it was the wrong one. Let me try this again. I'll share this one and then move this over, and now we are good. There we go. Perfect. So I encourage everyone that has never heard about Keptn to visit our website; I think that's the best way to get started. keptn.sh is the official website. Now, one thing we've learned the hard way is that a .sh domain is pretty cool and nerdy, but not every organization allows you to browse to .sh domains. Still, give it a try. If you cannot get there, you can obviously also find us on GitHub. We have three different organizations. On the one side there is keptn itself, our core organization with Keptn core, the spec, enhancement proposals, the website, examples and so on. Then we have keptn-contrib with our core integrations and contributions. These are, as you can see here, very heavily focused on monitoring integrations, because Keptn heavily relies on pulling data from the underlying observability platform, whether this is Prometheus or Dynatrace. We also have an Argo integration, the Argo Rollouts service in particular, among the contributions in keptn-contrib. And then we also have keptn-sandbox. This is where every extension that you build — we call them Keptn services — starts to live. We also have a great template.
That means if you want to get started, if you want to build an integration with Keptn, you would start here by just following the really excellent tutorial and using the template that was mainly curated by Christian, one of our core contributors. So this is something that you want to know. The other thing is, if you go to the website — and actually, you've done pretty much all of this, right? Now it feels like I'm taking some of your credit, so excuse me for that. But besides obviously explaining what Keptn does from a high-level perspective, the website brings you to the tutorials that we want you to walk through. We're using CodeLabs and we have a couple of different tours. You can see they're all sorted by version. Currently the latest Keptn version is 0.8.1, which is why this defaults to it; for the previous versions we have a few more tutorials. We still have to, let's say, convert some of these tutorials to 0.8. 0.8 is rather new and we haven't had the time. But most important is that we have full tours through Keptn using Prometheus as a data source, and full tours with Dynatrace. The reason why we feature Dynatrace heavily is that we are both working for Dynatrace, and therefore we always wanted to make sure that Dynatrace has a great integration. Keptn in a Box is a pretty cool tutorial from our colleague Sergio. He built it so you can just stand up Keptn on any Linux box. And then, Jürgen, this is something that I think you will probably show later on around resiliency engineering, where Keptn is orchestrating performance tests and chaos tests. So Keptn is battle-testing your environment and then telling you how your system behaves under chaotic situations. So this is what I would love everyone to know.
The other thing to know is we have a Slack channel on the CNCF Slack; you'll find a channel called #keptn. We also have our own Slack workspace, because traditionally, before we donated our project to the CNCF, we had our own Slack, and you can still get there at slack.keptn.sh. And there are monthly meetings and so on where the community can chat and participate with you. Maybe you can — I mean, it really feels like I'm taking a lot of the laurels from you, so maybe you want to quickly talk about the community. Can you just let me know what I should share? We can return to the front of it, no problem. No, I think it fits perfectly now. Maybe you can quickly talk about some of the key items. Yeah, I think the audio was briefly cutting off, but if I understand correctly, now it's about the community. So we've built the community page into the Keptn website and we have a couple of different channels, let's say. We have our own Slack. We have a mailing list. And if you scroll down a little bit, we also have our "ask an expert" sessions — basically a session with me, whenever you feel like you want to talk one-on-one about a couple of questions. And we also have our Keptn user groups and our developer meetings. We initiated these because we saw that more people want to contribute to Keptn. We'll see later on how Keptn orchestrates different tools that you might already use in your organization, such as Argo Rollouts, JMeter for testing, Litmus Chaos, or other tools like Helm for deployments. And we saw the urge from the community to see more of how new services can be contributed, which is why we also came up with the developer meetings. They are each Thursday, 5 p.m. Central European Time; everyone is very welcome to join. And in our user groups, we are sharing adoption stories from Keptn users with the broader Keptn community.
Sometimes they are more focused on performance engineering, like Andi explained earlier. Sometimes they are more focused on the quality gates aspect: how to prevent bad builds from reaching production. So we have different meetings there. And one thing that we also came up with, because we see a lot of engagement from the community, is our Keptn Community Rockstar program. Actually, we are about to announce our next Community Rockstar tomorrow in one of our meetings, because it's the end of the quarter. We have already awarded three of our, let's say, Keptn friends as Community Rockstars. One is actually an organization, and the others are individuals that heavily contributed back to Keptn; maybe you've already seen them speaking about Keptn somewhere. So we really have a very loyal community here, I would say. A big shout-out to the community, and thanks to all of you that are contributing here. Awesome, awesome. Good. Excellent. Let's show the Rockstar page. Exactly. So I'm actually not sure: do you have a rough idea of what Keptn does and which problem it solves? I'm asking you. Oh no, explain it to us, please. I'm here to learn with you, so let's go. Okay, perfect. So what we have seen is that a lot of us software engineers are trying to figure out how we can automate delivery, and how we can automate operations as well, because we are getting more and more responsible for pushing out features. And if you're responsible for operations, you also want to automate how you react in case something is wrong in production. And I think a lot of us are using tools that you're familiar with, automation tools like Jenkins, where you can do magic things. I always call it a Swiss Army knife.
But we thought: if a thousand people have already built Jenkins pipelines that basically do what everyone wants to do — take an artifact, deploy it into an environment, run tests, figure out if the tests are good, then maybe reach out to your monitoring tools, your security tools, your log analytics tools to figure out whether anything else is going on that prevents us from promoting, and then, if everything is good, push it to the next stage — then there's a lot of boilerplate code we've all been building in our automation pipelines. And we thought we want to provide an opinionated approach that makes it much easier to define delivery processes, and also processes around operations. So if you want to get started with Keptn — because you want to use Keptn to automate performance testing, delivery, quality gates — the easiest way is really either following one of our tutorials or using Keptn in a Box. There is a great, easy way to install Keptn. The only thing you need is a Kubernetes cluster. Then you just download the Keptn CLI and do a keptn install. And there are two flavors of installation. Keptn itself, the core, is a control plane that controls processes. And then you have execution components that can, for instance, trigger a deployment, trigger a test, trigger an evaluation, promote one thing to the next stage, and so on. So when you install Keptn on a Kubernetes cluster, you can decide whether you just want to install the so-called control plane, which contains the features for quality gates and automated operations, or whether you also want to install the execution plane for the use case we call continuous delivery, which includes everything. And you can install everything on one Kubernetes cluster. This is actually what I have done on my machine.
Let me just walk over here. So if I clear here — I'm actually running on an EC2 instance with K3s installed, so I have a Kubernetes cluster. And just to show you that I'm not lying... Andi, sorry, could you please increase your shell font size a little bit? Of course. Let's see. Is it better? Better, much better, thank you so much. So when you install Keptn, it comes with a couple of components. As I mentioned, Keptn at its core is a control plane that later on manages and orchestrates my processes for me. It's an event-driven system. For instance, some of the things we have here: we are using NATS for eventing. We have our so-called shipyard controller that manages what we call automation sequences. We have a MongoDB where we store all of our events. We have a configuration service where Keptn internally keeps all configuration files — we have a Git-based, configuration-first approach, so all of our configuration files are version-controlled in a Git repository that we are hosting here. We obviously have an API endpoint. We have the bridge, which is our UI. We also manage secrets, so we have a secret service. What else is interesting? The lighthouse service is the component that takes care of SLI and SLO validations — service level indicators and service level objectives for our quality gates. And there are a lot of other services that you can then install that can participate in the event stream of Keptn. Because we will see them later: I have JMeter installed, I have Argo installed for Argo Rollouts, I'm using Dynatrace as a monitoring tool, I also have the ability to deploy Helm charts, and I also have a so-called generic executor where I can have Keptn execute any type of webhook, any type of Python script, or even shell script. So this is basically what you install in order to use Keptn. Sorry, can I ask one question?
You said that you use MongoDB to store the configuration database, right? No, not the configuration — that's Git. Mongo stores the individual events. Great. One question: did you choose MongoDB for some specific requirement? And is it possible to change from MongoDB to, for example, another CNCF project like etcd? So we didn't pick Mongo for any particular reason, I think. When we started the project, we were looking for a document database that meets our needs, and that's why we picked Mongo. You can create your own kind of service: basically, there's the Mongo datastore service that is really storing the data, because in Keptn everything is event-based. I mean, Mongo is internal to Keptn itself, but in general, yes, we should be able to also replace the Mongo store. Oh, great, that's the idea. Great. We have one question here; should we take it now, or do you prefer at the end? No, it's good, I see the question: do all the components get installed by default when installing Keptn? So, when I go back to my docs page — when you run keptn install, depending on whether you say control plane only, it will only install the core components: everything that is needed to manage the processes, the data store, the API endpoint, the UI, and also the lighthouse service, which is responsible for quality gates, and what we call the remediation service for automated remediation, because this is part of the control plane. On top of that, you can install whatever tools you want Keptn to integrate with. In our terminology, we call this the Keptn uniform: if you think about a captain that steers a ship, a captain wears a uniform, and in this case the uniform defines what type of additional tools are installed. So you can install any additional tool yourself; there are a couple that come with the core installation. All right.
Now let's get into the product, because I think I've been talking a lot. Keptn is organized in projects. What you see here is my current Keptn installation — this is what we call the Keptn bridge. I'll make it bigger because I know that the screen size is a little limited. We are using Keptn for different use cases. I want to show you the use case that we initially had in mind when we designed Keptn, because we really wanted to solve the problem that people have to write very long pipeline files for multi-stage delivery, where we deploy, we test, we evaluate, and then we promote into the next stage. So here, within a project, Keptn has stages, and within a project you also have services. You can have one service, five services, ten services; these basically represent your microservices, though it can be any type of component within your application that you can deploy. In my case, I have one service in here. So we have a project, the project has two stages, and then I have a service that I can now have Keptn run through that process. And as you can see here, I have played around a little bit today. What you see on the left is a list of events that each triggered an automation sequence — we call them sequences. And if I click on the last one, the one that I triggered about half an hour before we started, I was deploying a particular version of my simple node service. On the right side, Keptn shows me what happened: it shows me that it was initially deployed into staging, and I immediately get an overview from Keptn of the things that are important. I automatically get the evaluation results, because what Keptn does, very opinionated: it takes my component, my image, it deploys it, it runs some tests, and then it evaluates it against metrics.
We call them SLIs, and in the end it calculates a score, and based on that score decides whether this is good enough to go into the next stage, which is production. So I can see how it went in staging and how it went in production. Now, the big question is: this all looks very nice from the outside, but what happens behind the scenes? Behind the scenes, if you remember, Keptn internally holds a Git repository. When you create a new project in Keptn, the first thing you need to do is say: Keptn, here are the sequences that I want you to automate for me in each individual stage. And when you create a project, Keptn automatically also creates a Git repository, an internal one. But then, what I've done is I've given it an upstream. You could upstream it to Bitbucket, GitHub, GitLab, whatever you want; I'm just using my own Git server here as my kind of web UI. And what you see here is that when I created this project, the only thing you need in order to create a project is a so-called shipyard file. You may remember that there was a shipyard controller earlier. Now, in this file I specify which stages I want to have — I have a staging stage — and what type of automation sequences I want. In my case, I specified that in staging I want an automation sequence for delivery, and in that stage Keptn should first do a deployment, then run some tests, then run an evaluation, and then ask the user for approval. Can you increase the font size a little bit here? That would be cool. Yeah, perfect, thanks for that. I guess I'm just spoiled because I see the full screen in front of me and not the smaller version. So the point is: you specify a stage, and then what you want to automate in that stage. I say delivery, and then you specify tasks.
And every task can have additional metadata, because otherwise Keptn has a very opinionated approach to what should happen on, say, a deployment. But here I say: hey Keptn, later on when you trigger a deployment, whatever tool is consuming that event — that is basically acting upon the event — I want that tool to know that I have a user-managed deployment strategy. User-managed in this case means a user-managed Helm chart; this could also be blue-green, direct — there are different options. Or for testing, I say I want some tests to be executed, but I want the test tools that are listening to Keptn events to know that I want to run performance tests. Okay? So the nice thing here is we have a complete separation between how we define the process and what the tools are doing. You see here, there's no tool definition; there's no hard-coded wiring between what should happen in this stage and which tool should pick it up. What we've implemented in Keptn is a separation between the process definition and the tools, which are listening to it through events. And then Keptn has an opinion about what should happen between the tasks. For instance, it knows that when an evaluation happens, the evaluation will evaluate the results, and based on that say this is good or not good, and then it continues with the next step. So there are certain things we've built into the automation here. And if I go back to my project now, my demo rollout that I showed you earlier — I showed you on my version number one that I only see the things that are important. This view really just shows me: hey, was everything good, yes or no? What I can also see behind the scenes are all the individual details of what happened. For instance, if I click on "view sequence", I now really see what happened in staging and what happened in production.
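The shipyard structure just described might look roughly like this — a hedged sketch based on the Keptn 0.8 shipyard format, where the metadata name and property values are illustrative rather than copied from the demo:

```yaml
apiVersion: spec.keptn.sh/0.2.0
kind: Shipyard
metadata:
  name: shipyard-demo
spec:
  stages:
    - name: staging
      sequences:
        - name: delivery
          tasks:
            # each task is picked up by whatever tool subscribes to its event
            - name: deployment
              properties:
                deploymentstrategy: user_managed   # e.g. a user-managed Helm chart
            - name: test
              properties:
                teststrategy: performance          # hints to the test tool which tests to run
            - name: evaluation                     # handled by the lighthouse service
            - name: approval                       # waits for a human to approve
```

Note that no tool names appear anywhere: the tasks only describe the process, and tools opt in by subscribing to the corresponding events.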
Now this should look familiar, because these are exactly the individual tasks that I specified should be executed. Remember: first there should be a deployment, then a test should be executed, then an evaluation should be triggered, and at the end a human should approve it, yes or no. I've already done this earlier. And to make this a little easier, when Keptn is actually executing my sequence, it is sending an event, and we actually show you that event. We have standardized here on CloudEvents, which is another CNCF project. That means all the communication, all the events that Keptn sends out, that are then consumed and subscribed to by your individual — we call them Keptn services: testing tools, deployment tools, delivery tools, notification tools. They simply need to say: I'm interested in a particular type of event, and I know that if an event comes in, I can expect a certain data structure that gives me some additional information on what I should do. In this case, Keptn sent out the deployment triggered event. And in my case, I have my Helm service. The Helm service is listening to that event, and the Helm service says: hey, you know what, I think I know how to do that type of deployment. If it decides to participate in that workflow, it first sends a so-called "started" event. It basically says: hey Keptn, I am the Helm service, and I am starting the deployment. This is important so that Keptn knows whether one tool, many tools, or no tool is handling a task. And then it also waits until that tool is finished. When the tool is finished, it sends back: hey, I am the Helm service, I have just worked on that task, I am now finished, and here are all of my results — and maybe some additional information like the URL that I deployed the new application to.
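The event exchange described here can be sketched like this. It is shown as YAML for readability — CloudEvents are actually serialized as JSON on the wire — and the field values are illustrative placeholders, not captured from the demo:

```yaml
# Keptn's shipyard controller emits a *.triggered event:
specversion: "1.0"
type: sh.keptn.event.deployment.triggered
source: shipyard-controller
id: <event-id>
shkeptncontext: <keptn-context-uuid>   # ties all events of one sequence run together
data:
  project: demo-rollout
  stage: staging
  service: simplenode
  deployment:
    deploymentstrategy: user_managed
```

A tool that wants to handle the task answers with a `sh.keptn.event.deployment.started` event, and, once done, with a `sh.keptn.event.deployment.finished` event carrying its result and any extra information, such as the URL of the deployed application.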
Now, one last thing — before, I'm pretty sure, you have questions — one thing that I completely missed: you may wonder, how does Helm know which Helm chart? How does the JMeter test know which JMeter script? How does the lighthouse service know which metrics to evaluate? Because none of this information is here; this is just the process definition. Let me show you: when you create a new project in Keptn, not only do you give it the process definition, but for every stage, Keptn automatically creates a branch. I have a prod branch and a staging branch. I'll take prod here, because if I scroll a little further down, I have some additional things for prod, some more sequences. But if we go to the staging branch, the idea is that for every single tool that participates, the end user of Keptn can simply upload the necessary additional configuration files to that Git repo. For instance, here we have our Helm chart. Our convention is: if you are using a Keptn project and you have multiple services that you want to deploy, every service has a unique name, and therefore every service has a subfolder with its name. Underneath that folder, for every specific tool that you onboard, you add the tool-specific configuration files — Helm has its Helm charts. So when I go back to my workflow and Helm says, hey, that's interesting, there's a deployment request — a deployment request for the demo rollout project, for the simple node service, in staging — the first thing that the Helm service does, before sending back "yes, I can do it", is ask Keptn: hey Keptn, in your config repo, in staging, for the simple node service, do you have information for me that the developer, the architect, the DevOps engineer, whoever, has uploaded, so that I can actually do my job? Because if nothing is there, then I'm not doing anything.
And the same is true for other things. Here I have my SLO files, my service level objectives. This is for the evaluation that you saw earlier: what type of metric do I want Keptn to analyze for me? In this case, it's a metric called response time p95, with some criteria. Now, this SLO is completely tool-agnostic; it doesn't say where the metric comes from. But I have enabled the Dynatrace integration, and therefore the Dynatrace subfolder includes a monitoring-tool-specific SLI file where I have specified how Dynatrace, when it is triggered, knows how to query this data. These are just query languages; you have the same for Prometheus with PromQL, and for other data sources as well. That is the way this works. And now, one more thing: you may notice there's no JMeter folder, but I have JMeter tests. How does this work? Well, we also have the notion that you can put config files in a stage, or you can put them into the main branch. In the main branch it works like inheritance: I can upload global config files. So here I have some global test files that can be used by JMeter for any service in any stage. The strategy for any tool is: first look at the particular branch where I am right now — say, staging — and see if there are any JMeter files for that branch or for that particular service. If not, then look in the main branch, because it's kind of like an inheritance. This also makes things much easier and reusable; you can share components with each other. Amazing. And, if I understood correctly, when you create this project, you have this kind of template that you can fill in? Yeah, exactly. When you create a new project, the only thing you need is a so-called shipyard file. This basically specifies how many stages you have and what type of automation sequences you want Keptn to orchestrate for you. And you're completely free.
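The tool-agnostic SLO file and the Dynatrace-specific SLI file described above might look roughly like this — a sketch based on the Keptn SLO/SLI file formats, where the thresholds, scoring percentages, and the metric query are illustrative, not the demo's actual values:

```yaml
# slo.yaml -- tool-agnostic: says *what* to evaluate and how to score it
spec_version: "1.0"
objectives:
  - sli: response_time_p95
    pass:
      - criteria:
          - "<=+10%"   # no more than 10% slower than the previous evaluation
          - "<600"     # and below 600 ms in absolute terms
    warning:
      - criteria:
          - "<=800"
total_score:
  pass: "90%"
  warning: "75%"
---
# dynatrace/sli.yaml -- tool-specific: says *how* to fetch each metric
spec_version: "1.0"
indicators:
  response_time_p95: "metricSelector=builtin:service.response.time:percentile(95)"
```

The lighthouse service only ever sees the SLO file and the metric values; the monitoring integration translates each indicator name into its own query language, which is what keeps the evaluation portable across Dynatrace, Prometheus, and other data sources.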
The one I showed you — I just showed you one little piece of it, but I also have some other things here. In staging, I have my delivery sequence, which I showed you. Then I also have a rollback sequence, and you can see here that the rollback sequence will be automatically triggered in case the delivery sequence above fails. So this is also a nice way to chain sequences together through conditions. Similarly, in production, I say I have a delivery sequence, and it should be triggered in case the staging delivery finished with a pass. Then it should go on in production: do a deployment, do a test, do an evaluation. And I even have canary phases — I'm actually using Argo Rollouts in this example, and I run four different phases of canary rollouts. So you're completely free. But, because I told you we want to be opinionated, I want to quickly show you another project, because I want to show you how easy it can be to get started. A very simple shipyard file could look like this: a shipyard file that specifies a single stage called quality gate, which is, I think, our number one use case right now. Quality gate basically means I want to run an evaluation. Maybe you have your Jenkins, your GitHub, your GitLab, and you have already done some deployments and some testing, and the only thing you want to automate is the evaluation of certain metrics over a certain timeframe. In this case, this sequence here is doing an evaluation. I have two tasks — I could even skip the first one, but the first one is actually very interesting. It's called Monaco, which stands for Monitoring as Code. This task is picked up by my monitoring integration to make sure my monitoring tool is correctly configured.
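The minimal quality-gate shipyard just described might look roughly like this — a hedged sketch in the 0.2.0 shipyard format, where the stage and metadata names are illustrative:

```yaml
apiVersion: spec.keptn.sh/0.2.0
kind: Shipyard
metadata:
  name: shipyard-quality-gate
spec:
  stages:
    - name: quality-gate
      sequences:
        - name: evaluation
          tasks:
            - name: monaco       # optional: lets the monitoring integration apply its config first
            - name: evaluation   # the actual SLO-based quality gate
```

An external CI system like Jenkins or GitLab would then trigger this sequence after its own deploy and test steps, and let Keptn return a pass/warning/fail verdict.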
So if you're, for instance, assume that you have tool tool X, and you want to make sure that tool X is properly configured that all the metrics are there that you really then later need for the evaluation. And yeah, it's a simple use case. And it's a simple process definition. And one question. You can see that it's a YAML. You plan the project. Captain has the plan to to have a version of this kind of management construction using its portal. It's, oh my God, dashboard. Yeah. Yeah, when you can, where you can, yes, we had here where you can put the stages and the stack steps, tasks that Yeah, we have this here, right? This is all, this is all here the visuals, right? So for instance, here is my staging and production. I also have other projects, for instance, with that with a three stage pipeline, I have dev staging and production. If you go back to, if I go back to the previous project, the demo rollout, and you go to sequences, here, you can't exactly see what sequences are. I mean, if your question is, are we planning on a visual editor? Yes, I'm pretty sure this is something that we know we will do in the future. But I think we have, we currently have other priorities, I would say, then doing a visual editor for the shipyard file. I'm pretty sure it's somewhere in the roadmap already. But right now, there are so many other things that we believe bring more immediate value. Because to be honest with you, you define that shipyard file once when you create the project, you can always edit it, you can edit it and you can add more things to it as you go. But I think from a visual perspective, we want to invest more in better visualization here and more interaction with the UI first than with an edit capability. That's one thing that we have, do you remember FF, what's it called, shipyard? Uniform. Uniform, thank you. So for instance, we already have in the current version a mock-up of one thing in here that will come. 
So remember, I talked about the uniform. Uniform means: which services do I currently have installed, either on the same Kubernetes cluster or on remote Kubernetes clusters, because with the latest version of Keptn we now allow remote execution planes. You then want an overview of what is currently installed: which services are out there, where do they run, which events are they subscribing to, are they healthy, and how are they configured? So you can now centrally see which services are handling which events. Because the great thing about Keptn is, think about it: if I go back to my shipyard file, the one I had earlier in the demo rollout, there's no tool definition in here. When I execute this sequence, Keptn says: I need somebody that can run a performance test. Maybe today I'm using JMeter, which works for me, but maybe tomorrow I want to switch to Locust, a tool whose team Jürgen has just recently worked with. The nice thing is I can switch tools without having to think about where I have hardcoded calls to that tool in my pipeline, because we have taken care of this. There's no need anymore in your pipelines to trigger the tool, parse the results, and then do an if-then statement if it fails. We have taken care of the tool integration, but also of handling the results. And the reason why it is so much easier for us to do it is that we have standardized on events: on an open standard called CloudEvents, which allows every tool to easily participate and then also easily send back its results.
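To make the "standardized on events" point concrete, here is a sketch of what a Keptn CloudEvent for triggering a test might look like. All values are invented for illustration; the exact payload fields are defined by the Keptn CloudEvents spec for your version:

```json
{
  "specversion": "1.0",
  "id": "0f2cf4ef-1a2b-4c5d-8e9f-000000000000",
  "source": "shipyard-controller",
  "type": "sh.keptn.event.test.triggered",
  "datacontenttype": "application/json",
  "data": {
    "project": "demo-rollout",
    "stage": "staging",
    "service": "simplenode",
    "test": {
      "teststrategy": "performance"
    }
  }
}
```

Because the `type` and `data` shapes are standardized, JMeter, Locust, or any other tool integration can consume this same event without the shipyard file ever naming the tool.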
So that means, if you want to write your own testing tool or test-tool integration and you want to participate in the test step, you know that you will get an event that tells you the project, service, and stage, which gives you access to the Git repository where you can find your tool-specific test files, and it will also contain a test strategy so that you know which type of tests you should run. And if you decide to participate in Keptn, then at the end of the test you just send back a test-finished event with the result, and this allows Keptn to say: the tests are great, let's go to the next stage.

Andy, can I forward a question here from the community? It fits this conversation. It is from Deepak: are there any performance test tools for Kubernetes clusters?

Yeah. So, for Kubernetes clusters: if I go back quickly to my overview, in my case I use JMeter here, and Jürgen, you've worked with the Locust team and I think you're also working with the Artillery team. If you write a Keptn service, meaning a service that consumes the Keptn event, that integration can decide where the test is actually executed. In our case, for JMeter, this is a container that receives the event and then also executes the tests within that container. So if the question is targeted at testing tools that run their tests on Kubernetes, meaning generating the load on Kubernetes, then the answer is clearly yes, because you already have the JMeter service, and Jürgen, the Locust service is similar: you're executing Locust in that container. We also have an integration with Neotys' load testing tools, and they provide two options: you can either generate the load in the Kubernetes cluster or you can use their cloud load testing service.
I would really like to add one thing here, because you already mentioned it. When we started the Locust integration, one benefit of Keptn for us was that we did not have to change the shipyard file. We kept the shipyard file saying: I want to do a deployment, I want to execute a test, I want to do an evaluation, and then I want to promote or release to the next stage based on the evaluation. We did not have to change anything in this file; we just removed the JMeter integration, actually we scaled it down to zero, and we added the Locust integration, and the Locust integration then did the job of the performance testing tool. So exchanging tools is very easy. Until we get this uniform screen that Andy showed earlier, you can basically just scale your deployment down to zero and then add another tool, and you can always go back by bringing the old one back up to one or more replicas. And I think this is also very interesting here: this is actually the definition of the Locust service that you built, and it is basically deployed like any Keptn service. On the one side, it deploys the container itself that implements the actual action, and then we also have a so-called distributor that runs as a second container in the pod. This distributor is the one that actually subscribes to the Keptn events. We have made it easy so that not every service has to write its own subscriber: you basically just have a second container in your pod, and here you define which types of events you want to subscribe to. You can be specific, you can list multiple types comma-separated, and you can also use wildcards, so you can say: I want to handle every triggered event. An example would be our Slack integration, which listens to all triggered and finished events.
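The two-container pattern described here might look roughly like this in a Kubernetes manifest. The image names are placeholders, and `PUBSUB_TOPIC` is the distributor's subscription setting as described above; check the actual integration's repository for the real manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-service
spec:
  replicas: 1
  selector:
    matchLabels:
      run: locust-service
  template:
    metadata:
      labels:
        run: locust-service
    spec:
      containers:
        # container 1: implements the actual action (running the tests)
        - name: locust-service
          image: example/locust-service:latest   # placeholder image
        # container 2: the distributor, which subscribes to Keptn
        # events on behalf of the pod and forwards them
        - name: distributor
          image: keptn/distributor:0.8.0
          env:
            - name: PUBSUB_TOPIC
              value: "sh.keptn.event.test.triggered"
```

Swapping tools is then really just scaling, for example `kubectl scale deployment jmeter-service --replicas=0` before deploying the Locust service alongside.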
And then it just forwards these events to your Slack channel.

I see one more question in the chat, something that came in earlier: the Lighthouse service has nothing to do with Google Lighthouse. That's true. Funny enough, we've been calling our Lighthouse service Lighthouse for a long time; I'm not sure how long Google has used that name publicly, but we've been using it for a while. The Lighthouse service is basically the service that reaches out by sending an event saying: hey, monitoring tools that are listening, give me your values. The monitoring tools, Prometheus, Dynatrace, then reach out to the configuration repository to see which metrics they need, return them, and then the Lighthouse service really does the magic here. It compares the individual values against your SLOs, where you can specify pass criteria and warning criteria, either with a fixed threshold or with relative values, so you can do regression detection, and we can also calculate baselines across multiple builds. In the end, we score every line here. This is the same data as the heat-map visualization up here; on the bottom is a table with every result. You can see that the Lighthouse service looked at every metric, compared it, and then calculated a score, normalizing it so that in the end we get a total result between zero and 100. And you can also specify what your objective is. This is all specified, if you remember what I showed earlier; if I go back to my project here, for instance in staging, for my simple node service, I have an SLO YAML file where I specify: these are the metrics. I don't care how the metrics are fetched, that is for somebody else to care about, but I want to specify what's important for me.
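As a rough sketch, an SLO file combining the features mentioned above, fixed thresholds, relative regression criteria, a warning band, and a total score, could look like this. It follows the Keptn SLO file format, but the metric name and thresholds are invented for illustration:

```yaml
---
spec_version: "0.1.1"
comparison:
  # baseline: average of the last three passed evaluations
  compare_with: "several_results"
  number_of_comparison_results: 3
  aggregate_function: "avg"
  include_result_with_score: "pass"
objectives:
  - sli: "response_time_p95"
    pass:
      - criteria:
          - "<=+10%"   # relative: at most 10% slower than the baseline
          - "<600"     # absolute: below 600 ms
    warning:
      - criteria:
          - "<=800"    # between pass and this threshold scores a warning
    weight: 1
total_score:
  pass: "90%"
  warning: "75%"
```

Each objective is scored individually, weighted, and normalized, which is exactly the per-line score and the 0-to-100 total shown in the heat-map table.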
And then, in the end, you can also specify for the total score how it should be compared for pass and warning. If we take a look at the SLO file again, I think what's really interesting is that we also see a little bit here how Keptn orchestrates the different tools: we do not see directly where the data is coming from. Andy mentioned this already earlier; it is abstracted away here. So you can easily reuse this file for different data providers, and you can even reuse it for different services, because it doesn't have the service name in it. The service name is a placeholder in the API calls or in the PromQL queries behind the scenes. And what's also really cool is that, since Keptn is orchestrating the tools, and we were talking about performance testing earlier, it actually knows the time frame over which it needs to do the evaluation. Because of all the triggered and finished events, Keptn knows how long the tests were running and for which test the evaluation is executed. All the configuration is stored in the Git repository, and all the events are stored in Keptn's data store, and the combination of both really gives you the confidence that the evaluation is for the correct time frame and the correct service, so you don't get any noise in the evaluation. I think this was one part we really worked hard on: the evaluation phase is really doing a great job here, and you don't have to parse the data yourself.

Yeah, really amazing. We have a few minutes left to finish our great live stream. Let's think about the people that are just starting out in the community. I saw that Keptn is a sandbox project; it's a great opportunity for everyone who is starting in the community and wants to contribute.
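The placeholder mechanism described here can be sketched with a matching SLI file for the Prometheus provider. The `$PROJECT`, `$STAGE`, `$SERVICE`, and `$DURATION_SECONDS` placeholders are substituted by the SLI provider at evaluation time; the metric name matches the SLO sketch, but the actual query is illustrative:

```yaml
---
spec_version: "1.0"
indicators:
  response_time_p95: "histogram_quantile(0.95, sum(rate(http_response_time_milliseconds_bucket{job='$SERVICE-$PROJECT-$STAGE'}[$DURATION_SECONDS])) by (le))"
```

Because neither the SLO nor the SLI file hardcodes a concrete service or time window, the same pair of files can be reused across services and stages, and Keptn fills in the correct evaluation window from the test's triggered and finished events.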
In your opinion, how can someone who is starting now contribute? Contributing to a sandbox project is amazing, because it's the beginning of the journey. What can you say to the folks who want to start now? How can we contribute? What expertise do I need to have to participate in this amazing project?

I think I can take this question. First of all, we really appreciate it if someone wants to join our project and contribute. For this, we have a lot of issues on our Git repository tagged as good first issues, because we really want a low entrance barrier and to welcome new contributors. Here, for example, is a rather new one. We maintain these and create them regularly. They should not require a lot of in-depth knowledge of the project, so all the different core services should not really matter to you; they are there to let you get hands-on with the project. We have all the documentation on how to set up the project, and then you can just get started with coding. Most of the core is written in Go, and the UI is Angular and TypeScript. And what we also see from the community: it's not only about contributing code. We have the documentation and we have our tutorials, and these are actually two areas where we see a lot of contributions. Just recently we had Josh, I don't know him in person, who found the Keptn project and contributed a spell checker. So now we have a CLI spell check and a documentation spell check.
So it's not only about contributing code. If you are very good at writing documentation, or if you want to add tutorials because you just added your tool to Keptn and want to make it more visible to the broader community, you can start by writing a tutorial. For example, the Litmus tutorial that we can see here was created together with our friends from Litmus Chaos. It shows everything that will be done and achieved by the end of the tutorial, and we even give an estimate: you will finish the tutorial in less than 45 minutes, and you will get a full setup of Keptn, plus chaos engineering experiments, plus performance tests. And I can already spoil a little bit here: there is also a talk at KubeCon where we are building upon what we've already done, and what we've already seen adopted by some organizations that are actually doing their chaos testing with Litmus Chaos and their evaluation with Keptn. So all kinds of contributions are really welcome, and if you have any questions, you can reach out to us on the Keptn Slack; that's the easiest way to get in touch with the maintainers.

Great, Jürgen. And what can we expect from this amazing project in the next few months? Can we see you again at an upcoming event? Are you planning to present at KubeCon, or at a Kubernetes Community Days? What are you planning for the next months?

Do you want to go first, or should I? It was cutting out a little bit on my end. Maybe you can start and I can jump in.

Yeah, so I think the question was: what are the months ahead, what are we focusing on?
On my side, I will be heavily focusing on showing Keptn multi-stage, multi-cluster delivery, integrating and using tools like Argo Rollouts, as I'm doing here, and also showing users of Jenkins how they can modernize their Jenkins pipelines with Keptn. They don't have to throw away Jenkins, because Jenkins does certain things really well; but instead of building and maintaining all the logic that we have here in your Jenkins pipeline, let Keptn do the logic, and let Keptn call your Jenkins pipelines for the individual tasks, like a test or a deployment. So I want to focus on this. Another big topic for us is going to be auto-remediation, but for this I want to hand over to Jürgen, because Jürgen is actually leading the auto-remediation working group.

Oh, great. I want to invite you back to present exactly this part you mentioned about the integration with Jenkins. It's amazing; please come and do this.

Yeah, definitely.

I want to see more of this. It has really been an amazing session today. I learned a lot about Keptn; it was very interesting and will help a lot. We are finishing now, but I will give you the last moments to close our meeting.

Yeah, thanks. Thanks for having us here, and thanks, Andy, for doing all the hard work of presenting. I think we covered a lot, but there is even more. Andy mentioned it at the end: we are also working on auto-remediation, orchestrating auto-remediation sequences when it comes to rolling back, toggling feature flags, or executing any other kind of remediation action in response to a production alert from Prometheus, for example. So this is one part; maybe we can come back and also talk about this a little bit. If you want to give Keptn a try, please go to keptn.sh. I think we can also see the URL here.
Andy also mentioned our tutorials, and if you have any questions, please feel free to reach out. Thanks again for having us.

I look forward to meeting you again, because it was really amazing. I will invite you back, please accept; we should learn more about this project and the other things we can do with it. So, this is the end. Thank you, everyone, for joining us for this week's episode of CloudNative Live; it was great to have you. Andy, Jürgen, it was amazing, thank you so much for talking about Keptn. I learned that it's pronounced like Captain. We also really loved the interaction and the questions from everyone who came to us today. We will be back next Wednesday at 3 p.m. Eastern time with another amazing presentation and great guests who will show us the best of Cloud Native in the world. Thank you so much, everyone. See you, take care, stay healthy and safe. Bye-bye.