Thank you so much. All right, let me share my screen here. Thanks for the venue, and thanks to the Jenkins community for inviting me. Just to add to what Oleg said: if anybody wants to reach out to me, Twitter is one option. If you're not on Twitter, you might be on LinkedIn; here's my LinkedIn link. You can find me as Grabner Andi, with an 'i' at the end — that's always very important for me, because there are a lot of Andis with a 'y' and I always say I'm the one with the 'i'. In case you want to reach me via email, just use my full first name dot my last name at dynatrace.com, because as you can see I have been working for Dynatrace for 12 years and hopefully for many more years to come. So you can also reach me at andreas.grabner@dynatrace.com in case you don't like Twitter or LinkedIn.

Today's topic is a project that we've been working on for more than a year now. The project is called Keptn, and in case you wonder what Keptn stands for, it's the German phonetic spelling of 'captain'. I and a large part of the Keptn team are from Austria, so our native language is German — even though if you ask German people they would probably say Austrian is not really German, it's a strange dialect, and we have many of them even though we're a very small country. Anyway, Keptn provides a lot of different automation use cases for things we have seen in our own organization, and with the people we work with, as part of their continuous delivery life cycle. If you are interested in learning more about Keptn, go to the Keptn website, go to the GitHub repo, check out the tutorials, follow us on Twitter — you can find all the information there. Today I want to focus on one particular capability of Keptn: automating quality gates based on the concept that Google has spearheaded, at least from a terminology perspective, around SLIs and SLOs — service level indicators and service level objectives.

Before I go into the presentation, the live demos, and the discussion, a word on the poll questions. These are questions Oleg asked me to come up with, and for me it's really interesting to understand: are you familiar with the terms SLI and SLO? Are you already using them in your company, or is this completely new? What I'm showing you today works really well if you integrate it into a pipeline where you have automated tests, so another question is: do you actually have tests as part of your pipeline? And: do you already do build validation in a pipeline, and if so, are you happy with it, or is there room to do more? That's the gap Keptn can hopefully fill for you. The last question is also crucial for us. We are here today with Jenkins and Kubernetes, and Keptn runs on Kubernetes — whether it's any of the PaaS Kubernetes offerings the cloud vendors put out, your own Kubernetes, or OpenShift. So my question to you as a community: if you are in charge of your Jenkins pipelines, do you actually have access to Kubernetes? Would you be able to install a tool like Keptn on a Kubernetes cluster, or is this something you cannot do for whatever reason? We want to get a feeling for which deployment models we need to support in the future.
All right, let me get started. There are a lot of things I want to do, both from an educational perspective and as live demos. I have two major use cases prepared, but they're all centered around SLO-based quality gates. The first one is really about why we do all this: automating build approvals. Let me start with a problem statement — what are we seeing in our organization and with some of the people we work with? You may have your Jenkins pipelines where you build, deploy, and run your tests, but then very often there's a manual approval stage where somebody needs to look at the results. If you have unit tests or maybe functional tests, you at least have some data to say whether this is a good build or not — if there is a problem, it's a deal breaker, yes or no; it's a hard call for that quality gate. So that's already good if you at least have some functional tests.

Now, if you are like me — I've been working in performance engineering for the past 20 years, so performance, scalability, and reliability are very dear to my heart — the next question is: can we add more performance metrics to the mix so we actually get a better understanding of whether this is a good or a bad build, and whether it will break the next environment? I know there are already a lot of great tools and integrations with Jenkins, like the performance plugins that can pull in data from different testing tools. But the problem I see, at least with the integrations I know, is that there's still a lot of manual comparison. You can look at charts and graphs, but is the current build really better or worse than the other one? It takes time to figure that out. Still, it's already great if you have functional tests and maybe some load tests.

The next step we see from a maturity perspective is people adding monitoring data to the mix. When you deploy a build and run your tests, you may have Prometheus as a monitoring option, or an APM tool like Dynatrace, New Relic, Datadog, or AppDynamics — there are a lot of tools out there. I obviously put Dynatrace on the slide because that's the company I work for, but any monitoring tool will do. The great thing is that you get even more data — but often the people we talk to say: now we have even more data, and we don't even know what the data means or which data to look at. And some of these tools don't tell you which particular test and which particular build the data actually comes from. So yes, it's great to have more data, but by itself it still prolongs the process of approving the build. That's the challenge we want to solve.

So what was the inspiration for how we solve this problem? The first inspiration came, as I said earlier, from Google's SRE practices. In case you have not heard about SRE, it's site reliability engineering; there's a lot of great material out there, but essentially it rests on three pillars. The first concept is service level indicators, or SLIs. An SLI is simply a particular metric that you want to extract and base an evaluation on — for instance, the error rate of, let's say, a login test.
So if you deploy, run a test, and exercise your login, your logout, your add-to-cart, then you can define as an SLI: I want to know the error rate when I run my test — how many errors do I have? That's an SLI. The next thing on top of that is the SLO, the service level objective. This is more of a binding contract where you say: my objective for a particular metric is a certain target. For example, the login error rate of a new build, when we execute this set of tests, has to be less than 2%. Another objective could be that the response time of a certain API has to be faster than 100 milliseconds. These are my SLOs. The last pillar is SLAs, and these are probably more familiar because they have been used for many years in operations to define a business contract between you, as an organization putting out software, and the consumer — your end user, or whoever is using your software in whatever capacity. With SLAs you typically say: logins to your system must be reliable and fast — so you look at error rate, response time, and throughput within a certain time window — and you may even have legal contracts that say you have to be up and available 99% of a 30-day window.

These are the concepts Google has brought out, and if you want to read up on them there's a great Google Cloud YouTube video called 'SLIs, SLOs, SLAs' — here's the YouTube link, but you can easily find it. The way they put it, and I extracted this from them: SLIs drive SLOs, which inform SLAs. Now, all of these concepts have typically been used in production, which is great — but if we push something into production and then validate metrics and objectives there, why are we not using the same concepts early on, in development and in CI/CD, to enforce these SLOs already? After all, we already know what is expected later on. This is what inspired us to build what I'm going to show you.

Another inspiration, another implementation, actually comes from our own organization. As I said, I work for Dynatrace, and Thomas Steinmaurer, one of our chief performance architects, has been doing this for years: every time he gets a new daily build, he deploys it with Jenkins into an environment and monitors it — obviously we monitor our tool with our own tool, so we get monitoring data — and every day he runs continuous load and then validates how this build compares to the previous ones. He has a very interesting concept he calls the performance signature: what does the performance, what does the quality of the build look like? He let me look behind the scenes and said: you know, Andy, what I'm doing after every build, after we get a new deployment and run our continuous tests on it, is looking at multiple metrics — essentially SLIs like memory, error rate and so on — for the timeframe of the test, and then comparing and validating them against thresholds.
Either we have hard-coded thresholds, where we know we cannot cross a certain limit, or we simply compare with previous builds for regression detection. Then we look at all of these metrics and calculate an overall quality status for that particular build — that's what he calls the performance signature. And we thought: this is actually cool. We had been doing this internally, integrated into Jenkins; it was custom built, home built, and it worked great for us. So we asked: how can we take this and put it into the open source project Keptn so that everyone out there can leverage it?

Remember, Keptn is a larger project, but one of its capabilities is the quality gate capability. The way this works: the Keptn quality gate can talk to different data sources — your monitoring tools, your testing tools, or any other tool. It's extensible through an event-based mechanism, so you can write your own extensions. The most important thing, though, is that you define your SLIs, your service level indicators: the metrics that are important for you when you push a build through your pipeline. Typically your performance engineers, your team leads, or your architects define these: here are the metrics that matter, here is a name for each, and here is the query — in the language of the tool that can deliver that metric — behind it. This is specific to what we call the SLI provider, so it could be a Dynatrace query, a Prometheus query, or any type of query against your system. Then you also define your SLOs. That's another file where you say: I have all these metrics available, but now I want Keptn to not only retrieve the values but, for some of them, also enforce a rule. You can specify pass and warning criteria, and we allow you to mix static thresholds with dynamic thresholds where you compare between builds. Keptn then calculates an overall score: every metric is scored, we calculate a value between 0 and 100, and in that SLO file you also specify the total passing score.

The way this works at runtime: if you have Keptn in place, you basically say, 'hey Keptn, I want to start my evaluation' — you just ran your Jenkins pipeline, you deployed, you ran your tests — 'for the last 30 minutes, for this particular application, here are my SLIs and here are my SLOs.' Keptn goes off and pulls the data from these tools — one or many. Once the SLI values come in, Keptn compares them against your SLOs, your objectives, and every individual metric gets a score: one point if everything is good, half a point if it is in the warning range, zero points if it's failing. Then Keptn calculates how many points were achieved. Say you get seven out of eight total points — that makes 87.5%, which in my case counts as a good build because of the total score I specified; if it's four out of eight, it might just be failed. That's how it works, and it's very extensible with any type of tool.
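To make the SLI file a bit more concrete, here is a rough sketch of what such a definition can look like when Dynatrace is the SLI provider. The logical metric names on the left are my own illustrative choices, and the query syntax on the right depends on the SLI provider and its version, so treat this as an assumption rather than copy-paste material:

```yaml
# sli.yaml - logical metric names mapped to provider-specific queries (illustrative sketch)
spec_version: "1.0"
indicators:
  throughput:        "builtin:service.requestCount.total:merge(0):sum?scope=tag(keptn_project:$PROJECT),tag(keptn_stage:$STAGE)"
  error_rate:        "builtin:service.errors.total.rate:merge(0):avg?scope=tag(keptn_project:$PROJECT),tag(keptn_stage:$STAGE)"
  response_time_p95: "builtin:service.response.time:merge(0):percentile(95)?scope=tag(keptn_project:$PROJECT),tag(keptn_stage:$STAGE)"
```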
Now, to give you a slightly better example across builds: you have an SLI document that says these are my metrics — response time 95th percentile, failure rate, response time of my login test, number of database calls that my login test executes. So you specify the SLIs, and then you specify your SLOs: what are your objectives, what have you set out to achieve for the engineering team? Again, this is a combination of dynamic values, where you compare with previous builds or previous baselines, and static values, your hard-coded thresholds — and you define the overall goal, the overall score. Now build number one comes along and you say, 'Keptn, do the evaluation for my project, for this service' — you will see later that Keptn is organized in projects, stages, and services — 'and this is the timeframe.' Keptn goes off, pulls in the data, compares it; in this case everything is green and you get 100%. Build number two comes along; in my example two metrics are not good, so you're penalized down to 75% overall — that's a warning. Build number three comes along: the first two problems have been fixed, but now the number of database calls one of the services under test makes has gone from three to six, and you didn't allow that increase, so you're penalized again and you're down at 62.5%, which is a bad build, so you can stop the pipeline. Then build four comes along and everything is green again. That's the idea. What this really does is automate a lot of the work where you would normally go to different tools, pull in all the metrics, and compare them yourself.

That's how we explain it in a PowerPoint with a table, and this is how it looks in the Keptn Bridge. Keptn not only has an API where you can trigger the evaluation and get a result back as JSON, we also have a UI that visualizes it: every row here is an SLI, every column is an evaluation run, and at the top you see the overall score, so it's easy to see how individual builds have done from a quality perspective, with that number aggregated into pass, warning, or fail. We also have a chart view — as the performance engineer that I am at heart, I like to look at numbers and trends — so you can see when a particular metric started creeping and getting worse before it finally tripped the gate. That's also great.

All right, so how does this fit into Jenkins? Remember, earlier I talked about manual approvals — from what we've seen they just take time. What we can do now with Jenkins and Keptn: we have a Jenkins shared library that, if you have Keptn installed, makes all the REST calls to Keptn for you — Keptn has a REST interface — so we can trigger it and get the results. The first thing, though, is a bit outside of Keptn, but it's an important best practice.
Remember one of the things I said: if you have an environment where you deploy and then run different tests, one thing we also need to solve is that whatever tool is collecting logs or metrics in the back end needs context. You need to tell these tools when you actually executed which particular test — we call this tagging. Depending on what monitoring or log analytics tool you use, make sure there is a way for your testing tools to tell it: this was the moment in time when I executed this particular test. That way you can also get much more specific metrics back from the monitoring tool, so tagging is important. But then, from Jenkins, I just say: Keptn, here is my SLI, here is my SLO. Keptn goes off and pulls in all the metrics from, let's say, the monitoring tool or the log analytics tool. You could still look at the dashboards, but Keptn automates that: it validates the metrics, it gives you the option to look at the heat map, and, more importantly, you can automatically pull in the result, which means you can fully automate your pipeline. If a successful score comes back you let the Jenkins build pass; if a bad result comes back you let it fail — completely automated. Instead of spending 30 to 60 minutes looking at different reports, it's just fully automated. That's the point: a lot of time savings.

All right, how does this work? Let me quickly go through setting up Keptn — there are a lot of tutorials. Today I'll show you how I do it with Dynatrace, but we also have tutorials out there for Prometheus and other tools. In a nutshell, Keptn itself runs on some flavor of Kubernetes, so you need some type of Kubernetes; we also have a nice installation option for Ubuntu Linux where we install MicroK8s, so you just need a Linux machine. In my case I also need a Dynatrace tenant because I'm fetching data from the monitoring tool. Really, what you do is download the Keptn CLI and then say 'keptn install'. There are two installation options: the full feature set of Keptn, or just the feature set for quality gates. Essentially, Keptn installs a set of components into your Kubernetes cluster. The one we are interested in here is the evaluation component; optionally you might also be interested in notifications — Keptn is an event-driven system and can notify tools like Slack. Keptn also comes with a Git repo pre-installed for central GitOps, because every time you give Keptn a new SLI or SLO, it version-controls it. Keptn itself exposes an API endpoint and also the Bridge, which is the UI. So you basically do a 'keptn install', then you configure your monitoring tool — you give Keptn the credentials it needs to communicate with the monitoring tool — and that's it. Again, if you want to install it on Ubuntu, on Kubernetes, or on OpenShift, there's a great tutorial called 'Keptn in a Box', and this link gets you there as well.
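For orientation, the install steps look roughly like this on a terminal. The exact flags differ between Keptn versions (for example, the quality-gates-only install option), and the monitoring credentials are typically provided as a Kubernetes secret beforehand, so take this as a hedged sketch rather than the authoritative procedure:

```bash
# Download the Keptn CLI (installer script from get.keptn.sh)
curl -sL https://get.keptn.sh | bash

# Install Keptn into the current Kubernetes context
# (use-case / feature flags vary by Keptn version)
keptn install --use-case=quality-gates

# Tell Keptn how to talk to the monitoring tool for a given project
# (assumes the Dynatrace credentials secret has already been created)
keptn configure monitoring dynatrace --project=my-project
```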
And then, every time you want to use Keptn from your Jenkins pipeline, you obviously need to make sure Keptn understands what you want to do. Keptn has the concept of a project, services, and stages, but we can fully automate creating those. Normally you would use the Keptn CLI to create a project, create a service, and then add the resources, but the Jenkins library that I'm going to show you has this fully automated. Just so you know, there's also a CLI and an API to make this happen. And every time we create a project and add resources, Keptn automatically pushes this into its Git repository, so it's all Git-based.

All right, so the first thing is that you need to install Keptn — it's a one-time thing. Now, for this to work you obviously need a system where, when you deploy something and run tests, you can also pull data out of it. In my case I have a CI/CD environment where I deploy my new builds. This can be any type of environment; in my case I have installed my monitoring tool on it — it's an agent-based solution, so Dynatrace automatically gives me infrastructure insights. Every time I use Jenkins to deploy — whether I deploy a Docker container, whether it's a Kubernetes cluster and I do a kubectl apply, or whether I deploy Java, whatever it is — the monitoring tool automatically picks it up, so I already get metrics. And this is the sample app I'm using; I always say I'm not very proud of it because it's not pretty, but it does a good job of explaining what we're doing here. The monitoring tool also gives you insight into what's happening within that app: response time, failure rate, memory information, all these SLIs and metrics that are important.

The last thing is that you obviously need some tests, because when you deploy you want to run tests. And this is the critical point: whatever testing tool you use, look at what monitoring or log analytics tool you have so you can combine the two. In my case I will be using a custom-developed curl script and also JMeter later on, and I've integrated them so that every time JMeter executes a test, it adds a little token to the HTTP request that says: hey, this is my homepage call, this is when I check the version, this is when I do an echo, this is when I do an invoke — the four different use cases of my sample app. As part of the request it adds that context, my monitoring tool picks it up, and the monitoring tool can later give me metrics per test: what's the response time of homepage, version, echo, invoke? What's the memory usage of each? How many database calls were made by each individual one? These are now meaningful metrics that we're interested in. Especially for the work we want to do here, we don't only want to look at response time and failure rate; in the more distributed world we're moving towards, we need to look at metrics like the number of database calls made, the number of exceptions thrown, the number of service calls made to the back end, because then we can also detect architectural regressions. Meaning: if your developer pushes a new build and whatever that build does is super fast, but all of a sudden it makes 50,000 new database calls to the back end, that's something you want to flag. And this is why, whatever monitoring or log analytics tool you use, you should figure out how to connect your testing tools with it.
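Since the talk mentions that the test tool adds this context as an extra HTTP header (Andreas used a BeanShell pre-processor for it), here is a roughly equivalent JSR223/Groovy pre-processor sketch for JMeter. The header name follows Dynatrace's x-dynatrace-test tagging convention; the key names and values are illustrative assumptions and would need to match whatever your monitoring tool expects:

```groovy
// JSR223 PreProcessor (Groovy), attached to the HTTP samplers of the test plan.
// Adds a tagging header so the monitoring tool can attribute each request to a
// load script, test step and virtual user (assumes a Header Manager is attached).
import org.apache.jmeter.protocol.http.control.Header

def headerValue = "LSN=keptn-demo;" +                                   // load script name (assumption)
                  "TSN=${sampler.getName()};" +                         // test step name = sampler name
                  "LTN=keptn-demo_${ctx.getThreadGroup().getName()};" + // load test name (assumption)
                  "VU=${ctx.getThreadNum()}"                            // virtual user id

sampler.getHeaderManager().add(new Header('x-dynatrace-test', headerValue))
```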
All right, once we have this — data from your monitoring tool or your log analytics tool that is interesting to you — we can take these metrics and, instead of only having them on a nice dashboard, convert them into those SLI and SLO files, so that instead of looking at a dashboard and still doing the comparison manually, we let Keptn do the work. I'm going to show you some examples later on, but the idea is: yes, after every build you could manually go into all these reporting tools and compare everything, but this is exactly why we built Keptn — it's about extracting the data, comparing it, and giving you a result.

All right, now I come to my first demo. You can see the GitHub link at the top: it's the Keptn Jenkins tutorial in the keptn-sandbox organization on GitHub. There are a couple of Jenkins pipelines in there that you can just run. The one I'm going to show you is the Keptn quality gate evaluation: when I run it, it reaches out to a particular environment I have running that is already under load, pulls back metrics, gives me the result, and then hopefully I will see something like this. So let me do this. Here is my Keptn quality gate evaluation; I do 'Build with Parameters'. Again, these are very focused pipelines — you can obviously build this into your own pipelines — but here I'm saying: I have a Keptn project (I called it the GQ project), I have a stage and services — this is the way Keptn is structured, in projects, stages, and services. I can also specify what type of monitoring tool I want to use — I'm choosing Dynatrace — and which SLI file, since I've pre-configured a couple of SLI definitions, and then what timeframe I want to evaluate. This is the simplest use case: I'm lazy, I don't want to go to different tools and pull the metrics myself, so I let Keptn do it for me.

While this is running: you can look at the source code in that Jenkins tutorial, but what the pipeline actually does is this. There's a Keptn init stage where my Keptn Jenkins library — it's also on Git — is called: you call init, you give it the project name, the stage name, the service name, and the monitoring tool, and the library automatically makes sure that your Keptn installation has this project created. If not, it creates the project, and it also automatically configures monitoring — everything is fully automated, you don't even need to touch Keptn. It creates the service, as I said, and then uploads my SLIs, my SLOs, and my monitoring-specific configuration, all from my Jenkins pipeline. These SLI and SLO files come from my source code repo; SLIs and SLOs sit next to your source code, next to your tests, because they belong together — but now I hand them off to Keptn and say: Keptn, for this build, I want you to use these files.
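A minimal sketch of what that init stage can look like with the Keptn Jenkins shared library. The method names follow my reading of the library's README from around that time; the project, stage, and service parameters and the file paths are placeholders, and exact signatures may differ by library version:

```groovy
// Jenkinsfile fragment (scripted pipeline) - Keptn init
@Library('keptn-library') _
def keptn = new sh.keptn.Keptn()

node {
    stage('Keptn Init') {
        // Creates project/stage/service in Keptn if they don't exist yet
        // and configures the chosen monitoring provider for the project.
        keptn.keptnInit project: params.PROJECT,
                        service: params.SERVICE,
                        stage:   params.STAGE,
                        monitoring: 'dynatrace'

        // Upload the SLI/SLO files (and provider config) from the source repo to Keptn.
        keptn.keptnAddResources('keptn/sli.yaml', 'dynatrace/sli.yaml')
        keptn.keptnAddResources('keptn/slo.yaml', 'slo.yaml')
    }
}
```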
Then I trigger the evaluation: there's a library function called 'send start evaluation event', for a particular timeframe, let's say 600 seconds. Keptn then takes that timeframe, reaches out to the different monitoring tools, and executes the queries we specified, passing in the correct context and the right timeframe; it transforms those queries and pulls back the metrics. At the end — because this is an asynchronous process and, depending on the tool, it may take half a minute or a minute until the data for that timeframe is available — you can say 'wait for evaluation done'. So I'm polling, and based on whatever comes back I can either let the pipeline succeed or let it fail.

If I switch back now, I can actually see that the pipeline failed. Why did it fail? If I move over: the Keptn score was 60, result failed. Why is that? Let's have a look — I can click on it. These are the artifacts; in my example I output everything. There's a Keptn HTML — something I want to improve — with a link to the Keptn Bridge. The Bridge is the UI, and the nice thing is that I have a link here that takes me directly to this particular run. Every time I execute something in Keptn I get a context ID, which we call the Keptn context, and this context ID ties together everything that belongs to this execution. With this link I get to: hey, this was a Keptn quality gate evaluation. You also see the labels — I am passing context from Jenkins to Keptn, so I know it was build number 24, from this job name, and this was the job URL. I started the evaluation at this time, then Keptn started retrieving the SLIs, and when that was done it did the evaluation, and here I see all the results. As you can see, these are all my old results and I can go back in history. It seems my build is really crappy, but this is real data pulled in from the app I have running. Here is the monitoring tool behind the scenes that is monitoring this particular service — let me just show you, this is the service I'm evaluating. I could go in here and click through all the reports to get the data, but why? This is what we've automated. So, where are we? Here we are: I see the results, every single SLI with the actual value and why it failed. I can also switch over to the chart view — you can click on any of these individual columns, these metrics, and see how individual metrics evolve over time, in case you want to see trends. The cool thing is that this is fully automated.

The next use case I want to quickly show you — I know we have about 10 more minutes. The first use case was: you have an environment, you deploy, you run some tests from your pipeline, you know which timeframe the tests ran in, and I let Keptn go off and analyze that timeframe. The second one, also part of the tutorial, is that I actually built in a little testing script: a very simple pipeline that just executes curl commands in a loop against different URLs of my app. A very simple, poor man's load testing tool, as I always call it, but it does the job. So this is for the case where you also run tests as part of your pipeline. Let me go back — this one is called 'simple load testing with Keptn quality gates'.
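As an aside, the 'poor man's load test' just mentioned is nothing more than a loop of curl calls fired from the pipeline. A minimal sketch of such a test stage — the URL, endpoint names, and duration are illustrative placeholders, not the actual tutorial script:

```groovy
// Jenkinsfile fragment (scripted pipeline) - a very simple curl-based load test
node {
    stage('Run simple load test') {
        // Hit a few endpoints of the sample app in a loop for roughly 3 minutes.
        sh '''
          end=$(( $(date +%s) + 180 ))
          while [ "$(date +%s)" -lt "$end" ]; do
            for ep in homepage version echo invoke; do
              curl -s -o /dev/null "http://sampleapp.example.com/api/${ep}"
            done
            sleep 1
          done
        '''
    }
}
```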
I do 'Build with Parameters'. Here I specify again the Keptn project, stage, and service — the pipeline will automatically create this structure — and the monitoring tool. This time I want to switch to a different SLI file, the perftest one, and I'll show you those SLIs in a moment because this run takes a little longer. I basically say which URL I want to test — this is my sample app — pass my simple load testing tool a few endpoints it should test, and let the load test run for, let's say, three minutes. Off it goes. What my pipeline is doing: it initializes Keptn, it runs the tests — just the curl commands — and at the end it triggers the quality gate again for that timeframe.

While this is running, let me show you a couple of things. Remember, earlier I had two SLI files in my tutorial: 'basic' and 'perftest'. Basic is just five metrics — throughput, error rate, response time and so on. These are the logical names, and here is the query behind the scenes that gets executed against the tool you want the data from. Basic means just the basic metrics. The perftest SLI file extends this with metrics that give me data from a particular test run: I still have the five from the top — overall throughput, overall error rate and so on — but now I also have the response time of my test case 'invoke', my 'echo', my 'version', my 'homepage', because I'm executing these different tests. You can add as many service level indicators here as you want, as long as your tool can extract that data. And this is where, at least from a Dynatrace perspective, I can say: Dynatrace, give me the response time of requests where the test step was 'version' or 'homepage', or give me the number of service calls — my service makes back-end calls to other services, so I can get the number of back-end service calls it makes and pull in those metrics. So you define which metrics you want for your particular project, service, or pipeline stage.

Then you also need the SLOs. Remember, the SLOs are what I showed you in the slides earlier, where you list your SLIs and say: hey Keptn, I not only want you to pull the data, I also want you to compare it against pass and warning criteria. Now, here are some cool things I didn't mention earlier. You can weight metrics differently — you can say error rate, for instance, is more important than others; by default every SLI gets a weight of one, but you can change the weight. You can also set 'key SLI: true', which means this is a key SLI: if it fails, the whole evaluation fails. Maybe you have a metric that is so important that if it fails, everything should fail. So that's there too. And, as I said, I have the basic version and the perftest version — you can see this one contains more metrics. You can also reference an SLI without any pass or warning criteria, and then Keptn just shows you the value. Why would you do that? Maybe you don't yet know what a good threshold is. So that's an option as well.
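To illustrate those options — relative and static criteria, weights, key SLIs, purely informational SLIs, and the total score — an SLO file could look roughly like this. Field names follow the Keptn SLO format of that era; the concrete numbers and SLI names are invented for illustration:

```yaml
# slo.yaml - objectives, pass/warning criteria and the total score (illustrative sketch)
spec_version: "0.1.0"
comparison:
  compare_with: "single_result"        # compare against the previous evaluation
  include_result_with_score: "pass"    # ...but only against previous passing builds
  aggregate_function: "avg"
objectives:
  - sli: error_rate
    weight: 2                          # weighted higher than the default of 1
    key_sli: true                      # if this one fails, the whole evaluation fails
    pass:
      - criteria:
          - "<2"                       # static threshold
  - sli: response_time_invoke
    pass:
      - criteria:
          - "<=+10%"                   # relative: at most 10% worse than the previous build
          - "<500"                     # absolute: and below 500 ms
    warning:
      - criteria:
          - "<=800"
  - sli: db_calls_invoke               # informational only: no criteria, value is just reported
total_score:
  pass: "90%"
  warning: "75%"
```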
Good, let's go back to my pipeline. The tests are running now; as you saw earlier, it takes three minutes, then it triggers the quality gate, and then it takes about two more minutes until all the data comes back. Instead of waiting, let me go back to the previous run, build number nine, because I executed the same thing a bit earlier to save some time. This was build number nine, and you can see we're getting all of these metrics. It seems this was actually a pretty good build — the throughput objective didn't work out, but overall it received 92 out of 100 points, so it was considered a good build. I can see the results here for every single metric, which is also pretty cool. And yeah, I think that's fine.

One more thing before we open it up for Q&A — it's in the slides that I'll also share — is how my Jenkins pipelines actually call the library. In this case I do my Keptn init beforehand, and if you run your tests out of your pipeline, there is a helper function called 'mark evaluation start time' which simply stores the start timestamp from that point on. When the tests are done and you're ready for the evaluation, you call 'send start evaluation event', and this function takes the current timestamp as the end of the timeframe and the timestamp stored by 'mark evaluation start time' as the start, so you have the exact time window that should be evaluated. The rest is the same — it goes in, and there we go. Perfect.
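In code, that 'run your tests, then evaluate exactly that window' flow looks roughly like this with the shared library. As before, method names follow my reading of the library's README of that era and may differ by version; the test step itself is a placeholder, and the `keptn` object is assumed to have been initialized as in the earlier sketch:

```groovy
// Jenkinsfile fragment (scripted pipeline) - evaluate exactly the test timeframe
node {
    stage('Run tests') {
        keptn.markEvaluationStartTime()     // remember when the tests started
        sh './run-my-tests.sh'              // placeholder for the actual test run
    }
    stage('Keptn quality gate') {
        // Empty start/end: the library uses the stored start time and 'now' as the window.
        def keptnContext = keptn.sendStartEvaluationEvent starttime: '', endtime: ''
        echo "Keptn evaluation triggered, context: ${keptnContext}"
    }
}
```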
All right, the last use case I want to show you — because, you know, I'm a performance enthusiast. One thing we also address with Keptn is making it easy to get started with performance testing and to build what we call a performance-driven culture. I'm sure some of you out there — and I saw some names in the attendee list — are performance experts. If you want to integrate performance testing into your pipeline, there are a lot of questions to answer: where do we run it, what type of tests do we run, where do we store the data, and so on. These are all hard questions, some harder than others, and you can build your own test infrastructure, use commercial tools, or do it yourself — there's a lot of do-it-yourself material out there. With Keptn, we also want to orchestrate test execution. One thing Keptn can do is trigger other testing tools. We have one integration, for instance, with NeoLoad — they provide a cloud-based service and can also launch their load generators in Kubernetes — but we also have an integration with JMeter. And this is what I want to show you now: if you want to integrate testing but you haven't thought about load test infrastructure yet, and you just want to run some simple JMeter scripts, you can change your Jenkins pipeline from the way it is at the top of the slide. Instead of figuring out how to stand up your test infrastructure and execute the tests, the only thing you need to do is tell Keptn: hey Keptn, I just deployed my app, here is its URL, now you go off and run the test scripts. The only thing you need to give Keptn is the test script itself — whatever tool it is. Keptn then executes the tests, and after the tests report back to Keptn that they are complete, it goes through its regular process of querying the SLIs, comparing them against the SLOs, and giving you back the result. That's the overview. All right, let me show you this as well.

I have another pipeline, 'Keptn performance as a service' as I call it — again, these are just screenshots of how it should look. Let's try it. I go to my Keptn performance-as-a-service pipeline and do 'Build with Parameters'. In this case I again reference a Keptn project, and again I want Dynatrace as the monitoring tool. Here I'm using JMeter, because that's an integration that comes with Keptn: you can give Keptn the JMeter scripts and define different workloads. I want to run a quick test — I think 'performance 10' is 10 virtual users, very quick — and I basically say: here is my URL, go off, run this particular test, use these SLIs for the evaluation, and then wait at most 60 minutes until the results are completely done, or let the pipeline fail. That's it. Let's try it out and see how it runs.

While this runs — one thing I completely forgot to show you — let me go into my Jenkins tutorial. This is where all of this comes from, in case you want to try it out: there are the tutorials and samples, I also have another video link here, and all the different use cases you just saw. This is important because I know a lot of you are interested in the actual Jenkinsfiles. For instance, the quality gate is the Keptn evaluation Jenkinsfile, and it's really straightforward. I include my Keptn library, I have my Keptn object here — I know there are probably other ways to do this as well — and then I do a Keptn init: I give it the project name, the service name, the stage, and the monitoring tool, all from my pipeline parameters. Then I give Keptn all the necessary files with 'keptn add resource': here is my Dynatrace configuration file, my SLI, and my SLO. That's my 'initialize Keptn' part. Then, in my 'trigger quality gate' part, I just say 'send start evaluation event', where you can give it a start time and an end time. My library supports different ways of doing this: you can specify full timestamps if you know start and end; you can specify only a start time and it will evaluate from start to now; or you can specify just numbers — say from 60 minutes in the past until now, or from 60 minutes ago until 30 minutes ago — so it's a time window in minutes, and that works too. The nice thing is that what you get back is the Keptn context, the unique ID that lets you query more data and status later on; the Keptn context is also what goes into the deep link that opens the Bridge. Then, at the end, to wait for the result, I have a function called 'wait for evaluation done' where you can give a maximum timeout — sometimes something goes wrong and you don't want to wait forever. You can also pass 'set build result: true'; in that case the function also sets the build result of the Jenkins pipeline — successful, failed, or unstable — depending on the result that comes back.
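The tail end of that pipeline — waiting for the evaluation result and letting it drive the Jenkins build status — might look roughly like this. Method and parameter names are taken from my reading of the library's README and may differ by version; the bridge URL environment variable and the `keptnContext` variable from the previous snippet are assumptions:

```groovy
// Jenkinsfile fragment (scripted pipeline) - wait for the quality gate result
node {
    stage('Wait for quality gate result') {
        // Deep link into the Keptn Bridge for this evaluation
        // (KEPTN_BRIDGE is assumed to be a global environment variable).
        echo "Keptn Bridge: ${env.KEPTN_BRIDGE}/trace/${keptnContext}"

        // Poll Keptn until the evaluation-done event arrives (or the timeout passes)
        // and let the library set the Jenkins build result (SUCCESS/UNSTABLE/FAILURE).
        def result = keptn.waitForEvaluationDoneEvent setBuildResult: true, waitTime: 3
        echo "Quality gate returned: ${result}"
    }
}
```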
All right, as you can see these tests may run a little longer, but I hope I've shown you most of the things I wanted to show; let me just complete the slides and then I'm very happy to open it up for Q&A. In the slides, just as in the walkthrough, I explain what calls I make from Jenkins to the Keptn library to initialize the project and to upload the right resources. In this case, when Keptn should also execute my tests, I have to upload the test scripts as well — here, the JMeter scripts. And then, instead of saying 'evaluate', I want Keptn to actually run the tests, so there's a special function called 'send deployment finished event': I basically tell Keptn, I have deployed, my app is deployed, here is the URL, now you go off and do your thing. That's where Keptn triggers JMeter and executes the tests based on the workload configuration I gave it. It also sends events to the monitoring tool — another nice integration: Keptn automatically sends events to Dynatrace, Prometheus, or other tools, informing that tool about what is actually happening. Then the SLI magic happens again, and at the end you get the result back. Okay. So this should show you how this can be done, all the different use cases. For the JMeter integration we also built the option of uploading not just one JMeter script but multiple, and you can specify through workloads what type of workload you want to execute — remember, in my drop-down box I specified 'performance 50' or 'performance 10'. Keptn will then execute whatever I've specified here and pass in things like the loop count and the number of virtual users; these are parameters that go into the script.

To wrap it up: if you're interested, the Jenkins shared library that my tutorial uses is all on GitHub, and it's hopefully easy to use. If you look at that GitHub page — where is it? Here's the Keptn library — I tried to do a reasonably good job of showing examples of how this all works: how to do the quality gate evaluation, how to do the performance testing, and also how to do progressive delivery, because Keptn can do much more. There are a lot of examples in there. In order for Jenkins to know which Keptn to talk to, right now I'm expecting global properties — global environment variables with these names. This is obviously something we want to improve; we also want to allow credentials. And those are the current plans: we want to support credentials instead of environment variables; we potentially want to create a Jenkins plugin to visualize some of the Keptn data directly in Jenkins — the heat map would be great; and we also want to provide some callbacks, because one of the capabilities Keptn has is that it can orchestrate the complete end-to-end delivery — deployment, testing, and evaluation — and for the individual pieces we can even say: Keptn, for deployment call this Jenkins pipeline, for testing do this, and for evaluation do that. If you are calling a Jenkins pipeline from Keptn to run a particular test, that pipeline also needs to report back to Keptn. There is already a service out there that connects the two, but we want to make it even easier.
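Going back to the performance-as-a-service variant described above, the 'deployment finished' trigger looks roughly like this with the shared library. Method and parameter names follow my reading of the library's README; the app URL, test strategy, and script path are placeholders, and the `keptn` object is assumed to have been initialized as in the earlier sketches:

```groovy
// Jenkinsfile fragment (scripted pipeline) - let Keptn run the tests itself
node {
    stage('Trigger Keptn performance test + evaluation') {
        // Upload the JMeter script so Keptn's JMeter service can execute it.
        keptn.keptnAddResources('jmeter/load.jmx', 'jmeter/load.jmx')

        // Tell Keptn the app is deployed; Keptn then runs the tests,
        // queries the SLIs and evaluates the SLOs on its own.
        def keptnContext = keptn.sendDeploymentFinishedEvent(
            deploymentURI: 'http://sampleapp.example.com',   // placeholder URL
            testStrategy:  'performance')

        // Wait up to 60 minutes for the final evaluation result.
        def result = keptn.waitForEvaluationDoneEvent setBuildResult: true, waitTime: 60
        echo "Quality gate returned: ${result}"
    }
}
```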
And with that, I want to say thanks for listening. I really hope there are a lot of questions now, and hopefully this was useful. Oleg or Mark, I'll hand it back over to you.

Yeah, thanks a lot for your presentation. It was definitely useful, and it's great to see performance testing at scale. I was using Jenkins with JMeter and other tools for a while — long before Kubernetes was created, and before many other tools were created. But still. So if anyone has questions, please use the Zoom Q&A; again, we can grant voice permissions if you want to ask. And I have a few questions on my list, unless Mark has questions first.

I have questions in mind — I have a page full of scribbles that I took while Andreas was talking. Thanks very much, Andreas. But if there are participants, I'm happy to defer to them; I just don't see any in the participant Q&A right now, so I'd like to go with mine. Oleg, are you okay if I ask? Yeah, go ahead. All right. So Andreas, I'm deeply interested in how to accelerate these kinds of evaluations. Are there any insights you could share, things you've learned as you've developed Keptn and interacted with it? How do you accelerate the throughput of getting these kinds of results? Performance benchmarks seem expensive, hard to do, and hard to repeat. What guidance would you offer us for how to accelerate?

That's a good question, and a good point. I agree that performance benchmarks, for some organizations and depending on the project, are hard. But one thing I hinted at: it's not only response time and throughput — there are some basic metrics you can look at even when you just run your functional tests. For instance, a metric I love, because I've seen so many applications fail on it, is how many database calls we see after executing a particular functional test. How many log statements are written? How many exceptions are thrown? How much memory is the JVM consuming before and after the test? You benefit immediately, because you can detect right away that somebody changed the logging strategy and now you get a thousand log messages where before you only had five, or that a code change took you from five database statements to 50. You can find these things with very simple tests, and you just need a single metric — but you need to look at that metric, and you need to get it. The nice thing now is that you can make sure you automatically get this metric from whatever tool provides it, automatically compare it, store its history, and see trends. And to answer your question: if you don't yet have a performance testing practice, first of all, get started, because it's not that hard to write a JMeter test — or what I did, my poor man's load testing tool, is a curl script; come on, everybody can write a while loop with some curls, just look at my example. But if you have some API tests, some functional tests, I'm pretty sure there are some of what I call architectural metrics — number of database calls, number of log messages, number of exceptions, memory — that you can extract and that give you immediate value.

Thank you, thanks very much — frantically writing it down, thank you. Yeah, one question about JMeter: does Keptn support load from multiple instances? For example, if I want to run load from multiple machines? So right now, the current thing is the JMeter extended service — that's the service we have here.
It basically just runs JMeter from a container: it deploys JMeter in a container in the Kubernetes cluster where Keptn runs and runs it from there. I would encourage anybody out there who wants to help us extend that service — maybe to run it from different locations — to reach out; I also know there are solutions out there that can already deploy JMeter in different places, and then you just have an API call that you need to make from Keptn in order to trigger it. But right now, the service that comes with Keptn itself just deploys a container and runs your JMeter script in that container. That's what it does.

Yeah, got it. For JMeter, the main problem is not triggering multiple workloads — that's quite straightforward. The problem is aggregating the reports, especially at runtime. So I'm not sure whether Keptn supports runtime reporting and processing. We don't do runtime reporting. Remember what Keptn really does: Keptn is an orchestrator. With Keptn we trigger different things — somebody needs to deploy something, somebody needs to run a test, somebody needs to evaluate. For the evaluation, the built-in Keptn quality gate comes in; for test execution you can use the JMeter service, but you can also trigger whatever else you want. For instance, there is the generic executor service for Keptn. That's also a cool, easy-to-use thing where you can define either an HTTP file or a shell script that gets executed based on a certain Keptn event. So you can say: every time Keptn makes a configuration change, please call this shell script because I want to do something; or if Keptn internally sends a 'start test' event, execute this. That means you're very flexible in what you do. But remember, Keptn is event-driven, and what we do with the quality gate is pull in metrics from different data sources — we don't do it live. You could build a component that does it live, but I believe that is the job of your monitoring tools, your log aggregation tools, and your testing tools; that's a different job. Yeah — so it's basically what we were doing: we were pushing data from JMeter directly to Nagios — yes, I'm a bit old — and then Nagios was aggregating the statistics. Exactly. And obviously — I'm a little biased here, but I'm lucky: I have Dynatrace, so all the data ends up in Dynatrace anyway. The only data source I need for my stuff is Dynatrace, because I get the log metrics, the infrastructure metrics, the process metrics, the test metrics — everything in one tool. But we built Keptn agnostic, because we know there are a lot of people out there who need different data sources; that's just the real world.

So we did receive a question, it just came in from Carlos Panolzo: how can I train my team about it? So, do you have recommendations on techniques to introduce these concepts to a team and help them be successful? Yeah — if you're interested in Keptn, I can point you to the tutorials page, tutorials.keptn.sh. For the general concepts of performance engineering — I've been doing performance engineering for many, many years — there is an SRE blog on the Dynatrace blog.
I just wrote a guide to automated SRE-driven performance engineering — which, again, uses Dynatrace, but it explains all the concepts of SLIs and SLOs and how you can start building an SRE-driven mindset — so that could be an interesting blog to start with. Another one I wrote about three years ago is on the traits of a performance engineer in 2020; I wrote it in 2017, I believe, and I think it still holds true: what does it take to start a performance movement within a company? There are a lot of other great resources out there as well. I really love my friends at PerfBytes — the PerfBytes podcast with Mark Tomlinson, James Pulley and co. They are great; they've been talking about performance engineering for many years. Alexander Podelko is someone else I know who has been writing a lot of blogs. I also have another podcast, called PurePerformance, on Spreaker, where a colleague of mine and I talk about performance-relevant topics — not only how to load test, but also how to bring a DevOps mindset, a performance mindset, into an organization. So that might be interesting to get started with too. Yeah, thank you for the answer.

Yeah, I had a question about the pipeline libraries and so on. What is your experience with them? Did you run into any significant obstacles, and how did you work to solve them? So, I have to tell you that I was trained as a software engineer when I started, so engineering and coding come easily to me — it always just takes getting used to a particular language and a particular terminology. Overall I found it pretty simple and straightforward. I did a lot of trial and error — why does this stupid thing not compile, what's wrong again, where did I miss something — but overall it was super easy. I found a tutorial — I don't remember which one — that was kind of a 'my first Jenkins pipeline library' walkthrough, and I just took it and started from there. From a troubleshooting perspective it might sometimes be easier not to have to rely on so much log output to figure out where things go wrong — a debugging capability would sometimes be cool. Maybe something like this exists, I don't know, because Jenkins is just something I do on the side now; I want to integrate Keptn with as many CI/CD tools out there as possible. So is there a capability for debugging through pipelines? Not exactly. Okay. It's one topic we keep discussing at Jenkins contributor summits and so on — that you really need pipeline debugging — but it's not available by common means yet.

Yeah. One thing I would wish for — and maybe this is a question for you, maybe it exists — is that I would love to display some of this data in Jenkins, and I think the only way to do that is to write a Jenkins Java plugin. Yes and no. In principle you'd have to write a plugin, but at the same time there are plugins, like the Plot Plugin, which take data and visualize it. Okay. So Plot Plugin users can plot their build data — you can't build an awesome visualization there, but it's something straightforward.
There are other plugins that do more with regard to graphing. And maybe you've heard about the Warnings NG plugin — it's the plugin that consolidates all the static analysis and reporting tools at the moment, and there are a lot of enhancements there in terms of visualization and reporting. Last week we had the UI/UX hackfest, and Ulli Hafner was presenting how to do better reporting with Jenkins using the standard JavaScript libraries. I believe it would be possible to create plugins that take generic YAML or JSON and build fancy charts. Yeah, like you already have — like Highcharts; it would be cool to have a Highcharts-style plugin where I can just pass in the data. I think it's something we could improve, and it would definitely help developers of libraries, because libraries are a low-cost way to build extensible functionality in Jenkins and to share it. So yeah, having such generic plotting plugins would definitely help. Yeah, but overall I felt it was super easy — and as you can see, I'm using Visual Studio Code for my coding. Yeah, it's not too bad. Excellent.

So now — I had assumed that in order to get throughput you were probably running many of these tests in parallel. If they are running in parallel, how do you associate something at the back end, like database hits, with the front-end test that caused it? Or do you typically not run tests in parallel? No, you do run parallel tests. And this is — let me quickly go to my JMeter script here — what I mentioned in the beginning: tagging of requests. When JMeter executes a request against a URL, I have a BeanShell pre-processor that adds an additional HTTP header to every request. I call that header x-dynatrace-test, and it includes name-value pairs: LSN, the load script name; TSN, the test step name; LTN, the load test name; and the virtual user. So I can add context data to every request I send to the application. Now, in my case the app is monitored by Dynatrace and we do distributed tracing, which means I can go to the diagnostic tools and say: I want the number of calls to our services, not split by request name — because those are just URLs, and at the bottom you see the URLs, /api and so on, there's a lot of stuff — but split by the request attribute, the test step name. And I only want my monitoring tool to look at data where a test step name is actually coming in. There we go: now I can see the data that came in, and I see the number of service calls — in this case 'invoke' is the only one actually making service calls to the back end. This works because of distributed tracing: my monitoring tool does end-to-end tracing, but it also captures the HTTP header and extracts these bits and pieces. So if I show you what's behind the scenes — and whether it's Dynatrace or New Relic, AppDynamics, Datadog, I'm sure they do it in a similar way — in our case I can look at every single request that my monitoring tool captured.
And on every request — we call them PurePaths, our end-to-end traces — I can go into every single request, and in my case, this is the stuff that Dynatrace extracted for me. This is information that the load testing tool put on the request. So this is the x-dynatrace-test header, but Dynatrace extracts the individual pieces, and therefore I have this metadata on every trace. But then Dynatrace also gives me the ability to say, well, I'm not interested in every trace, I'm interested in metrics based on traces with a particular piece of metadata. And this is then also the data that I can put on my charts, right? Like in this case, I have my dashboard that says top CPU-consuming test steps: look at all the requests, look at the CPU consumption, but split it by test name; or top backend calls, top database calls. And now that we have a metric, I can also put these metrics into my SLI. Where's my SLI? Here's my SLI: number of service calls of my particular test case. This is the way I query it from Dynatrace, and this is also how Captain can automatically query it and then compare it with previous builds. This is how this works. That's great. Thank you, thanks very much. Okay, so there is another question: I'd like to hear Captain use cases for DevOps activities. So what would be the main use case? For DevOps activities, yeah. So if you look at Captain on the website, the problem that we try to solve is this: we've seen a lot of organizations that are starting to build pipelines, and these pipelines start small and grow and grow, and basically you have process and tool integrations all bundled into your pipeline code. And then you start from one pipeline, and then you copy it over, and you have 10, 15, 100 different copies. And the problem really is that you don't have a separation of concerns: you don't have the process clearly defined somewhere, separate from the tools that should execute it. So what Captain does: Captain is an event-based orchestration plane, or actually we like to call it a choreography engine now. Again, if you look at the tutorials, the way Captain works is that you define a so-called shipyard file. So instead of defining pipelines and stages and then writing pipeline code, you say: I have two stages, a staging stage and a production stage. In staging, I want a direct deployment strategy and I want to run performance tests. In production, I want a blue-green deployment, I also want to run performance tests, and I also have a remediation strategy. So this is the one side, where you define the process, and then you have another option where you can say, okay, which tools should then actually be called or triggered when there's a new artifact to be deployed. So Captain is really an event-driven orchestration engine where you can give it an artifact, whether it is a container or a jar file, and then, based on that process definition, it internally triggers events, and these events are then picked up by tools that can do a particular job. So it could be that I have a Jenkins pipeline that can deploy my Java app into a certain environment. Then I can use, for instance, the Captain Jenkins service — it's on GitHub — to call a particular Jenkins pipeline. So I can say: for a configuration-change event, when somebody wants to change the configuration of your build, please call this particular Jenkins pipeline.
When this pipeline is done, then Captain knows: okay, now the deployment is done, that's great. What's the next thing I need to do? Okay, I need to run tests. So now it sends an event: hey, who can execute performance tests? And then maybe my JMeter service picks it up, and JMeter runs the tests. So it's decoupling, first of all, actions and processes from tools. And the beneficiary, I think, is the developer, but you can still use all your existing tools for the individual tasks, so we're not replacing anything — Captain is orchestrating it. That's for continuous delivery, and for operations, this is where the remediation strategies come in. We all know a lot of people that are running large systems in production, and when a problem happens, when they get notified by an alert, they want to execute certain actions. And it's also an event-driven model: a problem comes in; based on the problem, execute a certain task; wait for the tool to say, did it work, yes or no; really evaluate that it worked; if not, do something else. So Captain also has an orchestration engine for auto-remediation. This is what Captain does overall, and the piece that I've shown you today is one sliver: it's the quality gate evaluation that we use between stages, and it's the test execution. But coming back to the question, how can it help in DevOps? We strongly believe it's the answer. It allows you to deliver software in the same way that your developers are actually writing software. Your developers write event-driven microservices, they're writing componentized services that are orchestrated through events, yet your pipelines are large, monolithic pieces of code. And I think Captain tries to help you use the bits and pieces that you have — maybe you have a Jenkins pipeline for deploying, one for testing and one for doing something else — and then it orchestrates the whole thing, event-driven. Thank you. Hey, is evaluation of a pull request, or a potential change, any different in the Captain environment than doing work off of the master branch? Are there things that are unique to thinking about many pull requests arriving that cause you to configure Captain differently, or to have it think differently about the problem? So, pull requests: if you want to integrate Captain with your GitOps flow, with your pull requests, then you would again make a call to Captain — trigger Captain as part of, let's say, the pull request validation, like the quality gate. Do you need to think about it differently based on branches and master? I would say you could model it in Captain based on stages. We have the concept of a Captain project and then different stages. And instead of having dev, staging, production, you could say: I want to integrate it with my dev environment, but I allow every developer to have their individual feature branch, and if pull requests come in and I want to separate the data, I may have Captain internally organize each branch as a stage. Not sure if this makes sense, but let me show you something here. I have a project here, it's the simple node project, and I have staging and prod. So in an end-to-end delivery environment you have multiple stages, and if you want to have multiple branches, one option would be to take the stage concept in Captain and apply it to different branches. Then you have it clearly separated: you see all the data, all the quality gates that happen in this branch versus that branch.
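Just to sketch how that could look from the Jenkins side — this is not the actual integration; triggerKeptnEvaluation is a made-up placeholder for however you call Captain (a shared-library step, an HTTP call, whatever), and the project and service names are invented — the idea is simply to map the branch onto a Captain stage and fail the build if the gate doesn't pass:

    // Jenkinsfile sketch (scripted) -- 'triggerKeptnEvaluation' is hypothetical,
    // standing in for whatever mechanism starts the Captain quality gate evaluation
    node {
        stage('Quality Gate') {
            def result = triggerKeptnEvaluation(
                project: 'simple-node-project',     // invented Captain project name
                stage:   env.BRANCH_NAME ?: 'dev',  // one Captain stage per branch
                service: 'simplenode'               // invented service name
            )
            if (result != 'pass') {
                error "Quality gate evaluation came back as '${result}'"
            }
        }
    }
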
The other option would be to use the tags and the labels. Remember what we had earlier, right? Every time I called Captain, I gave it labels. So you can clearly differentiate the data: hey, this was triggered by this build, this was triggered by this pull request on this particular branch. So you can also separate it through the labels, through metadata. Not sure if this answers the question, but kind of. That did, that answered it very, very well. Thank you, Andreas, thanks very much. You're welcome. Thank you. So we are slowly running out of time, and there are no questions in the queue, so I'll probably share the poll results from the beginning. That would be great, yeah. Yeah. So I guess every participant should be able to see them now. Do you see them? I see them, yeah. So: familiar with the concepts, but not using them right now. Mm-hmm. Yeah, that's consistent with what I've seen in the past; there's only a small group of people that are really using it. Build validations work right now partially automated, I'd like to fully automate — that's 33%, that's great. We want to automate — that's actually the bigger group, right, kind of on top, yeah. Hardly any performance tests, or no tests even, right? That's interesting. And what I'm really happy about is: yes, we have an existing Kubernetes cluster to deploy to. And there's only one no answer, because we have received feedback from some people that said that while they have Kubernetes clusters, they might not be able to deploy something into these clusters. But I assume the audience here, who are maintaining and managing pipelines and obviously Jenkins in combination with Kubernetes, you obviously have Kubernetes available and you can deploy into Kubernetes. So that's great to see as the number one answer here. Kubernetes with Jenkins is trending. We hit something like 10% adoption of Kubernetes from what I've seen in the statistics. Still a long way to go, but yeah, Jenkins is heavily used with Kubernetes nowadays. Very cool. The polls, the results — can I... you know what I'll just do? I just took screenshots, I will send them to you. Okay, perfect. Because otherwise I would have just opened this one here; I just want to make sure I keep it here as well. Yeah. So it's good. Yeah. So if there are no more questions, yeah, thanks a lot for this presentation. It's awesome to see such use cases and to learn new things about innovations. Yeah, and if anybody wants to follow up, again, here's my material. I'm also very happy to go into a direct conversation if you want to know how this works. There's also a Slack channel — I forgot to mention this — slack.caption.sh; it would be great if people could sign up. Slack.caption.sh, yeah. Yeah. We'll share a link in the post-meetup communication. Yeah, and also if you develop more complicated pipelines for Captain and Jenkins, and if you decide to develop a plugin, just let the community know, because we have a lot of use cases to share. We also organize regular meetings. For example, we have a special meeting for the Cloud Native SIG, which is also focused on various integrations with Kubernetes environments. We have a lot of discussions about quality assurance in our channels, of course. So we can definitely find a lot of opportunities to discuss that. And charting plugins — yes, I think we would really discuss creating new charting plugins based on the new technologies, because we have a lot of new UIs.
So generalizing them and making them available as pipeline steps would be quite an interesting project. Okay, any additional feedback or comments before we stop the recording? Not from my end. Okay, then thanks everyone. And yeah, I'll see you at the next meetups.