I am Ole Lensmar, CTO at Testkube, as we heard. Thank you so much. I've been in the space for a long time. If you're familiar with OpenAPI, I was the chairman of the OpenAPI Initiative, and I've been on the board of SmartBear and other companies in the API space. Testkube is an open source project, and Testkube the company is also a member of the CDF, but the project itself has not been donated yet, something that we should talk about, Lori, perhaps. This is probably the most technical talk you're gonna hear today, so for the developers here, this is gonna be very hardcore. The slide design is also very Nordic, I'm from Stockholm, so it's very minimalistic, Ikea-style, not a lot of memes or colors. There's some purple here. Okay, so let's just start with CDEvents. Also, I wanna just say, we've heard three talks now, I think, saying how important testing is, so that's great to hear. I think everyone kind of agrees with that, but we've also seen a lot of changes in testing and testing practices, and that's what we're trying to address here. Just as a quick recap, CDEvents was born out of the realization that different CI and CD tools and platforms need to communicate. As we're building out more and more complex pipelines, we want those to be loosely coupled, preferably through an eventing architecture, and the CDEvents protocol was one of the projects created for that. Andrea, who's sitting here, is the lead for that, so if you have any technical questions about CDEvents specifically, he can answer all of those. It's built on top of CloudEvents, which itself is another project and a pretty established standard, so this is something that's well rooted in the space, and it's incubated in the CDF, and there's a website you can go to.
CDEvents defines events for a bunch of things, this is just from the website itself: there are events related to source control, events related to continuous integration, events related to testing, which is what I'm going to talk about, continuous deployment and continuous operations, the CloudEvents binding, et cetera. So it covers a rather broad area within CI/CD, but I'm going to be focusing specifically on testing. Obviously Testkube, the project I'm from, is about testing, which you can probably guess by the name, so this is something we're very enthusiastic about, and when we discovered the CDEvents spec, we quickly realized there was maybe a need to extract what was at that point very small support for testing, I think it was a couple of subjects and predicates within the continuous integration events. So why did we think there's a need for new events specifically for testing? Well, as we've already heard a couple of times today, in a distributed architecture, tests are now run not just as part of your build jobs in Jenkins or GitHub, but often asynchronously. They can be run manually, they can be run as a response to events happening in your clusters or from your incident management. There are people doing testing in production, and we have people who don't even run tests as part of their CI/CD pipelines because they take too long; instead they schedule their tests to run every 30 minutes and use that as a way to quality-gate their systems. So people are maybe not running tests as strictly as they were 10, 15 years ago, and to cater to all those use cases and the decoupling of testing from your traditional static build jobs, it felt to us that the need for events related to testing was pretty obvious.
So what we did then was we defined a couple of events, and as I said already, these replaced the previously defined test suite and test case subjects that were in the continuous integration category. All the documentation, the JSON schemas and the examples are on the repo on GitHub, so it's all there, and this was released as part of CDEvents 0.3.0, I guess this was in June, I don't remember, earlier this year at least. Okay, let's just dive in, this is gonna be very techy, lots of monospace fonts, which means code. So what we've basically done is we've defined three subjects: a test case run, which models the execution of a test; a test suite run, which models the execution of a test suite; and the test output, which models an output from a test case run specifically. It's important to note here that we're not modeling the test case itself or the test suite itself, it's about the actual execution, and we heard similar things earlier today about pipeline runs and task runs, et cetera. These are defined as separate subjects, and then for each of these we've defined predicates: queued, started and finished for the first two, and then published for the last one, and I'm gonna walk you through what that looks like and show an example at the end, of course. So, test case run: if you look at the CDEvents spec, it defines a couple of common fields that all subjects have, so some of those are inherited from there, but specifically the last three here: for each test case run there's an environment, so you know which environment it's running in. You can provide a test case, which is kind of an abstract reference to a test case somewhere, and this is very abstract.
The test case could be a JUnit test, it could be a Postman collection, it could be a Cypress test, it could be whatever testing tool you're using. A test case run can also be related to a test suite run, and this comes in when you get to more complex orchestration of tests, which we're seeing more in integration test scenarios where you might, as part of an integration test, run API tests, UI tests, security tests and load tests, either at the same time or in parallel, just to see how everything works out, and then you orchestrate all of those into a test suite run. Maybe a little bit complex, but that's how things usually end up if you're doing this kind of thing anyway. So just looking at the queued predicate for test case run: this is an event that would be emitted when a test case is being scheduled to run, it hasn't actually run yet. It might be waiting for some applicable constraints to come into place, right? It could be resource availability, it might be waiting for something else to finish or pass, something to be fulfilled before executing. So, for example, Jenkins could potentially emit a test case run queued event when it's starting a build job where it knows that it has a test case or a test step later on in its build, right? But it hasn't actually run the test. So on the receiving side, at least you'd know, okay, this might be coming up. This is not a required event. Many of these events aren't, and we'll get back to the heuristics about which events you would actually expect. And you'll see that a trigger object is also commonly used here, because you might wanna know how this is actually being triggered. Is it a manual trigger, somebody clicking a "run this test" button? You might, in the end, wanna ignore manually triggered tests.
Maybe you're only interested in tests that are triggered by a pipeline in Jenkins or some other mechanism. So we're trying to give some alternatives on how to handle these events. Test case run started, not surprisingly, is emitted when the actual test case starts. Once again, you get to know which environment it's running in, which test case it's related to, which test suite run it might be related to, and what triggered it. The queued event that I mentioned earlier is not mandatory, so many times you'll just get this event emitted from your system and you probably won't get a queued event. The only things that are actually mandatory here are the ID and the environment; all the others are optional, just to give the receiver more context. Test case could definitely be interesting, because you might wanna aggregate all test case runs for a specific test case over time: you might wanna track how this test case has performed over time when it comes to status, pass/fail, et cetera. And not surprisingly, there's a finished event at the end, which once again has the ID so you can go back to which test case run you're talking about, which environment, and then the outcome of that test: pass, fail, cancel, error. And we have a pull request now for adding a "skipped" alternative here. As always, small things can result in big debates, so it hasn't been approved yet; you can go to the GitHub issue and weigh in. We've also added severity, and this is obviously, or not obviously, a little bit of a stretch, because severity is subjective. If a test fails in one context, it might be really, really bad: if your load test doesn't pass in production, that's really bad, but if it doesn't pass in testing, that's maybe not so bad, because you know that's a constrained resource.
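As a sketch of the "ignore manually triggered tests" idea above, here is what a small receiver might do with testcaserun.finished events: skip anything with a manual trigger and tally outcomes. The event shape is the illustrative one used in this talk, not the normative schema, and the trigger placement inside the subject content is my assumption.

```python
def summarize_finished(events):
    """Count outcomes of non-manual testcaserun.finished events."""
    counts = {}
    for ev in events:
        # only react to testcaserun.finished events
        if not ev["context"]["type"].startswith("dev.cdevents.testcaserun.finished"):
            continue
        content = ev["subject"]["content"]
        if content.get("trigger", {}).get("type") == "manual":
            continue  # skip ad-hoc, manually triggered runs
        outcome = content.get("outcome", "unknown")
        counts[outcome] = counts.get(outcome, 0) + 1
    return counts

events = [
    {"context": {"type": "dev.cdevents.testcaserun.finished.0.1.0"},
     "subject": {"content": {"outcome": "pass", "trigger": {"type": "pipeline"}}}},
    {"context": {"type": "dev.cdevents.testcaserun.finished.0.1.0"},
     "subject": {"content": {"outcome": "fail", "trigger": {"type": "manual"}}}},
]
print(summarize_finished(events))  # -> {'pass': 1}
```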
So severity is something that is not mandatory, and reason is just a string that gives you more context on the receiving side. Once again, the only thing that's required here is the outcome (pass, fail, cancel, error), because that's the least we thought would be helpful. Now we're coming to test suites and test suite runs. Once again, not surprisingly, a test suite run models the execution of a test suite, and it also has a reference to the environment that it's running in, so once again staging, production, whatever, and a reference to a test suite object that could be defined in an external system. And here we have the similar queued, started, finished events, I'm not gonna read these tables to you, or at you. They're the same kind of events that we saw for test cases, and they have similar properties, as you would maybe expect. The last one is the test output subject that I mentioned earlier, and this is interesting, because often a test emits an output, not just a pass/fail result, right? If you're running a Cypress test, you can get a video of the recording, or if you're running Postman, you can get a hard-to-read log output, or whatever your testing tool might produce, and it would obviously be nice to get a reference to that. So there's a test output published event, and note that a test case run can publish multiple outputs, right? You could get a log, a video and a PDF containing some kind of report. It's totally up to the testing system to decide what it can produce. The things that are required here are the output type, obviously somewhat subjective in what we decided on initially as the valid values, and the format, which would be a MIME type, so application/pdf, et cetera.
Note that the source, as you can see, is required. Here we thought it was mandatory to help the receiver of this event actually retrieve the artifact, because if you know there's been a PDF or a video produced, you might wanna know where to actually get it, because that's probably why you're interested in the first place. So that's why the source is mandatory. The URI is not mandatory, which maybe it should have been; it's a more straightforward reference to the actual output that was published. And there's a test case run reference, allowing you to associate the output with the actual execution of a test case. Please note that this is not mandatory, so the only thing you might be receiving on the other end is test outputs, which is not very helpful, but it once again depends on the implementation of testing events that you're using. We also define a couple of objects, I've mentioned test case, test suite and trigger. I don't know if we need to go through these in detail: a test case is a test case, a test suite is a test suite, and a trigger is a trigger. For the triggers, it's maybe a little bit interesting to know, as I mentioned, that triggers can be for queuing and starting test cases and test suites, and these can be manual, pipeline, event, schedule (so if you have a scheduled trigger, like every hour or whatever) and other. And then there are URI references, and I think for all these objects the URI references are optional, so if the system that holds the test case or the test suite or the trigger can provide a URI where you can actually look at it in a web interface, or an API call to retrieve the definition of that object, that's what you use. Okay, a slightly more colorful and confusing slide, just trying to model all of this. So we had the test suite run subject with the predicates queued, started, finished.
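The trigger object described above could be sketched like this: a type drawn from the values mentioned in the talk (manual, pipeline, event, schedule, other) plus an optional URI pointing at whatever started the run. The valid-value list is from the talk; the dict shape and helper function are my own illustration.

```python
VALID_TRIGGER_TYPES = {"manual", "pipeline", "event", "schedule", "other"}

def make_trigger(trigger_type, uri=None):
    """Build a trigger object; uri optionally links to e.g. the pipeline run."""
    if trigger_type not in VALID_TRIGGER_TYPES:
        raise ValueError(f"unknown trigger type: {trigger_type}")
    trigger = {"type": trigger_type}
    if uri is not None:
        trigger["uri"] = uri  # optional, like the other URI references
    return trigger

print(make_trigger("schedule", "https://ci.example.com/schedules/hourly"))
```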
The bolded fields are required, so as you can see, the only thing that's really required here is where it's running and the ID of the test suite run, and then the outcome when it's finished. Same thing for a test case run: the only things that are really required are the ID, the environment and the outcome, and for the test output, what's required is the format. And then there's the sequence, or the heuristics: if you're on the receiving side and you're gonna build something that reacts to these events, you're gonna want to know which events you're actually gonna get, right? This hasn't really been defined in the specification, and that's maybe a shortcoming and something we should at least attempt to write down, because you probably wanna know: if I have a test suite, is there always gonna be a started event? If there's a queued event, will there be a started event after that, or what if it doesn't start? What if it just gets queued and then cancels? Will there be a finished event with a specific outcome, et cetera? We've talked about it a lot, but we haven't really formalized it. Specifically also: if there's a test suite run, that test suite run can contain multiple test cases, and as I mentioned earlier, those test cases can run in any order, they can run in parallel, they can run in sequence. So the order of those events is totally undefined. Well, hopefully you won't get the finished before the started for a specific test, and you'll get the queued before the started, although that's not guaranteed. So here's just a little bit of an attempt at heuristics: a test suite is an orchestration of multiple test cases, a single test suite run can contain multiple test case runs, and so on, I'm not gonna read this at you, but I think you can all get it. What's maybe important then is that the queued events are optional and the test outputs are optional.
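Since those ordering guarantees aren't formalized in the spec, a defensive receiver shouldn't assume queued then started then finished will arrive in order, or at all. This sketch tracks a per-run state that only ever advances, so a late-arriving started event can't regress a run that has already finished. The event shape is the illustrative one used throughout this talk.

```python
PHASES = {"queued": 0, "started": 1, "finished": 2}

def track_runs(events):
    """Return the most advanced phase seen for each test case run id."""
    state = {}
    for ev in events:
        # event type looks like dev.cdevents.testcaserun.<predicate>.<version>
        parts = ev["context"]["type"].split(".")
        subject, predicate = parts[2], parts[3]
        if subject != "testcaserun" or predicate not in PHASES:
            continue
        run_id = ev["subject"]["id"]
        current = state.get(run_id)
        # keep the most advanced phase, even if events arrive out of order
        if current is None or PHASES[predicate] > PHASES[current]:
            state[run_id] = predicate
    return state

events = [
    {"context": {"type": "dev.cdevents.testcaserun.finished.0.1.0"},
     "subject": {"id": "run-1"}},
    {"context": {"type": "dev.cdevents.testcaserun.started.0.1.0"},
     "subject": {"id": "run-1"}},  # arrives late; finished still wins
    {"context": {"type": "dev.cdevents.testcaserun.queued.0.1.0"},
     "subject": {"id": "run-2"}},  # queued but never started
]
print(track_runs(events))  # -> {'run-1': 'finished', 'run-2': 'queued'}
```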
So I think the only thing today that you could expect is started and finished events for test suites and test cases. But of course it's up to the people or the projects implementing these events to implement them as much as they want. So how would you use these events in your CI/CD pipelines? A couple of use cases. One is notifications, right? You wanna be notified in Slack if a test fails, that's kind of an obvious thing. We've had users who wanna automatically create Jira issues, for example, if a specific thing fails, or you might want to get notifications into an incident management tool. The other is aggregated or centralized test result management: what if you could pull all these events coming from all your different testing tools, dump them into one system that creates reports and quality metrics, pass rates, fail ratios, et cetera? And you might have application lifecycle management tools like Keptn or Spinnaker or others that might be interested in listening to these events and then deducing from them whether they can promote releases from one place to another. So there's a bunch of different ways you can use these, and just for a little bit more color, to visualize that: today you'd have CI/CD systems, testing tools and test orchestration frameworks, which would all publish these testing events to a CDEvents broker, and then you could have notifications, reporting and application lifecycle management all listening to those events and acting accordingly. Obviously your test orchestration framework could be listening to other events and using those to run testing tools. You could have testers or DevOps people run your tests ad hoc, as I mentioned earlier, which is actually not that uncommon. And then you kind of weave this all together to hopefully build out more dynamic pipelines as you deploy your applications.
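The notification use case above could look something like this: a receiver that turns failed or errored testcaserun.finished events into a chat message. This only builds the message payload; where you post it (a Slack webhook, an incident management tool) is up to your setup, and the event shape is the illustrative one used throughout this talk, not a normative schema.

```python
def failure_notification(event):
    """Return a message dict for failed/errored runs, or None otherwise."""
    content = event["subject"]["content"]
    outcome = content.get("outcome")
    if outcome not in ("fail", "error"):
        return None  # only notify on failures and errors
    env = content.get("environment", {}).get("id", "?")
    return {
        "text": (f"Test case run {event['subject']['id']} "
                 f"finished with outcome '{outcome}' in environment '{env}'")
    }

ev = {"context": {"type": "dev.cdevents.testcaserun.finished.0.1.0"},
      "subject": {"id": "run-42",
                  "content": {"outcome": "fail", "environment": {"id": "staging"}}}}
print(failure_notification(ev)["text"])
```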
And I'm gonna do a little bit of an example, because you may be curious about what these events actually look like. Let's see if this works. So Testkube, the open source project, is one of these test orchestration frameworks, and I just have a simple curl test. Testkube has support for CDEvents in that you can configure a webhook, and I'm using webhook.site here to receive these events. So I'm gonna start by running a simple test, which is a curl test. This is running locally on my machine, hopefully. And as you can see, we first got a test case run started event with a URI which would actually take me to the dashboard in Testkube to look at that test, and then we got a corresponding finished event. We didn't unfortunately get a test output event, which is in the backlog; I'm gonna nag the developers to do that. But I could go in here and see the log that was produced by curl, and this is ultimately what you would want a test output event for, so you could retrieve that log for curl. Correspondingly, Testkube has the concept of test suites. This test suite here only contains that same curl test, but if I go back here and run that, now it's running locally, you're gonna see all these events coming in. So there's a test suite run started event, then a test case run started, then a test case run finished, and then a test suite run finished at the end. So all those four events that you would expect come in sequence here. Testkube allows you to orchestrate tests both in parallel and in sequence, so if I had a much more elaborate test suite than this one, you would have gotten a much more elaborate sequence of events that you could then react to on the other end, however you might want to, if you had plugged this into some kind of CDEvents bus or broker. And that was really short, so I'm gonna stop there. As always, these standards really depend on people getting involved.
And it's easy for me from the Testkube project, and Andrea and others, to try to guess what people will need based on what we've seen, but ultimately it's what you all do, how you do testing and how you wanna make testing part of your pipelines, that drives this. So please head over to GitHub and open an issue, or just talk to me or Andrea or anyone else. I'm of course happy to discuss. I'll be here for a couple of days, and I'll be at KubeCon too, where you can get these really nice plushies. They're like this big. That's like the most popular thing. That's it. Any questions? Nope. Okay. Thank you so much.