So, welcome everyone to "Adopting CDEvents and Embracing Interoperability". Thanks for joining today. My name is Andrea Frittoli, I work for IBM, and I'm an open source advocate. I live in Wales, UK; it's very windy there, so I couldn't find a picture of myself with my hair straight until I decided to cut it. I'm a CDEvents maintainer and co-chair of the Events special interest group we have within the CDF, and within the CDF I'm also the chair of the Technical Oversight Committee and a member of the Governing Board.

Today we'll talk about CDEvents: an introduction to the project, then what's new since the last update we gave, at cdCon... well, no, in Detroit last year. We'll discuss adoption within tools, I'll show you some roadmap, what we're looking at implementing in CDEvents next, and then we'll wrap up.

Okay, to introduce CDEvents, let's start with a pretty standard continuous delivery pipeline. You start with an SCM like GitLab or GitHub, where you have your code; then you go through build, test, and signing; you produce some artifact that is stored in an artifact registry; and then you have a deployment phase that finally goes to production, where monitoring kicks in. That may be the pipeline, simplified, for a single artifact. But if you're producing more than one artifact, with more than one team, you may end up in a situation like this: maybe more than one SCM tool in use, certainly multiple build and signing steps, sometimes with different tools, and different deployment approaches. If you have to build integrations between all these different tools, it easily becomes too complex to maintain; you don't want all these point-to-point integrations between the different boxes.

There are a couple of approaches you can use to simplify this. One is a central orchestrator: the orchestrator drives your different tools through events or through APIs, so you only need an interface between the orchestrator and each tool. Still, the complexity is now centralized in the orchestrator, which becomes a kind of bottleneck: it needs to be aware of all the different tools, and whenever you add a new tool you need to build its interface to the orchestrator and define the whole logic there. A more distributed approach, still event-based, is something more like choreography, where the different tools send signals about what they are doing and the others react to that. With orchestration you have, for instance, a conductor sending signals to all the musicians, and they all react to that; with choreography, each dancer makes certain movements and the rest of the ballet moves accordingly. You can do something similar with tools, and that's what we try to do with CDEvents. If you look at this choreography scenario, where different tools talk to each other, everything becomes much easier if they speak the same language. So, with CDEvents, we define a common specification for events in the continuous delivery space, so that tools in this space can talk to each other.
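To make that "common language" concrete, here is roughly what a CDEvent looks like on the wire, shown as a Go string literal. The context/subject structure follows the specification, but the specific version numbers, IDs, and content fields here are illustrative, not taken from a real event.

```go
package main

import "fmt"

// Roughly the shape of a CDEvent: a "context" block that is common to all
// event types, plus a "subject" describing the entity the event is about.
// Versions, IDs, and content fields below are illustrative only.
const buildFinishedExample = `{
  "context": {
    "version": "0.3.0",
    "id": "271069a8-fc18-44f1-b38f-9d70a1695819",
    "source": "/my/build/system",
    "type": "dev.cdevents.build.finished.0.1.1",
    "timestamp": "2023-05-22T09:00:00Z"
  },
  "subject": {
    "id": "build-1234",
    "source": "/my/build/system",
    "type": "build",
    "content": {
      "artifactId": "pkg:oci/myapp@sha256:abc123"
    }
  }
}`

func main() {
	fmt.Println(buildFinishedExample)
}
```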
And if you introduce this kind of concept, the architecture looks more like this. You have the same stages we had before in the diagram — SCM, build, test, sign, deploy, monitor — but instead of each of them talking to or triggering each other directly, they declare what they are doing through events. Your SCM system will say "a PR was merged into my repository"; your build system will say "a build started" or "a build finished, successfully or unsuccessfully". They can all do that using a consistent, shared language, which is CDEvents, and they can all send these messages to a broker. Then you can have components that apply policies to these events and trigger the next step in your workflow. This is basically switching from an integration scenario to an interoperability scenario.

But there are still some issues with this kind of setup. What about observability? With this event-driven approach, where different components declare what they're doing and policies are applied in a distributed fashion, it becomes harder to answer questions like: what is running right now? Where am I in my workflow? What steps were executed? If something goes wrong, where exactly did it go wrong? And how long did things take?

So you may want to evolve the earlier picture with some more blocks at the bottom, starting with the store. All the different tools send their events to the broker, and the broker collects them in a single store. That has multiple benefits. First, you're building up incremental state of your workflow in that store, and when your policies take decisions, they can also look at that state to decide what to do next. Second, you can use the events you're collecting to do more things, like providing a view of your workflow across the different tools: you can build some kind of interactive view that displays the lifecycle of a change, for instance, from when it was committed, to when it was built into an artifact, to when it was deployed to production, and eventually any incident that might be associated with it. Another benefit of collecting all this data in a consistent format is that it allows you to crunch the data, do analytics, and build metrics. Some metrics have become quite popular in the past years, like the DORA metrics, where you measure certain aspects of your CD processes: how much time it takes for a change, once written, to get to production; how often you deploy to production; and so on. If all the tools in your toolchain produce events in a consistent language and you can collect them, it becomes much easier to calculate these kinds of metrics from the data, as in the sketch below. You can also do things like notifications, to improve visibility and observability for your users.
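As a sketch of that store-and-metrics idea — not a component that ships with CDEvents — here is a minimal Go HTTP endpoint that accepts CDEvents from a broker, appends them to an in-memory store, and counts deployments for a deployment-frequency style metric. The endpoint path and the exact event type string are assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
	"sync"
)

// Minimal view of the CDEvents envelope: only the context fields we need.
type cdEvent struct {
	Context struct {
		ID        string `json:"id"`
		Type      string `json:"type"`
		Source    string `json:"source"`
		Timestamp string `json:"timestamp"`
	} `json:"context"`
}

var (
	mu          sync.Mutex
	store       []cdEvent // the shared event store from the diagram
	deployments int       // counter feeding a deployment-frequency metric
)

func handle(w http.ResponseWriter, r *http.Request) {
	var ev cdEvent
	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	mu.Lock()
	defer mu.Unlock()
	store = append(store, ev) // incremental state of the workflow
	// Because every tool emits the same event types, one predicate is
	// enough to count deployments regardless of which tool deployed.
	if strings.HasPrefix(ev.Context.Type, "dev.cdevents.service.deployed") {
		deployments++
	}
	log.Printf("events stored: %d, deployments: %d", len(store), deployments)
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/events", handle)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```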
To summarize, then: why would you want to consider CDEvents? The main idea is interoperability, in two use cases. The first is event-driven workflows, which allow you to build more scalable architectures: decoupling the different components and tools gives you a lot more flexibility. Consider the case where you have logic built to calculate metrics, or to visualize your pipeline, or to take decisions based on certain information, and that information comes from the particular tool you use for builds. Now, if another team comes in that uses a different build tool, but that tool generates events that speak the same language, you don't need to change anything else in the system around it: it still produces the same format of information, so you can keep using the same policies and the same visualization tools and everything keeps working. The second use case is observability: the ability to have an overarching view of your workflow across the different tools, and to build metrics and notifications on top of it.

Okay, taking a step back, a little bit of history on where CDEvents comes from. We started discussing a standard for events within a special interest group in the CDF: there was an Interoperability special interest group, and an Events special interest group was created out of it. There we discussed the need for standardization in this space, and out of that we created the CDEvents project. The first commit was in October 2021, and then, with the help of the CDF, the project grew and was accepted for incubation in the CDF in 2022, last year. Also last year, in November in Detroit, we announced the first release of CDEvents, 0.1, with supported events for orchestration, software configuration management, and CI/CD; it also included a Golang SDK, a CloudEvents binding, and support for some DevOps metrics.

So, what has happened since then, since November last year? We worked on quite a few things. We made a couple of new releases: 0.2, and then 0.3, which was just released last week with some interesting features. We expanded the scope of CDEvents to continuous operations by introducing incident event types. The idea is to extend this automation to when software runs in production: it shouldn't stop at deployment finished, because the change you created in your SCM, which was built and deployed, continues to live in the production environment, and you may have incidents associated with it. So we wanted to extend the data model to that space as well. That also allows us to provide enough information to calculate the remaining two DORA metrics that were not covered by the 0.1 release of the specification. Incident events also help with automation for remediation: if something goes wrong and you have an incident, you can do things like automatic rollback, or use this information in general to trigger automatic remediation.

We also had new contributors to the project from Testkube: they contributed a revamp of the test events, which can be used for test-related automation. And we introduced an event in the area of software supply chain security, for artifacts being signed. The idea here is that, for instance, when you're building a container image and signing it — as you can do with cosign, or with Tekton Chains, or other tools for signing container images — you send an event when the signature is actually produced, so others can react to it. Specifically in Tekton, we sign our releases, and we want to produce the release notes and finally publish the release only once all the artifacts are signed. So that was an interesting use case there.
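Here is a minimal sketch of that "publish only once everything is signed" idea, assuming a handler elsewhere feeds it artifact IDs from incoming artifact signed events. The type and function names are hypothetical, not part of the CDEvents spec or SDKs.

```go
package main

import "fmt"

// releaseGate tracks which of a release's expected artifacts have been
// signed. markSigned would be called from a handler receiving
// dev.cdevents.artifact.signed events. Names here are hypothetical.
type releaseGate struct {
	expected map[string]bool // artifact ID -> signed?
}

func newReleaseGate(artifactIDs []string) *releaseGate {
	g := &releaseGate{expected: map[string]bool{}}
	for _, id := range artifactIDs {
		g.expected[id] = false
	}
	return g
}

// markSigned records a signature and publishes once all artifacts are signed.
func (g *releaseGate) markSigned(artifactID string) {
	if _, ok := g.expected[artifactID]; ok {
		g.expected[artifactID] = true
	}
	if g.allSigned() {
		fmt.Println("all artifacts signed: publishing release notes")
	}
}

func (g *releaseGate) allSigned() bool {
	for _, signed := range g.expected {
		if !signed {
			return false
		}
	}
	return true
}

func main() {
	g := newReleaseGate([]string{
		"pkg:oci/controller@sha256:aaa",
		"pkg:oci/webhook@sha256:bbb",
	})
	g.markSigned("pkg:oci/controller@sha256:aaa")
	g.markSigned("pkg:oci/webhook@sha256:bbb") // triggers the publish step
}
```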
So, we did a few quality-of-life improvements as well: we improved the readability of the spec, added examples for each of the events in the specification, and refreshed the website.

We also did a lot of work on our SDKs. We started with the Golang SDK and added features like JSON validation, so you can validate all incoming events against the JSON schemas, and you can validate the events you produce as well. The Golang SDK is now also generated directly from the JSON schemas, to improve reliability, and we improved its testing; version 0.3 of the Golang SDK is released. A lot of work happened on a Java SDK and a Python SDK too. We have an initial version of the Java SDK published to Maven Central — here, again, we had a lot of help from Fidelity, who joined the project and helped us a great deal. Similarly for the Python SDK: a first release is nearly ready and will be published to PyPI.

The other thing we worked on a lot, and made very good progress on, is adoption within tools. CDEvents as a project is basically a specification plus a collection of SDKs, and it's only as good as its adoption by tools in the ecosystem, right? The adoptions we have today, with the new version, start with Jenkins: there is a Jenkins plugin, published to the official Jenkins plugin repository — thanks again to the Fidelity community members for contributing it — so it is now possible to produce CDEvents from Jenkins. We have experimental support in Tekton, so you can produce CDEvents from Tekton by deploying a specific controller. We worked a lot with the Spinnaker community, so we have an RFC approved there and implementation is starting. And, as I mentioned earlier, we worked with the Testkube community as well: we just released the new version of the test events, and they are starting to implement them in their tool. We're having a lot of discussions with other communities too: we talked to Argo, Flux, and Harbor — Harbor announced support for CloudEvents in Amsterdam a couple of weeks ago, and they said they would be interested in extending that to CDEvents specifically. Other projects are interested as well, like Tracetest and JReleaser, and we're in discussions with Shipwright and Jenkins X. So things are moving quite a lot in this space, and I wanted to give some more details and examples from some of the tools we're integrating with.

For Jenkins, today we can produce CDEvents through the Jenkins plugin: combined with the pipeline plugin, it can produce events from the core group, so when you run a Jenkins pipeline you can generate pipelineRun queued, started, and finished events, delivered either to syslog, for testing, or to sinks like Kinesis or HTTP. The next step for the plugin will be to support ingesting events as well. Once that's possible, different tools, different teams, or different kinds of triggers could send CDEvents to trigger a Jenkins pipeline, which in turn generates CDEvents — you could even have Jenkins triggering Jenkins in a flow like that.
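For producers not covered by a plugin, this is roughly what emitting an event with the Golang SDK looks like, following the usage pattern the SDK documents: create a typed event, render it through the CloudEvents binding, and send it over HTTP. Method and constant names follow the SDK around version 0.3 and may differ in other versions; the target URL is a placeholder.

```go
package main

import (
	"context"
	"log"

	cdevents "github.com/cdevents/sdk-go/pkg/api"
	cloudevents "github.com/cloudevents/sdk-go/v2"
)

func main() {
	// Create the base event (names follow the SDK's documented example;
	// exact identifiers may vary between SDK versions).
	event, err := cdevents.NewPipelineRunFinishedEvent()
	if err != nil {
		log.Fatalf("could not create a cdevent: %v", err)
	}
	event.SetSubjectId("myPipelineRun1")
	event.SetSource("my/pipeline/system")
	event.SetSubjectPipelineName("myPipeline")
	event.SetSubjectOutcome(cdevents.PipelineRunOutcomeFailed)
	event.SetSubjectErrors("unit tests failed")

	// Render the CDEvent through its CloudEvents binding...
	ce, err := cdevents.AsCloudEvent(event)
	if err != nil {
		log.Fatalf("could not render as CloudEvent: %v", err)
	}

	// ...and deliver it to a broker over HTTP (URL is a placeholder).
	client, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("could not create CloudEvents client: %v", err)
	}
	ctx := cloudevents.ContextWithTarget(context.Background(), "http://broker.example.com/events")
	if result := client.Send(ctx, *ce); cloudevents.IsUndelivered(result) {
		log.Fatalf("failed to send event: %v", result)
	}
}
```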
Spinnaker, instead, is focused on consuming events as a starting point; they have a very good infrastructure for incoming webhooks. So the RFC today covers consuming CDEvents as input to trigger Spinnaker pipeline executions, and the next step there will be to produce events as well. We built some POCs combining Tekton, Jenkins, and Spinnaker, where Jenkins and Tekton produce CDEvents and Spinnaker consumes them to start a pipeline.

For Testkube, you could imagine producing test events and then taking decisions, applying policies, based on what is happening in your test suites: deciding, for instance, to notify someone who needs to look at a certain test that keeps failing. Or, if you run smoke tests in an environment after a deployment, you could feed the results back to your deployment automation and decide whether to keep that deployment or roll it back. It can also be useful for collecting metrics, if you want a history over time of your test executions.

In terms of Tekton: Tekton provides a component called Triggers, which natively supports ingesting any event that is JSON over HTTP, so it effectively supports CloudEvents, and CDEvents as well. The Pipeline component of Tekton can already produce CloudEvents today, and we have an experimental controller that runs next to the Tekton controller to produce CDEvents specifically. I also built a toy project that I called CD-eventor, which takes an incoming event through Triggers, uses the Triggers functionality to extract information from it, and produces a CDEvent as output. So you can use Triggers together with CD-eventor as a kind of adapter layer that transforms events from one format into another, which is really nice and useful for building POCs; I'll show a minimal sketch of that adapter idea in a moment.

So, what did we learn from working with all these communities and integrating into tools? One of the questions I got asked most often is: okay, we can get events into the tools, but how do we combine the tools? What is the reference architecture? That's one of the questions we got a lot. Another is: what should the event broker look like? For some of the POCs we built, we used something like Knative Eventing, because it can broker CloudEvents directly, but not everyone has Knative Eventing in their system or wants to run it. So these are areas we are discussing and addressing today.
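Coming back to the adapter idea: here is a minimal sketch of an adapter layer, assuming a hypothetical legacy build tool that posts its own webhook format. It maps the native payload onto the CDEvents context/subject envelope and forwards it to a broker. All names, paths, URLs, and the type version are made up for illustration; CD-eventor itself does this through Tekton Triggers rather than standalone Go code.

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"time"

	"github.com/google/uuid"
)

// nativeBuildHook is a made-up payload from a hypothetical legacy build tool.
type nativeBuildHook struct {
	BuildID string `json:"build_id"`
	Status  string `json:"status"`
}

func adapt(w http.ResponseWriter, r *http.Request) {
	var in nativeBuildHook
	if err := json.NewDecoder(r.Body).Decode(&in); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Map the native payload onto the CDEvents context/subject envelope.
	// The type version and field names are illustrative.
	out := map[string]any{
		"context": map[string]any{
			"version":   "0.3.0",
			"id":        uuid.NewString(),
			"source":    "/adapters/legacy-build",
			"type":      "dev.cdevents.build.finished.0.1.1",
			"timestamp": time.Now().UTC().Format(time.RFC3339),
		},
		"subject": map[string]any{
			"id":      in.BuildID,
			"source":  "/adapters/legacy-build",
			"type":    "build",
			"content": map[string]any{"status": in.Status},
		},
	}
	body, err := json.Marshal(out)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Forward the translated event to the broker (URL is a placeholder).
	if _, err := http.Post("http://broker.example.com/events", "application/json", bytes.NewReader(body)); err != nil {
		log.Printf("forwarding failed: %v", err)
	}
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/hooks/build", adapt)
	log.Fatal(http.ListenAndServe(":8081", nil))
}
```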
Another area, which was easier for Spinnaker but harder for other projects, is responding to events: okay, I receive these incoming events, but what do I do with them? You can store them, or you can try to take decisions on them in real time, but there is no guideline at the moment on how to do that. There are some implementations happening, but again we don't have a reference architecture for it, and there might be room for a new open source component that would let us standardize the way we do it. Going back to the diagram with the policies-and-triggers component: we could think of including something within CDEvents that provides common, standardized tooling for those parts.

Another lesson learned is to adopt incrementally: don't try to get the whole CDEvents spec into every tool at once, sending and receiving every event; do it one piece at a time. And work with the community: it's always valuable for us, from the CDEvents project, to join other communities' working groups, or meet people at conferences, and discuss their use cases — why it would make sense for them to include CDEvents — and work together on the initial step of the adoption.

In terms of SDKs, we had the instructive experience of documentation being inconsistent with the JSON schemas, or the schemas inconsistent with the SDKs and with what was actually produced, because we started out doing things manually. We learned that we should really generate things automatically, and that's what we're working towards: generating the SDKs from the JSON schemas, and generating the documentation from the schemas as well, so that there is a single source of truth. It's also been interesting to see more languages requested, like JavaScript and Rust, so hopefully we'll see those SDKs in the future.

We got feedback on the specification as well: it's important to have examples. We've built a lot of examples into the specification now, but as CDEvents starts being used in the wild, we want to create a more extensive catalog of example events, as produced or consumed by specific tools, to make adoption easier. In terms of community, we are collaborating with several communities; I'll talk a bit more about that in a moment.

So, what are we going to do next, in terms of roadmap, for version 0.4 and beyond? We definitely want to work on more supply chain security features. Today we added the artifact signed event, but we want to capture more supply-chain-related information about artifacts — like SBOMs, attestations, and provenance — attached to events. The other bit, which I think is probably quite important for CDEvents to implement, is the ability to sign the events themselves: there are a lot of use cases where you need to trust the content of an event before you take any kind of decision based on it, so we will need the ability to sign events. A feature we've been working on, which hopefully we'll release in 0.4, is native support for correlating events with each other. In CDEvents we have this concept of subject and predicate: you can have a subject, maybe a build, and a predicate, like the build having started or finished. Subjects have IDs that can already be used to correlate different events with each other, but we want more explicit support for this kind of correlation, to make it easier — given a data store with a number of events — to extract, say, all the events specific to a certain workflow that ran in your CD system, or to trace everything that happened to a certain code change or a certain artifact. So we're building features like links, workflow IDs, composition, and a concept of releases to enable that.
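To make that correlation use case concrete, here is a minimal Go sketch of pulling the trace of one subject out of an event store. It relies only on subject IDs, which exist in the spec today; the richer links and workflow IDs mentioned above are still roadmap items, and the struct here is hypothetical, not an SDK type.

```go
package main

import "fmt"

// storedEvent is a minimal, hypothetical view of a stored event: just
// enough to group events by the subject they refer to.
type storedEvent struct {
	Type      string
	SubjectID string
	Timestamp string
}

// trace returns every event in the store that refers to the given subject,
// e.g. all events about one pipeline run or one artifact.
func trace(store []storedEvent, subjectID string) []storedEvent {
	var out []storedEvent
	for _, ev := range store {
		if ev.SubjectID == subjectID {
			out = append(out, ev)
		}
	}
	return out
}

func main() {
	store := []storedEvent{
		{"dev.cdevents.pipelinerun.started.0.1.1", "run-42", "2023-05-22T09:00:00Z"},
		{"dev.cdevents.pipelinerun.finished.0.1.1", "run-42", "2023-05-22T09:05:00Z"},
		{"dev.cdevents.pipelinerun.started.0.1.1", "run-43", "2023-05-22T09:06:00Z"},
	}
	for _, ev := range trace(store, "run-42") {
		fmt.Println(ev.Timestamp, ev.Type)
	}
}
```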
In terms of software features, as I mentioned, we want to extend the range of SDKs we have. We're working on adapters for translating existing events into CDEvents, as a bridge while CDEvents adoption grows, and we want to build more proof of concepts and, as I mentioned earlier, possibly provide some guidance, and maybe some software, in the area of the broker and the policies. From the documentation point of view, we want to collaborate with reference architecture initiatives and provide more example events and implementations.

I mentioned a few collaborations we have going. Within the CDF we have one special interest group that focuses on best practices, and they have the reference architecture initiative I mentioned a couple of times; we are collaborating with the Best Practices SIG to get CDEvents into that reference architecture. We also met with the CNCF TAG App Delivery at KubeCon a couple of weeks ago, and they are very interested in this kind of standardization as well: having a standardized data model that you can use across the delivery of applications, which overlaps very well with what we do in the CD space. We've had an initial meeting with them, and we are trying to find out how we could best collaborate. One of the things the TAG App Delivery produces is the podtato-head sample application, so one of the ideas discussed was to produce a version of it that relies on CDEvents, to use as a showcase of how to use events in this kind of context. We are also having interesting conversations with the VSMI group — Value Stream Management Interoperability — a group within the OASIS organization. They care about interoperability between tools that work with value stream management, tools like Jira and that kind of thing. Their scope is a bit wider than what we have in CDEvents, but they don't want to reinvent the wheel in areas where a standard already exists, so we're trying to work together: they could potentially use CDEvents and help us grow it as a standard.

I think I mentioned most of this already. In terms of contributing companies, we have Ericsson, IBM, Red Hat, Apple, VMware, Fidelity, SAS, and more people contributing, so it's growing, and everyone is welcome to join and contribute. We are growing, but we are also still at a 0.x version, in the early stages, so it's a great opportunity to join us, contribute, and influence the direction of the project. And there are so many areas where we're looking for contributions: from the specification — the more high-level discussions about where we want to go with it — to building SDKs, tooling, and proof of concepts, and of course adoptions. And that's all I had for today.
So, thank you again for joining me today. I have some references here, and if you have any questions...

Question from the audience: it seems like CDEvents is focused on application delivery, and OpenTelemetry is focused on observability, but there are overlaps — any insight?

Yeah, that's a great question, and we have some threads going on, also through the TAG App Delivery, because they're involved with OpenTelemetry too, so we've been talking about this. CDEvents is defined as a specification, and it's transport-agnostic today. We have one binding that we've implemented in our SDKs, which is CloudEvents: CloudEvents is a CNCF project that standardizes the format of the event envelope, so you can transport a CDEvent inside a CloudEvent — the CDEvent is a JSON blob that you put into the payload of a CloudEvent. But from a CDEvents point of view, we are not really constrained to CloudEvents, so one idea could be to transport CDEvents on top of an OpenTelemetry-type protocol as well, or to discuss which parts of the data model can be shared and how the two could complement each other. It's a very good question; we definitely don't want to create competing standards in these areas, so we are starting that conversation.

Any more questions? All right, thanks again for coming, and enjoy the rest of your conference.