Hello. Welcome to SIG Events: using CloudEvents to create an interoperable CI/CD. Before we dive in, let me introduce myself. My name is Andrea Frittoli. I work as a software engineer and developer advocate at IBM. My focus is on open source and DevOps technologies. I'm a member of the Technical Oversight Committee at the Continuous Delivery Foundation. I'm also the co-chair of the Events Special Interest Group, as well as a maintainer of the Tekton project. This presentation is the result of the work that we did together with Mauricio Salatino. Unfortunately, it was not possible for both of us to present together, so here I am. I'd like to start today with a little story. Two DevOps engineers meet in the corridor of KubeCon and CloudNativeCon North America. They've both been working on setting up CD pipelines for their respective teams, and they are now sharing ideas and experiences. The first one has been working with Tekton. His team is really happy about how quickly they could set up build and deployment pipelines using existing building blocks. The second one has been working with Keptn. Her team had a positive experience with quality gates and remediation flows. Now, they are thinking that they don't really want to choose. They'd like the best of both worlds. Oh, and they would have to integrate the pipelines from the security teams as well, which run on Jenkins infrastructure. So, is that possible? How many point-to-point integrations would they need? One way to integrate the different platforms might be through events. That way, they'd achieve a decoupled, scalable, and fault-tolerant integration. Some of the platforms already speak CloudEvents. However, the issue is that even with CloudEvents, there are no consistent abstractions or shared semantics between the different projects. And this is the kind of issue that the CDF wants to help solve.
So, CDF stands for Continuous Delivery Foundation, and that is a place where all the projects related to delivering applications and software can gather, share their ideas, and collaborate. The CI/CD landscape keeps growing, and there is a real need for best practices, examples showing how to use multiple tools together, and documentation of what each project is about. The CDF is home to Jenkins, Jenkins X, Tekton, and Spinnaker, among other projects in the incubation stage. If you want to collaborate and join the Foundation, visit the CDF site and get in touch. So maybe we can use events, but we need some kind of standardization. Inside the CDF, there are several special interest groups, and one of them is focused on events: the Events SIG. This group was born to define a shared vocabulary about CI and CD, the software supply chain, and the software lifecycle in general. The main idea behind creating a common language with well-defined terms is to enable tools to share information about what they are doing, so they can be wired together in cases where, to solve a problem, you need more than one tool. We are focused on creating this vocabulary in a protocol-agnostic way: just a set of definitions that can be implemented on top of any programming language. We have also chosen our first binding, which is CloudEvents, because CloudEvents already provides its own bindings to different protocols, such as HTTP or NATS. So we can reuse and extend CloudEvents for CD purposes. We are also developing a Go SDK for clients to consume and produce CloudEvents related to continuous delivery. More languages are coming. Our goal is to avoid the need for one-to-one integrations between each pair of tools. So rather than using synchronous, prescriptive APIs, we chose to rely on descriptive events with shared semantics across projects. Creating a shared vocabulary is quite hard, because every project and initiative defines its own terms, concepts, and semantics.
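To make the CloudEvents binding a bit more concrete, here is a minimal sketch, in plain Python with no SDK, of what a CD event could look like as a structured-mode CloudEvents JSON envelope. The event type string and the data fields are illustrative assumptions, not the finalized vocabulary; only the context attributes (specversion, id, source, type, time) follow the CloudEvents spec.

```python
import json
import uuid
from datetime import datetime, timezone

def make_cd_event(event_type, source, data):
    """Build a structured-mode CloudEvents envelope carrying CD data.

    specversion, id, source, and type are the required CloudEvents
    context attributes; time and datacontenttype are optional.
    """
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": source,
        "type": event_type,
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

# Illustrative type and payload -- placeholder names, not the spec.
event = make_cd_event(
    "cd.artifact.packaged",
    "/tekton/cloudevents-controller",
    {"artifactid": "sha256:b42a", "artifactversion": "0.1.0"},
)
print(json.dumps(event, indent=2))
```

Any of the existing CloudEvents protocol bindings (HTTP, NATS, and so on) could then carry such an envelope unchanged, which is exactly why reusing CloudEvents saves the SIG from re-inventing transport details.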
We focused on four logical buckets for our events. Each of these buckets encapsulates a set of entities. The core bucket contains pipelines and tasks, which are quite common and well understood, and there are many implementations of pipeline engines out there. The source code version control bucket is all about where we store source code and how we manage changes. Continuous integration is all about feedback loops. So here the entities are builds that need to happen, tests associated with those builds, as well as the output, which can be considered artifacts. Finally, we have continuous deployment. In this area, we could go broad and cover every possible scenario: embedded systems, cloud, traditional deployments. We chose for now to focus on cloud-native apps, at least for the first iteration. So we talked about events, but what are events? An event is a data record expressing an occurrence and its context. In other words, an event is something that has happened, plus the information that we could gather at the time it happened. Events can help us understand and externalize what tools are doing, and they provide a very decoupled way of creating integrations by listening to these events and reacting to them. We are defining the vocabulary in an event format, meaning that instead of defining entities, we want to represent things that happen to those entities. For example: pipeline started, task completed, branch created, change submitted, and so on. We are focusing on events because with events we can create loosely coupled integrations between the different tools in the ecosystem. We have chosen CloudEvents, which is a CNCF specification, as our first binding. This means that we will be extending the spec with events that are specific to CD scenarios, and reusing all the tools and existing bindings of the CloudEvents spec.
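One rough way to picture the four buckets is as a mapping from bucket to event types. The type strings below are hypothetical spellings of the examples mentioned in the talk (pipeline started, branch created, change submitted, artifact produced, service deployed), not the finalized SIG Events vocabulary:

```python
# Hypothetical grouping of CD event types into the four logical buckets.
# Type strings are illustrative, not the finalized vocabulary.
VOCABULARY = {
    "core": ["cd.pipelinerun.started", "cd.pipelinerun.finished",
             "cd.taskrun.started", "cd.taskrun.finished"],
    "source": ["cd.repository.branch.created", "cd.repository.change.submitted"],
    "ci": ["cd.build.queued", "cd.build.finished", "cd.artifact.packaged"],
    "cd": ["cd.environment.created", "cd.service.deployed"],
}

def bucket_of(event_type):
    """Return which logical bucket a given event type belongs to, or None."""
    for bucket, types in VOCABULARY.items():
        if event_type in types:
            return bucket
    return None

print(bucket_of("cd.artifact.packaged"))  # -> ci
```

Note how each entry names an occurrence on an entity (pipeline run started, artifact packaged) rather than the entity itself, which is the event-first framing described above.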
So the two engineers from our story would be really happy about a common event format that they could use to integrate Tekton and Keptn, and maybe others, and benefit from both platforms. However, they are concerned that by distributing the workflow across multiple platforms, they might lose the ability to observe, visualize, and measure the overall workflow. This is a problem that we care about deeply at this SIG. We want our common vocabulary to enable users to create visibility across distributed workflows through the shared format and semantics. And we actually built something already to help our two engineers. Quite early in the process of kick-starting the discussion around a common vocabulary, we realized that we cannot just define a shared language without having a real use case and real tools talking to each other. So we built a POC, a proof of concept, with a concrete scenario in mind to test our assumptions. We're going to be looking at a demo running on Kubernetes of two different open-source projects talking to each other to build, deploy, and monitor the release of a web application. We wanted to highlight that these tools were not designed to work together, and if you wanted to make them work together, you'd need to make each of these tools understand the other. We wanted to avoid creating such very specific integration points, and that's exactly where the CD events play a very important role. In this demo, you'll see three open-source projects in action. Tekton is a CI/CD pipeline engine that extends Kubernetes with the concepts of pipelines and tasks. Tekton is very unopinionated. That means that you can create any kind of pipeline to do really whatever you want; it doesn't need to be related to releasing or delivering a new version of services. With Tekton, you have the freedom to create your own automation in a very cloud-native way.
We're using all the Kubernetes tooling that already exists, and we're using the catalog of existing tasks that you can benefit from. Tekton can listen for and emit CloudEvents, but in its own format. Keptn, on the other hand, is a much more opinionated tool for lifecycle management of cloud-native applications. It basically allows us to orchestrate different tools to cover common use cases when we're building and delivering cloud-native applications. It is built in a way that Keptn is only in charge of the orchestration, and other tools do the actual work. Keptn at its core is an event router that does the orchestration between different tools, and it uses CloudEvents by default, but again in its own CloudEvents format. Keptn has very interesting features that are possible because Keptn keeps track of the context of this orchestration between different tools, so it has all the context it needs to do, for example, quality gates before deploying a new version, and auto-remediation policies that kick in when things go wrong. While Knative Eventing is not a core part of the demo, we are using it because it provides the right abstraction to route CloudEvents between services. If you want to integrate two or more systems, you might want to rely on an eventing mechanism that provides the flexibility to move events from one place to another. With Knative Eventing, as you will see in the demo, you can create triggers, which are essentially event subscriptions, allowing the event producer to send just one message to a known broker; whoever wants to consume these events can create a new subscription to it. This comes in really handy for scalability purposes, if you want to scale your integration, and also to make sure that you can create decoupled architectures. So if we put everything together, this is how the overall demo scenario looks. On the top left side, we have Tekton, with support for incoming and outgoing events.
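The broker-and-trigger pattern described here can be sketched in a few lines. This is an in-memory stand-in for a Knative Eventing broker, not its implementation: triggers carry attribute filters, producers send one message to the broker, and the broker fans out to every matching subscription. An empty filter matches every event, which is how a catch-all consumer subscribes.

```python
class Broker:
    """In-memory stand-in for an eventing broker with triggers."""

    def __init__(self):
        self.triggers = []  # list of (filter_attributes, subscriber) pairs

    def add_trigger(self, filter_attrs, subscriber):
        """Subscribe a callable. filter_attrs matches event attributes
        by exact value; an empty filter ({}) matches all events."""
        self.triggers.append((filter_attrs, subscriber))

    def send(self, event):
        """Producers send one message; the broker fans it out to every
        trigger whose filter matches the event's attributes."""
        for filter_attrs, subscriber in self.triggers:
            if all(event.get(k) == v for k, v in filter_attrs.items()):
                subscriber(event)

broker = Broker()
received = []
# A trigger with an empty filter, catching everything:
broker.add_trigger({}, received.append)
# A trigger that only wants artifact events (type string is illustrative):
broker.add_trigger({"type": "cd.artifact.packaged"},
                   lambda e: print("deploy", e["data"]))

broker.send({"type": "cd.pipelinerun.started", "data": {}})
broker.send({"type": "cd.artifact.packaged", "data": {"artifactid": "sha256:b42a"}})
print(len(received))  # the empty-filter trigger saw both events
```

The decoupling shows up in the fact that the producer never names its consumers: adding a third tool to the workflow is just one more `add_trigger` call, with no change to the sender.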
We developed the Tekton CloudEvents controller specifically for sending CD events. On the right-hand side, we have Keptn, the blue box, which also supports input and output CloudEvents, and we developed two translation layers within Keptn to translate between the CD events format and Keptn's own format. In the middle, we have the CloudEvents broker provided by Knative Eventing. This broker always receives events in the CloudEvents format during the demo, and we used events from both the CI and the CD buckets for this. Finally, on the bottom layer, we have the view/monitor/measure layer, and for the demo we are using the CloudEvents Player, which is able to visualize the events that are sent. Triggers are then used to subscribe to events. So when Tekton, Keptn, or the CloudEvents Player want to receive events, they create a trigger, a subscription, and they can select which events they want to receive. The CloudEvents Player will receive all the events. So the flow of the demo goes like this. We run the build of our application in Tekton, which sends events about the build starting and finishing; these events are sent to the broker and received by Keptn. Keptn then decides to start a sequence: a new package has been produced, a new artifact, so we want to deploy it. It starts a deployment sequence and sends events about this. Because Keptn is not actually doing the deployment itself, the event about the deployment starting is picked up by Tekton, which runs the deployment pipeline. Once it's finished, it again sends an event about the deployment being done, and this is picked up by Keptn and used to finish the sequence. Right, but let's see this in action now.
Okay, let's see the demo in action. I have my local Kubernetes cluster; it's running on Kubernetes in Docker, which is nice. It's quite light, with a small footprint, so it can be executed locally. So let's first start my Tekton pipeline. I can use the tkn CLI, which is the Tekton CLI, to do a Tekton pipeline start, and then I run the build artifact pipeline. There are a few required parameters, but their default values are all valid. So let's just get this running. And while it runs, we can stream the logs on the console, and you see it's cloning the repo. I wanted to have a look at the pipeline while it runs. So this is a description of the pipeline, and you can see there are input parameters, as I was saying: the Git repository and revision, where the app to build is, where to push the resulting container image (my local kind registry), and so forth. I'm using Kaniko for the build, but I wanted to highlight here that there are three results, Tekton results, produced by this build: the artifact ID, artifact name, and artifact version. These are going to be used by the Tekton CloudEvents controller to generate a CD event of type artifact, which then lets any listener know that a certain artifact was produced. We can also have a look at the pipeline running here in the dashboard. Oh, and it's complete. So the image was built, and the results were exported. Right, let's switch then to the Keptn Bridge. In the Keptn Bridge, we see our service in the production environment with a question mark. If we click on it, we see that we have a sequence that has just been started. The way we set it up for the demo, there is a manual approval required before the deployment is complete. So let's go back for a moment to our diagram. You can see Tekton executed the build, an event was sent here, it was captured by this trigger, and it was sent to Keptn and started the sequence.
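The step where the controller turns the three Tekton results into an artifact event can be sketched roughly like this. The result names, event type string, and data fields are placeholders for illustration, not the controller's actual code:

```python
def results_to_artifact_event(results):
    """Map Tekton pipeline results to a CD artifact-packaged event.

    Expects the three results mentioned in the demo: artifact ID,
    name, and version. All names here are illustrative placeholders.
    """
    by_name = {r["name"]: r["value"] for r in results}
    return {
        "specversion": "1.0",
        "type": "cd.artifact.packaged",       # illustrative type string
        "source": "/tekton/cloudevents-controller",
        "data": {
            "artifactid": by_name["artifact-id"],
            "artifactname": by_name["artifact-name"],
            "artifactversion": by_name["artifact-version"],
        },
    }

event = results_to_artifact_event([
    {"name": "artifact-id", "value": "sha256:b42a"},
    {"name": "artifact-name", "value": "demo-app"},
    {"name": "artifact-version", "value": "0.1.0"},
])
print(event["type"], event["data"]["artifactversion"])
```

The point is the direction of the mapping: pipeline-internal outputs become a vocabulary event that any listener, Keptn included, can understand without knowing anything about Tekton results.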
And we can also go and look into the CloudEvents Player, where we can see a few events. The CD pipeline run queued, started, and finished events were all sent by Tekton, and they are about the pipeline lifecycle. But there is also another one sent by Tekton, which is the CD artifact packaged event. We can look into it, and we can see the three values there. Those are then picked up by Keptn. So we want to check: is this what we want to deploy? Well, it's the SHA b42a, ending with 492d5. Looking into the pipeline runs from Tekton, here are the results, and it's b42a ending with 492d5. So, yeah, that's what we want. So let's approve this; we want to deploy it. Now Keptn is not going to do the deployment itself, but it starts a deployment sequence, sending an event, which is then picked up by Tekton. And you see the deployment here is marked as handled by Tekton. If we go back to our CloudEvents Player, we'll see that there is a CD artifact published event that was sent by the Keptn CD translator service. So Keptn generated the event, and the Keptn translator service translated it into the CD events format, saying: okay, this artifact that was packaged by Tekton is now approved. It's available, so we mark it as published. The event goes back to the broker, and Tekton picks it up and starts a new pipeline. And you can see from the events here that actually everything has already happened. Let's go and have a look in the Tekton dashboard. If we look at the pipeline runs, we have a deploy artifact pipeline run that was executed in the cdevents namespace. And we can see here that the deployment was created and the Ingress was updated. And then here are the events: queued, started, and finished. Once Tekton has finished doing the deployment, it sends the CD service deployed event, which contains all the details about the pipeline run that was executed, as well as the service environment ID, service name, and service version.
The service name here matches what we have on Keptn's side, and the service version is the version of the deployment. Let's look at it from the console. We can check the pipeline runs in the cdevents namespace, and we can see that there are a few results sent here: the environment ID, name, and version. We additionally have this Keptn context and trigger ID, generated by Keptn, and they are transported by Tekton as results. Right now these are sent as part of the body of the events in the CD events protocol, but this is something that we're working on for a future version of the protocol. We want to have a general facility for different applications to transport application-specific context and information that might be required, so that it can be done in a generic as well as pluggable way by the different applications in the protocol. Okay, so let's go and have a look at our diagram here. Tekton did the deployment, Keptn picked it up, and now the sequence is marked as complete. If we go on the Keptn Bridge, the deployment is marked as successful, and there is even a handy link you can use that takes us to the deployed application, which says Hello, KubeCon CloudNativeCon North America. This page was built and deployed by Tekton and Keptn working together through the CD events. For now we are just using the CloudEvents Player, which allows us to view all the different events. But the idea is that we could build a tool that takes the information in all the events and correlates it together to give a nice view of, for instance, the entire workflow that happened here between Keptn and Tekton. And for an application to pick up all the messages, it's very simple: you just need to create a trigger like the one used by the CloudEvents Player.
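The correlation tool imagined here could start from something as simple as grouping events by a shared context ID. The attribute name below is a placeholder; as noted above, today such context travels in the event body, and standardizing where it lives is exactly the protocol work in progress:

```python
from collections import defaultdict

def correlate(events, key="workflowid"):
    """Group CD events into per-workflow timelines by a shared context ID.

    The key name is a placeholder attribute; events missing it are
    grouped under None. Each timeline is the ordered list of types.
    """
    workflows = defaultdict(list)
    for event in events:
        workflows[event.get(key)].append(event["type"])
    return dict(workflows)

# Illustrative events from a single build-and-deploy workflow:
events = [
    {"type": "cd.pipelinerun.started", "workflowid": "w1"},
    {"type": "cd.artifact.packaged", "workflowid": "w1"},
    {"type": "cd.service.deployed", "workflowid": "w1"},
]
print(correlate(events))  # one end-to-end timeline for workflow w1
```

With shared semantics, such a view works across tools: it does not matter whether a given event came from Tekton or Keptn, only that both speak the same vocabulary.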
We specify the event broker and the subscriber URI, which is where the application is going to receive the messages, and if you look into one of these triggers, you can see that simply specifying an empty filter is enough to receive all the messages. And that was our demo. As I mentioned earlier, there are a lot of activities that we are performing at SIG Events, from producing the spec to developing the SDKs and tooling around it, and building the POC that you have just seen in action. So if you're interested in contributing, please join the SIG Events channel on the CDF Slack. We hold bi-weekly meetings for SIG Events, as well as bi-weekly meetings specific to the vocabulary work, and contributions are welcome in many areas. You are also welcome to implement the events that we are specifying in your own project, or join existing initiatives like the CloudEvents plugin for Jenkins, which is producing CD events as well, or the CloudEvents controller for Tekton. We also welcome use cases, ideas, any kind of collaboration really. Code and positive vibes are very welcome. In these slides I included some references: the SIG website, the code for this POC, and the event vocabulary draft that we have in place, plus links for the CDF, Tekton, Keptn, and Knative. I also wanted to thank Mauricio Salatino and the entire SIG Events team for the work on this, the Keptn team for the support they gave when building the POC, and VBAP for the work done on the Tekton CloudEvents controller. So thank you for listening today, and we should have some time for questions.