If there are some very precise, specific questions about these topics — or should I just give an overview of what we did and the events that we have in there, and then we can talk about how that could be included in the Jenkins CloudEvents plugin as well? I don't know, Karadai, you might have more context.

No, I think that sounds great. Thank you all for being here. Really, we just want to ask you more questions about the event protocol that you worked on. It's come up quite a bit in the last couple of GSoC mentoring meetings we've had, so we really appreciate you giving a warm-up discussion here today, and Shruti has a lot of very good questions.

Good stuff, good stuff. So let me just share my screen quickly, and let's go here first. I'm guessing you folks already know this repository, the SIG Events repo — that's where we are doing the work. What you will find here is a directory called "vocabulary draft", with the events that we are aiming to have, in four different categories, and a basic description of each event. The idea of the specification is not to define the CloudEvents per se, but the semantics of the events and the terminology around them. Because that's usually not enough — people usually want to see something implemented — we created a POC using Tekton and Keptn, where they exchange the CloudEvents that are defined in the CD working group. And I think this image summarizes something similar to what Shruti is trying to achieve with the Jenkins CloudEvents plugin, because we are using Tekton in pretty much the same way as we would use Jenkins in general.
Again, the idea is that we will have event producers and event consumers, and in the case of Tekton — and in the case of Jenkins — they are both: they produce CloudEvents and they also consume CloudEvents. So we created a demo that shows interoperability between Tekton and Keptn for doing different things. We are sending CloudEvents, but we wanted to avoid sending them directly from one service to the other — we didn't want the Tekton pipeline sending CloudEvents directly to Keptn — for a couple of reasons. The main reason is that when you send events, you want infrastructure that is ready to handle events: for example, redelivery and sequencing. For that reason, we used the Knative event broker for the demo, which is designed for this and works with CloudEvents out of the box. It allows you to filter CloudEvents, move them around between different systems, and define how you subscribe to these events and send them to whatever system you want. I am expecting to be able to do something similar with Jenkins: if we have the CloudEvents plugin ready at some point, we can do something very similar with Jenkins, or even add Jenkins into this demonstration. We are running two pipelines here with Tekton, and we could easily swap one of those pipelines to run in Jenkins, showing three projects interacting and working together. The demo wasn't that difficult to build. The difficulty is always understanding each of the projects and knowing exactly what you need to change or adapt in order to consume and emit the right CloudEvents.
When you cannot make the tools emit and consume the right CloudEvents, you usually end up creating a translation layer. For Tekton, this was the translation layer that we built, and for Keptn we needed two translation layers: the inbound plugin — or service, as they call it — and the outbound service as well. The idea is that if we start having plugins like the one in Jenkins, we start removing those translation layers, and in some way we make the projects adopt the CloudEvents that we are defining as the standard CloudEvents for interoperating with other tools.

If you look a little more at Knative Eventing — a project I'm getting deeper into, how it works and how it was architected — you will see that it is not difficult to use, but it's one of those things where you need to understand why you're using it, in which situations, and for what purposes. If you keep to the most basic concepts, what you do is install Knative Eventing in a Kubernetes cluster and then create a broker. Then, from your applications, you send an HTTP request to that broker and basically forget about the CloudEvent — just send it there. Whoever wants to consume it needs to create a trigger, which is a subscription that filters the events going to the broker, so you only get notified when there is an event of a type you are interested in. In general, that's how people tend to think about Knative Eventing: you install it, and then you create brokers. You might be wondering, okay, but what's implementing the broker? If you go to the Knative documentation, you will see that there are different implementations.
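As a rough sketch of the broker-plus-trigger model just described — all names, namespaces, and the event type here are illustrative placeholders, not taken from the POC:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: events-demo          # hypothetical namespace
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: taskrun-events
  namespace: events-demo
spec:
  broker: default
  filter:
    attributes:
      # only deliver events whose CloudEvent "type" attribute matches
      type: dev.tekton.event.taskrun.successful
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: my-consumer           # hypothetical consumer service
```

Producers only ever POST to the broker's ingress URL; consumers only ever declare triggers like this one, so neither side needs to know about the other.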
You can put Kafka in there, you can put RabbitMQ, you can use a cloud-provider-specific implementation, and that offers a great amount of flexibility: you can start simple, for example in a local Kubernetes cluster with an in-memory broker, and then, when you want to make things more serious for production environments, you swap the implementation and put in a different broker without changing any of the applications, producers, or consumers. In general, that's the main objective of the project: to abstract these brokers, which are basically in charge of receiving and filtering events and sending them to different systems.

I would love to see a demo with the Jenkins CloudEvents plugin using the Knative Eventing stuff, and I can definitely guide people on how to configure that and what kind of infrastructure you need in order to run it. But I'm not entirely sure what stage the plugin is at, and I would consider it more interesting to start working on the adoption of these new CloudEvents formats defined by the group, instead of worrying too much about the infrastructure and how we connect systems. Can you folks share a bit more about where you are right now, and what questions you had about these projects, the POC, or the CloudEvents that we have in the group?

Yes, I will go ahead with what we are doing right now — and thank you so much, Mauricio, for sharing all of this and going through it. In the last two meetings we were discussing an architecture where, if we have a Jenkins sink, it would basically have fault tolerance, resend capacity, and all of that. Then we were in a meeting with the SIG Events team, Andrea was there, and we went through Knative and the broker-and-trigger infrastructure.
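Concretely, swapping the backing implementation is typically just an annotation on the Broker object — this sketch assumes both the channel-based and the Kafka broker classes are installed in the cluster:

```yaml
# Same Broker API, different backing implementation: only the
# broker.class annotation changes; producers and consumers are untouched.
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  annotations:
    # channel-based (can sit on an in-memory channel) for local experiments:
    eventing.knative.dev/broker.class: MTChannelBasedBroker
    # for production you might instead use, e.g.:
    # eventing.knative.dev/broker.class: Kafka
```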
In the last meeting we were talking about this, and Havan Khar and everybody else suggested that we design a system similar to what's in the POC with Keptn and Tekton, and what we're going to do for now is replace Keptn with Jenkins, as a Jenkins outbound. So Jenkins would be sending CloudEvents to the Knative broker, and then we would have Tekton listening to the events coming in from the broker.

What I have right now: I've been working on it, and I have developed the core components for it separately, and I've been trying to connect them this morning, but I'm having a bit of trouble connecting the Tekton EventListener. Everything is there and it's designed, but when I send anything to the EventListener, it's not triggering a TaskRun. We were just testing with a simple TaskRun — nothing specific like extracting parameters or information from the CloudEvents per se, but first establishing the structure: Jenkins sends an event to the Knative broker, and then there's a trigger which has the subscriber URL for the Tekton EventListener. So that's the issue I've been having with Knative — or with Tekton, actually — not triggering anything when it receives an event on the EventListener. Let's see if I can... I was working on it a while earlier; I'm just having an issue spinning up the listener again, it's giving me a timeout on the port.

How are you running Tekton, and where are you running it?

It's running on Kubernetes — on EKS, Amazon's managed Kubernetes — and the clusters are of sufficient size, so if there were an issue with volume or anything like that, I don't think that would be it. I can also share my screen with what I had for Tekton running. Let me...
And the other question is: can you create a task using the Tekton command-line tool and see it run, for example?

Yes, yes. So let me go ahead — I think it's presenting now. Okay. All right. So here is my Tekton running. I had to change the YAML quite a bit — most of it is the same, but I changed the triggers to be more specific, just simple triggers. I changed it to have everything together: a very simple TriggerTemplate where, whenever the EventListener receives something, it just echoes something — hello Shruti, hello world, whatever. The namespace and service account are the same, and here's the EventListener, and I have the URL for it. A while ago it was working, but right now there's a connection socket error, so I cannot trigger it at the moment. But when I post a curl request to it — let's see if I can show you — yes, this is the point where it was working. If I send a POST request to the Tekton EventListener, here's what I get. I also have the components ready — the YAML for the Knative broker and the trigger with the subscriber URL for Tekton. This was me just testing whether it is actually receiving an event and actually triggering something on Tekton. But I'm not really sure, because I tried a lot of changes inside my triggers YAML to figure out what might be wrong. I did try having a simple Task here rather than the TriggerTemplate, instead of a TaskRun, or putting them together; I also tried using a taskRef. But again, I'm not sure why it's not working — although if I'm on the UI and I create a TaskRun, it has the definition there.
It doesn't have it now, because I'm pretty sure something went wrong in a later change, and I'll test it again. But before that, it had the task here, which I had defined inside the taskRef — or inside the template, the template binding at the time. It was there if I went to create it, so it was working from the UI; but as soon as I sent a POST request, nothing would happen.

Okay, so there are a couple of things there. You can check, first, that it's receiving the event in the cluster. You have access to the cluster, right? And to the pods that are running in the cluster for receiving the events?

Yes, yes.

So looking at the logs might be the first thing. The second thing is that you need to send a CloudEvent — not only a plain POST. I don't know if you're sending a CloudEvent.

Yeah, I also tried that, and that wasn't working either. I tried both a simple JSON POST and sending CloudEvents from the Jenkins CloudEvents plugin that we have running. So I also tried that, and it wasn't working either. Let's see, where is our Jenkins? At this stage, for Jenkins, we pretty much have the part where Jenkins sends CloudEvents to an external system, and we tried playing around with Sockeye earlier. That was working; it was receiving events. If I go into Configure, this was the URL for the events at the time when it was working — when it was receiving requests but nothing would really happen. So I did try doing this as well, and it wasn't working either. All of these events are CloudEvents-compliant events, I'm pretty sure.

Where did you get that URL from?

Right now when I'm trying it — okay, let me go back to the URL, I'll just copy-paste it. Right now I'm getting a Service Unavailable, and this might be a port issue.
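For reference, the difference between "only a POST" and a CloudEvent is mostly the ce-* context headers: in the HTTP binary mode, the event metadata travels as headers and the data as the plain request body. A hand-rolled test event might look like this — the URL and the attribute values are placeholders, not from the actual setup:

```shell
# Binary-mode CloudEvent over HTTP. URL and ce-* values are placeholders
# for whatever your broker or listener actually exposes.
curl -v http://broker-ingress.example/events-demo/default \
  -X POST \
  -H "Ce-Id: test-0001" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: org.jenkinsci.job.completed" \
  -H "Ce-Source: /jenkins/test-instance" \
  -H "Content-Type: application/json" \
  -d '{"job": "demo", "result": "SUCCESS"}'
```

A Knative broker will reject requests missing these required attributes (id, source, specversion, type), which is one way a "plain POST" can silently go nowhere.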
I might have another port running, I'm not really sure. But before all of that, when I was sending CloudEvents, I would basically get the same sort of reply from the server as when sending a simple POST. This was the reply I was getting, both for the CloudEvents-compliant event and for the plain POST. But it wasn't triggering the TaskRun.

Yeah, because in order to trigger — so that's the other thing: how did you install Tekton in the cluster? Did you follow the POC instructions?

Yeah. First going to Tekton Pipelines, then Tekton Triggers — and I have the dashboard on — then applying, obviously, what was inside: the service role and bindings and all of that. I do have the controller, but as you pointed out, we are not going to need that here, so we can move away from it. And then, in place of the triggers — after installing whatever resources we are going to need — I replaced them with just a very simple trigger of an event, a very simple trigger of a TaskRun.

But that's the thing you can maybe do: list the triggers with kubectl, and see whether they are applied and whether they have any errors in the resources. That's something I would try first. And I would also check that the broker is running, for example.

Yeah, I think I tried looking at whether the resources inside were running. Even when I had all of those things, I would have the task mentioned here when it was a different format — a Task rather than a taskSpec inside the TriggerTemplate. So I had the task mentioned here.
I had the EventListener, the TriggerBinding, the triggers — but I'm not really sure why I was having trouble. I think what I might do is set it up again, because it's an AWS EKS cluster and there are a couple of other things already running on it. So I might try running it on another cluster, because it could be an issue with, I'm not sure, a cluster role binding or something. So I might try doing that.

In general, if you have the components — if you have the Tekton component that is waiting for CloudEvents — it should work. That shouldn't be a problem you have to solve yourself, and if it's not working, of course, we can reach out to Andrea, or I can try to help debug. In general, if I can access the cluster and take a look at what's running in it, it's much easier than looking at the UI, because if something is failing, you are not going to see the entry in the UI. So it might be interesting to look at that, and also to check that the broker is running — that might be the next step. And if you have the broker running, remember that in Knative you can send events and then create a trigger for Sockeye; that's how we usually debug these applications. If we don't know whether the event is arriving at the broker, we first send a request and expect to get, for example, a 200 in response. That's something I would do. And then, if I don't know whether a trigger or a subscription is working, I create one pointing to Sockeye and just check whether the event arrives there. Right now you're trying the most complex use case, which is a Tekton trigger that creates a Tekton object based on an event. So I would split up the problem: first check that the CloudEvents are being moved around correctly.
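The checks just described can be run straight from kubectl — resource and namespace names here are hypothetical stand-ins for whatever the actual setup uses:

```shell
# 1. Is the broker up, and what is its ingress URL?
kubectl get broker -n events-demo

# 2. Are the Tekton trigger resources applied and healthy?
kubectl get eventlistener,triggertemplate,triggerbinding -n events-demo
kubectl describe eventlistener my-listener -n events-demo   # look at the Ready condition

# 3. Logs of the pod that receives the events (Tekton Triggers labels
#    the listener pods with eventlistener=<name>):
kubectl logs -l eventlistener=my-listener -n events-demo

# 4. Before involving Tekton at all, wire a Sockeye trigger to the broker
#    and watch in the Sockeye UI whether the events arrive.
```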
And then we can basically just create a task definition, trigger it manually, and only then trigger it via the CloudEvent.

Yeah, I think the problem is with the communication to Tekton specifically, because I also have Sockeye on the Kubernetes cluster, and I did first try triggering directly from Jenkins to Sockeye — we are getting events there. I was also doing a cross-check, maybe having a middleware that would route events to Sockeye. One thing we can try is putting the Knative broker in between Jenkins and Sockeye to make sure that part is working. But even if we take Knative away for now, I'm not really sure why Tekton specifically is not working, because, as you said, trigger a definition manually and then try triggering it from either a POST request or a CloudEvent — I did try doing that as well. I tried triggering it manually, both through the command line and on the UI, and then tried doing a POST from the Jenkins CloudEvents plugin, making sure it receives something. But that didn't work. I'm not really sure, but I feel it might be some issue with the triggers YAML, because there are different alpha and beta versions, and within those, the fields and how they define their YAML differ. So I don't know... a lot of, sorry.

Yeah, I think you are on the right track; you just need to spend more time figuring out the details. In Tekton you will see there is a lot of namespace filtering, and the EventListener needs to be configured correctly, plus also the triggers. You need to make sure the triggers and the EventListener are in the same namespace, that they point to the right place, and that you have everything in place for things to connect.
That's why looking at the resources — for example the trigger status, to see that it's okay, and the EventListener status, making sure it's up and running and in the right namespace — is super important. So again, it's about checking your configuration. And something that will definitely help is creating something like the script Andrea created, which installs all of these things, maybe in a kind cluster. You're running on AWS, and that's a pretty complicated environment on its own. So maybe what you can do is just start Jenkins in a container with the CloudEvents plugin — you already have that script to bootstrap Tekton and the Knative broker in a kind cluster. That will let you do it faster and reproduce that very complex environment with a single script. So it might be one idea to look into. If you are on Windows, that's a different thing — it's more complicated — but it can be done in the same way; you just need to create the script from scratch. Building that sense of "I can reproduce this in a local environment" or "I can reproduce this in AWS" will help you a lot to avoid mistakes with namespaces and things like that.

All right. Yeah, that's a good idea. I might go to an EC2 instance or something, because I don't have Linux on my machine or a VM somewhere, and I feel that might be easier.

You can be on Windows and spin up Linux containers on Windows. And to start off, I think you should just run Tekton by itself and play around with EventListeners, without doing anything with the POC in the beginning, because I have a feeling that, because of the POC, some things here and there might be confusing right now.
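A minimal local bootstrap along the lines of that idea might look like the sketch below. It uses the projects' published "latest" release channels; for real use you would pin versions, and the POC's own script may differ in details:

```shell
#!/usr/bin/env bash
# Sketch of a local bootstrap in a kind cluster; pin release versions
# instead of "latest" for anything beyond a quick experiment.
set -euo pipefail

kind create cluster --name events-demo

# Tekton Pipelines + Triggers
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml

# Knative Eventing core + in-memory channel + channel-based broker
kubectl apply -f https://github.com/knative/eventing/releases/latest/download/eventing-crds.yaml
kubectl apply -f https://github.com/knative/eventing/releases/latest/download/eventing-core.yaml
kubectl apply -f https://github.com/knative/eventing/releases/latest/download/in-memory-channel.yaml
kubectl apply -f https://github.com/knative/eventing/releases/latest/download/mt-channel-broker.yaml
```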
So just get started with running a simple TaskRun, and then with an EventListener which triggers something like a basic TaskRun. I think you should start with that on the Tekton side, and the rest of the stuff will just fall into place after that.

Yeah, that's a fair point. And the thing with Tekton is that I feel it's probably the TaskRun definition itself rather than how the EventListener is configured, because the things that could be wrong with the EventListener are: maybe the wrong port has been exposed for the URL I'm using, or it's in a different namespace, or something. I did check all of those things, and it is receiving the request; it doesn't give me an error right away, or an error in general, even when it was running earlier. It's just the triggering of the TaskRun from the EventListener. Which makes me wonder whether the TaskRun definition itself might be incorrect somewhere.

So the thing is, I think there might be a UI bug with the TaskRun and Task stuff, because in a TaskRun you can either define the task inside the TaskRun or reference it, so there might be an issue there with the UI. Try using the CLI for that, because it will be clearer what exactly you're doing. Whenever you do `tkn taskrun ls` or something like that, you can see the status of the TaskRuns, and whether the task has been triggered or not. So probably move to the CLI for now; I think just using the CLI is better at this point.

When you say the dashboard might not reflect it — do you mean there are bugs when we try to push it from external systems, or just bugs in general?

No, just formatting stuff. I mean, I'm not a UI guy as such; I prefer to use the CLI.
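The CLI checks suggested above would be along these lines — `<name>` is of course a placeholder:

```shell
# List recent TaskRuns and their status (Succeeded / Failed / Running):
tkn taskrun list

# Follow the logs of the most recent TaskRun:
tkn taskrun logs --last -f

# The underlying resources, if you prefer raw kubectl:
kubectl get taskruns
kubectl describe taskrun <name>
```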
And in that way — right now, when I saw the UI, I could probably only see the task reference over there; I couldn't see a place where you could put the whole task inline in the TaskRun. So, just to check: are you giving a TaskRun with the task definition inside, or is it referencing a Task that is already out there?

I did three different things. The first was using a TriggerTemplate where, inside it, one of the resource templates is a TaskRun specifying a taskRef, with the Task defined outside of it. As I was saying earlier, with that setup the UI would show it: when I clicked to trigger a task from the UI — just to make sure it's actually there — I would go to Tasks and find that particular task I had defined in the YAML. So it was present. The other thing I tried was having a taskSpec inside the TaskRun, which just defines all the steps inside the TaskRun rather than outside. And I also tried something similar to what the POC has, which defines a Task outside and uses just a trigger — I don't think that one has a TriggerTemplate; it's more just the trigger, which has the pipeline run. Something similar to that. And when I was on the UI, it was working: I did try running those steps, creating a task from the definition in the YAML file. So that worked as well. But it was just—

I have a question. When you ran these tasks individually, without the TriggerTemplate, did they run properly?

They ran even inside the TriggerTemplate as well. When I had them outside, they ran, and when I had them inside, it looked something like — I can share my screen, let me see.
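The first variant described above — a TriggerTemplate whose TaskRun references a separately defined Task — might look roughly like this; the names and the echo step are placeholders, not the actual YAML from the screen share:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: say-hello
spec:
  steps:
    - name: echo
      image: ubuntu
      script: echo "hello world"
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: hello-template
spec:
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      metadata:
        generateName: say-hello-run-     # each trigger firing creates a new TaskRun
      spec:
        taskRef:
          name: say-hello                # reference to the Task above
```

The second variant replaces `taskRef` with an inline `taskSpec` carrying the steps directly, which removes the dependency on the external Task object.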
Basically, inside you have the spec, you have a resource template of kind TaskRun, and then a taskRef defining a reference to that particular task — and the Task is also another CRD inside the same YAML file. Oh, maybe I need to pull the components out into different YAML files, because right now my TriggerTemplate, my EventListener, and my TriggerBinding are all in the same file. I don't know why that would matter, but it's maybe worth trying, in case it's an issue with the different versions.

Everything should be v1beta1; I'm not sure beyond that. Yeah, that's the other thing: task references are something that's in alpha, I guess. That's why in the POC we're enabling that in Tekton specifically with a flag. But yeah. Anyway, I think you're now discussing the specifics, and I would again suggest the same as before: simplify the setup. If you're running in AWS, that's already complicated for us, just to keep track of things and to help you divide the problem. And also, from the UI perspective, sometimes the UI will not show you things that are available in the resources themselves, so you definitely need to get used to using kubectl to see the details and figure out what's failing. But again, if you hit any of those issues, let's create a thread in Slack, and we can definitely help you — or at least try to guide you if you get stuck for more than a couple of days.

Yes, that sounds good, because right now this is definitely not triggering anything. So that sounds good, thank you. And one more question, more specific to the entire POC: it would be about enabling that sort of infrastructure where agnostic sink and source bindings can work.
So, you mentioned that with Keptn you had to create systems as a translation layer for moving CloudEvents between different systems. How do you see that agnostic binding working when we have Jenkins and we really want to do a lot of filtering, and also triggering based on specific parameters and so on? I don't know, Mauricio — this is in relation to filtering the CloudEvents as well. We were talking the other day about doing matching on CloudEvents: you give a word and it finds a substring in the CloudEvent, through a match filter or something along those lines, similar to what's in Sockeye. In Tekton Triggers they have the CEL interceptor — the Common Expression Language interceptor. Do you have an idea whether we could do something similar in Java in this CloudEvents scenario?

Okay, so in general the infrastructure will allow you to filter events, right? Tekton has filters, and so does Sockeye, in the UI. But usually, in order to filter things — as soon as you can parse the event, you can do whatever you want with it. So I'm wondering what the specific question is. You have certain filters in Knative and certain filters in Tekton, and then you can write your own filters in the CloudEvents plugin in Jenkins, in Java, because you can parse the event and filter it any way you want.

Yeah, but the filter is something the user will give in the configuration. So is there some library or something we can use for filtering these events? Because in the CEL interceptor for Tekton, you can basically do something like header-dot-something — say, header.artifact_name equals something. I don't know, I'm not being very articulate here.
I'm not articulating well here, but you understand what I mean.

I think I do, but I don't know if there is any specific library for that — any helper that will let you do it in Java itself, besides the CloudEvents SDK. You're using the Java SDK, right?

Yeah.

No, I don't think there is anything in there to easily filter CloudEvents by just providing a regular expression or something like that. I think we would just need to do that in Java for now. It might be a good feature request for the SDK.

But it doesn't have to be CloudEvents-specific as such, because at the end of the day we can read these CloudEvents as JSON objects, right? So something to parse JSON objects, where the user can give a string that helps us parse it.

Yep. Yeah.

And also super complicated JSON, with JSON arrays inside objects and other JSON objects inside the arrays — the event body can be complicated if you're talking about CloudEvents, but also just JSON in general. If something like that existed, I feel it would be easier for the user to specify, and definitely easier for us to get that information parsed from the event body. I think that's the trickier part, because the headers are straightforward.

I wouldn't worry too much about parsing the body, because no framework is providing those filters. If you really want to parse the body, then you just need to use something like a JSON framework and parse it using a path. So I wouldn't worry too much about that for now, because, while that sounds like something people will use — and probably at some point they will require it — no framework is actually providing it right now.
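In the absence of SDK support, a first cut in plain Java could stay at the metadata level and let the user supply a regular expression per context attribute. Everything here — class and method names — is just an illustration, not an existing plugin or SDK API:

```java
import java.util.Map;
import java.util.regex.Pattern;

/**
 * Minimal sketch of attribute-based CloudEvent filtering, assuming the
 * plugin can expose context attributes (type, source, subject, ...) as
 * strings. Names are illustrative, not part of any existing Jenkins API.
 */
public class CloudEventAttributeFilter {

    /** True when the attribute value matches the user-supplied regex. */
    public static boolean matches(String attributeValue, String userPattern) {
        if (attributeValue == null) {
            return false;
        }
        return Pattern.compile(userPattern).matcher(attributeValue).find();
    }

    /** Apply a set of attribute-to-pattern filters; all must match. */
    public static boolean matchesAll(Map<String, String> attributes,
                                     Map<String, String> filters) {
        return filters.entrySet().stream()
                .allMatch(f -> matches(attributes.get(f.getKey()), f.getValue()));
    }

    public static void main(String[] args) {
        Map<String, String> attrs = Map.of(
                "type", "dev.tekton.event.taskrun.successful",
                "source", "/apis/tekton.dev/namespaces/default");
        // Filter on the "type" attribute only:
        System.out.println(matchesAll(attrs, Map.of("type", "taskrun"))); // true
    }
}
```

This mirrors what Knative trigger filters and the CEL interceptor do on attributes, without touching the event body.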
So I might push to keep that out of scope. From the demos, I know we sometimes filter based on the metadata — the CloudEvent type, or the source, or metadata-level attributes — but we don't go into the body and filter there, because that's usually a much more complicated thing, and it will not work for all the transports in CloudEvents. Making sure you support that for every transport makes it much more complicated.

Yeah. Data coming in from different sources is going to be very different, and the structure probably isn't going to be the same, except for the actual CloudEvent attributes.

Exactly. Folks, I need to drop to another meeting in a second. But I definitely see a lot of progress here, and now you are trying to tackle much more complicated issues, and I think that's great. So feel free to open a thread and mention me in Slack, and I will try to help unblock you if you keep hitting these issues with the installation, or even with just sending CloudEvents to Tekton — that should work, it shouldn't be such a problem.

Thank you so much, Mauricio, and everyone else, for the time. Appreciate it.

Yes, thank you all for being here.

Great. Thank you Viva, thank you Kera for organizing. See you soon. Ping me in Slack if you need me, please.

It's okay, it's okay — nothing, I'm doing nothing here.

No, that's good, that's good. Yeah, thank you so much, Kera. I'll see you on Slack. Take care, bye bye.

Bye.