Would you prefer that I turn the recording off? No, I thought you were going to do the introduction. Oh, okay. Welcome to today's GSoC mentor meeting for the Jenkins CloudEvents plugin. We are speaking about Shruti's work in the last week and moving the plugin towards a first release version. So I did review the pull request and got it merged, and the one thing I wanted to say was that the README would be nicer with a TOC, a table of contents. And that's it, and I think we can move on to doing the first release. This week we can catch up on Wednesday or even tomorrow to do that. Yeah. So, what are your next steps for the plugin? Also, what was that about the README? Did you say TOC? Table of contents. Okay. Okay, I'm sorry. So, the past week. We talked with the Events SIG team just to get a better understanding of how they have been developing their events, and that covers the Keptn and Tekton sort of integration. And I was able to clear some things up in terms of how we can design a sink which is agnostic and can also deal with different kinds of events, in different structures, coming in from different sources. If we can talk a bit about that, so we have a better understanding of the infrastructure that can be in place for the stage where Jenkins acts as a sink, I think that will be helpful in moving forward and thinking through everything that's needed for Jenkins as a sink. Last time, Kara and Ivy were also discussing designing the sink as an actually cloud-native sink, one which can deal with faults, or just network and transient failures, and can also deal with retries. Retries are also applicable to Jenkins as a source, but there it's something that can be accomplished in an easier manner. For Jenkins as a sink, though: when we were talking with the Events SIG team, they specifically had sort of a middleware, I'd say, which deals with events coming in from Tekton and Keptn.
So they had a Keptn inbound and a Tekton inbound, and an outbound for both of them. What they're doing is basically transforming the cloud events going out, and then they have a middleware in between, a CloudEvents broker, which deals with receiving and sending the events. If our sink is purely an HTTP request-response system, like if we're receiving events only in that manner, only as a POST request to the endpoint where it's exposed, I think that's really going to tightly couple things to the Jenkins server or Jenkins node, whatever is dealing with the events, always being available. And then it also leads to the possibility of losing events, some of which might be crucial. So I was thinking maybe we can start out by testing with Knative, because that product is actually something that might help us. But I'm also not really sure, if we have Knative integrated with this sink, how that can work. They're two different tools, but also, when someone is configuring Jenkins as a sink, do we want to spin up Knative, so we can configure it as infrastructure-as-code or something similar? But I was actually curious to hear your opinions on the Jenkins-as-a-sink infrastructure specifically: going with a pub/sub-like infrastructure, going with a different protocol for Jenkins as both a source and a sink. For example, we could introduce Kafka both for Jenkins as a source and as a sink, because CloudEvents does support a protocol binding for Kafka. And if we want to go with a queuing infrastructure, RabbitMQ would support the kind of system that we might want to build: a fault-tolerant, sort of transient-failure-tolerant system.
So that's a question I want to put out there: do you guys have an opinion on that, or what do you think about it? So I think we can try implementing retries probably with HTTP, but if you want to start playing around with the Kafka protocol binding for CloudEvents, what you could start doing is just set up Kafka. I haven't worked much with Kafka, but you could just set up the infrastructure for now and start playing around with it. Along the way you will see problems come up, some things which could be better. But in terms of implementing pub/sub and everything from the Jenkins side directly, I'm not so sure about that. It probably would be good if all of that stuff, even retries and so on, were actually managed by some kind of eventing middleware like Kafka. Yeah, and for Knative, that's what the broker is doing. So, you know, you have events coming in from Tekton, using sort of a Tekton outbound, sending a specific kind of event which Knative is expecting over to the event broker, and the event broker, the CloudEvents broker, is the one implementing the retries and also implementing sort of an asynchronous way of handling all of the events and messages coming over. So with Jenkins as a sink, that's what my meeting with them centered on, because I remember them saying that, since all the different sources have different data, different ways they have structured their cloud events, I don't remember them saying that any specific solution for designing an agnostic sink has worked, because everything is different.
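As a rough illustration of the "retries with HTTP" idea mentioned above, here is a minimal sketch of retrying a delivery with exponential backoff. This is not code from the plugin; the class and method names are invented for illustration.

```java
import java.util.concurrent.Callable;

// Hypothetical sketch: retry a delivery attempt with exponential backoff.
// Names are made up for illustration, not taken from the actual plugin.
public class RetryingDelivery {

    /** Runs {@code attempt} up to {@code maxAttempts} times, doubling the delay after each failure. */
    public static <T> T withRetries(Callable<T> attempt, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int i = 1; i <= maxAttempts; i++) {
            try {
                return attempt.call();
            } catch (Exception e) {
                last = e;                     // remember the failure
                if (i == maxAttempts) break;  // out of attempts, give up
                Thread.sleep(delay);          // back off before the next try
                delay *= 2;                   // exponential backoff
            }
        }
        throw last;                           // surface the last error
    }
}
```

As the discussion notes, in the Knative setup this responsibility would sit in the broker rather than in Jenkins itself; the sketch only shows what a plain-HTTP fallback could look like.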
So when we're thinking of implementing a generic sink, we might want to think about having that middleware which can handle both the different protocols and the different structures of data coming in, and then shape it into the form Jenkins should receive. I am not really sure if that can work, but I feel it might be helpful. Sure. From my understanding, and you can correct me if I'm wrong, the way they're using the broker, this middleware, is what enables the sink to be agnostic to the underlying technologies. It provides essentially a translation layer, depending on whether you're doing RabbitMQ or Kafka or whatever underneath. Yeah, I think so. For them, what I understood was that the responsibility of creating events which are understood by the sender and the receiver lies with both the source and the sink. I can actually go back and pull out the POC that they have. And also, there's a Pub/Sub Lite plugin for Jenkins, for the Lite version of Pub/Sub. Not necessarily something that will be helpful, but I've looked into it and it definitely looks like something we could use. If you think we can use the existing logic in that Pub/Sub plugin you're talking about, could you send a link to it? Yeah, it's over in the GSoC channel. Is it the Pub/Sub Lite plugin? Yes. Okay. So I will say, if you could use the existing logic in the Pub/Sub Lite plugin without having to write something new, then maybe it's a good idea to experiment with it. But initially, why don't we try playing around with those Knative brokers and see how that works? Yeah, that sounds like a good idea.
And also, another thing: if we do want to support just Knative, where or how should it live, like as a part of the plugin? When we try this with Knative, does it mean that we support only Knative? Because I think cloud events are agnostic that way. Anything that reads cloud events will read a cloud event, right? Whether or not it came through Knative. So even the broker, I think, will just route the events from anything; it doesn't necessarily have to be Knative handling the cloud events. Yeah, so the Knative broker specifically works with cloud events. Other tools can work with other structured event data, but I think the Knative broker itself only works with cloud events. But if we want to substitute it with something else — so, can you guys see my screen? In the POC they had, there were the Tekton cloud events, which was sort of that Tekton outbound, sending cloud events over, then the broker receives them, and then they have a trigger. That trigger is basically what we were mentioning about setting filters, on the ce-type. What the trigger does is apply a filter: okay, if any event comes in with a Tekton event type, then send it over to this Keptn inbound. And the Keptn inbound is what transforms it to be usable by Keptn. So, as Kara was saying, in a way that CloudEvents broker is agnostic, because it only deals with the triggers, which are looking for a particular type of event. But both of these services, Tekton and Keptn, have their inbounds and outbounds, which deal with that conversion of events into a format that can be used by the other.
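A Knative Trigger along the lines just described might look roughly like this. The broker name, event type, and subscriber service name below are illustrative assumptions, not values taken from the actual PoC.

```yaml
# Hypothetical sketch of a Knative Trigger that filters on the CloudEvents
# "type" attribute and forwards matching events to a "keptn-inbound" service.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: tekton-to-keptn
spec:
  broker: default
  filter:
    attributes:
      # match only events whose ce-type says a Tekton pipeline run succeeded
      type: dev.tekton.event.pipelinerun.successful.v1
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: keptn-inbound
```

The point made in the discussion is visible here: the broker itself stays agnostic, and only the filter and the subscriber encode anything source- or sink-specific.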
So Tekton has this outbound, Keptn has this inbound, and what this does is basically transform the events coming from Tekton into something that Keptn can use and understand. So that was one thing. The thing about Knative which is really interesting is that it works with cloud events as-is. So a person can set triggers for the Knative service, which will be running alongside, or maybe as a part of, our plugin. It's sort of that interoperability between different open source tools operating together. So the user would have to set up these triggers, but those triggers won't be inside Jenkins as a sink; they would be inside the Knative service running as part of this setup. The only thing, again, is these inbounds and outbounds: those are specific, like the direct interoperability system you mentioned, so you're again having those agents and that kind of system. It will still be indirect, though — in our case, with cloud events, it will still be indirect interoperability. Because here's what will happen: imagine extending this CloudEvents broker diagram by one more column, okay? You extend it to the right, you copy-paste this Keptn part to the right, and just replace Keptn with Jenkins. It sounds easy, but I'll tell you what it will look like. The Keptn bridge and the Tekton dashboard are the front end. The CLI is there — forget about the CLI — but that part will be the Jenkins front end, Jenkins on port 8080, from where we can check the Jenkins UI, okay? Keptn inbound — so, Tekton triggers are basically webhooks, okay?
And these are event listeners which Tekton can use; they are created on the Tekton side and then consumed by the broker, where a trigger object is made by Knative, and this trigger object is created with the webhook URL given by Tekton, okay? So that's what is triggered. Same thing with the Keptn inbound. So these two are technically the same, because they are both webhooks, okay? Now, here, Tekton cloud events — so, Andrea actually did most of the work on this, but I did work on it a little bit — the cloud events, what do you call it, controller for Tekton. What does the cloud events controller for Tekton do? It just sends cloud events. You configure a sink where you want to send cloud events, and the cloud events are sent to that place. That's all that's happening. So all the events are just sent from Tekton to the CloudEvents broker over here, and the trigger is the one actually doing all the work. Now, to translate this into Jenkins: the Keptn inbound and outbound, the Tekton triggers, the Tekton cloud events — inbound and outbound can be translated to source and sink directly, which the cloud events plugin will handle. Right now, what we are done with is the outbound, right? We are done with the outbound and we are kind of working on the inbound stuff, trying to make the inbound a little better. The inbound, which is Jenkins as a sink, is the stuff we still have to work on and improve. That is our next step in this process.
But for now, what we can do, and I think it will be a good exercise as well, is just play with the same architecture. And forget about this CloudEvents Player — frankly, I think it's just a UI from which you can see events. Yes, it's similar to Sockeye. Okay, so it's basically a more sophisticated Sockeye; the CloudEvents Player lets you view and monitor cloud events. So what we can do as a good exercise is replicate this entire architecture for Jenkins: we remove Keptn from the picture and slide in Jenkins. Obviously we won't be able to do any inbounds right now, but what we can do is outbounds. So we'll start a Jenkins job, a simple job, and then we'll do an outbound, we'll send an event. You will have to configure a Knative trigger to say that once this Jenkins type of event comes in, trigger a Tekton pipeline which does something. Okay, so I think this would be a really good experiment you can do next week. In this way, you can probably get an idea of what the POC will look like, and you can also showcase to the Events SIG that you have done a POC of Jenkins interoperability with Tekton. And if that works with Tekton, it'll obviously work with Keptn as well, because you'd create a similar trigger; the only things you'd change in that trigger are the webhook it's calling and the payload. So I think that's a good next step you could take. But I don't know if I'm just blabbering or if I've answered any questions — is there something I'm missing or haven't answered yet? No, it does make sense. And I am following, because you have worked on and have insight into this system. It does make sense.
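For the "Jenkins sends an event outbound" part of this experiment, the event would typically travel over HTTP in the CloudEvents binary content mode, where the event attributes ride as ce-* headers and the data is the request body. Here is a small sketch of building those headers; the attribute values used in it are made-up examples, not the plugin's real event types.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: assemble the HTTP headers for a CloudEvent in
// "binary" content mode, per the CloudEvents HTTP protocol binding.
public class BinaryModeHeaders {

    public static Map<String, String> ceHeaders(String id, String type, String source) {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("ce-specversion", "1.0");            // CloudEvents spec version
        h.put("ce-id", id);                        // unique event id
        h.put("ce-type", type);                    // e.g. a Jenkins job-completion type
        h.put("ce-source", source);                // URI identifying the producer
        h.put("Content-Type", "application/json"); // format of the data payload
        return h;
    }
}
```

A Knative trigger on the other side would then filter on the `ce-type` value and hand matching events to the Tekton webhook.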
The only thing that I'm looking at right now is that, as I remember them mentioning, the inbound and the outbound for both Tekton and Keptn also do a bit of manipulation to adjust the event on its way over. So yes, it's a very good step, and we can try with both Tekton and Keptn, having both of them sending or receiving events. For now we can just have them as inbounds; we don't need to take the outbound from them. The setup should be fairly simple, but the part where you will have to do some learning would be the Tekton triggers and the CRDs and such that we are using. I can help you set it up if you want, and we can do a POC with Jenkins and Tekton. Yeah, that sounds like a really good idea. Oh yes, I was going to say that if we do decide in the future to move forward with this architecture, I was also wondering, again going back to the earlier question, how we can enable this thing in the middle, the Knative system, from the cloud events plugin for Jenkins. I'm just thinking about whether there needs to be a script that can also start this architecture, but that would need its own piece. Well, this is a fairly decoupled system, so you actually don't have to think about making this a dependency. Is that what you were thinking, that this CloudEvents broker would become a dependency of the Jenkins cloud events plugin? No, I'm not thinking of it necessarily as a dependency, but more like, if we need Knative as sort of a service from inside the Jenkins plugin, as something that always needs to be there. Like, yeah, maybe we can call it a dependency, but it obviously can work without it.
You don't always need Knative, but if someone happens to start this system running, they wouldn't want to go back and set all of this up on Knative themselves; they just need a system which is asynchronous and which, similar to Knative, can handle all of this — have that CloudEvents broker. And if we end up using, after the PoC, some other pub/sub mechanism or some queue, we might still have to enforce a structure or format like cloud events, because we don't have the trigger inside of Jenkins — same as for a source, right? We only have that inside Knative, for the filtering and then sending it over. Because the idea here — well, we'll do one more example after this. Once you are able to achieve this exercise where you replace Keptn with Jenkins and test with Jenkins outbound and inbound, we'll do one more exercise where we replace this middle bit, the CloudEvents broker, with something else. We could probably discuss that next time; I'll try to figure something out, but maybe we replace it with Apache Kafka, say, and see how we can make that work. Because the point of indirect interoperability is that it should all be fairly decoupled, and you're just using data packets to tell each other what's going on, and your focus is that. From what you're saying right now, it seems that you are considering this CloudEvents broker as possibly a hard dependency at some point, but that won't be the case. At that point, when we do the second exercise, we'll also get a better idea of the event filtering, if we want to do any on our side — what that should look like, what's important, or even just some simple filtering, maybe using regex or something. We could do that on our side. But I think it's a good idea to do some test architectures, and in this case, this is actually a very good idea.
We just replace Keptn with Jenkins on our first go, and on the second go, we replace the middle bit, the Knative bit, with something else, like Apache Kafka, a different broker. Actually — I just checked some documentation — Knative has its own Kafka broker as well. Maybe, instead of completely replacing it, you could first extend the setup to use the Kafka broker, to send things to Kafka initially, and then after that remove it completely and connect Jenkins to Kafka to Tekton, or something like that. I'm not sure how well that will work; I don't have much experience with Kafka. But initially I think it's a good idea to go with just the Jenkins replacement, then extend the CloudEvents broker setup to the Kafka broker so that you can publish to some Kafka topics — sorry, my wording was not on point. And then we can do an experiment where we remove the Knative-based Kafka broker and substitute a normal Kafka broker. Okay. So, we touched on giving users the ability to choose the protocol binding. Do you guys think the user should enter the protocol binding, or that when a person selects a particular kind of broker it just automatically gets selected? I think that's the easier way. I think that's the better idea. Yeah, it should be fairly easy, because you can have something like "protocol binding" and then in a dropdown they can select Kafka or HTTP or whatever. And then when you're creating it, it should be fairly easy — a switch case should be simple enough. Yeah, and the UI right now does support a sink type; it has that HTTP sink and "other sink". So we just need to go into "other sink" and implement it. So, my computer's battery is not really super amazing, and I'm starting my IDE now.
My IDE, IntelliJ, might crash, but I'm going to try and show you around. That's all right. Yeah, so, like you suggested, having an abstract sink, with different kinds of sinks extending from it and building on top of it — I think that can be really helpful here, because we have an HTTP sink which extends from the abstract cloud events sink, and then we just have different kinds of sinks for receiving events. So we might also not need — I'm actually wondering whether abstract makes a lot of sense here, because the final product is going to be a JSON object, right, a cloud events object? The final product is a cloud events object, right? So I'm just thinking, instead of doing the abstract thing, would it be better to have a sort of switch case which manages all this? Because I don't want you to waste too much time on the UI. I think, in terms of making different sink classes, that way we can separate the kinds of sink. So the UI has a sink type, right? A person can pick a sink type, and what this can also do is separate the UI sink type from the implementation sink type, so the two are kept separate. And what this will really need is just to look at the place where we are sending events: if the sink type is HTTP, use the HTTP sink class to send events; if it's a different format, the new stuff, we can use a switch case there and then just go into the classes. Because we'd still have to do the design — the cloud event is going to look different. I can try making either one, whichever makes more sense, and we can go forward. Yeah. I just think that right now I have a gap in knowledge, because I'm not sure how the — I was just thinking, do I really even know the protocol binding and what it means for Kafka?
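The switch-case dispatch discussed above could be sketched roughly like this. The interface and class names are invented for illustration; the plugin's real class hierarchy may differ.

```java
// Hypothetical sketch: pick a sink implementation from the UI-level
// "sink type" string, keeping the UI choice separate from the classes
// that actually deliver events.
interface CloudEventSink {
    String deliver(String eventJson);
}

class HttpSink implements CloudEventSink {
    public String deliver(String eventJson) { return "http:" + eventJson; }
}

class KafkaSink implements CloudEventSink {
    public String deliver(String eventJson) { return "kafka:" + eventJson; }
}

public class SinkDispatch {
    /** Maps the UI-level sink type onto an implementation class. */
    public static CloudEventSink forType(String sinkType) {
        switch (sinkType.toUpperCase()) {
            case "HTTP":  return new HttpSink();
            case "KAFKA": return new KafkaSink();
            default: throw new IllegalArgumentException("unknown sink type: " + sinkType);
        }
    }
}
```

This keeps the abstract-base idea (a common `CloudEventSink` contract) while using the switch only at the single point where the UI selection is resolved, which is the trade-off the discussion circles around.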
I would have to check before I say anything, to tell you the truth. But what we can do is sync sometime this week once you gain enough info on what that is, and on my side I'll also check it out. I think, to start off with the Kafka bits, we can just use the Kafka broker which Knative provides, and start with that before moving on directly to plain Apache Kafka. Sounds good. We may try this — the whole thing with the power issues and the power cuts here has just really derailed what I had planned. I should have been further along. That's all right, it's fine. The power cuts must be very erratic; I'm not sure if it's the same problem in Punjab that's happening in UP, but... People here are just lazy. There were four people who were fired over this yesterday, in the power department, because they weren't able to — yeah, we had a cut for 18 hours straight. It went out at like 12 p.m. and finally came back at like 5 a.m. That was one day, and then kind of the same thing happened again, because they weren't able to find what had gone wrong in that 18-or-whatever-hour cut. So they had to fire four people. That's just this state. Yeah, firing people is not gonna help. But, Yogi, right? I can't say much about that. I hope you just get power on a more regular basis. Anyway — when are you free this weekend? Or you can actually just ping me; I'm pretty much free this entire week. Just ping me when you're free and we can catch up on the release bits, and after that, if you want to sync on the architecture bits, I can help you with that as well. Yeah, that sounds really helpful. Thank you very much, everyone. Side note: you've already been able to deploy Sockeye, so you're kind of halfway there with setting everything up.
I don't know what else might be needed, but all the CRDs you need for Knative you were able to install in Minikube. The next thing you can do is figure out what triggers are, how to set up triggers and all those little bits. After that, you'll have to figure out running Tekton locally on your Minikube and using the Cloud Events controller for it — I don't know if you need that — and setting up the triggers, which you will definitely need. The TriggerTemplates, TriggerBindings, and whatever resources are there in the triggers controller — you'll have to learn a little about them, then create the event listeners, give them to the cloud event triggers, and set Jenkins as a source on one of the brokers, I think. So we can talk about that sometime this week; just let me know whenever you're free. I'm pretty much free all week — again, just ping me, let me know. And have you also worked with Keptn, so we can try that out too, just to make sure these things are fairly similar across different systems? Not exactly. I did deploy it, but I couldn't get to running things with it. I just got a little confused with the abstractions and the wording they used for certain things. But I will probably have to check it out at some point, and we can just have a hacking session figuring out what Keptn is about, because as far as I know, they really embrace cloud events, try to work with them as much as possible, and use them as their main medium of communication. Kara, do let me know if I'm going wrong here, because I feel you know a lot more about Keptn than I do. I don't actually have much of an idea about it. I was thinking that Mauritius would be a really good go-to person to ask.
I think in his new work he will be working with it and focused on it, so I feel that of all of us he probably has the most context and knowledge. Yes, yes. So maybe in the next meeting, what we could do is have a Keptn session, after the hacking session maybe, and whatever questions you might have about Keptn, we can ask Mauritius when he comes to the next meeting. We can make it a point to call him in for the next meeting for the Keptn discussion. I'm not sure, but do we have an Events SIG meeting today? It says August 16th. Let me check that. Yes, it's August 16th, the next one. Oh. Have they changed to a monthly format? I don't know; I'm just going to guess that it's because of the summer, but I'm not sure. You might know more about it. I think that is exactly correct, it's due to the summer; we did the same thing with the interoperability segment. So, I have also started with Jenkins as the sink. Again, the one thing is setting up the triggers, I would say, because we have been talking in terms of triggers for Knative. So, triggers. And that's why, when I was mentioning that dependency on Knative, one thing was leveraging the triggers that Knative has. Because, yes, it would be easy to implement them inside of Jenkins, but we are already working with another system that is natively made to trigger and filter based on cloud events metadata. I think my question here would be: what should be our dependence on using, or designing, any form of triggers inside Jenkins as a sink? And by triggers I mean, more specifically, both filters and triggers. Specifically cloud event triggers, though. Yeah, that would be Jenkins as a sink, right? Mm-hmm. So, yeah. Jenkins as a sink, yeah.
Yeah, so probably in that one sink, what we can do is have an object called triggers, which the user can set up. So if, in the sink, you say you want to create three triggers with three different things: the first one would be something like start job one, the second start job two, the third start job three. And I think before that we'll have to kind of redo the UI, move it to Manage Jenkins, and do all that sort of stuff. But yeah, we can start off with that. When we redo it, while configuring the sink, we can have these boxes for triggers, and each trigger will have a particular project endpoint, or build a job, or run a Jenkins pipeline — any of those things. Yeah. So yes, to work on that: I would say most of it has been moved out, but it's not very clean, because I was just rushing through it. So I would still move probably everything over and then design the idea of Jenkins as a sink specifically aligned with triggers and filters. Yeah. I feel like there's a lot of stuff that we need to do with Jenkins as a sink. It's probably a better idea to do it over time, as we understand the architecture base itself and how the sink would be used; getting started with the architecture with the CloudEvents broker and everything is a good place to start. Once we start understanding the one flow, which is Jenkins to Tekton, then we can start understanding the other, Tekton to Jenkins, and what that would look like.
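The per-sink "triggers" object described above could be sketched like this: each trigger pairs a simple attribute filter with the Jenkins job it should start. The field names and the first-match rule are assumptions made for illustration, not a real plugin API.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of user-configured triggers inside the sink:
// match an incoming event's "type" attribute against each trigger's
// filter and return the Jenkins job to start.
public class SinkTriggers {

    public static class Trigger {
        final String typeFilter;  // required CloudEvents "type" value
        final String jobName;     // Jenkins job to start on a match
        public Trigger(String typeFilter, String jobName) {
            this.typeFilter = typeFilter;
            this.jobName = jobName;
        }
    }

    /** Returns the job for the first trigger whose filter matches, if any. */
    public static Optional<String> jobFor(List<Trigger> triggers, Map<String, String> attrs) {
        return triggers.stream()
                .filter(t -> t.typeFilter.equals(attrs.get("type")))
                .map(t -> t.jobName)
                .findFirst();
    }
}
```

The later discussion about filtering headers and body separately would extend the `Trigger` class with more filter fields; this sketch shows only the minimal type-based case.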
And we can get some ideas from — there was a plugin, I think the events plugin or something, which was mentioned in the initial design document — and we can get some help from that on how to design that part, or whether we can just take the implementation they've already done and refactor it for cloud events. I think that would be a good step ahead as well. Yeah, that sounds good. And most of the triggers for this, with the Knative broker, are usually set on the header fields — or not the headers exactly, I would say the metadata about the event. So do we still want to have only that structure? Or, in this introductory, starting-out phase, do we want to keep the filtering option for the body, the data of the event, as well? Yeah, we should allow the user to filter everything. Field by field: we should allow them to filter each field individually with a different filter, each header with a different filter, and then the body itself with a different filter, like "the body contains this, this, this". I think we should allow the user to do that. I'm sorry, go ahead. Yeah, because if we just allow only a few kinds of filters, like only the header or some parts of the body — just the header would be good to start off with, but we should allow them to do everything. I don't think it would be that hard to do the body as well. So, I don't know, yeah. So, if you guys can see my screen right now: we're still inside the POC, and we're looking at the design of the triggers for Tekton, the Tekton outbound — the events which are going out from Tekton. And, you know, they have, let's see, the bindings; they have a header.match on the type, and this is the ce-type saying that an artifact was published.
You know, they're doing things with expressions — replacing the registry and so on — and then they have these bindings, where they have the name, like shkeptncontext, bound to body.shkeptncontext. So that's a binding against the event data, whatever is present inside the event body. Obviously, they are aware of the kind of events being triggered from Keptn — the entire body and the entire type. So I think we can take some reference from this, but for us it's going to look very different, because we can't just give users bindings or filters like "okay, this is going to be inside the body, so body-dot-whatever" — we don't know where the event is coming from. That's one thing we have to think about: how we can let the user be very modular with whatever filters they set on the body. Our sink is agnostic: it knows how to parse the body, but it doesn't know how to find a particular piece inside it. That's one thing I've been thinking about. Yeah, if you remember, we discussed this last time with the CEL interceptor. So if you go back to the YAML for the triggers, and back to the filter that was there, I can show you. What's happening is that they're using an interceptor based on a common expression language, which we saw last time — the language they're using there, with the header.match something-something, is the Common Expression Language, CEL. I don't think they have a Java library we could use, but it is definitely an idea we can go ahead with.
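The "body-dot-whatever" problem — letting the user point at a field inside a payload the sink knows nothing about — can be handled with a small dotted-path lookup, so the sink stays agnostic. A sketch; the path syntax and the field names below are made up for illustration:

```python
# Illustrative dotted-path lookup, so a user can write "body.shkeptncontext"
# without the sink hard-coding any knowledge of the payload's shape.

def lookup(event, path):
    """Resolve a user-supplied dotted path like 'body.data.project' against
    a parsed event; return None if any segment is missing."""
    node = event
    for segment in path.split("."):
        if not isinstance(node, dict) or segment not in node:
            return None
        node = node[segment]
    return node

event = {"body": {"shkeptncontext": "abc-123", "data": {"project": "sockshop"}}}
print(lookup(event, "body.shkeptncontext"))  # abc-123
print(lookup(event, "body.data.project"))    # sockshop
print(lookup(event, "body.missing.field"))   # None
```

The user supplies the path; the sink only walks it, which is what keeps the design source-agnostic.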
So instead of CEL we could use something else for filtering — we can obviously start off with something simple, like plain matching, we don't even need a library — but in due time we can figure out what to use for this sort of data extraction from the cloud events, which users can then use in whatever triggers they set up. So in this space, what's happening is that they're binding certain values to certain variables, to run in their pipelines. What this gives the user is the dynamic nature of the pipelines — changing the variables and running them — and the changing-the-variables part is done entirely by the triggers themselves, so the DevOps person, or whoever it is, doesn't have to think about it. This is definitely a feature we should have at some point, once we're done with the initial implementation. Once we reach the trigger stage and we're able to do matching, at that point we should figure out what we need for these values. Yeah — I don't know if I'm understanding correctly, but the interceptor they're using is more specifically for the matching, while the bindings are what we were talking about: extracting, say, the job name from the event body that's coming to Jenkins. So Tekton would be saying, okay, I have completed this artifact, and now I want to start a job with this name. That would be something like body.jenkinsJobName, which would hold the Jenkins job name, and then we would trigger a job using that particular variable. So the interceptor — I mean, yes, we can use it, and obviously it would be applied to the event body, but the bindings are about the modularity of extracting specific values.
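The binding idea above — pulling values out of the event and handing them to the job as named variables, analogous to Tekton's TriggerBindings — might look like this. All names are hypothetical; `body.jenkinsJobName` is just the example field from the discussion, not an agreed field name:

```python
# Illustrative "binding" step: map user-chosen variable names to dotted
# paths into the event, then resolve them before starting the job.

def lookup(event, path):
    node = event
    for segment in path.split("."):
        if not isinstance(node, dict) or segment not in node:
            return None
        node = node[segment]
    return node

def resolve_bindings(event, bindings):
    """bindings: {variable_name: dotted_path} -> {variable_name: value}"""
    return {name: lookup(event, path) for name, path in bindings.items()}

event = {"body": {"jenkinsJobName": "deploy-service", "artifact": "svc:1.2.0"}}
bindings = {"job": "body.jenkinsJobName", "tag": "body.artifact"}

print(resolve_bindings(event, bindings))
# {'job': 'deploy-service', 'tag': 'svc:1.2.0'}
```

The resolved dictionary is what a trigger could pass as parameters when it schedules the Jenkins job, so the "changing the variables" part stays inside the trigger.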
I think that's one thing the system in the POC with Keptn and Tekton — the inbound side — is sort of helping with: they know what's coming over, but we don't. So as we move forward, we'll obviously have to give examples of how to use this plugin with different systems. At that point we'll have to show, for example, what it would look like to use it with Tekton, and there might be an examples directory where we can keep this stuff — we should do that. And if you notice here, this part of the eventing is non-dynamic: they know where certain things are, like the trigger ID, the context and so on. These are things we need to be very sure of. That's why the format we're using for the cloud event also matters a lot. But I think we can take this conversation further at a later date. Right now we're just getting acquainted with this. I know there are ideas flowing, but slowly, as you get used to these ideas, you'll figure out the right path. So it's good that this is simmering in your head right now. Once you start working with the architectures and everything, you'll get a better idea of what things should look like, because you'll have seen enough to know what needs to be done. Right now you're looking at these Tekton triggers; you'll work on setting them up; at some point you'll figure out, okay, Keptn inbound and outbound, this is how it works. Then you'll get a better idea of how the CloudEvents plugin side of things should look. And if users here are using these concepts, once they switch over to something else they'll have the same concepts in their head and will be able to work with them more easily. It'll be easier for us too. Yeah, that's a good idea.
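On "the format of the cloud event matters a lot": the CloudEvents 1.0 specification fixes a small set of required context attributes — id, source, specversion, and type — which is what makes the non-dynamic parts predictable regardless of the producer. A minimal sketch of building such an envelope (the type, source, and payload below are made-up examples):

```python
# Minimal CloudEvents-style envelope. Per the CloudEvents 1.0 spec,
# id, source, specversion and type are required context attributes;
# the concrete values below are invented for illustration.

import json
import uuid

def make_cloud_event(event_type, source, data):
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),           # unique per event
        "source": source,                  # URI-reference identifying the producer
        "type": event_type,                # reverse-DNS style event type
        "datacontenttype": "application/json",
        "data": data,
    }

event = make_cloud_event("org.example.build.finished",
                         "/jenkins/job/demo", {"result": "SUCCESS"})
print(json.dumps(event, indent=2))
```

Because those four attributes are always present, a consumer like the Tekton triggers shown on screen can rely on them even when the `data` payload varies by producer.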
All right, well, thank you, everyone — some people had to go, but thank you all, and thank you, Kara. Let's move forward with the stuff we've talked about today. Let me know if you're free this week and we'll discuss some of it, and I'll be sending over some of the things you need to go forward. Yeah. And congratulations on getting the first part of GSoC done. Congratulations. Thank you. It was, again — a lot of help from you guys, so thank you for all the work you put in and for being amazing. Thank you. Thank you for being amazing and working on all of this. Yeah, you've done fantastic work, Shruti. Thank you so much. Thank you. And we'll continue doing this. Yeah, we surely will. All right, good meeting, guys. I'll be in contact over the week — thank you very much for being here and for your work. Thank you everyone, bye.