Last week there were a few things we worked on. I also met with the team on Wednesday or Thursday, and we agreed that we're going to spend the first weeks getting the plugin ready, which is an important goal: making sure that everything being emitted as CloudEvents, with Jenkins as a source, is the data we need. So one of the first things was going back and making sure that all of the fields we have on the CloudEvents data field are updated, and that they change appropriately between the different listeners — I can show you an example. I wasn't really sure what data we'd need, so I looked into the older plugins that are already there and took the CloudEvents attributes and common headers from them. Let me go back — there were a few things for the queue model. What this is doing is that the queue causes we see here are only set when a job has entered the queue or moved from one stage to another. Going back into the queue cause model itself, I thought it would be easier if some fields are simply not sent along with the data when they're null. For example, I'm ignoring some fields in the job model and including them only if they're not null. The created date is used only when a job is being created, and I'm using a similar model for when a job runs and changes state — job started, job failed, or job completed — so I didn't want the created date, updated date, or build field there. The build field contains the build model, which is information about the build run itself: things like the timestamp when something started or ended, the display name, and the parameters being passed.
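The "drop null fields" idea described here can be sketched roughly like this — a minimal illustration in Python for brevity (the plugin itself is Java, and the field names below are hypothetical, not the plugin's actual model):

```python
import json

def to_event_data(fields: dict) -> str:
    """Serialize a job model to JSON, omitting any fields whose value is None."""
    return json.dumps({k: v for k, v in fields.items() if v is not None})

# A "job started" event carries no createdDate, so the key is dropped entirely:
print(to_event_data({"name": "my-job", "createdDate": None,
                     "build": {"displayName": "#42"}}))
```

In Java/Jackson terms this corresponds to annotating the model so null properties are skipped during serialization, which is the trade-off discussed later in the meeting (omitted key vs. key present with a null value).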
Another thing — I'll give a short demo of the data itself. I also tried running this on Kubernetes. Should I start that up and show the data being sent from Kubernetes, or do you want to see my local setup? Or would checking on Kubernetes be a better idea? — I didn't get your last remark, Shruti. — I said that I have it running on Kubernetes, so instead of showing it from my local Jenkins server, I'm going to show the plugin running on Kubernetes itself, and then we can look at the data and fields being emitted for the different events. Just keep in mind that this is not my local system; it's running on an external server. — Yeah, that would actually be better. That would be very nice. Are you also going to show this demo during your demo on the 19th? — Yeah. For the recording I've done it with both Kubernetes and my local setup, but I'm going to go back and check which one is better in terms of clarity and information, and then I'll share it. Okay, so this is running inside AKS, Azure's managed Kubernetes. Let me just make sure. In an earlier version I had two replicas running on three different nodes, but those nodes are fairly expensive machines — they charge a lot for the VMs you're running — so I tuned it down to two nodes with a single replica running on each. — Why don't you use minikube? — I had AKS running earlier, before this project — I had a single node set to scale — so I thought I'd just spin that up again. It's fine; I do have credits for it, and it's kind of fun using credits I'm not going to use otherwise.
Shruti, if you run low on credits, do let us know — we may be able to source something for you, because I would not want you to be spending out of pocket for this. That's not necessary. — Oh, thank you. That sounds really nice, but I feel like I have sufficient credits; I don't think that's going to happen. Either way, that's really kind. Okay, I'm also going to restart this. Since all of the events are selected, I can start with the queue causes I was talking about: entered waiting. The queue causes will be inside the event data only when the item is either in the entered-waiting stage or has moved from one stage to another. And this is the entry time — it's still a UNIX timestamp. I'm not certain, but I think it might be better to keep it this way, because it would be much easier for a sink, if it happens to use the entry time or the exit time, to convert it into whatever format it likes. So I kept it in this format, but we can change it if you think another option would be better. In terms of the headers, we have the specification version, which is 1.0; the ID, which is a UUID; the type; and the source. The source is the resource — here it's the job, followed by the job name itself. The queue-left event has both the entry time and the exit time, but it doesn't have queue causes or any information about what happened in the stage the queue item was in — it's simply not present, so there was no need to include it. And one more thing about the build, the job-started and job-ended events: I did try putting in SCM state, for example from GitHub.
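For reference, the header attributes being shown here — specversion, id, type, source — are the CloudEvents 1.0 context attributes, which in the HTTP binary content mode travel as `ce-` headers. Something like the following, where the id, type, and source values are invented purely for illustration (not the plugin's actual naming):

```
ce-specversion: 1.0
ce-id: 1f2a6c9e-8b3d-4e5f-9a0b-7c1d2e3f4a5b
ce-type: org.jenkinsci.queue.entered_waiting
ce-source: job/test-pipeline
Content-Type: application/json
```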
And what it was doing is: every time a job started, it was pulling the older version of the commit ID — the commit SHA. For example, say the previous SHA is 051: whenever a job started, the payload would show 051, and only when it moved to completed or finalized would it pull the newer version. I can show that too — it may take a moment — and I'm not really sure why it's doing that, but that's what was happening. In terms of the parameters, I can set that up. This was the job-updated event right here; it has the config file added to it, and I think that can be important if someone is trying to get information about a particular job or project from its configuration. We also have the updated date, and when a job is created it would have the created date instead of the updated date. So I thought that if the created date is null, I'd just not put it here, because I was looking through events from other systems — Tekton and others — and I found that not many of those systems include null fields in the payload. Alternatively, we could have something like "created date: null" or "updated date: whatever", similar to the information I have here. If I go visit the parameters — the parameters are here — and I'm also going to go back and set up another job. But do you get an idea of the SCM state issue I'm talking about? It was doing this really weird thing.
So every time I push to the main branch — the branch being tracked — it gets a new commit ID, the updated commit ID. Whenever a job starts, this particular event right here — SCM state: URL, branch, commit — would have the older SHA for job.started, and then, moving to job.completed or job.finalized, it would have the updated, new commit ID. And if that job started again — maybe I triggered it from the UI, or it was started by polling the SCM — it would again have the older commit ID rather than what is actually the new version. Does that make sense? I'm not really sure why that's happening. I tried it with different setups and it still gave the same result. I don't know if it's an actual bug or just how it is — do you happen to have an idea? — Not really sure about the SCM state; could you repeat the question? — Yes. I had SCM added to the job running on my local Jenkins server, with poll SCM, so whenever something is pushed to the master branch it triggers that particular job. Can you see this SCM state field, to the right of the screen? — Yes. — I don't have it added to this particular job, but let's look at job.started. Whenever I push to the main branch — the branch being tracked inside Jenkins — it pulls that branch, and the push produces a new, updated commit SHA. When the job starts, the SCM state for job.started has the older commit ID instead of the newer one. Then it moves to job.completed or job.finalized, and there, inside the SCM state, it has the new, recently pushed commit ID.
But for job.started it was still giving me the older ID. It would refresh every time a newer commit was pushed or a newer job started, but it was never the latest commit — the one that actually started the job. It was the previous version: commit ID minus one of whatever started it. That's pretty weird, and I don't know why it's happening. — Yeah, I don't know off the top of my head either, unfortunately. That can be a problem. Can you show an example? — Yes — the commit ID is showing the one-older version. — Okay, so it's like a syncing issue, in a way: it's showing the older commit. — I actually don't know, because it's only for job.started; when it moves to job.completed, it has that new version, not the older commit. — Would the newer one need a checkout? — No, it's all set up and it gets triggered; the problem exists only inside the payload. I also googled it — suggestions like specifying the branch with the origin/ prefix, or something similar — and I did try all of those things; it just didn't work. — If it seems like an issue with Jenkins — the payload not being populated correctly — I think we can raise it with the other members of the Jenkins community. — That would be a good idea. — Yeah, I think it would. I'll make a video after the meeting and send it to you, just to illustrate the behavior. — Yeah, that sounds good. I think for now, what you have here is pretty good stuff. What we should do right now is focus on the upcoming demo that we have.
So, for that: on Friday we had a meeting in which we discussed that it would be better to release a new version of the CloudEvents plugin, so that along with your demo we have a new version people can use, and then we can go into a feedback mode for the second half of the project. That way we can improve along those lines as well as along what we think we should do, and it will give us a better idea of how people will actually end up using the plugin. So for now, let's not get hung up on this one small thing. We'll focus on the release, and — as we discussed, and I'd love to have your opinion on this — I was thinking we should initially release just the source implementation and make it solid. I think it's already in good shape, but we maybe need a little more documentation and a few more tests. After we release that — Jenkins as a source, I mean, not sink — we can continue by moving the configuration from the global configuration to a CloudEvents section under Manage Jenkins, and after that we can work on doing sinks in Jenkins. — That makes sense to me. So, to make sure I'm clear: for both the demo and the first release it will be Jenkins as a source, not as a sink, and we'll do the sink in further iterations. Is that right? — Yes, so that people can get started with it. And we had questions on how exactly to implement Jenkins as a sink, so if we have a version out that people can reference, I think it will help us get more feedback as well.
Yeah, I agree, and I think if in the second phase we go into this feedback mode with users, it will be a much better experience for GSoC as well — in the first phase you kind of build it up, and in the second phase you can get more in touch with the community, maybe focus a bit more on evangelizing the plugin itself. I think that will be a good step forward. — Yes, that's a really good idea, and I have added more tasks. I think documentation is really important here. I have a text document where I've pooled all of this event data — you know how Tekton, or other CloudEvents systems, even Knative, have good documentation on what type of event gets sent and what it looks like, alongside the headers. Do you think putting all of that in the main README is a good idea, or should we move it into docs for each specific listener?
I've noticed one thing with Jenkins plugins: the README is shown directly on the plugin's page if you go to plugins.jenkins.io — I think that's what it's called. The README.md we have is the same thing shown there, so we can put as much information as we want in the README and keep it as a single source of truth for the entire plugin. That would be completely fine — if anything, much better, because users don't have to go to different docs. If we want to differentiate between user docs and developer docs, we could: user docs would be the README and developer docs could live somewhere else. But to start off we can put everything in the README. — I think that would be good. Yeah, that sounds good. — And in the video I referred to earlier, I did mention Jenkins as a sink and what we're thinking of doing, just to give an idea of what people can expect; it's somewhat aligned with the Sockeye demo we showed earlier. I also created a sample YAML file of what the Jenkins sink configuration could look like, so maybe we can talk a bit about that now. What I was thinking is: when a person configures Jenkins as a sink, there would be a filters field split between CloudEvents metadata and the CloudEvents event body, or event data. Those can be the two broad fields — someone might want to filter by the CloudEvents metadata or by the event data itself — and configuring that might help us figure out what information can be extracted from either the headers or the body. I also thought that, for the actions a user might want to trigger, we could have an option for whether the action is parametrized or not, just like Jenkins's "this project is parameterized" — and if it is, you add parameters. So a person chooses an action, say, start a build of a job; then selects that this particular action is parametrized and will be receiving parameters from the incoming event — all of this configuration happening in Jenkins as a sink. Within "parametrized", you then specify where the parameter is present — inside the event body or inside the event metadata — and then, based on that selection, you enter the actual parameter you're looking for. For example, with Tekton, a person might select that the parameter is inside the event body and that it's a field in a TaskRun. I don't know — with YAML you can sort of do anything, but I don't know exactly how we'd move this into Jenkins; I feel having JavaScript, if you're running a framework, might make it easier to go in that direction of dynamic filtering. What do you think? — Wherever we can allow the user to use regexes, we probably could, but I don't know how that would work; that's why I referred back to the Common Expression Language. If there were a Java library for it, we could probably just use that, but I'm not sure — CEL has a Go implementation, and I thought they'd have Java... okay, they have CEL for C++ and Go, but no Java. There's the spec, but we wouldn't want to implement CEL ourselves — that would be a whole thing. — We can do a simple implementation instead: just matching and filtering. If it's a string coming in, we might want to filter it with plain regular expressions — we can achieve that. I think we might not really need Common Expression Language if we're just doing that simple matching. — No, if you do simple matching I don't think we need CEL. If we just do prefix and postfix matching, or substring matching — all of that — I think it'll be fine to start with. — What I was thinking is: I didn't have regular-expression fields inside the metadata or data filters themselves, but rather inside the actions, where a person selects parameters. But we could put that in both places: when a person is selecting filters and when a person is entering parameters for a particular action. Looking at different events, I realized that whatever a person is looking for won't always be inside the CloudEvents metadata — for example, with a Tekton event, if someone is looking for the ID of a particular TaskRun, it's going to be in the body. So we should give users the ability to filter by the body and to take parameters from the body as well, plus the headers, of course — filtering on the headers is never a bad idea; it gives users the ability to configure things the way they want. We should also think about the filters themselves, like the regular expressions: there may be times when we have to figure out the best way to filter the body — maybe we can come up with examples for that — because a simple match won't work in those cases; we may have to do things like pick certain information out of the body itself. I think those cases will arise when we do more sink work and real-world applications.
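A rough sketch of the kind of sink configuration YAML being described here — filters split between CloudEvents metadata and event data, plus a parametrized action. This is purely illustrative; none of these field names are final or implemented, they just capture the shape of the idea:

```yaml
# Hypothetical Jenkins-as-a-sink configuration (all names are made up)
filters:
  metadata:                      # match on CloudEvents context attributes
    type: "dev.tekton.event.task.run"
    source: "tekton"
  data:                          # match on fields inside the event body
    taskRun.status: "Succeeded"
actions:
  - action: startBuild
    job: my-pipeline
    parametrized: true           # like Jenkins's "this project is parameterized"
    parameters:
      - name: COMMIT_SHA
        from: data               # or: metadata
        path: "$.taskRun.metadata.annotations.commitSha"
```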
Working with Tekton or similar real-world applications is still ahead in the game, though. So for now, let's work on the actual configuration bits for the sink first and see how it goes — we'll do multiple iterations, and from those iterations we'll figure out which approach works best. We'll take that path, because most of this is going to be a lot of trial and error, especially with sinks. — What I was thinking is, for example: say there's an array inside the event body, and a person selects that they want to trigger a build of a job; the build is parameterized, and the parameters are inside the event body rather than the event metadata. Instead of a regular expression, since this is an array, we could have something like "the parameter includes this particular object" — "includes" obviously makes more sense here — or, if it's an object itself, "it contains this particular field". And it can go in a nested direction: this field, and the condition includes a message like this particular message. Because if you go and look at, say, a Knative event, each system has its own way of defining events — not everything is an array, and not everything is an object — so the matching might not only be matching of strings but of objects, arrays, or even something as complex as an entire nested field. — What we could do instead of regex — because regex is not very intuitive; the user would have to look at the existing payloads and figure out the regex from them, and it's better if they don't have to go down that path — is something like a dollar sign, a period, and then a path: say, taskRun.metadata.creationTimestamp. The dollar expression the user gives would be substituted from the payload the CloudEvents plugin is receiving at that point — it doesn't have to be a dollar sign; it could be something else — and then the creation timestamp would be used as a parameter. — Yes, that makes sense. One thing: can we think of events which might not be parameterized? Last time we talked, the idea of stopping the build of a job came up, and Risha suggested something really good: if certain parameters come in with the event, stop that particular build; otherwise just stop the last build that's running. I was thinking it can be complex to understand where to get that value from — how can we trigger a default behavior that also doesn't mess up whatever is currently running, or whatever has to be stopped? A user can supply information about the build to be stopped inside the event body, or inside the parameters of our sink configuration, but I think it can be confusing where to get that information from, and then also how to configure it inside the plugin itself. — Let me reiterate what you said with an example. You're saying: what if we want to stop a build — that's the last thing we discussed, right — stop a build based on certain parameters given in the payload we received? Is that what you asked? — Yes.
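The two mechanics discussed above — simple prefix/suffix/substring filters instead of full CEL, and a `$.`-style placeholder that gets replaced by a value plucked from the incoming payload — could be sketched like this. Python for brevity only; the plugin is Java, and the `^`/`$` marker convention and all names here are invented for illustration:

```python
import json

def matches(value: str, pattern: str) -> bool:
    """Simple matching without CEL: '^' marks a prefix match,
    a trailing '$' marks a suffix match, anything else is a substring match."""
    if pattern.startswith("^"):
        return value.startswith(pattern[1:])
    if pattern.endswith("$"):
        return value.endswith(pattern[:-1])
    return pattern in value

def resolve(payload: dict, expr: str):
    """Resolve a '$.a.b.c' path against the event body, e.g. to fill a build parameter."""
    node = payload
    for key in expr.lstrip("$.").split("."):
        node = node[key]
    return node

event = json.loads(
    '{"taskRun": {"metadata": {"creationTimestamp": "2021-07-12T10:00:00Z"}}}')
print(resolve(event, "$.taskRun.metadata.creationTimestamp"))
print(matches("dev.knative.source.github.push", "^dev.knative"))
```

A real implementation would need to handle arrays ("includes") and missing keys as well, but this captures why path substitution is friendlier than asking users to write regexes against raw payloads.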
But what if a person has configured a parameter with a particular value — stop this build if that particular parameter is received. Say the configuration is: when the generateName is "curl-run-whatever", stop this particular build. — That actually reminds me of TCP for some reason, because this is not easy to do. What you're saying is that the system needs to know the name of the build, which is going to be something random or an arbitrary number — or the name of the TaskRun, and TaskRuns also have arbitrary names; they usually work with generateNames, right? This could be something we discuss with the CD Foundation folks, where we figure out a way to do something like a SYN-ACK. Imagine the cloud event sent by Tekton to Jenkins is a SYN — a synchronization — and the event triggers a build. Then the name of the build is returned from Jenkins back to Tekton, because they've been configured to do that, and Tekton knows which build has been triggered — there'll be a build number there. That's how Tekton knows this build was created by Jenkins; that's the ACK, like the second phase of the TCP handshake. After that we might have to send a new event back to Tekton, and based on that Tekton will do something, like mark it completed. Actually, you're discussing something that — I don't know if it has been discussed before. I think there are talks going on; I think there is
already work being done to store the state of events and link them together, and that's exactly the problem we're discussing right now — linking events together is what we're talking about. We'll probably have to figure out how this might work, and I'm not the best person to talk about it; we might have to take this conversation elsewhere. — Yes, thanks. It does make sense. Since this is an interoperable system, we're not only building a single service, or just Jenkins-and-Tekton; we're looking at events from different systems, thinking about what kind of payload can be sent, how we can map what needs to be done, and how to send it back — and designing it in a way that's abstracted, not dependent on or attached to any particular system. It's pretty hard even to think about, but finding resources and exploring approaches is pretty fun. — Yeah, this is a very interesting problem to solve, but it's one that should probably be discussed further with the community. I don't know if Andrea has already faced this issue, or thought about it, in the Tekton and Keptn PoC, but I think he'd be interested in discussing it further as well. — Yeah. Basically this tells us that even in the Tekton payload we probably need something like a synchronization block — I don't know how that would work, but we probably need it — and we probably need to look back at protocols such as TCP and see how they store this information, and how that could be translated to something like events, which is not a lower-level protocol. Oh my god, I'm bringing all of my networking in here. But yeah, we need to figure this out, not just between ourselves but with the community,
because this is not an easy problem. — Yeah, that's right. The next CloudEvents SIG meeting is on the 19th, and I obviously really want to join; I hope I'll be able to, and maybe take some of our questions there, and also just hear how they've built the current adapters and systems for Tekton, Jenkins, and Knative. — Yes, definitely. We should definitely bring this up in the next meeting. So that's next Monday — and the week of the 19th is also when you have to demo this stuff. So, when is the demo exactly? The demos will be in the week of the 19th, but we haven't chosen the exact date and time yet. Please respond to the Doodle that was in the email and in Slack as well, if you haven't yet — I know Shruti's been great about filling in her details. — Okay, thank you. — Hopefully I'll be able to announce it soon. Also, if you can think of a better time: I tried to make the times European-friendly, since I'm in Europe, and I really wanted to make them APAC-friendly too, because that's really important for all our students this year as well as a lot of the mentors. If you can think of a better slot, let me know — getting a global time slot right is very difficult. — Thank you so much. So it's the week of the 19th — the 19th up till the 23rd, right? — Yes, that's right. But you're doing a pre-recorded demo, which you have, though you may edit it or not — I'm not sure where you're at with it. — Yes, I'll go through it again. I still think that on the 19th we'll be able to do this meeting, and then the meeting right after, which is for CloudEvents, and move the
demonstration to the 20th, if that works for you guys. — Okay, of course. — But if not, CloudEvents say they're recording the meeting, so we can just post our questions on their Slack channel — I think it's called cloudevents, I'm not really sure. — That sounds good. It would be good to make the meeting, so maybe we'll explicitly choose not to overlap with them. I don't have anything to add right now. As soon as I'm able to capture the Git SCM issue I was talking about, I'll send it to you — maybe a video, or maybe just screenshots. Right now I'm basically working on adding more tests and making sure the data being emitted makes sense for the events. Again, do you think it's better to have null fields? For a job that's updated, is it better to have something like "created time: null" or "updated time: whatever", or just not have the field at all? As I showed earlier, I decided either not to serialize those fields or not to include them at all — but I just want to hear what you think. — I was thinking that if the field is null, it probably shouldn't be in the payload, but I don't know how I feel about that, really. I don't have much experience with sending a lot of data like this; maybe the null is necessary — maybe having the key itself is important. — That's what I was thinking too: if someone is trying to filter by created date and the created date just isn't there, or if someone is parsing and there's unnecessary data, maybe you don't need that created-date field at all — why include it? But there might be cases where we'd like to keep the field, at least the key, with a null value,
but at that point, when the user is parsing the data, we shouldn't cause a null pointer exception on our side. When the user is parsing the parameters, they could probably get "null" as a string — but I don't know how I feel about that either. — Neither do I, actually. — So, the fields that are absolutely essential I've kept even if they're null — for example, the job name; maybe it's null for some reason, I don't know. But where I wasn't sure a field would be helpful — admittedly without thinking a lot about how it would look to a user receiving the event — I chose to remove it rather than send it as null. For example, SCM: as you saw, it was null even when it wasn't configured, so someone can check "is SCM configured? No, it's null". But certain fields simply don't exist when they're null. I'll think more about that. And that's it from me — if you have any suggestions or feedback on anything, especially anything related to Jenkins as a source, I can include it in my presentation for the week of the demos. — When you're doing Jenkins as a source, it would be nice if you could configure Sockeye as the sink. If you want, we can reconvene on that and figure out how to do it, but I think it should be pretty straightforward, and when it comes to giving a presentation, showing the cloud events in Sockeye would be very nice. Anyway, once you're able to run this on minikube, we can give users an end-to-end — well, not a turnkey solution or anything, but an end-to-end setup of
you have Jenkins, and then you've got Sockeye, and to see how this works you can just set it up this way. You could make a small repo with a bash script that runs all of this, or at least gives the instructions for running it. I think that would be a good place to start for the demo — that's my only point. — That's a really good suggestion, thanks a lot, and it sounds interesting too. — Let me know if you want some help setting up Sockeye; it should be pretty straightforward on minikube, but let me know. — Thank you. I think that's maybe it. — Okay, awesome. Thank you all — great meeting, and I really look forward to seeing your demo, Shruti. Thanks, bye y'all. — Thank you, everyone. Great work, Shruti. Thanks.