Okay, I think I'll start with the pull request from last week. You were able to look through the code and also suggest changes. Most of the changes are basically about naming, the names of the functions and stuff. I think all of them are about, you know, moving things, moving code into a function itself and renaming the function, but there was one interesting bit which I really liked, and it was about this public class Endpoint. So what Endpoint basically does, all it was doing previously, I'll switch over. What it previously was doing is it's just a collection of, you know, the sink URL where the event will be sent and the type of event, which is, you know, job started or job ended and queue entered and stuff like that. And then it had the send method in here, which, I remember last time I mentioned in the meeting that this is not the right place to put that code and I do want to move it. I was thinking about what class we can implement it in. So I was working on just, you know, moving this around and thinking what we can do with it, and I then had an idea of what we can do. Do you guys see my IDE? Yes. Oh no, we see your head, I think. Every week. I think it should work. Do you guys see it now? Yes. Yes. Yep. So what I sort of thought of doing was I moved Endpoint to just represent the sink URL and the event, because the endpoint is going to be a collection of, you know, the sink where an event will be sent to, and the event type itself. So a user can go into the UI and select the type of the event and then also select the URL where that will be sent to. And I moved on. Okay. Oh, yes. So there's this sink class right here inside of HttpSink, which is, so there's this abstract class CloudEventSink, and basically what it can do is allow users to configure different kinds of sink, other than just the HTTP sink.
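The reshaped Endpoint described above boils down to a simple pair of values: the sink URL and the event type a user selects in the UI. A minimal sketch might look like this (the class and field names here are illustrative, not the plugin's actual code):

```java
// Hypothetical sketch: an Endpoint just pairs "send this event type" with "to this URL".
// The send logic itself lives elsewhere (in the sink classes), not here.
class Endpoint {
    private final String sinkUrl;   // where the event will be POSTed
    private final String eventType; // e.g. "job_created", or "all"

    Endpoint(String sinkUrl, String eventType) {
        this.sinkUrl = sinkUrl;
        this.eventType = eventType;
    }

    String getSinkUrl() { return sinkUrl; }
    String getEventType() { return eventType; }
}
```

Keeping Endpoint as pure data like this is what makes moving the send method out of it possible: the class no longer needs to know how an event travels, only where it should go.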
So I moved the method of sending the event itself to this HttpSink class. What this is going to do is just use, you know, a POST. It's going to get that particular object from a Stage class, and Stage is basically what will be triggered as soon as, like, a person selects, okay, this is an item that I've selected. It's an enumeration, and it's going to, you know, build those models and then send them. It's going to check what kind of sink is selected, and it's going to send the cloud event to the HTTP sink, and the HttpSink is where the method is. But there's one, not a problem, but it's more a UI thing, and I kind of want to change it, but I think it might look unclean on the user side. So right now, just over here, you know, a person can choose one sink, and within that sink they can choose the endpoints and the events that they want sent to them. So, you know, they can either click all events, and they can have this deleted, or they can choose specific events to be sent to specific endpoints. You know, maybe something like a users endpoint, and then select that I just want to receive a job created event here. But what I was thinking, ideally, it would be nice if the user had the ability to configure multiple sinks, and then inside each sink have, you know, this option where they can configure different events and different endpoints. I tried creating that on a clone of this plugin so it doesn't mess up the UI of this one, and it wasn't really working because of the Jelly. I was not able to quite figure out the Jelly for this, and I'm thinking of just using Groovy scripts instead, because that might be a cleaner way. And so, you know, a person would be able to, like, add an HTTP sink and then maybe select another type of sink.
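The move described above, an abstract sink class with the send method living in the HTTP implementation, might be sketched roughly like this. This is a simplified, hypothetical version (the class names CloudEventSink and HttpSink come from the discussion, but the method signatures here are made up for illustration):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

// Hypothetical sketch: the abstract sink lets other transports be added later;
// the HTTP POST logic lives only in HttpSink.
abstract class CloudEventSink {
    abstract void send(String cloudEventJson, Map<String, String> headers) throws Exception;
}

class HttpSink extends CloudEventSink {
    private final String sinkUrl;

    HttpSink(String sinkUrl) { this.sinkUrl = sinkUrl; }

    @Override
    void send(String cloudEventJson, Map<String, String> headers) throws Exception {
        HttpRequest.Builder builder = HttpRequest.newBuilder(URI.create(sinkUrl))
                .POST(HttpRequest.BodyPublishers.ofString(cloudEventJson));
        // In binary mode the ce-* attributes travel as HTTP headers.
        headers.forEach(builder::header);
        HttpClient.newHttpClient().send(builder.build(), HttpResponse.BodyHandlers.ofString());
    }
}
```

The point of the abstract class is that the Stage code only has to pick a CloudEventSink; adding, say, a message-queue sink later means adding one subclass, not touching the dispatch logic.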
And I kind of wanted to hear from you guys what ability a user should have here, because I think that a user might want to configure two different kinds of sink at one time, and within each sink they might want to configure different events or endpoints. So I'm curious to hear what you guys think. From my side, I think that this looks good in general. What I tend to see happening is that people will want to send multiple events to the same sink. Right, so it should be easier to select multiple events there: where you have event, I would make that events, basically an array of events that are going to be sent to that sink, so you don't need to configure every event that you can possibly send to a sink. Right, and this should be like a multi-select, is that what you're saying? I think so. Yeah, I think so. Okay, yeah, this came up last time too. And I think the reason why I kept it this way was, what I feel is that it will allow users to sort of have modularity over the kind of events that each sink receives. So that particular sink is just configured to receive, like, job created, and so as soon as it receives an event from Jenkins it knows that it's a job created event, so it knows what to do and can go ahead without really, like, parsing through either the headers, if it's a binary cloud event, or parsing through the body if it's, like, structured. But that's the thing about events, right: usually what you're going to do is you're going to send it to a broker that doesn't know exactly who is going to consume it. And here you're assuming that you're sending it to an endpoint that knows what event it is supposed to receive, right? So I think that's kind of the difference. In general I would say you are sending it somewhere, you don't know what that something is going to do with the event, and you need to allow the people who are configuring this to say, okay, I want to send all these kinds of things.
Like these things. It does have that option, right: you could pick all events. Yeah, but then all of the events mentioned here will go. And in terms of, like, "I just want to receive created, left, completed", I see. So, yes, I agree with the overall feedback that having to do Add Endpoint and then, you know, enter your sink URL and then select the event, if you're doing lots of them, would get kind of cumbersome. I've done a Jelly UI like that before and it's not fun to set up. Oh no, it's not. But would the multi-select, would that work? So I mean, it could be something as simple as: instead of having a dropdown, all of the events are listed with checkboxes. It would make each sink entry kind of long, but then you could see, you know, you'd have a checkbox for all events and you'd have a checkbox for each event. In fact, you could have a checkbox for all events that's checked initially, and then when you uncheck it, you see the other ones and you can individually check them. And then you see them all right there; you can see which ones are checked. I think that's a really good idea. I think that might be a better configuration. And Mauricio, what do you think about the difference between that all-events option and giving the users the ability to select multiple events for the configured sink? Like, you know, what you were saying earlier, that it should be the job of the sink to sort of figure out what it wants to do with the events, so when we send it all events it has complete, you know, ability to see what events it has received and then work through them.
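The checkbox scheme discussed above reduces to each sink holding a set of selected event types, with "all events" as a special member. A minimal sketch of the decision logic (names here are made up for illustration, not the plugin's actual API):

```java
import java.util.Set;

// Hypothetical: decide whether a configured sink should receive a given event,
// given the event types its checkboxes selected.
class EventSelection {
    static final String ALL = "all";
    private final Set<String> selected; // e.g. {"job_created", "job_completed"} or {"all"}

    EventSelection(Set<String> selected) { this.selected = selected; }

    boolean shouldSend(String eventType) {
        return selected.contains(ALL) || selected.contains(eventType);
    }
}
```

With the "all events" checkbox checked initially, `selected` would just be `{"all"}`; unchecking it and ticking individual boxes swaps in the specific set, and the send path never needs to know which UI produced it.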
Yeah, yeah, in general I would say that the sink is the one responsible for dealing with all the events that you're sending, right, and even for rejecting the events that you're sending. Like, the sink, the broker, can be configured to say, you know, I want to filter these types because I'm not interested in some of those types. But I think that my point was more about making the life of the user easier, like on the setup side. And also imagine that if you have a sink with security parameters, where you need to configure tokens and stuff, you don't want to do that multiple times for every event. Right, and I think that's kind of where I am: I just want to configure it once for a sink and then just move forward. I do think that there could be value in being able to select which events you send to the sink. Like, for instance, I might only be interested in job completed and job failed, and my sink is running on AWS, so I have to pay more money if I process a lot of stuff, and I maybe don't want to send stuff I'm not going to process. So that does seem interesting. Although, I mean, if we didn't have it people would probably still use it. I think that, yeah, I tend to agree. I mean, as soon as you send everything, the sink will be able to filter as well. And, yeah, I don't know, is it worth spending time on the filtering mechanism on the client side, like on the producer? I think that a list with checkboxes will do the work for now. It might be fine in this case. The example I'm thinking of is the GitHub Autostatus plugin. I added events for test cases, so it would send something for every test case that passed or succeeded, and it's a lot of data, and so people asked me to not make that the default.
But it was also writing them to a database, and so your database gets huge, and so in that case there are reasons why you might not want to actually send them. But I'm not sure that that's true here. I mean, if the sink just rejects them and doesn't do anything with them, these are all probably really small. Yeah. Yeah, I can actually show you the event body as well. And also, I think, you know, on the conversation about structured versus binary: since it's being sent to external systems, I think it's a better idea to have it sent in binary mode, because that way, you know, it's much easier for the external system to figure out from the headers what it wants to do, instead of going through the body and testing everything. Well, so the events that are here right now are: job created, job updated, job entered the queue, job left the queue, job started, job completed, job finalized, and job failed. So right now, I can try configuring one and then seeing what happens: job completed, job updated. And then I can also try running it now, so it should receive all events regarding the run. Oh, yes, it might have not received the job failed one; I'll look into it. But this is sort of, you know, what happens when all the events go out, and I am thinking of increasing the payload in terms of adding more information. There's also going to be information about the user, though I'm not logged in right now. But yes, the payload might increase when there are artifacts involved, when there are parameters inside of the job. So I have been thinking about how we can configure the UI so it's useful, as well as makes the most sense in terms of interoperability between different systems.
So, and I do like the idea of having, you know, checkboxes for the events, and then just one single form instead of the repeatable one where you have to enter the sink URL each time. Something, maybe not for now but to keep in mind, is the idea that when you're sending cloud events, you probably want to decorate the events with some extra metadata. Right, sometimes you want to add something to the cloud event as an extension or an extra field, for example in the headers, so the sink can do filtering based on that. So, for example, in this case, maybe the job name as a header. I'm not seeing any headers there, just the type and, right, the source. So do you see my, yeah, so there. You see it? Yes. And the headers are, yep. So yeah, I send the ID right now. So yeah, I think we can also send the name, but the reason I think I didn't do it was because when we're sending multiple events, like, there are several events with the same ID, and I think the CloudEvents spec says something about the event ID, like the ce-id, needing to remain unique. But I'll look into it, and I can change it to job name. I think that the ID is fine; the ID needs to be there, right, you need to have an ID for the event, so that's perfect. But what I would like to see, for example, is if you can get the job name from the job in Jenkins and then send that, because then you have a way to say, from the sink side: I'm really interested in, you know, job builds for this specific job name, and not for all the jobs that you can have in Jenkins. Yes, and that should go inside the headers, is that what you're saying?
What I'm saying is that you need to choose wisely which of the parameters you want to promote to the headers, so the sinks will be able to filter without actually going and parsing the body of the event. Yes. Yes, I think that makes sense, and that's also, you know, one of the benefits of using a binary format, because it's easier to look for information there. So, yes, I think that's a really good idea, and I'll move critical information which identifies that particular object out of the body. I mean, it'll be in the body as well, but also inside of the headers. Exactly. Yeah, both: you have it in the body, and you promote some of those very important things to the header; I think that will make a lot of sense. So, and in terms of selecting the sink type itself: should there also be a way for users to configure, like, two or three types of sink at the same time? Can you repeat the question? So, if you have multiple sinks, like sink types, so maybe someone has an HTTP sink and then maybe some other kind of sink, and they want to configure all those sinks at once. Like, I'm just thinking of whether the user might need to configure three different, or two different, sink types at the same time, because right now the easier option, like for Jelly, to me was just having a simple dropdown where I can choose either an HTTP sink or another sink at one time. I feel that's all right. Yeah, yeah, you usually want to configure, but because the sinks are going to have all different configuration parameters, I think that it's good as it is right now. So, depending on the sink type, you will need to show kind of different configuration parameters, and I think that at the end of the day you will need to have the kind of structure that you have now.
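The header-promotion idea above can be sketched concretely. In the CloudEvents HTTP binary mode, the required attributes (id, type, source, specversion) travel as ce-* headers, and extra context like a job name can ride along as an extension attribute. This is a simplified illustration; the ce-jobname extension and the type string shown here are assumptions, not the plugin's actual values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical: build the ce-* headers for a binary-mode CloudEvent so sinks
// can filter (e.g. on job name) without parsing the JSON body.
class CloudEventHeaders {
    static Map<String, String> build(String id, String type, String source, String jobName) {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("ce-specversion", "1.0");
        h.put("ce-id", id);           // must remain unique per event
        h.put("ce-type", type);       // e.g. "org.jenkinsci.job.created" (illustrative)
        h.put("ce-source", source);   // e.g. the Jenkins instance URL
        h.put("ce-jobname", jobName); // illustrative extension attribute for filtering
        h.put("Content-Type", "application/json");
        return h;
    }
}
```

A sink that only cares about one pipeline can then inspect `ce-jobname` on the request and drop everything else before ever touching the body, which is exactly the "promote wisely" point above.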
Type, and then based on the type you choose the parameters that you need to configure. Let's say that it's a message queue or something like that: you need a URL, probably a user as well, like a password or some credentials of some sort. But for now I think that what you have is perfect; I think that you should keep it like that, until we, you know, figure out a little better what we need there. I will also move into a little bit of just the changes that I've done, which was adding an item listener and a queue listener. And as I said, all of them are basically just going into this Stage class, which then decides what to sort of do with, not the event, but the sort of object it's receiving. One thing here that I also wanted to talk about was, as you can see, there's handle build, which is basically handling, you know, a job started, and that is sort of like one run or one build. I think I should maybe name it run, but at that time build made more sense to me. And then there's handle queue, which handles an item. And all of them are basically in the same class, and they are also using, you know, the payload inside of this same class. So build job model, which the item listener uses, so, you know, job updated and job created, and also the same thing here: the stage has started, completed, and finalized, so this is what both of those stages are using. And then these are the same methods but with different parameters, because both are sending sort of different objects. And then there's the queue model, which is basically building, like, the queue model, and I am going to be adding, you know, more fields to it.
So I was maybe thinking: instead of having this single class, maybe I can move it into, you know, a build stage for queue, a build stage for job, like, you know, different enumerations, and that way maybe we can have more flexibility. In terms of, you know, this particular function, for example: it's not the best function right now. It works, but I think that I can maybe make it function in a better way, because if you're looking into, for example, shouldSendBuild, that is tied to the failed event, which might not have worked this time, and I think the reason is that it's all in the same Stage enum. It might make things a little bit unclear, because there might be different stages for each of, you know, build or job. If it does make sense to you guys: do you think that, instead of having a single Stage which is just doing the work for all the listeners, sending events, you know, shouldSend for that event, and then building the model, it should be divided into maybe, you know, different stages for different objects? Yeah, sorry, I was muted. Yeah, I think that it makes a lot of sense to split that functionality up. If you split it up, then you will be able to extend it later on if you need more stages; right now you have, like, the one enum there, right. So I'm just having, like, different enums: you know, entered waiting and left are basically functionalities for a queue, and created and updated are functionalities for a particular, like, job itself, and started, completed, and finalized are functionalities for a run of a job. I'm thinking of moving this to a different enum and then this to a different enum, and inside of the listener classes I would just have, like, QueueStage.CREATED or something like that.
And then have specific methods, because I think having what I have now is making one thing easier, which is just, you know, shouldSend, whether that sink should be receiving this particular event, and then, like, building the model. I think using an enum is simplifying some things, but I still want to split this enum into maybe two or three or four, or however many enums match however many different listeners. But another implementation would be just adding more values, you know, maybe computer added, and then I would have another sort of listener, which would be a computer listener, and it would just call something like the stage computer-added value, or whatever function is going to handle it. Does that make sense? It does make sense to me. Yeah, it does make sense. But again, try not to over-optimize before having code. I think that, like you mentioned before, when you see that the class and the enum are getting too big, then that's definitely, you know, the point to split up. Do not split up just because you think that it's going to be nicer; I would say let's just do it whenever it's needed. And I think I just found kind of the right moment now to do it, because it's becoming more and more complex. And, you know, you know now how it needs to be split, right, so that's usually the indication that you can do it now. But don't over-split it. Yeah. Yeah, I was basically thinking of keeping items together, you know, items in terms of, like, job created, updated, and queue. I think that might be a good implementation, since there's some similarity between the event parameters and stuff like that; I think that's what I'm probably going to do.
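The split discussed above, one big Stage enum becoming one enum per kind of object, might look like this minimal sketch (the enum and constant names are taken from the discussion, but the exact shapes are illustrative assumptions):

```java
// Hypothetical split of the single Stage enum into per-listener enums,
// so each listener only ever deals with its own lifecycle values.
enum QueueStage { ENTERED_WAITING, LEFT }
enum JobStage   { CREATED, UPDATED }
enum RunStage   { STARTED, COMPLETED, FINALIZED, FAILED }

class StageDemo {
    // A listener would map its own enum to an event type string, e.g.:
    static String queueEventType(QueueStage s) {
        return "queue_" + s.name().toLowerCase();
    }
}
```

The payoff is exactly the extensibility point made above: adding a computer listener later means adding a new enum (say, a ComputerStage) without touching or bloating the queue, job, and run stages.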
And with the different listeners that handle the different enums, would they have different behaviors really, or is it more like classifying things together? Yeah, so in terms of behavior, okay, I'll show you the listener classes. So here's the job listener, which overrides, like, three different methods, and then it has this handle build method. And then there's the item listener, which is sending an item of, like, the Item type from the listeners package, and there's the queue listener, which is basically sending an item which is a different kind of item; this item is basically, like, the queue item. So, when I go into the item class, you know, I have these methods: there's handle item, and this one is for a job created and job updated, and then there's a queue one, which is using an item, but a different kind of item. And most of the functionality is the same, as you can see; it's just going through the for loop and the if and the try statement. But the one thing is the building of the payload: what needs to be put inside that payload is different, and it's all happening inside of this particular enum. Like I said, it's also going to see whether or not a particular sink is configured to receive that kind of event. So that is what this particular function here is doing, which is only different for the job type; it's similar for handle queue, and it's similar for job started and job created and job updated. And then, oh yes, we have the building of the model, which is also happening inside of this, and then it's going to basically check: if the sink type is, again, an HTTP sink, then it's just going to go into this class and then configure. So this is, like, it's going to build a cloud event.
This is the abstract class implementation, where it's just using an HTTP POST to send that message to the sink, the HTTP sink. So I'm just thinking, because, yes, this particular approach does have its benefit: it's pretty similar for some things, so, like, this method I can reuse in another listener, like a computer listener, as well. I looked into it, but yes, it might make the class really big and might complicate the process, because then, if I go back to the job listener, no, the HttpSink, this is, you know, doing this kind of if-else just to see what information is being received, and then it can add the type, and that type is used for the cloud event header. So there is both a good side and a bad side to using it, and right now I'm just evaluating the pros and cons of both implementations. I've also tried splitting them up, but then I moved them back together, like: okay, I'm going to try and see how complicated Stage gets before I move it. Can you go back to, like, the job one that you were just showing, the job listener? Yeah, the one where you were building, like, an identifier. Okay, so one thing that comes to mind, and this is just a suggestion and I haven't looked at the code so it may not even make sense, but if you had, like, an interface that all of the models had to implement, and one of the methods on the interface was something like getType, then every model would have its own implementation of getType that could return whatever is appropriate. So for a job it's either the phase or the status, where for a queue it's the status. And if you did something like that, you might be able to get rid of some of these if checks, and let the fact that each model implements the getType function handle it.
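The getType suggestion above might be sketched like this. The interface and model names are hypothetical, chosen only to mirror the models mentioned in the discussion:

```java
// Hypothetical: each model reports its own event type, so the sender
// can ask the model instead of running an if-else chain per listener type.
interface CloudEventModel {
    String getType();
}

class JobModel implements CloudEventModel {
    private final String phase; // e.g. "created", "updated"
    JobModel(String phase) { this.phase = phase; }
    @Override public String getType() { return "job_" + phase; }
}

class QueueModel implements CloudEventModel {
    private final String status; // e.g. "entered_waiting", "left"
    QueueModel(String status) { this.status = status; }
    @Override public String getType() { return "queue_" + status; }
}
```

The send path then takes a `CloudEventModel` and calls `getType()` for the cloud event header; adding a computer model later means one new class, not another else-if branch.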
I don't know if it makes the code simpler or more difficult, but it's just something to consider. Yeah, no, I think that's a really good idea. And I think, yeah, that's maybe what I'm going to do, because even with, you know, all of these models, there are also some common listeners which are using, like, the same job model; I think two listeners are using the same job model. So one of them has, like, a created date and the other one is using an updated date. So I think that having that sort of implementation might help in differentiating what needs to go for what listener and for what type. I think that's a really good idea. In fact, I guess it maybe wouldn't even have to be getType, or, you know, yeah, getType, so it could return the whole string, like job_whatever. Anyway, keep the code working, and if that doesn't add a lot of value and simplify the code, we should move on; it's just something to think about. No, that's actually a really good idea. And, um, yeah, I think I might try it and see what it does, and if it simplifies things I'm definitely going to do it, because, you know, as we add more listeners we will have to do more of these else statements, because then it's going to be a listener of type, you know, like a computer model, and then, you know, more and more models. So we might have to do that in the future. And then I'm going to move on to tests. And again, this test really needs to be simplified and broken down into simpler, like, more modular tests rather than one big test, because this is going to be testing a lot of functionality. So right now I have two tests basically, and I'm actually not sure about this first doCheck global config test. I am not sure if this is the right way to go, but I ran it earlier, and now I'm going to check if it works, so, it does work.
But again, what this is doing is basically checking all the global config parameters and that they're set all right, and what this checkSinkUrl is doing is basically running a form validation on the sink URL. So, you know, it should only be HTTP, and it should not be null for, like, the HTTP kind. And this is doing the checking for, you know, the parameters being null or empty and stuff. I'm not sure again if this is the right way to go, but I've been looking into other, like, global config tests and seeing how they have been written, so maybe in the future this will have changed. And then there's the stage test, and this test is basically checking that for each event type, I'm only sending that particular event and no other event. So, you know, as I was showing, the shouldSendBuild and shouldSendItem methods, these are the methods which sort of define whether or not that particular endpoint is configured to receive that event. So if it says all, then it should be receiving, like, all the events, and if it's, like, just entered waiting, then it should only trigger the method which is sending the queue item for entered waiting. So this is basically going to take that endpoint, it's going to take the stage, which is that event; so for example this stage is left, and I'm expecting entered waiting to be false, so moving back here, it's not going to send, the method returns false. This also works, and again, as I said, I need to make it more simplified in terms of moving it into different tests, rather than, like, one big test. And right now I was also not sure what other tests I can write, because there's not, like, a lot of functionality in terms of, it's all combined into, like, one sort of thing.
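The form validation described above, in the real plugin, would return Jenkins' FormValidation from a doCheck method, but the checks themselves are plain string logic and can be sketched in isolation. This is a hypothetical simplified version (the class name and error messages are made up):

```java
// Hypothetical, simplified sink-URL validation: not null/empty, and HTTP(S) only.
// In the plugin itself this logic would live in a doCheck* method returning
// FormValidation.ok() / FormValidation.error(...).
class SinkUrlValidator {
    static String check(String url) {
        if (url == null || url.trim().isEmpty()) {
            return "Sink URL must not be empty";
        }
        if (!url.startsWith("http://") && !url.startsWith("https://")) {
            return "Sink URL must be an HTTP(S) URL";
        }
        return null; // null means the URL is acceptable
    }
}
```

Pulling the rules out like this also makes the modular tests mentioned above easy: one small test per rule, instead of one big test exercising everything through the global config.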
There's not a lot of different things happening; there's a lot of one kind of thing happening. I'm not sure what other tests I can or should create, and if you guys have any suggestions or comments I would love to hear them. One thing that I often do is enable code coverage, and then look at the code coverage report to see if there are any areas of the code that aren't being hit at all. You know, the goal is maybe not to have some X percentage of code coverage, but just to look for things that maybe aren't covered. So that's one idea. And also, yes, I did try looking into Configuration as Code, and it's added to the plugin. So back to that: if I'm correct, and I'm not really sure how Jenkins Configuration as Code can help us, I think it's more for the users who are going to be using this plugin, to go in and basically edit the configuration. Um, I think that's what the purpose is, so I'm not really sure if, you know, it's a way that's going to help us. Yeah, that's exactly right, it's for the user. I mean, I think the guidance is just to make sure that it works with Configuration as Code, because people might want to use that to configure it, and most plugins do out of the box, but occasionally you may have to do something. Right. Okay, so that's okay; that was another question. Yeah, so this is a stupid question, but, um, I tend to, like, comment or send a pull request when I feel that the code is working. And, you know, that's what happened the last time: when I sent the pull request, you all mentioned some changes, and then I went into those changes, and then I did more changes, and that led to me waiting until, you know, it was working sort of perfectly before pushing the code that was there.
But I think, would you guys rather I push code more often, even if it might not be working or if it's not the cleanest piece of code, than wait for the code to work and be implemented the right way and just be better and make sense? I just think it would be helpful for you guys to see what I'm working on if I push more frequently, but then I tend to not push because I feel that it's broken and the notes might not make sense. My recommendation is usually: push whenever you know the tests are passing, right. Never break the tests; do not push then, even in your pull request. I would definitely work in those kind of incremental changes, where you know that all the time the tests are passing, and you have tests for the new stuff that you're adding. I don't think that it's usually beneficial to push code that is broken, like completely broken, unless you need special assistance on, you know, how to implement a pattern, or, for example, how to refactor a big chunk of code. So unless you need that kind of assistance, I would just tend to push working code most of the time. Yeah, small changes are easier to review, so that makes sense to me. I would like, now that you have kind of the base code, for you to maybe be thinking about small changes, so you're pushing a small change for us to review, but it's working, the tests are passing, and when it's approved you're planning on merging the pull request. So there are kind of two purposes when I push code: one is I want to merge it, and one is I want people to comment on it. But, as I said, sometimes you specifically want feedback, and it's like, hey, I'm doing this approach, curious what people think about it.
And then I might push that and make it a draft pull request, and just make it clear in the comments that I'm looking for feedback, maybe even on a specific section of the code so people can target that. So there are those two purposes, but you should distinguish them. That's a really good idea. I think the draft PR, I think that might work better. Because right now there are kind of, not really a lot of changes, but the last PR did not have a test with it, and I was working on it, but now it does. So the PR shouldn't be a draft PR, but I'll push it just so you guys can take a look and send comments if anything needs to change. I don't have any more questions. That's looking really good; I'm super happy with the progress. So, yeah. I mean, as you move forward, I will sync with Viva as well offline to see if he needs help, and I just want to check with him how aligned we are on the work. But yeah, I think it's looking really, really good, and can you please send me the link; yeah, if you can send me the link that would be great, the URL. Do you have, like, an open pull request with those changes, or not yet? Yes, there is an open pull request, and this is from the last one, which was pushed three or four days ago, and I have made some changes in terms of, you know, the Endpoint class that I was talking about, moving that to a different abstract implementation, and also adding tests, and also adding three different listeners. But the basic sort of idea is the same, and I'm also going to push more commits to that PR, all the code that I have right now, by making two or three small changes. Good. Great. That sounds really good. Yeah, and again, I don't have any more questions at the moment.
And I think that this meeting was helpful for, you know, the talk about the UI, and I think, maybe once we have a better idea of what a user might need from this kind of cloud event or, just like, eventing architecture, that might shape or change the UI. So I'm also designing keeping that in mind, so that it's not very complicated; if the UI has to change, it wouldn't be a complete changeover of the base code and stuff. Yeah, let's keep it flexible there, and yeah. Awesome. Yep, looking forward to hearing about all the changes, and I will try to join every week now. Thank you. Thank you for sharing your expertise. It's looking awesome. Always impressive what you're doing. Do you feel good for the coming week, knowing what your next steps are? You can always contact us on Slack as well, but you're solid for the next couple of days at least? I think so. And the Jenkins contributor summit is coming up, and I think I will be presenting the cloud events plugin. So I think this is the right time to start the implementation of Jenkins as the sink. So I think the next three to four days are going to be looking into methods and modes of doing that. So I think I'm pretty set on the path for the next few days, for the week. Awesome. Good. I'm excited for the contributor summit; it should be really good. Yeah, the entire GitOps summit and also cdCon. I'm super excited. Yeah, it's a good week this week. Okay, so if we don't have any more questions or last-minute comments, then we can wrap up ten minutes early. That's great. Thank you all for being here. Thank you. Thanks. Bye bye. Bye everyone. Bye guys.