So, what we have been discussing. Do you all have access to the meeting notes? It's on the invite, but I'll put it in the chat as well. In past meetings we have been discussing the Tekton client plugin a little bit, and also the CloudEvents plugin proposed by Vibhav. There's more on both in the meeting notes to give you some context if you are not familiar with those initiatives.

Hey guys. Hi Vibhav, how are you?

I'm good, I'm good. Is there a little bit of noise from my microphone? I'm using a different microphone.

Okay, I guess that's fine. Nice to see you all, so let's get started. I will screen share and we can go over the action items from last week's meeting. We were going to do some research on CloudEvents metadata and what it should contain, think about user stories around CloudEvents (there are a number of action items here, actually), and play with the CloudEvents SDK. I'll let you go ahead, Vibhav.

Yeah, okay. On the research into what the CloudEvents metadata should contain: I looked around a bit at how events are already being collected and sent out of Jenkins, and there is a plugin for that, the Statistics Gatherer plugin. This plugin could help us get started with what we can send as CloudEvents. What I was thinking is that we could start by taking the events the Statistics Gatherer already handles and converting them to CloudEvents in the new plugin. That would be a good start. Let me just open the doc. So that was some of the research I did. Then there is also the metadata.
I still have to make the metadata table and work out what it should look like. I've been on vacation this last week. Not entirely: the last two days I was working, but before that I was on vacation. So I will work on a table for the metadata and exactly what it should contain.

With respect to the CloudEvents SDK: I read the code and the examples, and it seems rather straightforward how we could take the Jenkins events, like job start, stop, and finish, the events that come from the Statistics Gatherer plugin, and convert them directly to CloudEvents. We just have to see how to extend the Statistics Gatherer plugin for that, and we could start from there.

I also wondered what the extent of the bootstrap could be. The bootstrap could contain a basic CloudEvents setup, using the basic HTTP module of the CloudEvents SDK. The student can then see how that works and play around with the Statistics Gatherer plugin. We can help them with how to get those statistics, convert them to CloudEvents, and then with what the metadata should look like.

Can I ask about what you were just saying? This is for me, and it might be helpful to others. When you talk about taking the events that we can already register and transforming them into CloudEvents, what is involved in that process of transformation?

Good question. That would mean reading through the Statistics Gatherer plugin code. The plugin has a DSL that people can already use, and it has the notion of a sink you can configure: you give the Statistics Gatherer a sink, and all the events regarding your project creation and so on go to that sink.
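As an aside on the transformation being discussed: wrapping an existing Statistics Gatherer event in a CloudEvents v1.0 envelope might look roughly like the sketch below. The required context attributes (specversion, id, source, type) come from the CloudEvents spec; the type string and the fields inside `data` are invented here for illustration, since the real metadata table is still an open action item.

```python
import json
import uuid
from datetime import datetime, timezone

def to_cloudevent(job_name, phase, jenkins_url="https://jenkins.example.com"):
    """Wrap a Jenkins job event in a CloudEvents v1.0 JSON envelope."""
    return {
        # Required CloudEvents context attributes (spec v1.0)
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": jenkins_url,                    # hypothetical source URI
        "type": f"io.jenkins.event.job.{phase}",  # naming scheme is a draft
        # Optional attributes
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        # Payload: whatever statistics the gatherer already collects
        "data": {"jobName": job_name, "phase": phase},
    }

event = to_cloudevent("my-pipeline", "started")
print(json.dumps(event, indent=2))
```

In a Java plugin the same envelope would be built with the CloudEvents SDK's builder types rather than by hand; the point here is only the shape of the event.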
I don't have a detailed idea of exactly what goes into the transformation, but it would mean that the student has to extend classes from the Statistics Gatherer plugin and make them work in the CloudEvents plugin itself. That would be the high-level process. What I'll do is figure out the exact detailed process, and I can update you all on the channel or at the next meeting. Does that answer your question?

Yes, it does. That would be awesome as well. Thank you very much.

So the next one we have is: find endpoints that can be subscribed to. I was thinking about this one a little, because it is more related to consuming CloudEvents, and I am not sure how that would work exactly; I haven't worked on something like that before. Do you have an idea how we could consume CloudEvents, maybe into jobs? Would we consume the CloudEvents and convert them into Jenkins metadata, which could then be used to trigger jobs, something like what the Generic Webhook Trigger does?

This is definitely something to investigate, but initially I'm thinking: would this just be a webhook handler that we can POST CloudEvents to, in a similar way to the GitHub one?

There would be some level of authentication on that, I'm assuming, like an HMAC token or something.

Yeah, something along those lines. That might be a good first pass, and there are plugins out there that already do something very similar. There's the Generic Webhook Trigger plugin, which Mark had suggested looking at; I still have to look at it. He suggested it could be a good place to start to understand how the consuming of CloudEvents would work in Jenkins. So we could look at that.
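A minimal sketch of the HMAC check mentioned above, following the X-Hub-Signature-256 convention GitHub webhooks use. The header format and the secret here are illustrative; the actual authentication scheme for a CloudEvents endpoint is still undecided:

```python
import hashlib
import hmac

def verify_signature(payload: bytes, secret: bytes, signature_header: str) -> bool:
    """Verify an HMAC-SHA256 webhook signature of the form 'sha256=<hex>'."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, signature_header)

secret = b"shared-webhook-secret"  # hypothetical shared secret
body = b'{"specversion":"1.0","type":"io.jenkins.event.job.started"}'
good = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_signature(body, secret, good))               # True
print(verify_signature(body, secret, "sha256=deadbeef"))  # False
```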
I'll add that as an action item. What is the name of that plugin again?

Generic Webhook Trigger.

Okay, the next one we have: a first prototype to understand how CloudEvents work. I feel this prototype would be part of the bootstrap. As I said, the bootstrap would basically contain a minimal CloudEvents listener and handler, based on the basic HTTP module of the CloudEvents SDK, which we can create so that it is a good place for the student to start.

And I had a question about this, actually. Instead of it being a build step, would it just be a global plugin configuration? In the global configuration, the admin should be able to enable certain CloudEvents for the Jenkins instance. They would go into the plugin configuration, into the CloudEvents configuration, and under that they would just mark, say: I want to see jobs as CloudEvents; I want to see job steps too, maybe how they are being executed and whether they fail; and projects, maybe I want to see which projects are being created. We could make build steps as well, but I think a global plugin configuration would make more sense, and it should be easier for the user to have one proper configuration for the Jenkins instance itself.

There's a GitHub plugin, GitHub build strategies or GitHub checks strategy or something like that, that sends GitHub checks on every build stage that happens. I'm wondering whether there'll be some useful code inside there to listen for that.
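The global-configuration idea sketched above, where the admin ticks which event families to emit, could boil down to a simple allow-list check before publishing. Everything here (the family names, the prefix matching) is an assumption about how such a configuration might behave:

```python
# Hypothetical state saved from the global plugin configuration:
# the admin enabled job and project events, nothing else.
enabled_families = {"io.jenkins.event.job", "io.jenkins.event.project"}

def should_emit(event_type: str) -> bool:
    """Publish an event only if its family was enabled by the admin."""
    return any(event_type == fam or event_type.startswith(fam + ".")
               for fam in enabled_families)

print(should_emit("io.jenkins.event.job.started"))   # True
print(should_emit("io.jenkins.event.node.offline"))  # False
```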
You just install the plugin and every build that happens automatically gets that: it creates a check for every stage that runs in a pipeline.

Okay, but that's the consumer aspect of it. That makes sense. I was actually talking about the producer aspect: to produce the events, having a global plugin configuration would make sense. That is what I was thinking.

That makes sense for the producing side, I think so, yeah.

Okay. And for consuming, you're saying we should look at the plugin you were talking about. Can you remind me what it's called?

Give me a minute, I've got it locally, I can't remember the exact name. Is it just the GitHub Checks one? No, it's the GitHub Autostatus plugin.

GitHub Autostatus, okay. It can be used with a few different things, actually.

Can you put a link in the chat?

Yep, certainly. Far too many windows open.

So this one would be for consuming. I'll have to look into it. What does it mean by "a Jenkins plugin to provide automatic status"?

I think it sends events to something whenever you build a pipeline and it runs a stage. I think it's used for reporting into things like InfluxDB and Grafana. But when it's installed with the Checks plugin, it sends the status to GitHub, so if you've got a pipeline with ten stages, it will appear as ten different checks and you can follow the status of your pipeline through.

We could follow the model of this plugin to understand how consuming would work.

Yeah, that's good. How it's listening to the Jenkins internals to trigger things, I think, would be the interesting part.

Yeah, I agree.
We need to do some reading, but yeah. So the next one is: figure out the extent of the bootstrap for the GSoC students. The bootstrap would basically be implementing the basic CloudEvents HTTP module at a very simple level in the plugin, and wiring that into the global plugin configuration. That would be it. And this is related to your earlier question about how the transformation from the Statistics Gatherer plugin to the CloudEvents plugin would look. We probably need a little more homework on that part, because it tells us what the bootstrap should look like. I think we'll have to bootstrap the global plugin configuration, and after we are done with that, later on, we have to guide them on what the transformation would look like.

Next one: create a draft on the Tekton client plugin. I haven't shared it, but I have a draft here.

Would you like to share it?

Yeah, I'll share it in the doc itself. So there is a quick start, implementation examples, and probably a few more links. Vibhav may have had to request access. Can you see it?

All good. Should we take a look at this? I'll share.

Oh yeah. Have you had a look at the metadata that Tekton is producing? Because for the CloudEvents work, you said that was the initial first step. You've probably already done that, but that might be an initial first step to outline for students, or to have them do.

Oh yeah, that would be part of the CloudEvents work. I did that, and I have written down the metadata. It's in a text file over here; I didn't put anything in the Google doc like these two. It would basically be like io.jenkins.event.project.created.
And then the same for succeeded and failed, so it would be something like that. I still have to write the table itself; that action item will carry over, as I couldn't complete it properly. I have an idea of what it would look like, but I still have to write the metadata table down. I'll do that and post it in the channel. Actually, what I'll do is create a doc in which I note down everything that is required for the CloudEvents plugin, the extent of the bootstrap, and everything else. I'll make proper notes about it instead of keeping it in a plain text file. I'll put that down as an action item.

That would be great, really helpful, I think, for students and for all of us staying on the same page.

For the Tekton client plugin proposal: Gareth had mentioned the potential integration between Tekton as Code and the Tekton client plugin. I haven't had a chance to look into it. Gareth, do you want to speak more to that?

Yeah, I was wondering whether we could use that idea of the .tekton folder and apply those types of changes. I did speak to the folks on the Jenkins X project about how they're manipulating and modifying things within Lighthouse, which is a very similar sort of idea, actually. So I think that kind of thing could work quite nicely. There would need to be some processing of resources, but it's fairly minimal: in this case, kicking off the PipelineRun with the correct variables set and things like that.

Is that what you mean by processing of resources?

Yeah. In Lighthouse you have a .lighthouse folder which contains the tasks and the runs. It's just a list of YAML files to be applied, but some of them, I think, aren't simply applied; they're loaded, manipulated slightly, and then applied.

And at what level does that have to happen?
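On the metadata-table action item above: the reverse-DNS types mentioned (io.jenkins.event.project.created, succeeded, failed) suggest the table could be generated from an object-by-phase grid. The exact objects and phases below are placeholders until the real table is written:

```python
# Draft grid for the metadata table; the lists are illustrative only.
OBJECTS = ["project", "job"]
PHASES = ["created", "started", "succeeded", "failed"]

def event_type(obj: str, phase: str) -> str:
    """Compose a reverse-DNS CloudEvents type string."""
    return f"io.jenkins.event.{obj}.{phase}"

table = [event_type(o, p) for o in OBJECTS for p in PHASES]
for t in table:
    print(t)
```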
That's Lighthouse, from what I've seen. The Tekton as Code project is similar: you have to create a pipeline YAML and then a tasks YAML, and there are certain file names you need to use, but generally the approach is very, very similar in how it does it.

Okay. Awesome. And Vibhav, and everyone else: Lighthouse is functionally similar to Prow. Is that a fair way to put it?

Yeah, it's more of a Tekton-native version of Prow, really. It supports Tekton as more of a first-class citizen.

Nice. So how do you see the integration? Do we have some technical idea of what that could look like?

No, not yet. I suppose it would need to handle whatever the incoming events are that trigger the job, then clone the repository, presumably on the controller, and then, if the .tekton folder exists, load any resources from there, process them, and apply them into the cluster. And there would be some configuration, like which namespace you want Tekton builds to run in, because you may want to have them run in a separate namespace, certainly from where Jenkins is running, or from wherever Tekton itself is installed. Longer term, I suppose people may want to invoke Tekton jobs in different clusters, but certainly not for an initial version.

I'm still thinking it through. For me, thinking in Java is a little tougher than thinking in Golang, and I'm just thinking about how that would work. Could we have an action item where we work out technically how that would work?

Yeah, that's good.

Because in my head, when I say Tekton, I just see the UI.
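The clone, load, process, apply flow described above could be sketched like this. The $name placeholder syntax and the folder contract are assumptions for illustration; real processing would parse the YAML and set PipelineRun parameters properly before applying the manifests through the Kubernetes API:

```python
import glob
import os
import tempfile
from string import Template

def load_tekton_resources(repo_dir, params):
    """Collect manifests from a cloned repo's .tekton folder and fill in
    run-time values; the results would then be applied to the cluster."""
    resources = []
    for path in sorted(glob.glob(os.path.join(repo_dir, ".tekton", "*.yaml"))):
        with open(path) as f:
            resources.append(Template(f.read()).safe_substitute(params))
    return resources

# Demo: fake a cloned repo containing one manifest with a placeholder.
repo = tempfile.mkdtemp()
os.makedirs(os.path.join(repo, ".tekton"))
with open(os.path.join(repo, ".tekton", "pipelinerun.yaml"), "w") as f:
    f.write("metadata:\n  name: build-$revision\n")

processed = load_tekton_resources(repo, {"revision": "abc123"})
print(processed[0])
```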
So I'm having a hard time understanding it; when it comes to Tekton as Code, they seem like two different planets to me. The client plugin would basically create stuff and probably provide a DSL which could be used in a Jenkinsfile at some point in time, so I don't know how that would fit together.

So are you saying you want to integrate with Tekton as Code in the client plugin?

No, probably not integrate. From my understanding, the Tekton as Code piece is a POC at the moment. Someone has pushed it up as a proposal, as in "we could do something along those lines," and I think that approach would work really well. So I don't think you'd directly integrate with the work that has been done for Tekton as Code, but you'd do something very similar, something along those lines: have a .tekton folder and be able to, I suppose, test for the existence of that, and if it's there, apply those resources. I actually have a POC of that working from a Jenkins pipeline. But the problem is that it spins up an extra agent to do the applying, and that's one of the things I was trying to avoid: it's an agent, it's a JVM, it's another pod just to apply stuff. And then the monitoring of that job is difficult.

Can I ask a more basic question? What would be the end game for that? What would be the user benefit?

Of using Tekton in general with Jenkins?

Of doing this combination.

One of the things Tekton supports really well is the Tekton catalog and the sort of reusable tasks, and they've got this templating stuff now that's quite nice. You can pull in existing tasks from other Git repos, have them versioned, and synchronize them, and I think you can do the same with pipelines as well. It's almost like a slightly different implementation.
It's more of a cloud-native implementation of pipeline libraries.

Yeah. So the Tekton client plugin helps in that sense. And Tekton as Code, what is that?

Tekton as Code is a proposal to address the kind of inception problem you have with any of these things: where does the pipeline definition live, and how does it kick itself off? It's a proposal for how they could extend the Tekton project to do that, and it's actually very, very similar to what Lighthouse has done to be able to create the pipelines in the first place. So the proposal, I suppose, was to extend the current Tekton client plugin to do that, or something similar.

Okay. Thank you, that's clearer for me now. I'm probably still a little bit confused; I'm still trying to piece the puzzle together and understand how it works. When you say you're trying to extend the Tekton client plugin to do something similar to Tekton as Code, that does make a lot of sense. But in this scenario I don't see the .tekton folder. What I see is a DSL in a Jenkinsfile which the user can use because of the Tekton client plugin. Maybe the .tekton folder can be a place where they keep the resources for the Jenkinsfile to consume; that's as close as I can see it. I'm not sure. I probably need to read more about it and see how Lighthouse works. Should we add an action item to understand this better, how this should work out?

Yes. This would probably be more of a team action item, to understand how this would work. I actually want to see an example of the code running somewhere. It's more of a sandbox phase, though, and I don't know where it lives.
I will also check in and see if they want to discuss the Tekton client plugin and Tekton as Code, because they might be a good resource. Andrew Bayer might as well, since he's worked on Lighthouse.

Yeah, that would actually be good, to get a bit of background on Lighthouse. How exactly does Lighthouse work? I don't have a perfect idea of how Tekton as Code works either, and you were saying that Lighthouse works on the same principles.

Yeah. Lighthouse was originally written as a kind of replacement for Prow. It takes in your GitHub or Bitbucket or whatever webhook and converts it into an internal representation, and it has a series of plugins that can be run against that data. Originally it used the Jenkins X YAML, which was Jenkins X's own DSL, almost, for describing pipelines, which it then converted into a Tekton job. But what they found is that they spent so much time essentially version-chasing, trying to keep up with the Tekton syntax because it keeps evolving and they had to build it into the Jenkins X syntax, that it became very time consuming. So it was felt that it was better to use Tekton directly. What they have now is a .lighthouse folder where you put your tasks and pipelines and so on, and when a job needs to be triggered, it loads the Tekton resources from there, does a slight bit of manipulation, and then applies them in the cluster.

Do they use Kustomize for that, in the sense that they take those Tekton tasks and apply a kind of patch to them?

I don't think they use Kustomize at that point. I think they have been using Kustomize and kpt and such in other parts of the product, but Lighthouse was designed to be a bit agnostic to all of those things.

Okay. Then I definitely need to read more about it.
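The Lighthouse flow described above, taking provider-specific webhooks and converting them into one internal representation that plugins run against, reduces to a normalization step. All the payload field names below are invented for illustration; real GitHub and Bitbucket payloads, and Lighthouse's actual internal types, differ:

```python
def normalize(provider, payload):
    """Reduce provider-specific webhook payloads to one internal shape."""
    if provider == "github":
        return {"repo": payload["repository"]["full_name"], "sha": payload["after"]}
    if provider == "bitbucket":
        return {"repo": payload["repo"]["name"], "sha": payload["sha"]}
    raise ValueError("unknown provider: " + provider)

event = normalize("github", {"repository": {"full_name": "org/app"}, "after": "abc123"})
print(event)
```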
Lighthouse was originally designed to be a kind of high-availability webhook handler for GitHub, so that when your cluster was updating, or if anything died, you would at least still record the webhooks, capture them and store them internally so they could be processed, and then it would eventually catch up. That was the idea.

Okay. So shall we move on to the next one?

Oh yeah. Okay, so the last one is: it would be great if James were attending to discuss Tekton planning and Tekton scope.

That would be great indeed. I will ask him, and also Andrew, if he could speak on Lighthouse. I've put an action item for myself to do that. Hopefully we'll get them in. I know they've been really busy with the general alpha of Jenkins X 3, but hopefully we'll get them to come in.

Okay. Any other questions or action items? Anything else to bring up?

No.

Did you have any questions?

No, because currently I don't have much of an idea; I'm just trying to figure things out. I just want to work. Later on, maybe, but not currently.

Okay, nice. And do you have a good sense of the first steps for exploring CloudEvents, and how you would move in that direction?

Currently my plan is exploring Kubernetes and then reading about CloudEvents; if you have some suggested steps, that would also be great.

I think exploring Kubernetes is a very good first step, and then you can look at some of the resources we put in the doc. Vibhav, I don't know if you would like to add anything to that.

Yeah, on that doc: if you want to get started with CloudEvents, that doesn't actually require Kubernetes knowledge. You could just play around with the CloudEvents SDK and go through the examples to see what it looks like. That would give you a really good idea of how to get started with CloudEvents.
If you want to look at the Tekton plugin, then you would need a little bit of Kubernetes and Tekton knowledge. That would be nice, if you want to do it.

Okay. So I will explore CloudEvents, and when I have some idea of what it actually does, then maybe later on I will have further questions. Thank you.

Awesome. So that was going over all our action items from last week. We have many more for next week, but that's good; I think we're making good progress in understanding the problem space, so that's excellent. Anything else? Last-minute questions, anything?

All right. Good meeting. Thank you so much for being here, and thank you for working on these projects; it's really interesting. See you next week.

See you. I'm going to go walk around on the beach. See you.