Hi, welcome to the Cloud Native SIG for Jenkins. We're discussing some GSoC admin and answering any questions that you have about GSoC or any of the initiatives that we are involved in. So welcome, Aditya. Do you have any questions? Great, welcome. One of the things we were just discussing is thinking about how we can bring the Tekton Client Plugin project forward as a mentoring program. So hopefully we'll get that done. What has been done recently in the Tekton Client Plugin? I feel like there's continuous work being done on it and I haven't been able to track it all. James Strachan put in some PRs to get the Tekton Catalog stuff working. It uses the jx pipeline binary, which is kind of bundled into the plugin itself, which I think is okay for the short term. Longer term, we probably want to make an actual implementation of that, but for the time being, I think we're probably okay. We may well have issues running this on unusual architectures. Like if someone tries to run this on ARM, I'm pretty certain it's not going to work at the moment. But what we have there is really good and it does work. And then we're trying to work out how to get it into a pipeline at the moment. We actually have it working as a local PR pipeline, which is quite nice. Although there are some issues with the code when it comes to locating files in remote workspaces, which I need to chat to some people about to understand a bit better. But it's progressing; it seems to be working. And the JEP-229 stuff is working really nicely, releasing it each time. So, for me to understand, Tekton has been added as a binary within... So jx pipeline has been added as a binary inside the repo, and what we do is we extract that out and then call it wherever it's being run from. And that calculates your effective pipeline: it resolves the uses syntax, so you can do this kind of pipeline library stuff, which is really nice. I know James has added a proposal to add that kind of functionality into core Tekton. So it could just be a temporary thing, and then in a few months' time, as that proposal progresses, it may be something we want to remove or re-implement. As a sort of temporary solution, yeah, it's working really well. James put a blog out on that this week; that's really nice. I think that's actually being published now. Yeah. How would you re-implement it? What would you do? So if it's something that is going into core Tekton, then I would expect Tekton itself, or the Tekton controller, to handle that. Just asking, is it a blog? This is a PR. Yeah, I'm not sure actually, I'll have a look. Yes, it could well be. For the implementation, this is the... So that's the implementation. Okay. It's not all of it, because that pipeline does a number of things around version streams, which is like the next level of complexity really. It's very nice, but version streams don't make sense inside core Tekton at the moment. So he was working on how to chop that bit up, to use it in a way that is usable for everybody, so it doesn't have to deal with credentials and all that kind of stuff. So if that gets into a new Tekton release, I think all we will really need to do is remove the logic on our side that does that enhancement. We'll just be able to use the updated Tekton release and it should support that out of the box. Great.
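To make the mechanics above concrete, here is a minimal, hypothetical sketch in plain JDK code of extracting a binary bundled in a plugin's resources and invoking it. The resource path, binary name, and arguments are assumptions for illustration, not the plugin's actual layout.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.*;
import java.nio.file.attribute.PosixFilePermission;
import java.util.EnumSet;

/**
 * Hypothetical sketch: extract a CLI binary bundled in a plugin's
 * resources to a working directory and run it. The resource path and
 * binary name are illustrative, not the plugin's actual layout.
 */
public class BundledBinaryRunner {
    public static Process runEffective(Path workDir) throws IOException {
        Path binary = workDir.resolve("jx-pipeline");
        try (InputStream in = BundledBinaryRunner.class
                .getResourceAsStream("/binaries/jx-pipeline")) { // assumed resource path
            Files.copy(in, binary, StandardCopyOption.REPLACE_EXISTING);
        }
        // Mark executable (POSIX only). Note a single bundled binary is
        // also why unusual architectures such as ARM would not work: a
        // real implementation would need per-OS/per-arch binaries.
        Files.setPosixFilePermissions(binary, EnumSet.of(
                PosixFilePermission.OWNER_READ,
                PosixFilePermission.OWNER_WRITE,
                PosixFilePermission.OWNER_EXECUTE));
        return new ProcessBuilder(binary.toString(), "pipeline", "effective")
                .directory(workDir.toFile())
                .inheritIO()
                .start();
    }
}
```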
And by support that out of the box, sorry, I'm trying to get this right in my mind, would that be support version streams right out of the box? So, just the notion of these kind of composite tasks, I think that's what he's calling it. It's the ability to define pipelines in other resources and other images and then refer to them, with the ability to override certain steps. That's the override mechanism. It's a bit like the kind of pipeline library. So in theory, in Jenkins, you could implement pipeline libraries as these Tekton composite tasks and depend on a versioned set of those, which would be really nice. You could centrally manage them and do whatever it is you need to do, while also providing the ability to just override the bits that you want. And is that within running a Tekton pipeline run, or running within... Sorry, I'm just trying to get the linking of what is being inserted where. So the way that it kind of works at the moment is that we're either assuming you're running Jenkins on a Kubernetes cluster which has Tekton installed, or you have in some way configured access to that, so it knows how to get to the Kubernetes cluster. So all we're really doing is locating the task or the pipeline, whatever the Tekton resources are, and creating them in that cluster. And then if there are logs or anything like that, we stream them back. So that's the current way of doing it. In more of a real-world scenario, you may be running these in two different clusters; you'd certainly run them in separate namespaces. So you're creating a new set of Tekton resources, and Tekton would be installed in a different namespace from Jenkins. And then it's a case of just monitoring the status of those jobs, the completion, and feeding back. Vibhav, how does this fit your vision for this? Because you've been involved from the very beginning and I'm just curious what your feedback is on how this is iterating forward. It is actually great to see that we can right now just give a reference to some task, like one of the catalog tasks, and just build the pipeline from there. So yeah, that is great. But I am still thinking about how this could be made cleaner, probably, because as you said, Gareth, I don't know how this will work out in the long term, with the binary being there in the plugin. But I think this is really, really good to start with, to use the catalog tasks. Probably at some point what we could do is query the catalog API and pull tasks from there. And I think there is some work going on in Tekton to make this easier. Not exactly the catalog, but about pulling pipelines. To reference pipelines, you should be able to reference them from anywhere. Like if I have pipelines stored in some place, I should just be able to give the URL to the pipeline, and it doesn't have to download it, it just refers there and starts using it from there. There is some work going on regarding this upstream. If this works out, it will be very cool, because at that point, from a Jenkins pipeline, it would just say, okay, I need to use this catalog task and I want to make a task run out of this. And yeah, it would be very nice to see. Yeah. Karima, sorry, go on. I think that would be a long way down. Currently, I think... so, Gareth, you opened the PR for creating the pipeline, right?
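As an aside, the "locate the resource, create it in that cluster" flow described here can be sketched with the fabric8 Tekton client extension, which the plugin builds on. This is a minimal sketch assuming the v1beta1 API; the namespace and task contents are illustrative, and it is not the plugin's actual code.

```java
import io.fabric8.tekton.client.DefaultTektonClient;
import io.fabric8.tekton.client.TektonClient;
import io.fabric8.tekton.pipeline.v1beta1.Task;
import io.fabric8.tekton.pipeline.v1beta1.TaskBuilder;

/**
 * Minimal sketch of the "locate the resource, create it in the cluster"
 * flow, using the fabric8 Tekton client extension. Namespace and task
 * contents are illustrative assumptions.
 */
public class TektonResourceCreator {
    public static void main(String[] args) {
        // The default client uses the current kube context, which is
        // exactly the test-isolation issue that comes up below.
        try (TektonClient client = new DefaultTektonClient()) {
            Task task = new TaskBuilder()
                    .withNewMetadata()
                        .withName("hello")
                        .withNamespace("tekton-pipelines") // assumed namespace
                    .endMetadata()
                    .withNewSpec()
                        .addNewStep()
                            .withName("echo")
                            .withImage("alpine")
                            .withScript("echo hello from tekton")
                        .endStep()
                    .endSpec()
                    .build();
            client.v1beta1().tasks()
                  .inNamespace("tekton-pipelines")
                  .createOrReplace(task);
        }
    }
}
```

Once the resource is created, watching the run's status and streaming logs back, as described above, would sit on top of the same client.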
So this is, yeah, this will directly go into the Jenkinsfiles, no? Yeah. So I'll just pick this up here in the chat. I'm just trying to work something out. So I've got a kind of integration test that's using the JenkinsRule, and the KubernetesServer as a rule, and it chains them together, but I'm trying to work out how to configure the Tekton client aspect of that in the unit test. Because what happens at the moment is, when I run the test locally, it actually creates the tasks in my own Kubernetes cluster. I was like, this is working, why is it not working when I push it up to GitHub? And then for the life of me... Are you passing the kube context? I'm not passing it; I think it's using my default kube context. I think at this point, what we'll have to do is probably put in a kube context. But yeah, this is awesome. This is really cool. Now we can just run pipelines directly from the plugin. Nice. Okay, I can't wait for this to get merged. Yes, there are some issues with it. When it runs on a remote agent, it can't find the task that I have actually extracted in the local workspace. So there's some logic in the code that isn't quite working, but I'm just going to keep on iterating on this. I've kind of been given the go-ahead to get this pull request sorted. So that's good. So yeah, this is what I'm going to be doing for the rest of the day, right? If that is the issue that's happening, do you think that when we run one of these Jenkinsfiles in the future with the feature that we were talking about just before this, with jx pipeline effective, like if we had to run the Jenkinsfile pipeline with code that is using jx pipeline effective on an agent, in that case also we will need to have the binary on the agent, right? Yes. Yeah. So there are multiple ways of doing that, I suppose. You could just say that people need to install the binary on the agent that they're running on. So that's one option. Another option is to try to... I think it might sound a little weird, but can we serialize the binary and then execute it from Java? Yes, that's actually what's happening at the moment. It's whether or not we can copy the binary, and the right binary, onto the agent to execute it. But that is also an option. Jenkins has a way of doing that; I think it's called MasterToSlaveCallable, which you can write, that often does this kind of stuff. That's one area they haven't renamed yet, actually. I just think, God, I hope that gets renamed. And there is a good initiative happening to update everything. Yeah. So in terms of the renaming, they're starting off doing everything that's kind of visible in the UI, but there's actually a lot of code in there that, if they rename it, is going to break an awful lot of plugins. So eventually, yeah, they possibly need to change the hierarchy of classes so that people can start including the right one and deprecate the old one. But we know that Jenkins plugins are built off really old versions of Jenkins. So, yeah. Is it still possible to send a deprecation notice? Like, have you guys done that kind of thing?
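Since MasterToSlaveCallable came up, here is a hedged sketch of that approach: copying the bundled binary into the agent's workspace over the remoting channel, then running a small callable on the agent. FilePath, Launcher, and MasterToSlaveCallable are real Jenkins APIs; the binary name and the probe are illustrative assumptions.

```java
import hudson.FilePath;
import hudson.Launcher;
import jenkins.security.MasterToSlaveCallable;
import java.io.IOException;

/**
 * Hedged sketch: ship the bundled binary to the agent's workspace over
 * the remoting channel, and run code on the agent via a callable.
 * Names here are illustrative, not the plugin's actual code.
 */
public class RunOnAgent {
    /** Runs on the agent JVM and returns its OS/arch, so the controller
     *  could pick the right binary variant to ship. */
    static class ArchProbe extends MasterToSlaveCallable<String, IOException> {
        @Override
        public String call() {
            return System.getProperty("os.name") + "/" + System.getProperty("os.arch");
        }
    }

    static void copyBinary(FilePath workspace, byte[] binary)
            throws IOException, InterruptedException {
        FilePath target = workspace.child("jx-pipeline"); // assumed name
        target.copyFrom(new java.io.ByteArrayInputStream(binary));
        target.chmod(0755); // make it executable on the agent
    }

    static String probeAgent(Launcher launcher)
            throws IOException, InterruptedException {
        // launcher.getChannel() is the remoting channel to the agent.
        return launcher.getChannel().call(new ArchProbe());
    }
}
```

FilePath operations transparently run on whichever node the path belongs to, which is why this pattern fits the "binary must exist on the remote agent, not the controller" problem described above.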
I know Jenkins is huge, but have you ever done that kind of deprecation for an API? Yes. I mean, if you look in Jenkins core code, an awful lot of it is deprecated, certainly. So one of the PRs that I just pushed up, there is something else going on with this Kubernetes mock client that I need to have a look at, but it was based on the recommendation from Jesse to upgrade the core Jenkins version that we build the plugin against, because there is a different series of APIs that you can call. So you get the perform method, but without the workspace, which means you can run it differently. This is meant to be much better anyway. So that is a potential thing. But if you have a look at the code there, most of the functions are deprecated, but they still have to exist because plugins still build against those versions. That reminds me of something I read about while starting off with plugin development. Is there a way to safeguard a plugin, like if we have maybe a change in the UI of the plugin, is there a way to safeguard the actions that are happening when some feature got deprecated? How do we manage that on a plugin level so that it doesn't break users if we deprecate something on a cosmetic level, not a functional level? Yeah. I'm not sure if this will completely answer that question, but in Jenkins core very little code is ever removed. You always kind of overload the methods with the new ones that you need, and so the old working versions, even though they're marked as deprecated, still exist, and they might delegate on to the new methods or something. There is a way of doing it, but it's quite strict; you don't really want to remove code from there. I think there's a really good wiki page about the API prime directive, and it goes over these kinds of things: when you're changing APIs, where the APIs are Java code, what you can and can't do to them to make sure that they're forwardly and backwardly compatible. If I can find that, I'll put it in the chat. Yeah, can you please? Yeah, I think it's from the Eclipse pages. But it's valid not just for Java-based APIs; the same is true with pretty much everything. Yeah, this will be very helpful; I haven't actually studied this part of API development. It's something that the Spring frameworks are quite good at. They'll mark things as deprecated for quite a long time until they produce a new major release, and only on those major releases are they actually removing those functions. And even then, they don't always remove them straight away. You might find that people in the community are saying, no, it's too much effort for me, I can't upgrade to the latest version, and so they leave them in for a bit longer. Or it's that the developers or the release team don't want to deal with the bugs that will come. Yeah, I have noticed this with deprecation: if they say something is deprecated, it actually stays in use for a longer time, and no one really moves to the newer versions. There is this discussion at the moment about a potential Jenkins 3, isn't there? That's happening.
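As a small illustration of the overload-and-delegate pattern just described (illustrative code, not actual Jenkins core): the old signature stays forever, marked @Deprecated, and forwards to the new one, so plugins built against either version keep working.

```java
import java.io.File;

/**
 * Illustrative sketch of the pattern described above, not actual
 * Jenkins core code: the old signature stays, marked @Deprecated, and
 * delegates to the new overload so existing callers keep working.
 */
public class BuildStepExample {
    /** Old API: kept so plugins built against it don't break. */
    @Deprecated
    public boolean perform(File workspace) {
        // Delegate to the new method with a sensible default.
        return perform(workspace, /* quiet= */ false);
    }

    /** New API: callers migrate to this at their own pace. */
    public boolean perform(File workspace, boolean quiet) {
        if (!quiet) {
            System.out.println("Running in " + workspace);
        }
        return true;
    }
}
```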
And it would be very interesting to understand: is there going to be a policy of removing a lot of deprecated code for the Jenkins 3 release? If you're following proper semantic versioning, this would be the perfect time to do it. Yeah, it would be a very good moment. I have seen the proposal for that and it's very interesting and really great for Jenkins. Really exciting. I don't know how much bandwidth there is for that, or in general for actually doing a big deprecation. So even though semantically it would make sense, I don't know if in practice there'd be that many removals or breaking changes. Yeah, it's a very good issue to raise at this moment. If you go with Jenkins 3, it's a good time to do it. Is that proposal a piece like a white paper or something for Jenkins 3? Um, I'm not sure. I think it's something that's being mooted. I can look into it; I'm not sure even how much of it is public. I will get you a copy if I can. I'll have to circle back on that. I think it might be a proposed topic at the next contributor summit. Yes. Yeah, the idea is to be sharing it. But it's a very new initiative, so that's why I have to actually check how much is public. But yes, we definitely are going to want feedback from the community. What would be nice to see is if the next version of Jenkins is Quarkus-able, I don't know. What is the support right now for Jenkins within Quarkus? The last time I remember, it wouldn't compile; I tried compiling it with Quarkus. It was, I think, like a year and a half ago. I didn't have much luck with it, but it would be nice to see Jenkins be compiled with Quarkus. I have a feeling that Jenkins relies quite heavily on reflection, and that may be a bit of a problem, if not Jenkins itself then at least some of the libraries that it uses. So reflection is the main issue because of which it doesn't happen. Yeah, I can never remember which one's which, but you have to register classes for reflection at compile time, declaring that this class can be used for reflection, so that it can do that. Otherwise you can't just do it for everything like you can with normal JVM behavior. If Jenkins 3 does come, what do you think would be the main features and changes? It probably would have to be mostly in the engine, right? Yeah, so I actually don't know, and I don't know whether it would be more of a marketing release or an actual functional, breaking-change release. But I suppose these are the things that will come up at the contributor summit, when they start having conversations there. I know that we've had quite a few requests from the community to release even what we have today as a Jenkins 3, because we're on 2.289, so that implies 289 minor changes that have gone on. So it's come quite a long way, and they feel it would be quite a good marketing push if we release something as 3, but technically you'd probably want to use that 3.0 to remove or upgrade certain components. Yeah, 289... when did Jenkins 2 release exactly? Quite a few years ago.
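For context on the registration step mentioned here, a generic Quarkus sketch, not Jenkins code: with Quarkus and GraalVM native images, reflectively accessed classes must be declared at build time, for example with @RegisterForReflection. A codebase as reflection-heavy as Jenkins would need this for a very large class graph, which is the difficulty being described.

```java
import io.quarkus.runtime.annotations.RegisterForReflection;

/**
 * Generic Quarkus sketch of compile-time reflection registration.
 * Without the annotation, a native image would fail the reflective
 * lookup below at run time.
 */
@RegisterForReflection
public class ReflectivelyLoaded {
    public String greet() {
        return "loaded via reflection";
    }
}

class ReflectiveUse {
    public static void main(String[] args) throws Exception {
        // This kind of dynamic lookup is what must be declared ahead of
        // time when compiling to a native image.
        Object o = Class.forName("ReflectivelyLoaded")
                        .getDeclaredConstructor()
                        .newInstance();
        System.out.println(o.getClass().getMethod("greet").invoke(o));
    }
}
```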
I know that there are still some plugins that are built against version 1.x, so any code that was removed, or would be removed, potentially breaks the plugins that aren't maintained. Just a quick shout-out for the contributor summit. It's on the 25th of June and is essentially part of CDCon, which you should all sign up for because it's going to be awesome. I put a link in the chat to the contributor summit, and there is a draft agenda, but it is still looking quite draft. That does mean you are more than welcome to add topics that you would like to be added to the agenda. And I would imagine that Jenkins 3 is going to definitely feature quite largely in that. So it would be fantastic to have your involvement and input and proposals. So during CDCon, will this be a separate session, like a long session or something? Yeah, it will be at a separate time as well. It's adjacent to CDCon, so you will not have to miss CDCon talks to participate in it. Yeah, I think it's the day after CDCon and it's an associated summit. There are actually a number of really good summits happening around CDCon and connected with CDCon. There's a GitOps Summit too, which should be really fun to look into. I'll put a link in the chat for CDCon, and for the others along these lines as well. I think it's CDCon, the GitOps Summit, and one other summit that are happening; they are co-located, right? Yeah, I mean, co-located means something different now, it's a bit sadder, we're still in the virtual world, but yes, these are essentially co-located events with CDCon. Yeah, thanks for reminding me; I haven't seen the sun in a while. So CDCon is from June 23rd to 24th; here's the registration link, I'll stick that in our chat. Can I circle back and ask a basic question on the Tekton Client Plugin: as it's currently configured, are there any limitations on what you can bring into a Jenkins pipeline from the Tekton catalog? Well, as far as I know, the Tekton catalog that comes with the binary is basically the one that can be used. I don't think there are any remote calls to the binary, sorry, remote calls to the catalog. Not through the Jenkins interface, or not through the Jenkins controller anyway. I'm assuming you could always install catalog items manually in the namespace that you have Tekton running in and then refer to them. Yes. I mean, it's something that I haven't tried, but there's no reason why that wouldn't work. No, it's pretty straightforward, using those catalog tasks. Yeah. Initially, how I was thinking the catalog would work is that we kind of just store the entire catalog, all the YAML files, which I don't know how great an idea it is, but store them with the resources in the Tekton Client Plugin. And we sync the catalog once in a while, probably once every day, to keep it updated. And what we can do in the create task, like createRaw, is offer one of the items in the catalog over there, just reference it, have a drop-down menu or just reference it from there and allow users to use it. That was a much simpler idea that I thought of for the way we could use catalog tasks.
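A hypothetical sketch of that drop-down idea, using standard Jenkins descriptor machinery: a doFillXxxItems method populates a select box, here from a locally synced copy of the catalog. The doFill naming convention and ListBoxModel are real Jenkins plugin APIs; the catalog-listing helper and its contents are assumptions.

```java
import hudson.util.ListBoxModel;

/**
 * Hypothetical sketch of the drop-down idea above. In a real plugin
 * this method lives on the step's Descriptor and is wired to a
 * <f:select> form field named "catalogTask".
 */
public class CatalogTaskDescriptorSnippet {
    public ListBoxModel doFillCatalogTaskItems() {
        ListBoxModel items = new ListBoxModel();
        for (String task : listSyncedCatalogTasks()) {
            items.add(task, task); // display name, value
        }
        return items;
    }

    // Assumed helper: would read the YAML files synced from the catalog.
    private java.util.List<String> listSyncedCatalogTasks() {
        return java.util.List.of("git-clone", "kaniko", "golang-build");
    }
}
```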
But yeah, jx pipeline effective is also a good solution. Actually, the way it unwraps the pipeline is much more helpful for the users. But the problem that I see there sometimes is that once it unwraps the pipeline, it looks a little confusing; if you have to go and change some variables, there's a lot of stuff to look at. So it would be nice to just reference the catalog task in there. Yeah, but I mean, that sounds pretty sweet. Actually, when fully implemented, you would be able to reference any Tekton task. That's awesome. Yeah, actually, we can do it right now, if I'm not wrong. We can reference any Tekton task. It's like an @: you have to give an @ and the location of the task that you want to use. Yeah. So it's there right now, but I think it will still be a long time until we change the implementation completely. I think what we have right now is also pretty good. So recently, the last thing I worked on in the plugin was being able to give different contexts, Kubernetes contexts: which cluster would you like to run this Tekton task or pipeline in? I was able to implement that on the global plugin configuration side, but I needed some help with referencing that in the job itself. So I was able to reference it, but the data that comes back from the global plugin configuration isn't updated. If I go back to the global plugin configuration I can see that these things are there, but I'm not able to reference anything apart from default in the job. So I have to work on this. After merging it, I opened a bug for this. Gareth, could you help me out with this, probably later on next week or something? Yeah, I'm out on Monday, but I can probably do something outside of that. I think that may even be the bit that I'm looking at now with the pipeline stuff, because I'm trying to tell it to create the tasks in the mock server, not the one from the current kube config that I'm connected to. So it could be very similar, whatever the solution is. Actually, can you also try, considering you're already working on this part, creating a pipeline step from the createRaw, which has that part where you can give the context? So probably if you can define it in a pipeline and then pass the context on to createRaw, and in createRaw itself you give which cluster to use and which namespace to use, probably we can start there. That would probably work, also. Yeah. What I'm thinking is... so what I've done is I made the array for the kube configs global, and I tried to reference that from the job. And it seems like the state of the array is what it was initially; it's localized only to the global plugin configuration, even though I thought it was global. So it's something related to scope that needs to be fixed. So in the middle of the two weeks, I was just thinking about how we could use this plugin to run entire Jenkins pipelines: in the pipeline, I basically give a Jenkinsfile, and that entire Jenkinsfile starts executing tasks on Tekton. I was thinking how this would work with the DSL.
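A hedged sketch of what the global configuration side usually looks like, and the likely fix for the stale-state issue above: the job-side code should look the GlobalConfiguration singleton up on every use rather than caching it at startup. GlobalConfiguration, @Extension, load()/save(), and all().get(...) are real Jenkins APIs; the class and field names are illustrative.

```java
import hudson.Extension;
import jenkins.model.GlobalConfiguration;
import java.util.ArrayList;
import java.util.List;

/**
 * Hedged sketch of the scope issue discussed above. The key point: the
 * job/step code should look the singleton up each time rather than
 * caching the list, otherwise it only ever sees the initial state.
 */
@Extension
public class TektonGlobalConfigSketch extends GlobalConfiguration {
    private List<String> kubeConfigContexts = new ArrayList<>();

    public TektonGlobalConfigSketch() {
        load(); // restore persisted state on startup
    }

    public List<String> getKubeConfigContexts() {
        return kubeConfigContexts;
    }

    public void setKubeConfigContexts(List<String> contexts) {
        this.kubeConfigContexts = contexts;
        save(); // persist immediately so other lookups see it
    }

    /** What the job/step code should call to get the current state. */
    public static TektonGlobalConfigSketch get() {
        return GlobalConfiguration.all().get(TektonGlobalConfigSketch.class);
    }
}
```

If the job-side code instead held on to the list it read at construction time, it would see exactly the symptom described: only the default entry, never the updated configuration.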
So I was thinking, if the DSL gets to a point where we are able to nicely define an entire pipeline, we could probably write an example where we have different Jenkins tasks and all of them are running in the Tekton pipeline. But then I was thinking of the OpenShift Sync plugin, and in the OpenShift Sync plugin they have something called BuildConfigs and Builds. What happens is that BuildConfigs are synced into Jenkins as jobs. So I was thinking whether it would make sense to do something similar: syncing tasks and task runs, or sorry, pipelines and pipeline runs, into Jenkins. So if I have a pipeline and sync it into Jenkins, that means that pipeline would be completely converted to a Jenkinsfile. And that would mean we need to have the DSL ready at that point. It sounds like it's possible, but the hurdle for this one is the DSL. So at that point, what is possible for us to do is write a Jenkinsfile completely written with the pipeline step from the Tekton Client Plugin. And then, if we install Jenkins on a Kubernetes cluster and we turn on the Tekton sync plugin, probably, it will sync all of the tasks and task runs into Jenkins. It seems like a big step, but I was just playing around with this idea in my head the other day and I just blurted it out. What do you guys think? Yeah, I mean, it sounds like a good idea. For me, the most important thing is to get this to a point where it can be used in the community and we start getting feedback, and real-world use cases of how people actually want to interact with the Tekton Client Plugin. I think that would be really cool, because at the moment it's hard to see how people would want to use this. We know why they would want to use it, but it's hard to see how they'd actually go about it, what things they need. I have a feeling that we'll get a good spike in usage once we have the DSL ready to some extent, because I'm slowly understanding the fact that not a lot of people use it directly. There are probably a lot of users for it, but the ease of use with a Jenkinsfile matters much more. So if there is a way to turn this thing around and have good Jenkinsfile integration, which will come through the DSL, I think at that point it would be quite easy. Gareth, would you like to give a talk with me at DevOps World on this plugin? Yeah, that should be fine. When is DevOps World? I don't know when it is, but the deadline is on May 20th, I think. Yeah, that'd be cool. We'll sync up on that. My question was much higher level and conceptual, and you addressed it a bit in your last comment. So for this idea, what you would be doing is something quite different. You'd in fact be taking a Jenkins pipeline, and the Jenkinsfile for that, using the syncing plugin, and be able to take that and then run that with Tekton. So it's the inversion; it's the flip.
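A rough sketch of that sync idea, in the spirit of the OpenShift Sync plugin: watch Tekton PipelineRuns in a namespace and mirror them into Jenkins. The fabric8 watch API shown is real; createOrUpdateJenkinsJob is a hypothetical placeholder for the hard part, the PipelineRun-to-Jenkinsfile mapping discussed above.

```java
import io.fabric8.kubernetes.client.Watcher;
import io.fabric8.kubernetes.client.WatcherException;
import io.fabric8.tekton.client.DefaultTektonClient;
import io.fabric8.tekton.client.TektonClient;
import io.fabric8.tekton.pipeline.v1beta1.PipelineRun;

/**
 * Rough sketch of the sync idea above: watch Tekton PipelineRuns and
 * mirror them into Jenkins. createOrUpdateJenkinsJob is a hypothetical
 * placeholder, not an existing plugin API.
 */
public class PipelineRunSyncSketch {
    public static void main(String[] args) {
        TektonClient client = new DefaultTektonClient();
        client.v1beta1().pipelineRuns()
              .inNamespace("tekton-pipelines") // assumed namespace
              .watch(new Watcher<PipelineRun>() {
                  @Override
                  public void eventReceived(Action action, PipelineRun run) {
                      switch (action) {
                          case ADDED:
                          case MODIFIED:
                              createOrUpdateJenkinsJob(run); // hypothetical
                              break;
                          case DELETED:
                              // optionally remove the mirrored job
                              break;
                          default:
                      }
                  }

                  @Override
                  public void onClose(WatcherException cause) {
                      // a real controller would re-establish the watch here
                  }
              });
    }

    static void createOrUpdateJenkinsJob(PipelineRun run) {
        System.out.println("would sync: " + run.getMetadata().getName());
    }
}
```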
Yeah, so basically, exactly. Well, it would be not really... not run it. It would be like an alternative dashboard to Tekton, in a way. Maybe that comes off incorrectly, because what this means is that when the tasks and task runs, the pipelines and pipeline runs, are synced, what will happen is they'll be synced by namespace. So there'll be a Jenkins project named after the namespace, and the jobs in that namespace will be the pipelines. Now that I think about it... okay, I'm confusing myself between job and project. It should probably be: if I go into a Jenkins job, I should see that, okay, this is the Tekton pipeline which ran, and this is the pipeline run which happened. And if I press build again, it runs the Jenkinsfile, which runs the Tekton pipeline run. We're just triggering the creation of the Tekton pipeline run, and after that, I should see all the results back in Jenkins. So it's kind of like the sync plugin calls the client plugin, in a way. I'm just thinking about how, whenever we do that sync, the pipeline runs get converted to Jenkinsfiles, how that will happen. That is the one hurdle that seems to be there. Did anything I said make sense? I felt like I just blurted it out. No, that was very good. I can't answer your question, but it was a very good explanation. Gareth, did you see this? I think there was once a PR for Tekton about running Jenkins jobs. Yeah, yeah. So I'm just wondering about how that relates to your idea, how much of an overlap there is, or how it can inform it. That idea is also similar. What it does is: Tekton will have its own new controller, and that controller will be running in its own pod. So once it's created, there will be a Tekton resource called JenkinsJob, which is actually a Tekton resource. And this JenkinsJob will trigger, or do something, on Jenkins and then wait for that process, whatever it is, to finish and come back to the controller. Usually what happens is that there are multiple pods, one for each of the pipelines. In this case, that won't happen. All the controller would be doing is triggering and watching those tasks, and probably having something like a queue to manage keeping in touch with those Jenkins jobs that are happening. So that's the idea behind it: it's basically calling Jenkins from Tekton and managing it with a Kubernetes controller. Yeah, it just feels like everyone's trying to run everything. Yeah, it is a bit like a big game of LEGOs, and it's quite fun. Interop is just LEGOs, I feel. Yeah. And then to get to this point... it's really fun to build things, but then it's like, okay, how can I actually take this new toy and play with it in the real world? But you know what's interesting? In Tekton initially, I think as they started out, abstraction and composability were such a big thing with the jobs and tasks themselves.
But I mean, with the tasks, pipelines and the way it's created, I don't know, it will slowly build towards more interoperability and having something like a plugin mechanism, because plugins were initially discussed quite a bit in Tekton, but slowly the discussion moved on; on the idea of plugins and how they would be implemented, there was no real decision made as such. That's why the catalog project started in the first place: because tasks are reusable, and if we provide tasks that can just be used, where only some parameters need to be given, that would be cool, right? So that's how the catalog thing kind of started. But if you notice, to actually extend the engine to support something somewhere else, there is a need for doing something like what has been mentioned in the Tekton experimental issue that you're talking about right now, where there needs to be a different controller entirely, and this controller needs to then watch for a resource like JenkinsJob. And probably, on the Tekton side, this is the way that would be taken for extending Tekton. Because I think it was built with extensions in mind. Sorry. That's good. When we're looking at CloudEvents data, which Tekton is supporting and Jenkins might be supporting, would that also be a way to link up very disparate pipelines? Could that be another alternative way to go between Tekton pipelines and Jenkins pipelines and Tekton pipelines, however you want to do that? Am I understanding that? And then you could have a Kubernetes controller which would watch for the different events. Does that make sense? Yeah, actually. Well, I was thinking about this before: you could just have an event queue on the Jenkins controller on one side, and then that event queue on the Tekton side would kind of manage stuff on Jenkins through CloudEvents. I don't know; probably we'd have to pass a lot of data, and how will the event flow work in that case? Like, okay, a Jenkins job event is created, and the job CloudEvent goes to Jenkins, and Jenkins is listening for this, and it will create a job, it will start a job based on the data in the payload, which will be a Jenkinsfile. It could probably work something like that. And then the controller and whatever the CloudEvents plugin is will talk together, managing this stuff. It could work like that as well, probably. I'm just thinking out loud. Yeah, no, no. Are you attending the Events SIG for the Continuous Delivery Foundation? Okay, I have not; I've just been busy and unfortunately I haven't attended as much as I would like to. I'll probably be attending more in the future. But I know that they do want more input from the Jenkins community. I may have pinged you about this. Yeah, they would like more input on, I guess, the data definitions that we feel we need. I just think you'd be a really great resource for that. Yeah, I did see a message in the morning; I do need to reply to it. I'm just not sure, should I reply privately as well as publicly? Yeah, that's fine. That's great. Yeah, I'm very excited for the CloudEvents proposals and what's going to come.
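For a feel of what that payload might look like, an illustrative sketch with the CloudEvents Java SDK (io.cloudevents:cloudevents-core). The builder API is the SDK's real API; the event type, source, and Jenkinsfile-as-payload idea are assumptions, not any agreed interop contract.

```java
import io.cloudevents.CloudEvent;
import io.cloudevents.core.builder.CloudEventBuilder;
import java.net.URI;
import java.time.OffsetDateTime;
import java.util.UUID;

/**
 * Illustrative sketch: what a "start this Jenkins job" CloudEvent, as
 * speculated above, might look like. Type/source values and the
 * Jenkinsfile-in-payload idea are assumptions.
 */
public class JenkinsJobEventSketch {
    public static CloudEvent buildStartEvent(String jenkinsfile) {
        return CloudEventBuilder.v1()
                .withId(UUID.randomUUID().toString())
                .withType("dev.example.jenkins.job.start") // assumed event type
                .withSource(URI.create("/tekton/jenkins-controller")) // assumed source
                .withTime(OffsetDateTime.now())
                .withDataContentType("text/plain")
                .withData(jenkinsfile.getBytes())
                .build();
    }
}
```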
I'm very excited to see the infrastructure that will be spun up to test CloudEvents, because the implementation itself isn't very hard; what would be interesting is how the students will use it. Yeah, I agree. It's a very exciting project. We have had a good conversation, and I've let this meeting drag a little bit; time-wise, it's taken more time than I had expected. But thank you for a really great discussion today. That was really helpful for me, and hopefully for others, and really enjoyable. Do we have any last questions or any other issues to discuss? Anyone want to bring up anything that we have missed so far? Okay, great. Thank you all for being here today. We'll see you at the next Cloud Native SIG meeting, and I'm glad to say we'll see you next week. Thanks.