Hello, everyone. Welcome to the May Day meeting of the Pipeline Authoring SIG. I'm Liam Newman. We have a bunch of people on the call; hopefully you can see a pretty sizable list here. We are recording this meeting, so if you don't want to be recorded, please drop off. Also a reminder that this meeting is governed by the Jenkins Community Code of Conduct, so be awesome to each other, and let's get going. As an overview, we have a couple of open items. Mark had to drop off to handle some things; we'll talk to him next week. We'll have a presentation on Pipeline as YAML, and also a multi-pipeline job idea from Steven Foster, and we have some docs for that. Cool. So, the open items: the Jira reviews are going to remain open. I got buried under some build issues at work, so I'll come back to that. Mark got the PR landing page merged, I believe. Let me go look at Jenkins. Is it on the roadmap? Is that right? Mark, maybe you can tell me. I do know that the Pipeline as YAML item is on the roadmap. All right. And the roadmap is... remind me? Liam, it's kind of tough to get to right now. You should go to the bottom, under governance: the structure and governance link under the project column, in the blue section at the bottom. We haven't highlighted it yet. Okay. And then at the bottom you'll see the roadmap, third link down. Thanks. Yeah, we will highlight it. It's still a draft. All right. So as you can see, we have Pipeline as YAML currently in incubation, but it has been released; the plugin is downloadable. So let's move on to that. The Pipeline as YAML plugin has been worked on by Aytunc Beken, who is on the call with us here, and I will bring up the doc for that. That was a meeting you had yesterday. Okay, Aytunc, do you want to talk about this? What do you want to tell us? Sure. I can also do a little demo of this. So, as the name says, its purpose is pipeline as YAML. I just released version 0.2 last week.
It is still in the incubation period. We have some issues that we want to implement before going to version one; that was another topic that we talked about yesterday. After that, it will probably be released as version one and end its incubation period. Also, yesterday I learned that this meeting happens every week, so I will try my best to join as much as possible and to introduce Pipeline as YAML, because I implemented something, but I also want to get feedback from users, since there are very different kinds of usages of declarative pipeline in real-life examples. Sure. So it would be nice if people tried it and provided feedback about the usage or improvement points, maybe some other ideas. I am open to all suggestions. Whenever you want, I can do a small demo of this. Yeah. I'm not sure... I think I have to make you the host to let you share your screen, so let me give that a try. And let's see here. I have now made you the host. There you go. Okay, that means you should be able to share your screen. Ah, there we go. Okay, I've stopped sharing, and now you can share. Sure. Yeah, it's kind of buried in the menu system. Okay, I hope you all can see my screen. Now we can, yes. I'm going to ask you to make your text a little larger, just for people to be able to see easily. You're on Linux. It's doing video now, so it's going to take a little longer. Okay, so I don't know how to close this. Okay. So this is the Pipeline As YAML GitHub page; you can also access it from the Jenkins plugin site. The idea is very simple: you just put your Jenkinsfile in your project, like any other Jenkinsfile in declarative pipeline syntax, and you create your multibranch pipeline. Let's define a multibranch pipeline here. So I'm going to be defining it, and I'm going to give my local path.
Could you see my black screen right now? Only the browser. The browser. Okay. Well, I can't share my whole screen. It doesn't want to share the whole screen on mine either. Okay, so I will switch it, no problem. So what I have is a local Git repository, just to show the example. The YAML file is here. You can define any file name; the name is not important, so you can use other names, or the .yml extension. You will see much more detailed examples on the plugin page; I just created a simple example here. We start with the pipeline key. This is the overall structure of the pipeline; we need to define this key. Then we can start defining the agent, with a node definition, with any, or with none. The usage is very similar to what we have in declarative pipeline; I tried to keep it very similar to the current thing. Then we have the environment definition. Again, similar to declarative pipeline, we define our environment variables as key-value pairs. Then we have the options part. A good part of this plugin is that it does not reimplement every step or option or post condition that exists in the Jenkins declarative syntax. If you want to add other options to your pipeline, you can easily go to the pipeline syntax page, select declarative pipeline, and select options, parameters, whatever you want. Every option defined there can easily work with this plugin. That's the overall idea. Can you show us that? Just go ahead and do it. Yeah, sure. You do that and then generate. Yeah, that's this. So as you see, this is the declarative definition. What I do is take the disableConcurrentBuilds configuration, come back to my YAML file, and just add it as another line. As you see, this plugin just creates the structure, the meta-structure, of the Jenkins declarative pipeline as YAML.
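To give a rough idea of the structure being demoed, a minimal Jenkinsfile YAML along these lines might look like the following. This is an illustrative sketch, not the plugin's authoritative syntax; the key names mirror declarative pipeline, and the exact schema should be checked against the plugin's README:

```yaml
# Illustrative Pipeline As YAML sketch; verify key names
# against the plugin README before relying on them.
pipeline:
  agent:
    node:
      label: "linux"     # could also be `agent: any` or `agent: none`
  environment:
    APP_ENV: "test"      # environment variables as key-value pairs
  options:
    # Pasted straight from the declarative snippet generator:
    - disableConcurrentBuilds()
```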
So all the steps already defined in Jenkins itself can be used. For example, here you can see we define stages, and then we start defining our stage under the stages key. Similar to the declarative syntax, every stage has its steps, and we can define the steps as a list under the steps key. You can also use other steps which are defined in Jenkins itself. For example, I will change my screen. If you want to use another step here, what you need to do is... let's do sh. Yeah, I don't know, 'test'. What I can do is just take it, come back to my YAML file, and add it as a list item — sorry, as a list item or as a string. It will just be converted into a normal step. For some advanced Groovy scripts, of course, this usage may not fit, so there is another way of using scripts, Groovy scripts, in the definition of steps. It is similar to what we have in declarative pipeline: under steps we define a script key, and then we can write our Groovy script under it with a multi-line YAML definition. It will just be converted to a normal script block in the declarative pipeline. And I also wanted to show that we implemented the post section of the declarative pipeline, so you can use that too. So let me save it. Okay, so I'm copying my file. I'm going to add my local directory here as a project, as a Git repository. And what I do is very similar: I just select the mode 'by Jenkinsfile as YAML'. Again, this is a YAML script path; the default is this definition. Just a quick save. It finds the Jenkins YAML file. Then we come back, and of course it fails. Murphy. Now, okay, I need to put it as a string. My bad. Yeah, that's the reason it fails — the YAML fails because of this comma thing. Could you do that sh command as 'sh:' with a colon, and then have the parameters be sub-items under it?
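Putting the pieces from the demo together — steps as plain strings, a multi-line script block, and a post section — a sketch might look like this. Again illustrative only; the plugin's test files are the authoritative examples:

```yaml
# Illustrative sketch; stage and step contents are hypothetical.
pipeline:
  agent: any
  stages:
    - stage: "Build"
      steps:
        # A generated step pasted in as a plain string
        - sh "echo test"
        # Advanced Groovy goes in a script block via multi-line YAML
        - script: |
            def msg = "built on ${env.NODE_NAME}"
            echo msg
  post:
    always:
      - echo "finished"
```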
I am planning to do that, but I'm planning to work on something like a converter, a syntax converter, because I do not want to implement every step in Jenkins; I want to create a rule for converting every step into declarative, or vice versa. So I'm going to work on that, but until that feature, I've just implemented it this way. So, can I ask a quick question? The declarative pipeline engine has alternative parsers, so I'm curious if that functionality might already be done. If you haven't dived into the declarative code base, I pasted a link in the chat. Declarative has a model definition, and it has an existing JSON parser. So I'm wondering if you could leverage the existing declarative parsers to do that parsing for you. At the end of the day, declarative pipeline is just a schema. I wonder if you could represent the exact same schema through YAML and then leverage declarative's existing parsers to execute the pipeline side of things. Okay, I know this plugin; I work with it. But I will check it, because I'm using the model definition for validating the converted declarative pipeline. Before running the pipeline itself, I validate it through this class. I'm also going to use it for another feature that I'm going to implement later: I'm planning to implement converting a declarative pipeline into YAML, so it will be much easier for users to convert their pipelines into YAML. So let me go back to the browser. And it works. Cool. Very cool. Everything's fine. Let me show the plugin page. I tried to document every kind of usage possible, and with the links here you can go to the test files, which also show different definition types, because I didn't want to make the documentation too long.
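For reference, the kind of YAML shown in the demo corresponds conceptually to a declarative Jenkinsfile like this one. The declarative syntax itself is standard; the stage and step contents are illustrative:

```groovy
// Declarative equivalent of the kind of YAML being demoed.
pipeline {
    agent any
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage('Build') {
            steps {
                sh 'echo test'
                script {
                    def msg = "built on ${env.NODE_NAME}"
                    echo msg
                }
            }
        }
    }
    post {
        always {
            echo 'finished'
        }
    }
}
```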
So currently, all the definitions in declarative pipeline are also implemented: agent, environment, options, post, tools, when conditions — including nested or child when conditions, anyOf, allOf, things like that. We also have parameters, triggers, stages, and parallel stages already implemented. Probably another feature in the next release will be the matrix definition from declarative pipeline. So that's all for the short demo. Thanks. Awesome. Awesome. Okay, I'm stopping my share. Sounds good. Do you want to give me host back? Of course. The way to do that is to click on my picture and the little dot-dot-dot menu there. And yeah, exactly. Okay, excellent. So I will be glad if you all can just use this YAML plugin in your daily life, with real-life examples. And I will be glad if you can provide feedback on the usage before it goes to version one, because after that it will be hard to change the overall structure. Yeah, it'll be much harder to change the basic idiom at that point. Yeah. Okay. Well, cool, this is really interesting stuff. You already put up a blog post about it, right? Or at least I saw a tweet; I don't know if I actually saw a blog post. No blog post yet; I'm going to write one. So, okay, if I can ask a question: we've got Aytunc and Steven both here in the same group. So Steven, the templating library that you created and maintain and have cared for uses YAML as its basis as well, doesn't it? Can you help me understand the interactions there? Help me conceptualize this. Yeah, sure. So there are some slight differences. The plugin I maintain is called the Jenkins Templating Engine. The basic idea, in a nutshell, is that it lets you pull the Jenkinsfile out of the repo and create tool-agnostic templated pipelines that can be reused.
So instead of, you know, hard-coding that I'm going to use Maven for a build and then SonarQube for static code analysis, you define a generic scaffold of a pipeline: I'm going to do a build, I'm going to do static code analysis, I'm going to deploy. And then the piece you're mentioning is actually a configuration file that lives alongside your templates and sort of hydrates what the template is going to do: you can specify what libraries you want to load, and depending on the libraries you load, you get different implementations of that template and pipeline. That's actually a custom Groovy DSL, but you could have a YAML parser for the configuration. So it's a little bit different: the YAML side is for a configuration file that then configures your templated pipeline, as opposed to being the primary driver of what the pipeline is going to do. Got it. Thank you, thanks for the clarification. Whereas in Pipeline as YAML, the YAML file is specifically defining what would conceptually become the Jenkinsfile — it's defining the pipeline outright. Yours provides a layer that uses it as a data file. Thanks. I am very curious to see if we can save you a lot of work from having to implement steps in Pipeline as YAML on a per-option basis, by just taking a YAML file and sending it through what declarative already offers from a parsing perspective. That way the functionality is inherently one-to-one, because it's running through the same engine, so you don't have to do any of the translation; maybe declarative could do all of that work for you. Liam, do you help maintain declarative? Yeah, exactly. And I was going to say, the problem there is that declarative gets down to the level of the steps being executed, and it doesn't actually parse those steps.
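A sketch of the Jenkins Templating Engine split being described — a generic template plus a configuration file that "hydrates" it — from memory of the JTE docs; the library and step names here are hypothetical:

```groovy
// Pipeline template (tool-agnostic scaffold); the step names are
// provided by whichever libraries the configuration loads.
build()
static_code_analysis()
deploy()
```

```groovy
// pipeline_config.groovy -- the configuration DSL that picks the
// concrete implementations (library names hypothetical).
libraries {
    maven        // supplies an implementation of build()
    sonarqube    // supplies an implementation of static_code_analysis()
}
```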
At least not at the model level. Once it actually gets down to getting ready to run the job, it does a bunch of different things, but at the model-definition level it doesn't actually cover that. Gotcha. I was sort of thinking that, the same way Jenkins Configuration as Code was able to leverage descriptors to support everything seamlessly, maybe there was a way to do the same thing for pipeline. You know, now that I think about it, I might be thinking of that wrong, actually. I took a quick look when you linked that JSON parser, to try and see — it looks like it does go down into arguments, at least to some extent; it just doesn't interpret them. Right. I would say that for the YAML stuff you probably don't need to — I mean, I guess ideally you would like the syntax to be validated, but I would probably just allow arbitrary arguments, at least as a starting point. Yeah, yeah, that's right. You know, within pipeline itself, you could use descriptors and data binding to implement syntax parsing that is correct for every step, but it'd be a lot more difficult. Yeah, and I was thinking of the script block; for the regular steps it actually does do that. That's right. So, cool. I mean, it's possible. Yeah, I just want to think about this script block, because really defining every step model is very hard. Maybe I can just create some converters, but trying to implement every model of every step is not very feasible. Yeah, you might not be able to actually have them modeled, but you could at least use the same underlying structures. Cool. All right, I'll see if I have time; maybe I'll play with that and see if I can open a PR, and we can talk about it if it ends up panning out. If you're open to that, of course — I don't want to overstep.
From what I saw, the tricky or confusing thing is basically that step arguments can come in different shapes: you can have one default argument, you can have many unnamed arguments, or you can have many named arguments, and all of those would have different YAML structures, so you have to figure out which one is happening and which one is allowed or not. Beware. Unless you just delay failure until execution, right? You let declarative pipeline tell you 'this doesn't make sense' and explode, as opposed to trying to do it at the YAML layer. It wouldn't even be declarative — pipeline Groovy is what actually does the step instantiation, but yeah. I've had to learn just enough to know some of how this works, and I have tried to stay as far away as I can. All right, so moving on. Steven Foster is also with us here, I believe — yes, there he is. And you brought up a multi-pipeline job as an idea here, so I'm wondering if you wanted to talk about this a bit, and maybe give a quick introduction for anyone watching the video that hasn't seen it. Actually, let me make things larger. There we go. So, Steven, do you have anything you want to show us, or do you want to talk about it while I run the screen? Sure, can you hear me? Yes, I can. Great. I can talk about it a bit. I guess the main takeaway is not about the actual syntax or describing the pipeline or anything; it's just drawbacks I encountered, and a feeling that what I wanted had outgrown a single pipeline job, and whether or not that was a valid feeling, or whether there were any gaps there. For example, the pipeline I have is quite simple: it just has a very large number of parallel branches, and most of them are unrelated to each other; they can be grouped by the platform that I'm compiling for. Sure.
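To illustrate the step-argument ambiguity mentioned a moment ago: the same sh step could plausibly map to several YAML shapes, and a parser has to decide which one it is seeing. These shapes are hypothetical, not the plugin's confirmed schema:

```yaml
# Hypothetical YAML shapes for the same step:
steps:
  - sh "make build"          # plain string, handed to the engine as-is
  - sh: "make build"         # single default argument
  - sh:                      # many named arguments
      script: "make build"
      returnStdout: true
```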
And I started writing it in a single declarative pipeline and pretty immediately found a big drawback: they're all sharing one build history, one test history. Each parallel branch is quite heavy to run, in that it builds out on loads of nodes and takes up a limited amount of specific hardware to run its tests and things like that. So what I did was split it out into jobs that all run together, but Jenkins doesn't really tie them together in any meaningful way, so there are lots of weird navigation problems and history problems that I identified as a kind of gap. And your thought is to have what, then? Yeah, so there's maybe an idea of a new job type: similar to how a multibranch pipeline generates pipeline jobs based on a given set of source control elements, there could be a job that generates pipeline jobs based on some other parameters, such as a list of platform names in my case, or by detecting certain Jenkinsfiles by some pattern in source control. That way it would reduce the number of multibranch pipeline jobs — I have about 16 multibranch pipeline jobs, so they're all wasting time doing the same indexing and so on. And you need some way to navigate between them for the same commit, which is awkward. So you could arrange a hierarchy where you have a multibranch pipeline job that generates multiple pipelines, and you could navigate between them much more easily that way. Okay. I mean, this has been something that — I don't know if it's been brought up specifically, certainly not in this group, but I've heard it discussed, and I've sort of thought about it a bit. There's the classic Jenkins freestyle idiom: hey, I have a job, and it calls other jobs. That was how Jenkins achieved parallelism for the first, you know, five, six, ten years. And then we went to pipelines, and there was this different sort of idiom.
So what you're proposing sounds a bit like a move back in that direction, or a melding of those two worlds again, where it's like, okay, we have more of a concept of separate jobs again, and we build on that structure. And I guess it's a kind of judgment call, based on where you see the point of: has this work outgrown a single pipeline or not? For me it has, because the parallel branches are not really related; they can be grouped by target platform, but they don't really matter to each other that much. I think that happens with a lot of — I'm seeing that especially when looking at questions from people on matrix builds. For example, there's a lot of 'I need this built on this combination of platform and hardware', or some settings, and those individual things are related, but not closely. For you, I think you're saying they're even more unrelated, but they have the same basic structure. Right. So what's the primary driver here? I think I understand what the goal is, and I think it makes sense. I'm trying to understand if the primary driver here is the ease of configuring all these different pipelines, or if it's around build history, so you can navigate better. Is one way to phrase the ask that we would want to take the matrix block, more or less, and turn that into its own sort of job factory? So that instead of creating multiple flow executions for a single pipeline — that's probably not the correct CPS terminology — instead of creating a bunch of parallel threads in the same job, you'd be able to say, okay, let me create some jobs that are going to execute the pipeline that was described. So I get the benefits of build artifacts per thread, and the benefits of different histories per thread.
So that's essentially — yeah, I wasn't necessarily thinking of using a part of declarative to generate the jobs. There was just a question mark there about how you generate these jobs. The simplest possible way would just be to supply a list of jobs you want to generate. But if you put it in a declarative pipeline, and they all use the same Jenkinsfile, that would actually be a really good way of splitting that up. Gotcha. I did post on that thread my obligatory 'I maintain the Jenkins Templating Engine, and it's similar, but it's not quite the same', so I might retract my — well, this might do a lot of what you ask. What JTE does is, from one Jenkinsfile, you can create multiple jobs based off the same Jenkinsfile that are a little bit different. So there might be some synergy there around: here's a template workflow, plug in the parameters for different platforms, and then we're going to create different jobs based upon that. But I don't think it's one-to-one; I don't think the goals are necessarily the same. May I ask something? Sure, certainly. So the scenario you define is quite similar to what organization folders provide. Did you ever try them? What is the difference, or the things that you couldn't achieve with an organization folder? I mean, I think the thing that we run into — sorry, Steven, I'll just jump in here real quick — the thing that we run into in a bunch of cases is that you have subtle differences in what it is that you're templating off of, or what you want done. The thing that first pops into my head when you talk about this structure, and also the org folders and that kind of thing, is that you don't currently have the concept of a folder that is a job that runs, and yet also has children. Right. It's not a thing. Right.
So that's one difference right there: the behavior you're describing is, hey, I want to run this job, and it sort of automatically generates and runs these other jobs that are its children. Correct. The way we currently do that is we have a separate declarative pipeline, and all it does is kick off, like, 15 other jobs, wait for them, and bring all their statuses together into one. Right. But then they start getting out of sync if you want to start a particular job again because of some transient error or something. Right. Hmm. Okay. So what you'd have there is the concept of: okay, I have this top-level job, and then — so they really are saying, I want to rerun that one because there was a transient error, but I want to keep it in this group of jobs. Yeah, exactly, because it would be very expensive, in terms of resources and occupying agents and things, to run everything again. So what you're talking about there would almost be something where, when you start a new run of this particular pipeline, it creates a folder in which it puts all the things that it created. Each one of them would be its own job, because if you want to be able to rerun individual items in there, the only way we have to group those right now would be to put them all in a folder. Hmm. I mean, I'm just thinking out loud here. Yeah. I do keep them in as few jobs as makes sense, so if a platform has four parallel threads in it, I'm okay with running all of those again; I don't want to go to that level of detail myself. But I see what you mean about having to manage all of these.
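The "one pipeline that kicks off other jobs and aggregates their statuses" workaround described above can be sketched roughly like this; the job names and platform list are hypothetical:

```groovy
// Orchestrator sketch: fan out to per-platform child jobs and
// aggregate their results (job names hypothetical).
pipeline {
    agent none
    stages {
        stage('Fan out per platform') {
            steps {
                script {
                    def platforms = ['linux', 'windows', 'macos']
                    def branches = [:]
                    platforms.each { p ->
                        branches[p] = {
                            // propagate: true fails this stage if the child fails
                            build job: "build-${p}", wait: true, propagate: true
                        }
                    }
                    parallel branches
                }
            }
        }
    }
}
```

Rerunning one failed child out of sync with the others is exactly where this pattern breaks down, as discussed above.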
I'm actually not saying that you have to; imagine it's more like: how could we leverage the current sort of idiom to do what it is you're talking about, better? I feel like what we would expose would be a properties layer at the top, and then it goes in and automatically creates parameterized jobs beneath it, and when you run the top job, it just passes input parameters to the sub-jobs that it automatically created. From one job definition, you're able to basically end up with a combinatorics of sub-jobs, and then, instead of you telling it what parameters to run each job with, it knows that you want to run them all, so it just automatically passes these parameters to everything. Okay. And the difference here between this and, say, an org folder is that the org folders are working off of Git — a different data set, right? Yes, it's kind of more generic or arbitrary. Yeah, yeah. And this could all be for one job, right? Like for one repo: I could have my app in a repo and I want to run it as, like, six different jobs for different platforms. The difference there, to me, is that organization and multibranch jobs are assuming your job is your repo, and this is sort of: I could have an arbitrary number of jobs coming from a single repo, or even separate repos, for that matter. Yeah, and maybe — I don't know — that inverts the hierarchy: with an organization job, I think that's above the multibranch. Yeah, but this would flip that, so that the multibranch would still be the top level. Interesting. My comments are definitely not pushback; I think I'm just trying to wrap my head around it. Yeah, it's obviously worth it. So, actually, I wanted to ask Devin, or some of the people on the CloudBees pipeline team who are here, who have as much or plenty of experience in this area. The question I have is: how does this —
I don't have as much history with the discussions that went into pipeline and some of these things; I have some sort of user-level view of it from my history. Is there some reason why the design of a single job as your pipeline was chosen, and is there a reason why having this as a possibility would be a problem, I guess? Unfortunately, those decisions were made before my time. Okay. Yeah, I don't think any of us were there for the history. I mean, I guess it's just that that's how freestyle jobs were, and it kind of made sense to do the same thing. The one thing I will say that seemed very similar to me about this is people asking for promotion support and things like that in pipelines, where they want multiple jobs to really be conceptually part of the same pipeline, and have different levels of promotion between them, have one trigger the next one — but really, conceptually, they want to think of that as the same pipeline and have it visualized together and things like that. This seems tangentially similar to that. So I don't think there's any particular reason that we said, no, don't do this. Okay, cool. I definitely don't know as much about multibranch history as I do about pipeline, so I'm really not sure about that side of things. Okay, I was just wondering if there was any perspective there. So, I guess the question I have for you, Steven, is: what do you think the next steps are? It seems like there's interest, at least — like, well, this is an interesting idea. I can think of at least three or four different ways that it could be implemented, or how it might look, and the advantages and disadvantages of those. So what's your next step? What would you like to try, or where do you see this going next? It's a good question.
What I was looking for was a sense of whether there was any philosophical pushback, and it doesn't sound like it. So, I mean, okay — there are people that have more opinions about the pipeline engine and pipeline design, the Jenkins-internals aspect of this, who aren't part of this meeting, so we probably want to bring them into the discussion. I would say — I wouldn't put this inside one of the existing plugins; I would create a new plugin to build this functionality. If you can do that, I don't think there would be significant pushback. I mean, maybe they wouldn't like this approach or something, but I think, you know, Jenkins X follows this kind of model, like we talked about, like it says here. I think it's very reasonable in general. I don't think it would require extreme gutting of super-core pipeline APIs; if it did, I would probably say maybe it's not worth the complexity. But if you can do it as an independent plugin, by all means. From my brief diving into the internals of things, it seemed like it could exist in its own layer, in its own plugin, maybe with a couple of modifications or extensions to things like folders; I'm not sure. My question is where this would hook into the existing design idioms from Jenkins freestyle and pipeline, because it's kind of a melding of the two, right? It's pretty similar to a multibranch job, in the sense that it's a folder and a job itself. Yeah.
I'm interested in how we orchestrate the pipeline API compatibility between all these different things, right? We've got the underlying pipeline engine and CPS, and it seems, from my understanding, that declarative sort of sits on top of CPS, and then the thing that I do for a living sits on top of CPS, and then Pipeline as YAML comes in and sits somewhere in there, and then this would also sit there. How do we make sure that this whole ecosystem is orchestrated and works together? Does that make sense? I know it's a very hard question without a good answer. I think I'm just thinking about how we maintain API compatibility here as more and more frameworks start to sit on top of the pipeline engine. I mean, do you mean, like, technically, how do we keep things from breaking, or, like, conceptually, how do we keep things holistically making sense? Probably the latter, right? I think all these different pieces I'm talking about are developed in silos, with maybe declarative as an exception — declarative obviously is very closely intertwined with the underlying CPS engine. But, like, Pipeline as YAML, and if this job type were to leverage existing APIs for creating pipelines, and the templating engine leverages some of the script engine... maybe I'm not phrasing my question well. No, I think we're following where you're headed; it's just that we're trying to clarify which direction, because there are different ways to take your question. What you're saying, though, is that it's more a question of how we keep them all working together — how do we keep them playing together well? Is that it? How do we keep them playing together well — I'm trying to phrase it as agnostically as possible without being, like, how can I make sure that the plugin I maintain keeps working? You can always start from there; that's fine. It's okay to be selfish.
Right, so selfishly, I rely on particular conventions of how the pipeline engine works. I think this group's biggest focus is the authoring experience; I'm curious whether the things that live within the pipeline ecosystem need their own conversation between the different moving pieces. I inject functionality into the pipeline runtime to enable templating and governance, and if the underlying API for how pipeline works breaks, that breaks the layer on top of it. So I'm constantly trying to figure out how to implement my pipeline plugin in the best way, so that it's maintained as a first-class citizen as opposed to something that's duct-taped onto the pipeline engine, without pulling in the same developers who are working on everything. In my best-case scenario, I get to sit down with you, Devin, and with you, Liam, and we talk about all these APIs and how everything integrates, but that's not the most scalable model out there and can be difficult to orchestrate. I guess the hard, pessimistic answer is that because everything is so widely used, there are minimal changes to most of the core underlying pieces like workflow-cps and groovy-cps. There are bug fixes, minor features, and so on, but we're probably never going to make really wide, sweeping changes to those plugins, because we can't; it would break too many things. So to a large extent, technically, with every change we make we try to avoid any kind of compatibility issue as much as possible. Enhancements, sure, if they don't affect existing things, but in general we just try to be extremely conservative. That makes sense. I definitely don't want to turn this into Stephen asking questions about the thing he maintains.
I think that's helpful enough for me for now. And you can start from a selfish perspective; yours is one of the perspectives that needs to be served, so that's perfectly reasonable. I guess I would say, too, that at the end of the day, things like the current implementation of Declarative and your templating engine are extremely coupled to Groovy, CPS, and workflow-cps, and that's unfortunate. If we implemented a new pipeline execution engine, those things would just stop working; they wouldn't work with the new engine at all. That's not ideal, but it kind of is what it is; there's no alternative for you to use, that's just how the ecosystem is today. I will say that, at least selfishly, the thing I maintain is fairly decoupled. This is going to change, but right now I just inject executable things into the binding that leverage Jenkins pipeline as code. So if we were to change CPS, there's not too much that changes in the layer I've added in order to continue supporting that, unless we get away from Groovy; I depend heavily on Groovy and some fancy programming that goes on there. Yeah, the things that I imagine as big changes that would be problematic for the ecosystem would be alternative pipeline engines, as in Groovy no longer executes on the master, or there is no Groovy at all and it is truly strict declarative. Things of that nature would obviously be huge breaking changes for the ecosystem, but I don't really think there's anything we can do to support those things in advance in most cases; a lot of the features are pretty specific to Groovy.
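The "inject executable things into the binding" pattern described above can be sketched in plain Groovy. This is an invented illustration of the general technique, not the actual Jenkins Templating Engine API; the `deploy` step and its behavior are hypothetical:

```groovy
import groovy.lang.Binding
import groovy.lang.GroovyShell

// Hypothetical sketch: a templating layer pre-populates the script
// binding with callable "steps" before the user's pipeline script
// runs. As long as Groovy and the binding mechanism remain, the
// engine underneath can change without touching this layer.
def binding = new Binding()
binding.setVariable('deploy', { String env -> "deploying to ${env}" })

def shell = new GroovyShell(binding)
// The "pipeline script" only sees the injected step, not where it came from.
def result = shell.evaluate('deploy("staging")')
assert result == 'deploying to staging'
```

This is why the speaker considers the layer fairly decoupled: it depends on Groovy's binding resolution, not on the internals of CPS execution.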
And I guess I don't want to get into it too much; this is about authoring, not so much about engine stuff. I'm happy to talk about engine stuff a lot, but maybe you and I could tag up out of band; I'd be interested in getting fifteen minutes of your time, if you're ever available. Yeah, feel free to email me or whatever. Thank you. I'm just taking a few notes here. So, to recap: major breaking changes would be if we moved away from that. Yeah, as long as we have a Groovy engine that runs inside the master, most things are going to be conceptually similar, but anything that changes that would be very significant in terms of breaking compatibility. You could offer a lot of interesting new functionality, new features, or different user interaction methods, and avoid a lot of problems, but it would probably be complex for the ecosystem. There we go. I can type. It'll work, I'm sure. There we go. Okay, cool. So you two can meet up and chat a bit. So, Stephen Foster, we're coming up on our time here. As for next steps: there doesn't seem to be any major philosophical opposition to this.
There are questions about how it would work with other things. For me, I'm also thinking a lot about visualization right now: how would this be visualized? Obviously it wouldn't show up so well in Blue Ocean, and that's deprecated now, so what would this look like? That's sort of secondary; the more important question is whether it runs. But I think the next steps here are to put together at least some set of ideas, either one proposal for how you'd want it to work, or maybe work with some other people to talk through what implementations would look like, and maybe submit a JEP that other people would be able to comment on. Understand that a JEP isn't required, but it is one way of getting more feedback. To wrap up the idea, it sounds like what you're proposing is something that would be able to work independently from the other things, and it wouldn't necessarily be something where you'd have to take feedback from other people. You could just say, "I want it to look like this," and go do that. I think you would probably get more engagement by getting some feedback and trying to do things together, but if you got pushback from people who did have a lot of philosophical opposition to this, you could just say, well, that's your philosophy. Yeah, that sounds good; I could definitely do that. It was interesting to get feedback even from just this meeting; it was kind of neat that there were people with different perspectives on the same kind of thing. I'd be up for more discussions separately. Well, it's kind of up to you what you want that to look like.
Maybe you reply on the thread, and hopefully you'll join us again at this meeting and talk more about it. We can have some discussions offline, then bring it here, chat about it, and present or nail down anything that we need to do in real time. Yeah, for sure. Great. All right, that is our time for the day. Thank you, Stephen Foster, for coming and showing us new things in Pipeline. The next meeting will be on May 8, next Friday. Hope to see you all there. Thanks. Thanks, everyone. Have a great weekend.