Alright, we are live. This is the Jenkins Pipeline Authoring SIG meeting for January. I swear this will now be a regular thing on the second Wednesday. This is the third Wednesday this month; normally it's the second, but this month's weird because we had to delay to get Steven Terrana able to be here to present his Jenkins Templating Engine stuff. I'm sorry. That's fine. That's fine. You are forgiven. The agenda link is around here somewhere. I see other people on it. It's pretty barren, but I'm kind of relying on Steven to eat up a lot of time. But first, we've got Martin d'Anjou to talk about Jenkins and Google Summer of Code and the potential for Pipeline Authoring interest in that. So why don't you take it away, Martin? Alright. Thank you, Andrew. Let me just turn on my camera so you can see that I'm a real person. My name is Martin d'Anjou, and I'm one of the org admins with Oleg on the Google Summer of Code program. The program is organized by Google, and Jenkins will apply this year; actually, we've started the application process. The way the program works is essentially that Google is a matchmaking service between organizations, mentors, projects, and students. Organizations have projects, and they sometimes have mentors; sometimes mentors come from outside the organizations and offer to mentor projects. Google pays the students a stipend at the end of the project for their work, and this attracts students to participate in the various projects proposed by organizations and mentors. So Jenkins is in the application process this year. Maybe I can share my screen and show you where you can find this information. Okay. Can you see my screen? Yep. Wonderful. Okay. So if you go to the Jenkins project page and go to Google Summer of Code under the subprojects, this is the entry point for the program.
We have information for students to explain how they can apply, what the rules of engagement are, and how they can participate. We also have a page for mentors: how they engage, how they participate, and the expectations that the org admins and the program have with regards to mentoring. We've defined roles and responsibilities for org admins and mentors. But most interestingly, what we have is how to propose a project idea. So if someone in a SIG, a special interest group, or in a Jenkins subproject has an idea, they can go to this page and find instructions on how to make a proposal. When you want to make a proposal, you simply click on this link. It takes you to the proposal template, which is here, and you simply start writing your proposal. There are several sections to creating a proposal. First of all, the body of the proposal goes into this first section here. If you happen to have quick-start instructions for students, you place them in this section. If you have newbie-friendly issues, link them here. Then there's a skills section: these are the skills that the students either must possess or will develop during the project. Below the project description, there's project metadata. We expect people who propose an idea to fill out the project metadata, so we're asking for things such as when you created the project idea, a short summary, and the name of the person proposing the project; we call that person a champion. Then we need links to GitHub, a Jira ID or Jenkins ID if you have one, and so on. Then there are different types of mentoring here. This is something I wanted to talk about. When you propose an idea, it helps if you're the mentor, but sometimes there are many ideas and you just want to propose them; you're not ready to be a full-time mentor, but you're willing to be a technical advisor.
So that's okay. There are different levels of mentoring that we have. (Give me a minute, I'm going to grab a glass of water.) Okay, so there are different levels of mentoring: technical advisor, subject matter expert. Those would not be involved as much in the day-to-day activity of the student, but they would come in either once a week or once every coding period to give advice and general direction to the project. And then you can list potential mentors for the project. Typically the champion tries to be a mentor as well, but if you have people who also collaborate on this project with you, ask them to participate; we want every project to have a minimum of two mentors. This is because, first of all, it's easier when you have two mentors. It's easier on the mentors as well as on the student, because you can answer questions depending on your time zone, and sometimes you're not the expert in what the student is asking, so the other mentor can fill in there. We try to have two or three mentors per project. So that's how you prepare a project idea for Summer of Code. Now let's go back to the Summer of Code page in Jenkins. We have a list of existing project ideas, and these ideas have been published and are offered to students currently, because we've started the application process. Obviously we need to be accepted by Google before this becomes official, but students have already started asking questions in Gitter about these different projects. For example, let me take the first project, the artifact promotion plugin for Jenkins Pipeline, just to show you what it looks like. Once the project idea is published, we have a page on the website with a direct link to it; it's actually the live Google Doc embedded in the webpage, so people can make comments on these proposals. I can start typing comments here; obviously I'm not going to enter this right now. And so that's how we can engage with students quickly.
They will see these projects, read the project proposals, and start asking questions in Gitter. These pages are generated pretty much automatically: we send a pull request with some metadata, and it shows up on the jenkins.io website. So if anyone's interested in proposing ideas or being a mentor, we have a Gitter chat, which is the GSoC SIG channel; please don't hesitate to reach out to us. If you need help with the process, if you want to clarify project ideas, if you need help getting started with Summer of Code, we're here to help. There are four organization admins here that are ready to help. Now, in terms of mentoring, what does it mean? The way I see this is that you propose a project idea, you invest about five to six hours a week during the program, and you have someone working for you full time for about four and a half months. The chances are that once you land a student, your project is going to make a lot of progress with you investing just a few hours a week. You get a lot more in return. So that's my sales pitch for Summer of Code. All right, I'll stop the sharing now and take questions. Anybody have any questions? GSoC is a great thing to get involved with. I don't have the attention span, sadly, to be a good mentor; I tried once and was just not good at it. But it's a great way to help encourage new people to get involved in the community, and to bring focus to features or plugins that you would like to see improved or added to, in a scheduled, structured way. It's not just about helping people get involved with the coding; it can actually help get something you care about done. It's a good opportunity worth considering. I forgot to mention one thing, if you allow me a few more minutes. Of course. Thanks.
There are a couple of proposals which are related to pipeline, and maybe pipeline authoring, and the one that we have is one that Kristen and I have proposed. It's the second time we've tried to get this project off the ground. If I may share my screen again. Okay, let me go to the list of projects. It's a project about the documentation. Just a minute, it should jump to it: Pipeline step documentation improvements. Okay. Now, one of the requirements of Summer of Code is that the core of the project has to be coding. It cannot be documentation. There can be documentation, but the core has to be coding. So Kristen and I are trying to frame the pipeline documentation problem as more of a coding problem than a documentation writing problem. This brought us to the infrastructure for publishing the documentation and the format in which it is presented. And we would really appreciate feedback, ideas, and comments from the Pipeline Authoring SIG regarding this proposal. Kristen, would you like to add anything to what I said here? Sure. So we know that we have that generator that goes through and automatically does the pipeline step documentation generation. But it can always use some improvement, and maybe some readability work, because we are missing some pieces of documentation. We also don't know where all the documentation is held. It just goes through and pretty much runs what you'd see if you hit the pipeline step help button; it's pretty much an auto-generator. So it'd be helpful to know where everything's located, or if there even is a standard, so we could try to pull the different pieces in programmatically. There isn't a standard, but we wish there was. Yeah.
We had a first meeting, but it's not, you know, exciting, and I've been overly distracted by other work, so we have not really moved forward on it. But I think, GSoC or no GSoC, there's definitely work to be done in tandem with documentation efforts for Pipeline and Pipeline steps, and with the pipeline steps doc generator. If it is going to be in GSoC, could I make a little feature request, code-wise? Don't show every single parameter type every time. God, I know, I know, because you end up with that nested list of objects, and that's the worst. It makes it read like the build step doc: okay, so there's a couple of parameters up here, a couple of the options for the step up here, then 18,000 parameter types, you know, job parameter types, and then at the bottom there's a couple more options. Yeah. And that's so hard to read. So that would be something that would be really helpful to fix. And even just showing what the obvious usage is, because right now sometimes it's just the step and then string, string, string, string, and it's like, what does this look like to actually write? Yeah. That would be so helpful. Yeah, examples would be good. That sounds like something we could come up with a standard for from the steps doc generator perspective: something on the plugin UI help side that can be pulled in by the steps doc generator and used in other cases as well. If we come up with a good, well-defined, and well-documented way to do that, it's something we can encourage plugin developers to adopt. Yeah, definitely. That sounds good. That's a good idea. Where is this steps doc generator code located? If you could post a link to that later. I will do that right now. That would be nice. Yeah. I was actually just reusing it for something.
And it makes me kind of a little happy that it's being used for something. It has that really cool Jenkins list-by-plugin manager. Yeah, I used that to build a classpath with every single plugin in it, to do a search for a security issue, to see if there were other things that could cause problems exposed by any other plugin. When I try to spin up a Jenkins master with every plugin installed, it does not work; that plugin manager, though, does work. Fantastic. Awesome. Yeah, I think that in general the doc generator is something that, if we ever get back to improving documentation and examples and so on, we're going to want to work with and interact with pretty heavily. So that's something that is probably of interest to us over the longer term. Can I ask a question? Sure. So first, Martin, thank you for that presentation and for leading this effort. That was a very good presentation, and it's good to see that you're focusing on Summer of Code and mentoring. The question that I have for you is this: in a previous project, not Jenkins, I was a Google Summer of Code mentor, and I had a bad experience. Google Summer of Code does pay the student, and the student that I was assigned was more interested in getting the money. Since the sum they are paid is in US dollars, not in the local currency, it's actually a lot of money for some students working overseas. The problem I had was that the student misrepresented themselves and what they could do. They didn't follow up. They actually accepted another job at the same time, so they didn't even work on the project, and I ended up having to fail the student so that Google would stop paying them. So that was unfortunately a bad experience. My question is, how do you avoid this problem?
Because on one hand, a student is not a contractor that you can tell, here's a project, it has to be done by this deadline; they're a student. But on the other hand, they are getting paid. So there has to be some level of expectation in terms of the amount of work they put into the project and what they get out of it. So I'm just curious, what has been your experience in the Jenkins project, and how has that gone, either in a positive or a negative way? Sure, that's a very good question, actually: the question of students who try to abuse the system. It does happen. We've seen it happen in the years that I've been a mentor. I would say that there was a change last year by Google in the amount of money that they pay. The $5,000 US, they now translate into local buying power equivalent. So let's say $5,000 in the US buys you, and I'm going to exaggerate the example here, a loaf of bread. Then they're not going to pay the student more than a loaf of bread's worth in their local currency. Students get the amount, but in local purchasing power equivalent; they don't get the $5,000 US. There were concerns about that, but when we attended the mentor summit last year, which was the first year where students were paid differently depending on their local currency, Google had actually not seen anything negative, and there had been growth in the number of students. So that was a structural change that they made, possibly in response to students abusing the system. And yes, we have opportunities to fail students early, as soon as we detect that they are not committed. We also want them to contribute every day or every other day to their project, so we expect to see code every day. If we don't see that, typically after a week we ask hard questions of the student and try to make sure they're still committed.
And yes, we have failed students who have misrepresented themselves, but we've also had three very successful projects last year; they all resulted in published plugins. In 2016, we had a very successful project as well, and that project is continuing this year: there's a new proposal which is going to take that project cloud native, if we land a student. So that's my experience and that's what I know about this topic. I hope this answers your question. Yes, it does. And thank you very much for all your work in this area. You're welcome. All right. Well, thank you, Martin, for coming and talking to us. And if anybody is interested in getting involved with GSoC, check out the things he mentioned; that'd be pretty cool. Next up, we have Steven Terrana to talk about his Jenkins Templating Engine. I first saw this at DevOps World | Jenkins World, the conference that was in San Francisco in September, and found it fascinating. I think there's a lot of really interesting work here. It has been properly open sourced since then, and I'll let Steven take it from there. Thanks, Andrew. Thanks, everyone, for coming. So before we dive into the actual plugin and how it works, I think it's worth talking about why we've spent the last 12 to 18 months iterating on this, especially because, I don't know if I'd call it novel, there have been a lot of templating-type solutions out there; but it's worth covering what problems this has solved for us specifically. I'm a lead technologist at Booz Allen Hamilton, a federal consulting firm. We do a lot of work with modernizing government applications. That's things like going from a monolith to working with 40 or 50 different microservice teams, perhaps being built by multiple vendors at a time, spread across different geographical regions. That's one engagement. We also do a lot of DevOps pipeline work across different client engagements.
Some of the challenges that we were experiencing were things like: every time we start a new engagement, we have to start from scratch. We can't take code that was developed for a particular engagement and drop it somewhere else as part of the pipeline implementation. So it would take three to six months of pipeline development at the start of an engagement before the application development teams could hit the ground running with all of the best practices that we use Jenkins to implement. That's a lot of time being spent up front. We were doing a lot of code duplication instead of reuse, and a lot of the lessons learned weren't being well communicated across engagements, because that's a difficult thing to do at a large organization. As someone who was building these pipelines, a lot of what you're going to see today comes from the challenges I experienced while developing pipelines for 40 or 50 different microservices at a time, from 20 to 30 different application development teams. On a particular engagement, there were 30-something teams. They all wanted to use the exact same process, but they had slightly different tools for it. The way Jenkins currently works today, in most cases, is that you have a Jenkinsfile in each of the repositories. And this is okay, usually, but from a governance perspective it can be challenging to be positive that everyone's following the exact same software delivery processes when you have a different Jenkinsfile in every single repository, especially in the federal space, with requirements around code quality gates or security requirements or reviews that have to take place. Having that Jenkinsfile in the repository was a challenging aspect for us, because you can't actually be sure that they're all the same. The other challenge came from teams wanting to follow the same process with slightly different tool implementations for different aspects of their pipeline functionality.
Some people were building front-end apps, others back end; it could be Java, Python. But really, the flow of what was going to happen was exactly the same. So what ends up happening is you abstract some common code out into Jenkins shared libraries and try to consolidate code duplication. But over time, as you want to improve the workflow of your pipeline or add new tool integrations, you have to do a migration of all of those 50 Jenkinsfiles across the repositories. And that can be challenging when you have this sort of patchwork, because you're trying to run the exact same pipeline with different tools. So we took a step back and said: at the end of the day, regardless of what tech stack these teams are using, they're following the same template. They're going to build an artifact, scan it, do static code analysis; we're going to deploy it somewhere, do penetration testing, accessibility compliance testing. And all of that workflow is exactly the same. So we just need a way to pull the Jenkinsfile out, define what that template actually means, and then be able to swap tools into and out of that template. That lets us, A, not have to do these Jenkinsfile migrations; B, give these different organizations some real governance and auditability over the business processes being represented in their software delivery pipelines; and then, purely from a pipeline maintainability standpoint, have this mental model of: I have a series of steps that are going to take place, I have some code that implements those steps, and that code can be broken out into libraries, if you will, that represent the different tools being used. So that's why we came up with the Jenkins Templating Engine. We can walk through a couple of demos that explain what I'm talking about here. So, locally, I guess, this is our documentation page; it is, if I can find that link again. I'm doing my best to keep it up to date.
It still needs some updates, but the idea is that this site will be where you can go to get all of the information you need on how to leverage this plugin. It's open source under the Apache license. So how does it actually work? Let's log into Jenkins. There are a couple of different ways that you can use it; I'll start with the simplest and then move up. When you install the Jenkins Templating Engine plugin, you get a couple of new things added to your Jenkins configuration and to some of the job types that are available. The first piece is under Manage Jenkins: you have a Jenkins Templating Engine configuration where you get to define a couple of things. The first thing you get to define is the location of a configuration file and a place to put your library sources. What does that actually mean? Within the Jenkins Templating Engine, you have templates that define what's going to happen when, generically, and then you have configuration files that give the template the information it needs to actually implement what it says it's going to do. So globally, I can define a configuration file. For this example, it's fairly straightforward: instead of a Jenkinsfile in every single repository, I now have a pipeline configuration repository where I can centralize a lot of the configuration. This is a very simple example, and I can show a more complicated real-world example after we go through some of the basics. Within this config file, I get to define what my template means. There are some primitives available to me to define things like application environments, and then things like libraries, which are going to have the tool implementations. Alongside this, I have a template, and in this case it's very straightforward. It just prints that it's the org-wide pipeline template; we create an application environment object for some syntactic sugar, and then we call a build step.
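To make that concrete, here is a minimal sketch of what such a global configuration and template might look like. The field names are illustrative rather than taken verbatim from the demo:

```groovy
// --- pipeline_config.groovy (global configuration file; names illustrative) ---
application_environments{
    dev{
        long_name = "Development"   // hypothetical field for this sketch
    }
}
libraries{
    maven   // which library supplies the template's step implementations
}

// --- the pipeline template itself (a separate file in the config repo) ---
println "running the org-wide pipeline template"
build()   // resolved at runtime to the loaded library's build step
```

The point is the separation: the template says *what* happens (a build step runs), while the configuration file says *how* (the maven library implements it).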
This build step is intentionally generically named. In our specific example, if we go back to Jenkins, I can look at my configure page. This is just a regular pipeline job; we'll turn the sandbox on. I have a template keyword (and we'll show a different example where you don't have to define this as part of a pipeline job): there's a step contributed to your pipeline called template. Within this closure, we're going to use those configuration files to populate some syntactic sugar and to load the appropriate libraries. Within this, I'm able to just say I have a dev application environment; it's getting pulled from the configuration file that's specified. And then I'm going to call that build step and execute it. Really, I use this template keyword when I'm testing things out. The real power is going to come from being able to define these in that configuration repository and then have your application repositories inherit that configuration, while still being able to provide their own to customize which specific tools they're using. So in this example, we've got some logs that came out: we obtained the configuration file from the global configuration location, and we loaded the Maven library from a library source. If we go back to this configuration: if the template is calling steps, the libraries are the modules that have those steps, and depending on which libraries I load, I get different implementations of the same step of my template. Within your configuration, you get to identify library sources. A library source is just a link to a source code repository, like you're used to seeing with Jenkins shared libraries. Within this repository, we have an Ant and a Maven library. You identify the library name based on the name of the directory. Within that, I've got a series of Groovy files that are my steps, where the step name is equal to the base name of the file.
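As a rough sketch of the layout being described (the repository name here is illustrative; the library and step names mirror the demo):

```groovy
// Library source repository layout:
//   directory name      = library name
//   Groovy file basename = step name contributed by that library
//
//   pipeline-libraries/
//   ├── maven/
//   │   └── build.groovy
//   └── ant/
//       └── build.groovy

// maven/build.groovy -- the "maven" implementation of the build() step
void call(){
    echo "building with maven"
}
```

Loading the maven library instead of the ant library swaps which `call()` body runs when the template invokes `build()`.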
So I've got a build step that's going to be contributed by the Maven library, and all it's going to do is print "building with Maven"; I have the exact same thing for Ant. At Booz Allen, we have a series of libraries to integrate with the Jenkins Templating Engine; these are licensed under the Booz Allen Public License, which is a little bit different. But in this example, we're showing the simplest setup we can, to get the idea of how it all works together. So this was a regular pipeline job: I have a template keyword, it loads things from my configuration file into the environment to be executed, and then I have a build step that gets executed. But the real power of this for us has been being able to define pipelines in one location and then use them across all the different applications. We have two GitHub repositories: a sample application for Maven and a sample application for Ant. Let's open these two up. I can go to my pipeline configuration repository; in our global configuration I specified that the library was Maven, but now we just want to define this application environment globally and have the individual app teams tell us what tools they're using. So I can say merge equals true, and this is going to allow individual applications to make changes to the configuration of the libraries block. We could talk a little bit about how the merging of configuration files from a global configuration down to application configurations works, but for right now I'm just saying merge equals true, which is going to let my individual sample applications (which, for this example, are just printing statements) override it. They just have a configuration file of their own. So instead of having a Jenkinsfile that defines the entire pipeline, we've pulled that out into a template in a pipeline configuration repository.
Instead of that Jenkinsfile, individual applications get a configuration file where they tell us what's unique about their pipeline, to configure it for their application. For the Ant application, we're just going to say we're using the ant library, and, as you'd expect, it's the same thing but Maven for the Maven sample application. So if we go back to our organization job and go to the configure page, there's a new option called project recognizers. By default it's "Pipeline Jenkinsfile", which will identify all the Jenkinsfiles in the repos and create pipeline jobs for those. With the Jenkins Templating Engine, you have a dropdown to select "Jenkins Templating Engine" instead, and what this is going to do is look at the available configurations, globally as well as on folders (we'll talk about that in a little bit), and run the template that was defined. So if I go to the Maven job, I can run the master branch of our Maven repository. It's going to obtain the right configuration file, kick off that template keyword (that happens automatically in these pipeline jobs), obtain the correct template, print it out, and then we get "building with Maven". The same thing for the Ant application repo: you can run the master job, and, as you'd expect, it's going to do the exact same thing, swapping out what the implementation of that build step in the template means for this specific application.
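Roughly, the configuration hierarchy just described might look like this (a sketch; the merge flag and library names mirror the demo, everything else is illustrative):

```groovy
// Global pipeline_config.groovy: shared settings, with merge = true
// so application-level configs may extend the libraries block
libraries{
    merge = true
}

// The Maven sample application's pipeline_config.groovy
libraries{
    maven
}

// The Ant sample application's pipeline_config.groovy
libraries{
    ant
}
```

The global file and the application file get merged, and the merged result populates the shared template, so the same `build()` call in the template resolves to a different tool per application.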
So the idea is that you pull the Jenkins file out of individual repositories if you have these shareable templated workflows and you use pipeline libraries in a configuration file to be able to implement what that template is actually going to do so the other aspect of this there's a few other things so organizationally you're going to most likely have more than just two hierarchies in this example we showed a global configuration file and then an application specific configuration file and those will get merged and that result of the merge will be what's used to populate your template but in reality you're probably going to have some organization wide configurations within that you'll have sub teams which have slightly more granular configurations and so on so with the Jenkins templating engine your governance hierarchy can match that of your organizational hierarchy by having the structure of your Jenkins jobs match the structure of your governance if that makes sense so on folder projects and GitHub multi branch jobs you're able to specify the same configurations here so in Jenkins templating engine verbiage we call these governance tiers so you can have as many governance tiers as you're looking for where you're able to define a configuration file and that's another set of library sources specific to that more granular subsection of your organization so that's the general idea that pipeline workflows when they're shareable can be abstracted out into a common location with common templates use a configuration file to populate what that template's going to do there's a couple other details that are worth talking about so if we go back to configure here we just have println dev and then we're going to call a build step and I can run that in this case nothing's going to happen because for this specific job we're just using the global configuration here in this pipeline config file we didn't specify a library that called that implemented the build step so by default 
that's a no op it would be possible to translate that into a I actually want to fail to build if the library is loaded for a specific application don't supply the correct steps to populate the template that's defined but to be able to show a few more of the things that Jenkins Supply Engine can do we'll add maven back we can rerun this job and this is a sanity check for myself making sure that this works as expected adding a new library and a new step to this would be as simple as creating a new directory in the pipeline libraries I can't navigate and talk at the same time is adding a new library or directory under the pipeline libraries repository adding the name of that directory to your library section and it would automatically get picked up and you'd be able to add a new step to your template and it would be distributed across all the applications leveraging that template and configuration another aspect of JTE with library development is the ability to do aspect-oriented programming type things one of the challenges associated with building a framework that lets you swap things in and out easily is handling the relationship between tools so for instance if I had let's see if I had a situation where I wanted to notify after I'll do it so that it aligns to what I'm actually saying so let's create a new library called Splunk let's say I wanted to send Splunk events to I want to send out Splunk events every time a deployment happened so I can't take the logic to send that Splunk notification to my Maven library because now you have a coupling and you can't get the swap ability that you're looking for so one of the things that has proved useful to us is to be able to do some things like before every step gets executed I want to send a notification or when the pipeline is done I want to run these cleanup routines after every step I want to do a notification or execute a particular pipeline method so when you create a pipeline step you get a couple use them so in 
this case we're going to mock out sending a Splunk event and by default your library steps will have available to them several annotations so in this case I can add the afterstep annotation to my library commit the file and then if I go back to my pipeline configuration globally and add Splunk as a library that is being loaded because we just created it in the pipeline library section library Splunk when I rerun this pipeline job it's going to pick up that we loaded the Splunk library we still ran the Maven build step but it automatically picked up that one of the steps loaded from the libraries involved has a hook annotation on it so in this case it's an afterstep it tells us that it's the first step from the Splunk library and it invokes that method so there's a couple different annotations available to try to seamlessly inject code between pipeline steps if that's something that's necessary so you get afterstep you can change this to init this will run when your pipeline kicks off you get before step this will run before steps there's notify which runs after every pipeline step and at the end of the pipeline you get a context variable when you're using these different hook annotations and this is so that you can be context aware within your hook over what's exactly happening so in this case we changed it to notify and we are now printing out that context variable so if I come back and rerun this pipeline job it's going to run twice it's going to run after that build step and at the end of the pipeline the context variable that gets passed in lets you know what step just happened, what library loaded that step and what's the current success the status of the pipeline so that if you want to do things like only send notifications when the pipeline is in failure or only send notifications after particular steps you'd be able to abstract that out the final aspect of this is configuring libraries so we have a set of pipeline libraries that we're able to use across 
To be able to do that, we need to take the configuration that would normally be hard-coded into your pipeline and pull it out into the configuration file. So in this case, let's go to this pipeline configuration and set a configuration value, say banana, within my library definition. That value becomes accessible to you as a library developer within your steps: within your Maven library there's a config variable, and this config variable gets automatically populated with the library's configuration for each step that's created. So if I need access to that configuration, I can just reference it, and it gets populated based on the configuration file. This is how we're able to create generic SonarQube steps and OpenShift libraries: we take all of the information that's specific to a particular environment and add it as configuration options on the library, and then you have this config variable available to do what you have to do within your library step. In this case we were able to print a value read directly from the configuration file. So that's the bulk of the templating engine, but there are a few other aspects. You likely won't have just one template; you might have multiple. Inside your pipeline configuration repository you're also able to create a pipeline templates directory and have named pipeline templates, which your applications can then choose from. We also have a default step implementation: there are going to be situations where you don't have a library that does exactly what you need and you don't want to create one for just a single application, so there's a steps block where you're able to create steps on the fly by specifying things like the container image to run the step inside of, the command that should be run, and stash information for any generated files that you need to hold on to.
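The library configuration idea might look roughly like this (the banana field mirrors the throwaway value from the demo; everything else is approximated):

```groovy
// pipeline_config.groovy: configuration attached to the maven library
libraries {
    maven {
        banana = "read directly from the configuration file"
    }
}

// maven/build.groovy: each library step gets a config variable
// auto-populated from its library's block above
void call() {
    println config.banana    // prints the configured value
    sh "mvn clean install"
}
```

Environment-specific details (SonarQube host, OpenShift cluster, credentials IDs, and so on) would live in that config block rather than in the step code, which is what keeps the steps generic across engagements.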
hold on to there's a couple different primitives which we're calling them so these things like application environments there's also a stages section you might be calling the same steps over and over again and it could be tedious from a template perspective to keep calling the same things so you can create a stage and we will dynamically create for you a method within your template to be able to call continuous integration instead of call unit test static code analysis over and over again so I think it would be helpful to show a example of what a real configuration file looks like after it's flushed out with a bunch of libraries in it in this example we have a dev and pride application environment continuous integration stage and then instead of libraries it will read just like the text being used so enterprise sonar cube docker twist lock open shift with a bunch of configurations for the different libraries being used a real template looks something like this where these on commit on poor requests are coming from her booze Allen github enterprise library but it lets us do things like filter what should happen when the developer does something in github so when they make a commit to a feature branch we're going to run the continuous integration stage which gets dynamically created from your config file on poor request to master where master is a keyword another one of those primitives available to you is a keyword section where you can just define variables to become available to you in your template so the the idea here is that your template should be as human readable as possible to define the business logic of what's going to happen when so on poor request to master we're going to run the continuous integration phase create an ephemeral application environment based on prod and then in parallel we're going to run penetration testing accessibility compliance testing functional testing and then wait for somebody to say okay on merge to master we're going to deploy 
This is a common template that gets used, and if I have some people doing static code analysis through SonarQube and other people doing it through Fortify, that's totally fine: they can just have different libraries loaded in their individual configurations, and the same goes for any of the different kinds of testing. Steven, can I jump in here real quick: is there inheritance control for the configuration? There is, so thanks for bringing that up; I'll just say that it does, and I have slides somewhere explaining exactly how that works. There are some improvements we're interested in making to this, but right now there are a couple of options. A big part of this at first was usability, being able to share these libraries across engagements, but out of that we found that we're able to define these templates in one location and give organizations governance across their entire application portfolio. Some of that is being able to pull out common configurations and then limit exactly what your applications are able to change. By default, when you define an organization-wide configuration, your individual repositories cannot override it: if you were to have an organization config that defined a dev and a prod environment, and your tenant or application tried to add another environment, the result would be that it doesn't get loaded, because you as an organization have not allowed them to manipulate that specific configuration. But there are a couple of ways to tune that governance within your configuration files: merge and override. At any point in the configuration file, not just for application environments, you can say merge equals true, and what that means is that if the individual application is trying to add new configurations to a particular block within your config file, they're going to be able to do that, because you've explicitly said you're allowing sub-organizations or sub-configurations to add their own entries to that portion.
So in this case we have a dev and a prod environment at the organizational level; the tenant has a dev environment with a slightly different config, plus an additional environment they want to add. The result, because we specifically said merge equals true, is that we now have dev, prod, and the added environment; you'll notice that the dev environment did not change, because we're only allowing merging. The other option, which is a little more flexible about what's allowed, is override. There are going to be situations where you've got some suggested defaults for application environments, but you're comfortable with applications supplying their own. In that case you'd say override equals true, and then we just use the key from the application, that is, the more specific configuration file. This kind of conditional inheritance happens from governance tier to governance tier: if I had a ten-deep nested structure of configuration files (which sounds a little complicated, and I'd have questions about why that's necessary), the governance between configuration files still happens on a tier-to-tier basis. So if I wanted to allow a sub-organization to add its own application environments, that sub-organization would also have to allow its own sub-organizations to do it, and so on. So you can really dial the level of governance applied to these different types of configuration up and down.
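The merge example just described, sketched as configuration (keys follow the talk; the added test environment is an illustrative name):

```groovy
// Organization-level pipeline_config.groovy
application_environments {
    merge = true   // sub-configs may ADD environments but not change these
    dev
    prod
}

// Application-level pipeline_config.groovy
application_environments {
    dev { long_name = "ignored: merge only permits additions" }
    test                     // illustrative new environment; allowed
}

// Effective result: dev (unchanged), prod, test.
// With override = true instead, the application's dev block would
// replace the organization's dev entirely.
```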
The same goes for SCM Jenkinsfiles. There might be situations where you really like how code gets organized with the Jenkins Templating Engine but you don't have a need for a common Jenkinsfile or for this kind of governance; you're able to just have a Jenkinsfile in your repo that serves as the template, and it will use the configuration files, based on where it sits in the Jenkins job hierarchy, to pull everything together and execute that template. And as an organization you're able to specifically turn that off: if within my configuration file I were to say allow SCM Jenkinsfile equals false, individual application repositories would no longer be able to supply their own templates. So the idea is that templating is the main focus: we want to be able to use shareable workflows across multiple apps, abstract the implementation out into library components, and provide some syntactic sugar in the form of these primitives, like application environments and stages. Secondarily, we have the ability to do governance, locking down exactly how applications are able to customize their pipelines based on organizational requirements. So that's the Jenkins Templating Engine. I rambled for a bit, and I hope people have questions. Yes, does anyone have questions? I have so many questions, but I want to digest your GitHub repo before I ask, so I can sound semi-intelligent. Well, once Steven gets around to joining the Gitter channel or the mailing list, you can ask questions there. Awesome, and a really great presentation. There was one question, actually: can we print the effective configuration? Yes; I need to look at where that takes place. This framework has been through a couple of iterations; in a previous one it was all just a shared library in Jenkins, and we've since moved the framework itself into a plugin. I know that we definitely used to take the aggregated configuration and archive it within your pipeline run.
pipeline run so that you're able to see what the configuration file was that ran the specific template for that build if that's still not happening it will be happening by the end of the day and when will you be opening a hosting request to get this put into the Jenkins CI GitHub org and releases going into the Jenkins Update Center my first priority was to finish the docs that should be done this week and then parallel to that I'm going to start with the process I need to from the corporate side to get approvals for things and then that shouldn't be happening this week and actually fantastic just in the meantime there is some documentation that's available on the readme I can paste a link to the actual source code repository and do you have a link to the slides that you can share yes let me find where I can put that maybe I'll embed it directly in the documentation or just send me a link you've got my contact info and I can share it in the channel or you can join the Gitter channel I think I'm in the Gitter channel but bad at checking it so I will get better at checking it I also love talking about this so if anyone has questions or wants to know exactly how it works and my docs are terrible then let me know and let's talk about it question I would like to know if you tested this with configuration as code so we have so that's actually a good question because I have questions so the global configuration it natively supports the managed Jenkins configuration for the Jenkins Templating Engine the problem is it's a little bit more complicated with the managed Jenkins section there's also a job DSL component of configuring folders with the correct governance tier information so I've been thinking about how to best do that if it's a customization or extension of job DSL to support this hierarchical configuration setting or if it's a combination of Jenkins configuration as code and job DSL so it sort of plays in both worlds where you have a global configuration if you 
choose to have one but there's also a job DSL component in some specific configurations does that make sense yes right so I don't know how to like I'm not technical versed into the jcask enough to comment on that yeah so with Jenkins configuration as code you would natively be able to support the managed Jenkins configuration everything inside configure system works out of the box but the specific governance tier for folders and jobs needs to happen through job DSL so that's on the roadmap of figuring out a clean way to define your governance hierarchy from these configuration as code solutions are pull requests accepted they are encouraged and I will week with joy if someone gives me one yes if somebody out there watching this listening to this later whatever you're familiar with the configuration as code mechanisms for this sort of thing feel free to dig in any other questions we're running a little late but I'm okay with that does anybody have any other items they'd like to bring up today yes I have one item yep so basically I guess in the Gitter channel before the meeting the the thing that I've been asking about before which is a way to lint a pipeline from the UI so Jenkins 52939 yep so I think I've discussed it but basically the ways to lint the pipeline right now you have to like enable a port and telnet to something and or there's a command line utility that you can use and it's all pretty clunky and so basically the problem that I'm having is that I'm trying to introduce pipeline to teams that have never used pipeline before they have no groovy experience and sort of the day one experience of writing a pipeline and making syntax errors is it's kind of painful because right now the only way to find out if you have syntax errors is to build the pipeline and so that's not always a good thing the other thing is that I'm heavily moving to migrate a lot of freestyle jobs to pipelines and I'm also trying to put the parameters in the pipeline itself so like the 
So I need a way to generate the things that get written to the job's config.xml file from the pipeline without running the pipeline. If I had a way to lint the pipeline and generate that stuff into the XML, that would help the basic usability of Pipeline quite a lot, for myself and also for bringing new people up on Pipeline. There are two threads to that. The latter one, the ability to set things like job parameters, the cron trigger, the build discarder, the various build properties without having to run the build first, is an annoying one. Your scenario, where there's a specific button you would click to do that, is more viable than other things that have been discussed in the past, like having it happen automatically when Jenkins sees a change in the Jenkinsfile; but then, should that also trigger a build, and then what happens, and how do you know which one to do? One workaround I'm aware of now, if your Jenkinsfiles are not in SCM, is to create the jobs via Job DSL and specify the properties there as well as in the Jenkinsfile, but that's janky. So yeah, that's an area that needs work, and one that has never found the right approach; that doesn't mean it won't, but there's a reason it's been lurking for a long time: I have not found something I'm comfortable with as a solution. In terms of linting from the UI, are you talking about being able to supply the contents of a Jenkinsfile and lint it, or pointing to a Jenkinsfile in SCM and linting that? So, let's say I have a Jenkins job configured to read the pipeline from SCM. Right now what I do is click Build, or Build with Parameters; if I had a button at that same level in the classic UI to say lint, or reload, or whatever, and do it from there, that would be totally awesome from my standpoint. Yeah, the linting part would be easy; it would just be the UI side.
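For reference, the existing non-UI linting routes being called clunky here look roughly like this; the pipeline-model-converter/validate endpoint and the declarative-linter SSH CLI command are the standard Declarative Pipeline linting mechanisms, while the server URL and SSH port below are placeholders for your own installation:

```shell
# Lint a Declarative Jenkinsfile over HTTP (needs a CSRF crumb first)
JENKINS_URL=https://jenkins.example.com   # placeholder
CRUMB=$(curl -s "$JENKINS_URL/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,%22:%22,//crumb)")
curl -s -X POST -H "$CRUMB" -F "jenkinsfile=<Jenkinsfile" \
  "$JENKINS_URL/pipeline-model-converter/validate"

# Or over the SSH CLI port mentioned above (port number is a placeholder)
ssh -p 2222 jenkins.example.com declarative-linter < Jenkinsfile
```

Both only check syntax and declarative structure; neither touches the job's config.xml, which is the second thread of the request.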
The reloading or setting of job properties would, like I said, be a bit hairier, but it sounds like that would be more viable than other options that have come up, so it's definitely worth pursuing. I'm not sure when the people on the Pipeline team here at CloudBees will have cycles for this, because we've got a few big things going on right now, but I will try to follow up on it; and if anybody else is interested in it, I am happy to help review and make suggestions on how to connect things within Declarative Pipeline. Okay, yeah. I was almost tempted to dig into this and try something myself, but I have limited cycles as well, because I have to focus on my day job, and I have no experience in this area. Yeah, I've basically been heads-down on something else for two and a half months, so I've kind of not come up for air in a while, but I'll see what I can do. Okay, thank you. Any other things anybody wants to talk about, or any suggestions for agenda items for next month's meeting? I will try to line up another guest speaker; I will harass the author of Jenkins Spock, and other people who've worked on unit-testing tooling for Pipeline, who were supposed to be at the second meeting but ended up not showing up. I'll try to get them to come and talk, because I think that would be interesting. All right then, thank you all very much, and I am going to stop broadcasting now. The next meeting is February 13th, a Wednesday, at the same time: 4 p.m. UTC, 5 p.m. European time, 11 a.m. Eastern time in the U.S., and 8 a.m. Pacific time in the U.S. So I'll see you all then. Bye, and thanks!