This is a Jenkins online meetup. I'm Mark Waite. I'm pleased to welcome Steven Terrana with us. Steven is going to be presenting the Jenkins Templating Engine, and we're delighted that he's here with us. He presented some of these concepts already at Jenkins World in 2018 and has been using it since then to help customers at scale use Jenkins Pipelines much more effectively. Steven, go ahead. Thank you, Mark. So, hi, everyone. I'm Steven Terrana. I work at Booz Allen Hamilton. We're a federal consulting firm, so we help the government modernize legacy IT systems. I'm currently a senior lead technologist at Booz Allen, and I help lead a lot of our internal DevSecOps capability development. I'm a principal engineer for what we call the Solutions Delivery Platform, a project that helps teams get started with DevSecOps principles. The Jenkins Templating Engine that we're going to talk about today is really the core innovation of that project. Besides Jenkins Pipelines, as a platform engineer I do a lot of building out Kubernetes and OpenShift clusters, and I've helped build DevOps platforms for multiple federal agencies supporting multiple application teams. So before we get started, as part of this talk, I want to talk a little bit about the work that we do and try to frame the challenges we face that led us to create the Jenkins Templating Engine. This slide is just a level setter for what we mean at Booz Allen when we're talking about DevSecOps. DevOps is all about getting application developers and operations engineers to work together more effectively through things like infrastructure as code and configuration management. DevSecOps is really the next step, helping to bring security into every step of the software development lifecycle. So in practice, there are all these different kinds of security testing we want to incorporate into our DevSecOps pipelines.
We could give a whole other talk on what these look like in practice and the different tools for them. But for this, we really want to focus on how we went about implementing these practices at scale using Jenkins. So this slide represents an example DevSecOps workflow. In addition to all those different kinds of security testing, like container image scanning, penetration testing, and 508 compliance scanning, there's still all the other types of quality assurance automation, like unit testing and measuring code coverage, browser test automation, and API testing. And when we're doing digital modernization at government agencies, they frequently have a diverse application portfolio, which is sort of consultant-speak for "there are a lot of different tools being used." Some teams might be building front-end applications; other teams could be building RESTful APIs. Some teams in the organization might be using something like SonarQube for static code analysis, but other teams might be using Fortify. So when it came time to actually implement DevSecOps pipelines at scale for multiple teams simultaneously, we were running into a whole bunch of different challenges. The first was time, right? If someone's writing their first Jenkins pipeline to implement these best practices, it could take months to integrate with all these different tools. The second was complexity: along with the time it takes to build all of these different integrations, you also have to manage the actual development of these pipelines on a per-application basis, right? The way Jenkins typically works is you have a Jenkinsfile, which is your pipeline-as-code artifact. It lives in your source code repository and it specifies what your pipeline is going to do. So when you have a situation where different teams are using different tools, that Jenkinsfile frequently ends up copied and pasted.
Then it gets modified to integrate with the specific tools for that application, which leads to our third challenge. If you've duplicated that pipeline-as-code artifact in your Jenkinsfile across multiple source code repositories, how do you actually enforce some standardization of your software delivery processes? It can be very difficult to make sure that all of the development teams in an organization adhere to mandated software delivery processes when the definition of how they do that software delivery is duplicated in multiple places. And then finally, continuous improvement. If you're working with a system that's comprised of something like 60 microservices, each of them has a Jenkinsfile in its source code repository and they're all trying to follow the same processes. Let's say a month from now, you learn something, a change in your workflow that can improve how you do software delivery. Making that change is very difficult, because now you have to open 60 different pull requests to try to orchestrate a migration from one version of the pipeline to the next as you continuously improve your pipeline. And at Booz Allen, we're doing this in a single client space, where we're supporting a pipeline for multiple applications for a particular client, but we're also doing this across different client engagements. So these challenges are really multiplied as we scale, right? Building a pipeline is largely undifferentiated work. Once you've Googled "SonarQube plus Jenkins," there's really no reason that every DevOps engineer at the firm, or really in the industry, should be figuring out how to do these different tool integrations. So we set a couple of different goals here. The first was: how can we decrease the time it takes to instrument a mature DevSecOps pipeline? Like I was saying, writing these pipelines is largely undifferentiated work.
And SonarQube would be one of the less complex examples; when you're talking about orchestrating an advanced deployment pattern using Helm to an OpenShift cluster, it can get a lot more complex. So we thought there ought to be a way to modularize the development of our pipelines so that we don't have to continue to reinvent the wheel every time we kick off a project. Our second goal was to lower the technical barrier to entry. Mastering the art of writing a Jenkins pipeline is admittedly a niche skill set that has a learning curve to it. So we really believed that teams should configure, not build, their pipelines. Because this work is largely a duplication of effort across different client engagements, there ought to be a way to leverage that modularization to allow teams to configure their pipelines, tell us what tools they're using, and not have to build them from scratch. And then the final goal that we had, and this is really where I want to dig in today, is: how do we bring standardization and governance to our software delivery processes by creating reusable pipeline templates? The idea here is that regardless of what tools are being used, the workflow of a pipeline remains the same, right? So on the left, we've got an example Jenkinsfile for an application using Maven. We've got a stage that says we're going to do a Maven build within a container image that has our dependencies for Maven, and we're going to execute mvn clean package. And then we're going to do some SonarQube analysis. On the right, we've got a Gradle application where we're going to be using a Gradle image to do the packaging of the application, and then again, we're going to do some static code analysis with SonarQube. So the key point to understand here is that the first step of this pipeline is going to be build. We're going to build an artifact. It doesn't matter if it's Maven or Gradle or Ant or Docker.
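As a rough sketch of what those two Jenkinsfiles might look like (the image tags and stage bodies here are illustrative, not taken from the slides):

```groovy
// Jenkinsfile in the Maven application's repository (illustrative sketch)
node {
    stage("Build") {
        // run the build inside a container that has the Maven toolchain
        docker.image("maven:3-jdk-8").inside {
            sh "mvn clean package"
        }
    }
    stage("Static Code Analysis") {
        // SonarQube scan of the built project
        sh "mvn sonar:sonar"
    }
}
```

The Gradle application's Jenkinsfile would be identical in shape, except the Build stage runs a Gradle command inside a Gradle image. The workflow itself, build and then scan, never changes.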
We know that for our DevSecOps pipeline, the first step in this trivial example is that we're going to build an artifact. The second is that we're going to do static code analysis. It doesn't necessarily matter if that's coming from SonarQube or from Fortify or whatever tools you want to instrument; the general workflow remains the same. So for this example, we're going to build something and then we're going to scan it. How can we modularize this pipeline code so that every team, regardless of whether they're using Maven or Gradle, can be using the same pipeline template without having to hard-code their tool integrations in Jenkinsfiles distributed across their source code repositories? The answer is to apply the same best practices to pipeline development that we've been applying to application development for a long time. The principles there are: how do we abstract away, or separate, the business logic of your pipeline from the actual technical implementation? And then, how do we modularize that technical implementation so that we can swap tools in and out pretty easily? So the first step is, in our pipeline configuration repository, a central place where we can store our pipeline code, we can create what we call libraries. These are similar to, but not the same as, Jenkins global shared libraries. This is where we're going to create the plug-and-play modular implementations of different steps, right? In this example, we would have a Maven library that has a build step and a Gradle library that has a build step. On the right, you'll see an example of what those steps might look like. The pipeline code itself hasn't changed. All that's changed is how we organize the code: we wrapped it in a call method so that when we load it, it's invocable from the main template. And then both applications are going to be using SonarQube for static code analysis, so we have a SonarQube library that has a static_code_analysis.groovy file.
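In code, a library is just a directory of step files, each wrapping the existing pipeline code in a call method. A minimal sketch of the Maven and Gradle build steps (the step bodies are assumed for illustration):

```groovy
// libraries/maven/build.groovy
void call() {
    stage("Build") {
        docker.image("maven:3-jdk-8").inside {
            sh "mvn clean package"
        }
    }
}

// libraries/gradle/build.groovy
void call() {
    stage("Build") {
        docker.image("gradle:6-jdk8").inside {
            sh "gradle build"
        }
    }
}
```

Because both libraries contribute a step named build, the template can call build() without knowing which tool is behind it.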
And again, the pipeline code for that step has not changed, besides the fact that we've wrapped it in a call method. So now, instead of having a Jenkinsfile in every source code repository, we can pull that Jenkinsfile out, as a pipeline template, into the centralized pipeline configuration repository. In this example, like we saw, the template is just build and then static code analysis. Then in each source code repository, instead of having an entire Jenkinsfile that hard-codes particular tool integrations, or loads a common global library and implements some steps, we can instead just have a pipeline configuration file. We know that both teams are going to inherit the same pipeline template, so all we need to know to run their pipelines is: what tools are you using? In this example on the left, you'd have a pipeline_config.groovy file. It sits at the root of the source code repository and it's got a libraries section, which just specifies that we want to load the Maven and the SonarQube libraries. For the Gradle application, the only difference is that you're going to load the Gradle library instead of the Maven library, right? So at this point, we've separated the business logic from the technical implementation. We have a common, centralized pipeline template that's a tool-agnostic workflow, and we have modularized implementations of different steps, so that we can dynamically compose our pipeline at runtime using these configuration files. The goal here is to be able to make use of the fact that these tool integrations don't change much from client to client or project to project. For us to truly realize the reusability of these pipeline libraries, we need to be able to configure them externally. So in this example, let's take a look at our SonarQube library. It wouldn't be very helpful if we had hard-coded the SonarQube server that we're using, or if we hard-coded whether or not to fail the build based upon the results.
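Put together, the centralized template and the per-repository configuration files might look like this (a sketch; JTE configuration files use a Groovy-based DSL):

```groovy
// Jenkinsfile (pipeline template) in the pipeline configuration repository:
// tool-agnostic business logic only
build()
static_code_analysis()

// pipeline_config.groovy at the root of the Maven application's repository
libraries {
    maven
    sonarqube
}

// pipeline_config.groovy at the root of the Gradle application's repository
libraries {
    gradle
    sonarqube
}
```

At runtime, JTE reads the config file, loads the named libraries, and injects their steps so the shared template resolves build() to whichever implementation was loaded.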
So what we can do is add some configuration to our pipeline config. On the left, you can see that under the sonarqube section, when we load the SonarQube library, we can also pass it some configuration options. And the Jenkins Templating Engine framework, when it loads that library, is also going to make these configurations available. So at the top here, under the "parse configuration" comment, there's a config variable. Every library step that's implemented is auto-wired with this config variable, which is able to pull the configuration from that pipeline config file. By modularizing our different tool integrations and then externalizing their configuration to the pipeline config file, we've really had a lot of success with being able to reuse these pipeline libraries across multiple clients. These libraries are open source and available for multiple projects to use. So instead of the DevOps team supporting a client being limited to just the engineers staffed on that particular project, we can now crowdsource quality and have a common framework for all of the DevOps engineers implementing these DevSecOps pipelines to work together to continuously improve the configuration options on a library, and to really speak a common language when implementing these pipelines at scale. So, Steven, they end up submitting pull requests to your pipeline library to propose additional capabilities? Exactly. These libraries are unit tested with Jenkins Spock, which is another framework that was presented at Jenkins World the last time I was there. So updating this common framework is just like maintaining software for an application.
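A hedged sketch of what that externalized configuration could look like; the option names (`enforce_quality_gate`, `credential_id`) are invented here for illustration:

```groovy
// pipeline_config.groovy: options passed when loading the library
libraries {
    sonarqube {
        enforce_quality_gate = true
        credential_id = "sonarqube-token"
    }
}

// libraries/sonarqube/static_code_analysis.groovy
void call() {
    stage("Static Code Analysis") {
        // parse configuration: "config" is auto-wired by JTE from the
        // sonarqube block of the aggregated pipeline configuration
        boolean enforce = config.enforce_quality_gate ?: false
        String credId = config.credential_id ?: "sonarqube-token"
        // run the scan here, then pass or fail the build based on "enforce"
    }
}
```

The step never hard-codes a server or a policy; everything environment-specific arrives through config, which is what makes the library portable across clients.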
We've got versioning of these libraries, so as new configuration options are added, or you're going to add a breaking change, we can create different releases of these libraries, and teams can choose when to upgrade, or the administrator for that particular Jenkins instance can choose when to upgrade. It's software development: we've got unit testing through Jenkins Spock, and we've got versioning and release management through source control, just like anything else. Thank you. Also, and I can show this in a bit once we move to the demo side, all of these libraries that we have include a README, and that README has a table of the different configuration options. It describes what the library does, and then it has screenshots of any artifacts that might be generated by that library. So in our documentation site, we can aggregate all of those READMEs together and have an almost front-facing API for how we can build pipelines from these building blocks. Does that make sense? It does, thank you. Thanks very, very much. All right, so we can still take this a step further. At this point, we've separated the business logic from the technical implementation by having a pipeline template and modularized implementations, but we can also pull out common configurations, and this is where governance starts to come into play. So on the right, we have the two pipeline configuration files. We've got one that says libraries: Maven and SonarQube, and another that says libraries: Gradle and SonarQube. If you were at an organization that wanted to enforce the use of SonarQube, we'd want to pull out that common configuration, right? So we take a look: we can have an organizational pipeline configuration that says libraries: SonarQube, because everyone's going to inherit this, and then it also says merge equals true.
So this is where you're saying, as an organization: let's let the individual applications have their own config file, but then have some rules around which exact configurations they're allowed to manipulate. For this example, application repositories, in their configuration files, can add additional libraries, but they can't remove the fact that they're loading SonarQube. So here we can add a pipeline config to the Maven repo that just says it's using Maven, because it's inheriting the fact that it's using SonarQube, and it's inheriting the governed pipeline template. And then in the Gradle repository, they're also going to inherit that same configuration, but they're going to have the Gradle library loaded instead, right? So in the Jenkins Templating Engine, you can create governance hierarchies that match your organizational hierarchies, sort of like in Maven, how you have parent POM files. The same thing applies to the Jenkins Templating Engine with our configuration files. And there are some rules there, which we call conditional inheritance, around how child configurations are able to inherit or modify the pipeline configuration as a whole. So let's do this in practice. I think it'll help to put some meat on the bones of what I'm talking about here. I'm checking time here; we're still good on time. The first and simplest example would be a regular old pipeline job. For the sake of demonstration, we're going to have these libraries do print statements, because I really want to focus on the templating side of the problem here and focus on building our mental model for how JTE works as a framework. So this is just a regular old pipeline job. Under the pipeline definition, we've added a Jenkins Templating Engine section. There are two pieces to your pipeline. There's your pipeline template, which again is the tool-agnostic, templated workflow that the pipeline is going to follow.
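Conceptually, that governance tier looks something like this (a sketch of the two tiers of configuration; the exact merge semantics are covered in the JTE documentation):

```groovy
// organizational pipeline_config.groovy (governance tier)
libraries {
    merge = true   // children may add to this block...
    sonarqube      // ...but cannot remove the SonarQube library
}

// application tier: the Maven repo's pipeline_config.groovy
libraries {
    maven
}

// application tier: the Gradle repo's pipeline_config.groovy
libraries {
    gradle
}
```

When JTE aggregates the tiers, each application ends up loading SonarQube plus its own build tool, and neither team can opt out of the governed scan.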
In this case, we're doing a build, static code analysis, then a deployment to dev and to prod. Alongside your pipeline template, you have a pipeline configuration file, and this is going to read a lot like your tech stack. In this example, we've got our libraries section again, where we're saying that we're going to load the Gradle library, the SonarQube library, and then an Ansible library to do deployments. In our template, we make reference to these dev and prod application environments. We really want these templates to be as human-readable as possible, because again, they should really only specify the business logic of your software delivery processes. So alongside your libraries, in your pipeline configuration file you can also define application environments. JTE has what we call primitives: different types of objects that can be created by reading the configuration file and passed to the template during runtime. In this example, we created a dev application environment and gave it just a list of IP addresses that we would deploy to, and we created a prod application environment with a separate list of IP addresses. We know that every different project might have a different set of application environments; different clients are going to have different numbers of them. So we really can't hard-code the fact that there's going to be a dev, test, staging, and prod environment, and our goal all along has been to create a framework for developing these pipelines. So in JTE, we were able to create these application environments dynamically through our config file. If I wanted individual app teams to be able to add additional ones, I could say merge equals true, and that would allow a child configuration file to add additional application environments. But these application environments are going to be passed to the template as variables that can be referenced. So here... go ahead, Mark.
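The application environment primitive from this example, sketched out (the IP addresses are placeholders, and `deploy_to` stands in for a step the Ansible library would provide):

```groovy
// pipeline_config.groovy
application_environments {
    merge = true          // optionally let child configs add more environments
    dev {
        ip_addresses = ["10.0.0.1", "10.0.0.2"]
    }
    prod {
        ip_addresses = ["10.0.1.1", "10.0.1.2"]
    }
}

// pipeline template: dev and prod are now variables the template can reference
build()
static_code_analysis()
deploy_to dev
deploy_to prod
```

The deployment step can then read `env.ip_addresses` off whichever environment it was handed, so the same template works for any number of environments.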
Sorry, the merge equals true, then, is the magic that says it's allowed to be extended. Otherwise, the definition is static and owned by the thing that defined it. Have I understood that correctly? That is exactly right. So there's merge equals true and override equals true. There's more in our documentation, which I'll share at the end, on all of the conditional inheritance rules; we walk through a couple of different examples of how this works. But there are two keywords: merge equals true and override equals true. If you say override equals true, it means that we're going to completely replace this definition with what was defined by the application team. So yeah, there are some rules that govern the aggregation process of multiple configuration files, and you can have more than two. In the Jenkins instance, you can define library sources and pipeline configuration files as a folder property. So on every folder in Jenkins, you could specify a pipeline configuration file that the jobs within that folder are going to inherit. If you were to draw your organization's org chart on the wall and have that taxonomy defined, you could create the exact same thing in Jenkins just by how you organize your jobs using folders. Thank you. So here, if I build it, as expected, and as we saw from the previous run, we're going to load all the libraries that are specified in our configuration. At the start of the pipeline, you can take a look at the build logs along the way as it's building, and it's going to tell us where it's finding different libraries and the configuration files that are being added. In this example, we only have one config file. But if you were to have, say, three or four of them, it can be tricky to know why a field that you set didn't make its way to the final aggregated configuration. So along the way, we log all the different pipeline configuration modifications from each tier of configuration.
And then when we load libraries, we also show which source code repository we got each library from, because you might have libraries distributed across a couple of different sources. So let's say I wanted to change what tools this pipeline was using. Without changing the pipeline template, because this is our tool-agnostic, business-approved process, I can just make a change to the libraries that I'm loading. When we rebuild our job, instead of loading the Gradle library, it now says to load the Maven library. So as anticipated, it'll load the Maven library and implement that library's functionality, right? And we can see that right here: "build from the maven library." So this is the simplest example, just a pipeline job. But what if we scale this up to an entire source code repository? In this example, if we go take a look at the GitHub repository for this pipeline, we've got a sample app for Maven. We don't need any code in this example, because we're just showing how to dynamically compose our pipeline, but we have a pipeline config file and it says it's going to pull in the Maven library. We run this job, and for every branch and every pull request of this repository, it's going to dynamically compose that pipeline, loading the Maven, SonarQube, and Ansible libraries. So this use case is really just showing that, without having defined a Jenkinsfile in the source code repository, we can inherit the same pipeline across all the branches and pull requests. Now, the real power in the Jenkins Templating Engine comes from being able to apply these templates to multiple applications simultaneously. So here we have a GitHub organization job, and we have our pipeline configs in the configuration repository. So I guess this would be a good point to go show: where is this actually configured in Jenkins?
If we take a look at the configuration for this GitHub organization job, while this loads, we're going to see that under the project recognizers, instead of just pulling the Jenkinsfile from the source code repository, there's a new project recognizer: the Jenkins Templating Engine. This just tells the GitHub organization job that all of the pipelines that are going to be created adhere to this framework, where we're going to look for parent configuration files and templates and apply that governance to the jobs that are created. If we go take a look further down the page, you can also define, on this GitHub organization job, where to find the pipeline configuration. Here we're saying the source location is a Git repository, and I can go take a look at that repository. We can also see the configuration base directory, which is where inside this source code repository to find the pipeline config file and the pipeline template: here, the pipeline configuration directory. So if I go take a look at the repository, we've got a pipeline configuration directory with a pipeline config file and a pipeline template. You can create multiple pipeline templates and store them in a pipeline templates directory, but by default, the Jenkinsfile in your pipeline config repo is the default pipeline template. And application teams, if you had multiple (we call them named templates), can specify which named template they want to be using. So our example pipeline configuration file says that we're loading SonarQube, Ansible, and, here, a Splunk library. We allow merge equals true, and again, that's what lets the Maven and Gradle applications add whichever build tool they're using. And then we define the organizational application environments that are being used. Our Jenkinsfile, our default pipeline template, has a build step, a static code analysis step, and then deployments to the dev and prod environments defined in our configuration file.
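A sketch of what that organizational pipeline configuration might contain (the library names come from the demo; the structure is assumed):

```groovy
// pipeline-configuration/pipeline_config.groovy in the configuration repository
libraries {
    merge = true   // app repos may add their build tool's library
    sonarqube
    ansible
    splunk
}

// organizational application environments, inherited by every job
application_environments {
    dev
    prod
}
```

Every job discovered by the GitHub organization job inherits this file, plus the default pipeline template sitting next to it, before its own repository-level config is merged in.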
So if we take a look, and I can leave here, the libraries: those can also be defined on every folder. Libraries get added as library sources. Here we added the libraries as a global library source, which means that they're available to every job on the Jenkins instance. Under the Jenkins Templating Engine section, we didn't specify a global pipeline configuration file, but we did specify globally available library sources. This library source is coming from SCM; it's coming from that same repository, but from the libraries directory. So if we go take a look at this repository again, underneath the root of this repository is the libraries directory, and every single directory in there is a library that can be loaded. So when we load the Gradle library, it actually sees this build.groovy file and it creates a build step. And when you call the build step from your template, it invokes the call method, and we're going to print out "build from the gradle library." With these library sources, you can have as many as you want; you can scope them to individual jobs by defining the library source on a folder instead of globally inside Manage Jenkins, and you can have multiple repositories. Let's say you have one set of libraries that are common to the whole organization, but you also have a set of libraries that are really specific to your team's individual use case. That use case is perfectly well supported by just defining multiple library sources. So JTE, again, is really just a different way to organize your pipeline code. You could put all your pipeline configurations in a single repository, or you could split them out across multiple repositories, and the same goes for your library sources. How you choose to organize the code is really driven more by who should have permission to access that code than by whether it makes a functional difference, right?
You could have three different top-level directories in your source code repository and configure each of them as a library source, or you could have three different repositories and reference each of those as a library source. It's really up to you how you want to organize the code that builds your pipeline, to optimize the role-based access for who can touch different parts of the pipeline configuration. So if we go back to that multiple-application example: in this case, the GitHub organization job created two multibranch jobs, one for the Gradle repository and one for the Maven repository. Here, for the Gradle application, we can see that it loads the Gradle library to do the build step. If we take a look at the Maven application, it's using the exact same pipeline template, but it uses the Maven library, right? If we take a look at one of the console logs for this, everywhere you see JTE at the start of the log, that's a log coming from the framework itself. We can see all the pipeline configurations that were added by the organization, and then we can also see the modifications that were made by the actual source code repository. In this case, the individual repo just added the fact that it's loading the Maven library. And then for each library, we can also see which repository that library belongs to. Here we're seeing a warning message that the library doesn't have a configuration file. That might be a little outside the bounds of an introductory conversation on JTE, but the concept there is that, because we can externalize configuration to the libraries, we need a way to validate that what someone puts in their configuration file is a configuration option that's actually accepted by the library, to help with some error checking and to make sure that typos aren't being missed. So alongside your library, you can create a config file that specifies the different required and optional external configurations.
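That per-library config file might be sketched like this (the file name and field names here are illustrative, following JTE's library configuration convention):

```groovy
// libraries/sonarqube/library_config.groovy
fields {
    required {
        credential_id = String          // must be supplied by the pipeline config
    }
    optional {
        enforce_quality_gate = Boolean  // defaults handled inside the step
    }
}
```

Anything a user puts under the sonarqube block that doesn't match these declared fields, or that has the wrong type, can then be flagged instead of silently ignored.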
And then JTE will do some validation at load time to make sure that what you've put in your config file maps to what the library is expecting. One cool feature that I want to highlight is the ability to do lifecycle hooks in JTE. So let's take the example of wanting to send events to Splunk based upon the pipeline. Let's say you wanted to send a notification to Splunk every time you did a deployment, or every time you did a build step. Because of the way JTE works, you don't want to hard-code that logic to do Splunk notifications in your Maven library or in your Gradle library, because then you've broken the interoperability of these libraries, right? You want to be able to plug them in and take them out without breaking everything. So we need a way to dynamically register different steps to run in relation to events in the pipeline. If we take a look at the logs here, we're going to see "sending a Splunk event" at the beginning of the pipeline, Splunk running before the build step, and running after the build step. But our pipeline template, if we take a look at it again, doesn't actually say anything about sending Splunk events. So how is it possible that Splunk knows that the build step just happened, or that a deployment just happened? We do that through what we call lifecycle hook annotations. If we take a look at our Splunk library, we've got a couple of different steps here. If we take a look at pipeline_start, we have this annotation, which just means: when the pipeline starts up, after JTE is done initializing the environment by loading libraries and creating application environments, run every pipeline step that has this annotation on it. So here I don't have to explicitly call this method from my pipeline template. It registers itself to run at the beginning of the pipeline. The same can be done for steps: there's also a before-step and an after-step annotation.
So if you wanted to send a Splunk notification before every other step takes place, you can add this before-step annotation to your library step, and then right before we execute each step, we're going to dynamically execute all the steps that have registered themselves as before-step watchers. Real quick: these hooks also accept a conditional execution closure. So if you only wanted to run after a particular step, you can pass a closure; if that closure returns true, we're going to execute the step, and if it returns false, we're not. And there are a couple of different hooks available: there's init, validate, before step, after step, and then cleanup and notify. So Mark, I think you had a question? I did. So the lifecycle steps, are those still defined by the consumers, by the people who are writing the libraries, or is that a preset collection of steps? These steps are defined by whoever is creating the libraries themselves. That could be an administrator of the Jenkins instance; it could be a specific application development team that has permission to add libraries. All they have to do is say "before step": basically, whenever they're defining a step, they'd add an annotation to it. And then the way it knows to be registered: if we take a look at that pipeline configuration file, it has Splunk loaded. So just by loading this library, all of those hooks become registered in the pipeline to run. So if I were to modify this pipeline configuration to remove the Splunk library, and I commit that change, let's wait for that to commit, and I go back and run this master branch: because we're no longer loading the Splunk library, there are no steps in the environment that have that annotation, so nothing is going to be executed at the beginning of, or in relation to, the pipeline.
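The Splunk library's hook steps might look like this as a sketch (the step bodies just print for demonstration, and the variable exposed to the conditional closure is assumed to follow JTE's hook-context convention):

```groovy
// libraries/splunk/pipeline_start.groovy
// runs once, after JTE finishes initializing the environment
@Init
void call() {
    println "Splunk: pipeline starting"
}

// libraries/splunk/watch_steps.groovy
// runs before every step loaded into the pipeline
@BeforeStep
void call() {
    println "Splunk: about to run ${hookContext.step}"
}

// libraries/splunk/after_build.groovy
// conditional execution: the closure gates the hook to the "build" step only
@AfterStep({ hookContext.step == "build" })
void call() {
    println "Splunk: build step finished"
}
```

None of these are called from the template; loading the library is what registers them, which is why removing splunk from the libraries block silences them all.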
Without diving too far into the weeds of how this works under the hood: when we go to invoke a step that we've loaded from a library, there's a wrapper around that step where we're able to invoke different hooks. So we can say using some metaprogramming, find me all the steps that have been loaded that have this before step annotation and then invoke all of them. So the framework really handles a lot of that registration and execution of hooks based upon the annotations that have been loaded. So if we look at the build log here, we didn't load the Splunk library and because that library was never loaded, no hooks ever executed. So this has a lot of powerful applications to things like monitoring and metrics gathering. We do it a lot, that annotation that executes at the beginning of the pipeline, that gets used a lot for, we call them library constructors. So let's say you have a library that wants to expose some environment variables to the rest of the pipeline. You could do that by creating a step that has the annotation and then defining your environment variables for the pipeline. That way you don't have to hard code the invocation of that initialization in your pipeline template. It's just an inherent part of the library that when you load it, it knows to execute this piece of code at the very beginning of the pipeline. So if we go, let's take a look at a slightly more complex pipeline template. I don't want to dive through what these steps are doing under the covers, but I do want to show what this can look like in practice. So here we have a more mature pipeline template. This looks a lot more like the pipeline templates we use in production when we're not doing demonstrations. We have a GitHub library that gives you on pull request, on commit and on merge steps. And these are really just business logic routers. So we want this to read as close to plain English as possible.
So on pull request to develop, this develop is a keyword that is actually a variable that represents a regular expression. So here, when we create a pull request to the develop branch, we're gonna run continuous integration. Alongside this template, we have a config file. Continuous integration is a stage. So in the beginning, I tried to hard code what continuous integration meant and it turns out no one actually agrees. So the framework lets you group steps together into what we call stages. So in our config file, we're saying that the continuous integration stage is composed of the unit test, static code analysis and build steps. So then within your pipeline template, when you invoke the continuous integration method, it goes and invokes the steps that have been defined in your config file. And that's really just a way so that we don't have to repeat ourselves within our pipeline template. If we wanted to do those three steps on pull requests to different branches or on merges to particular branches, stages let you basically not repeat yourself and consolidate common sequences of steps. On pull request to master, so this would be a developer creating a pull request to the master branch. In parallel, we do penetration testing, accessibility compliance testing, functional testing and then some exploratory testing. And again, these steps are named generically on purpose because we might have multiple libraries that implement the penetration test method. And then finally on merge to master. So that's whenever a pull request has actually been merged to the master branch, we're gonna do a deployment to production. So through this pipeline template, we can map our branching strategy for how we collectively work on a code base. That can be mapped to our pipeline template where we can perform different actions in relation to different developer actions.
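Putting the two files described above side by side, a minimal sketch (the exact syntax and names like develop and continuous_integration are illustrative, not copied from the real SDP libraries):

```groovy
// Jenkinsfile (pipeline template) -- business logic only:
on_pull_request to: develop, {
  continuous_integration()   // a stage, expanded from the config below
}

// pipeline_config.groovy -- defines which steps make up that stage:
stages{
  continuous_integration{
    unit_test
    static_code_analysis
    build
  }
}
```

Because the stage is defined once in the config file, the same `continuous_integration()` call can be reused on pull requests to other branches without repeating the three steps.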
So pull requests to different branches, if you've got release branches or hotfix branches, the GitHub library that we've created lets you do a lot of business logic routing to handle the pipeline and respond to different events from GitHub. We also have a library that does the same thing for GitLab, GitHub public versus GitHub Enterprise, looking to add support for Bitbucket and all the major Git-based SCMs to be able to do this type of business logic. Our pipeline... Go ahead. It looks to me like you've created a higher level domain specific language that lets your people express things as a business level concept. How is that being received? What are people's experiences with it? So my experience personally is that development teams want to focus on what they do best, which is build applications, right? It's very difficult to scale the skill sets required to learn how to integrate all these different tools and how to set up and configure all these different automated testing tools like SonarQube, or Jenkins architecture. The developer experience should really just be: here are the tools that I'm using, build me a pipeline that does those things. So this has made it really easy for us to do that. Because we've already completed a lot of this work, every DevOps engineer at Booz Allen doesn't have to learn how to do the more complex logic, right? They can build their pipelines from the work that's already been accomplished. It's all open source, so we can collectively work on this framework and crowdsource quality. So the developer experience has been pretty good, in my opinion. There are drawbacks to this approach sometimes, like it can be difficult at times to fit everything into these clean buckets, but my response usually is that if it doesn't fit into this clean bucket, then I wanna have a conversation around the development processes that are going on.
If your development process can't be broken into a build, test, package, deploy flow, let's talk about the process that's going on and see how we can align it to some common processes, because this also helps a lot with the different security teams, right? So in federal consulting, applications need what's called an authority to operate. So security teams wanna know that the code being developed has been tested and is secure. So we can work with those security teams using this framework to say, look, here's the business process in which your requirements are reflected. You know that there's gonna be container image scanning and penetration testing. So let's get this process approved and then let's let teams still choose the best tool for the job by having these modularized implementations of different tools. So it's helped us a lot from a security and governance perspective. There's a big difference between being able to say, everyone must do unit testing with this code coverage, static code analysis, container image scanning, penetration testing, and then relying on teams to implement that versus the JTE approach of saying, you know, everyone is going to follow this process, but we're still gonna be flexible with what tools are gonna be used to meet these organizational standards. Does that make sense? It does, thank you very, very much. Thank you, thank you, this looks so elegant. I appreciate that. The beauty here is that pipeline templates run just like Jenkinsfiles. You could put regular Jenkins pipeline as code, scripted pipeline in here, and it would work just fine. I don't recommend that you do that, obviously, but from an implementation standpoint, this on pull request, that's just a step in the library. It takes, as arguments, map parameters, so it's gonna take a "to" input parameter and a "from" input parameter, and it's gonna execute this closure. That's also an input argument, right?
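A simplified, hypothetical sketch of how such a step could be implemented. Groovy gathers named arguments into a Map and passes a trailing closure as the final parameter, which is what lets the template read like plain English (this is not the real SDP GitHub library; the regex check and environment variable usage are assumptions):

```groovy
// github/on_pull_request.groovy -- illustrative business logic router.
void call(Map args = [:], Closure body){
  // CHANGE_TARGET is set by Jenkins branch sources on pull request builds
  String target = env.CHANGE_TARGET ?: ""
  // args.to can hold a regular expression, e.g. the develop keyword
  if(args.to && target ==~ args.to){
    body()   // run the steps the template grouped under this event
  }
}
```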
So we've really just used the flexible nature of Groovy to create almost domain-specific languages with which to build our pipelines. So I get a lot of frequent questions about, like, is declarative pipeline supported? And the answer right now is unfortunately no, and that's because I would love it to be, but declarative pipeline assumes that it knows everything up front. It looks at, like, the global variables from shared libraries that have been loaded. And just because of the way JTE currently initializes the pipeline runtime environment, we don't actually know which steps have been loaded until the pipeline has started and libraries have been loaded. So my response is usually, like, the goal of declarative pipeline, and if Andrew Bayer is on the line, please, please correct me, but the goal of declarative pipeline was to create a simple interface to lower the technical barrier to entry for development teams to be able to create their pipelines, and the Jenkins Templating Engine does the same thing with just a different approach to it. So, go ahead, Mark. I think of declarative pipeline as an opinionated way to do pipeline. JTE seems like a strongly opinionated way to do pipeline. They're differently opinionated; therefore, the fact that they don't intermix does not shock me at all. This looks like a really elegant opinionated way to approach pipeline description and generalize it. Nicely done. Thank you very much. And that's really what I try to communicate with this. You know, there's a lot of moving pieces. You might have different layers of configuration that can sometimes be hard to track, but at the heart of JTE is just a framework for developing pipelines in a tool-agnostic way that helps you scale them to entire organizations, right? There is no reason that we need to be copying and pasting Jenkinsfiles if the workflows are largely the same.
And JTE is really just a way to say, without hard coding yourself to particular tools, what's the business process to get code from a developer's laptop to production? And then, you know, the fact that we might use different tools for unit testing or for static code analysis, that's just an implementation detail and you can specify that in your configuration file. So this is what a more mature pipeline configuration looks like. You have the option of allowing application teams to load their own pipeline templates in their repositories. In this example, we turn off that functionality. Governance is really a dial in JTE. You can choose how strict or flexible you wanna be or how much power you wanna give application development teams based upon how you configure it from a top level down. So we define some application environments. We define some stages. And then the library section reads a lot like your tech stack. Like I said, JTE is really the core innovation of Booz Allen's Solutions Delivery Platform. So we have our own set of libraries that are available on GitHub. We've got an SDP library that just provides some helpers to the others. So we load that, we've got a GitHub Enterprise library which provides that on commit and on pull request functionality that you saw in the template. We've got a SonarQube library for static code analysis, Docker for building and publishing container images, Twistlock for container image scanning, OpenShift for deployments using Helm to an OpenShift cluster, a tool called the A11y Machine for accessibility compliance testing, and then OWASP ZAP, which is an open source tool for penetration testing. And all of these libraries have externalized configuration options. If we take a look at the documentation for them, all of those libraries have a README, like I was saying earlier, where we're able to see the configuration options available to the library and then also see any artifacts that are generated.
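In outline, that kind of mature configuration file might look like the following sketch. The library names follow the talk, but the top-level option and any per-library settings are assumptions; the real options live in each library's README:

```groovy
// pipeline_config.groovy -- sketch of a mature configuration
allow_scm_jenkinsfile = false   // teams can't override the template with their own

libraries{
  sdp                 // shared helpers used by the other libraries
  github_enterprise   // on commit / on pull request / on merge routing
  sonarqube           // static code analysis
  docker              // build and publish container images
  twistlock           // container image scanning
  openshift           // Helm-based deployments to OpenShift
  a11y                // accessibility compliance testing
  owasp_zap           // penetration testing
}
```

Swapping a tool means swapping a line here, not rewriting pipeline code: the library section really does read like the team's tech stack.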
So through this approach, we've managed to modularize our implementation. No one should ever have to figure out how to use SonarQube analysis with Jenkins. The work has already been done. Here's the library that you can load and the current configuration options. If you need new configuration options, don't reinvent the wheel. Just open a pull request to this library, add the configuration options you need and we can continuously improve the extensibility of these libraries and the configuration options that they support. So let's go back to the slides for a few minutes and then I want to open it up to some questions here. Let me move this out of the way. So key takeaways that I want folks to have about the Jenkins Templating Engine is that it's a framework for developing pipelines. There's no right or wrong way to do it. I have opinions, obviously, but really the Jenkins Templating Engine is a framework for developing tool-agnostic templated workflows to share pipeline templates across multiple teams, regardless of what tools they're using. And then this approach separates the business logic, your template, from the technical implementation of your libraries, allowing teams to configure their pipelines instead of build them from scratch. So the three main value props that I like to share are applying organizational governance. We now know what software delivery process each team is going to use, but we're flexible enough to let teams use different tools. We've optimized pipeline code reuse. So instead of everyone reinventing the wheel, we've seen pipeline development decrease from five months to five days for new projects that are leveraging existing tool integrations. So that's a 97% decrease in how long it takes our new projects to support a mature DevSecOps pipeline from the get go. And then finally, simplifying pipeline maintainability. So I've managed large Jenkins instances before JTE existed.
And in my opinion, it is a lot easier to manage a pipeline template and then modularized tool integrations than it is to manage 60 copied and pasted Jenkinsfiles that have been tweaked to integrate with a particular tech stack. So it's really made it easier to onboard new applications and to make changes to the pipeline's flow over time, because you only have to update the template in one place. And it's just really made it easy to support multiple teams simultaneously. So let's turn it over to questions. Here we've got a few QR codes if you want links to our documentation or different hands-on learning labs that can help you build out a local environment using Docker, build out the libraries that we showed as part of the demo today. And then a link to our Gitter channel where I would love if you could get involved and ask some questions. So I hope that resonated and now I'd love to make this more of a conversation, Mark. Super, so I got a question from one of our participants asking about Gerrit. Any plan to support Gerrit as one of your tools? So the roadmap for tool integrations is really driven by which clients need them. So at this point, I don't think that we've gotten any requests for Gerrit, but that being said, the second a team that I or anyone at Booz is working with is trying to use Gerrit, it'll be on the roadmap. Or if you want to get involved in the Gitter channel, there is a lot of documentation in the Jenkins Templating Engine documentation for how to create new tool integrations. So if you already have pipeline code that does some Gerrit integration, it's really just a matter of reorganizing where that pipeline code is stored to be able to make it support a new tool integration. Thank you. Now, I'm in a platform world personally and I think about platforms like Internet of Things devices, ARM chips or Windows boxes. Do you have anything special you need to do when using JTE with those sorts of non-typical platforms?
So that's a great question. So there are some best practices, or at least, maybe that's a bit far: there's a particular approach that we take when writing libraries, and that's that we don't install tools on Jenkins. I have been in situations where there are three versions of Java in use or three versions of Maven or Gradle. So instead what we do is maintain container images which become runtime environments for the different library steps. So for example, if there was a Maven library, we would have a configuration option for that library which specifies which version of Maven, and then the library would run its step inside of a container image that has the appropriate version of the tool. So when it comes to platform specific considerations, if you're looking at integrating with the libraries we've already developed, the only requirement is that Docker is installed on the agent so it can use those images for runtime environments, but otherwise it really just executes like any other Jenkins pipeline. So you can connect agents within your library steps. You can specify which agent should be used. We're currently, during the initialization process of JTE, we make a node call or two. So we're working on making it totally agnostic so that you can pass agent labels to every piece of the initialization. But in general, it should be able to work on any platform that's currently in use. Okay, so I think I understood the Docker capability, and that sounds really powerful as a way to manage build tools. In my world, some of the things aren't dockerizable at all. For instance, I don't know how to do dockerized FreeBSD or macOS, but you're saying that I could label something there and use the label still to refer to it even though it's not a Docker image. That's correct. So your library steps execute just like scripted pipeline code.
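A sketch of the pattern just described: the step pins a labeled agent and runs the tool inside a container image whose version comes from the pipeline configuration. The `maven_version` option, the `docker` label, and the image tag are all hypothetical; only the scripted-pipeline mechanics (`node`, `docker.image().inside`, `sh`) are standard Jenkins:

```groovy
// maven/build.groovy -- illustrative library step, not the real SDP one.
void call(){
  stage("Maven: Build"){
    node("docker"){   // any agent label works; this agent just needs Docker
      // config holds this library's block from the pipeline config file
      String tag = config?.maven_version ?: "3.6-jdk-11"
      docker.image("maven:${tag}").inside{
        sh "mvn clean package"
      }
    }
  }
}
```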
So if inside your library step, you had a node block and you passed it a node label, then, when the framework executes that library step, it'll execute the code on the node with the assigned label. So it's really like we took a scripted Jenkins pipeline and instead of having it be a 700 line file, we broke it up into a bunch of smaller files, but the execution of that code still works just like any other Jenkins pipeline. So you can have node labels, you can run pieces inside container images, you can really write your pipeline however you want to. It's just that JTE becomes the framework for how you dynamically compose that pipeline and then execute the different steps. Thank you. New question, is JTE compatible with the Jenkins Configuration as Code plugin? Yes. So let me clarify that. So the Jenkins Configuration as Code plugin natively supports any plugin that was developed using standard plugin development best practices. So that being said, it's supported. It comes up under the unclassified section. So as long as you've configured what we call the global governance tier, the global Manage Jenkins JTE section, when you export your JCasC file, the JTE stuff will be in there. So that being said, you need a combination of Jenkins Configuration as Code and job DSL, right? So JCasC is responsible for managing the Jenkins configuration itself, so that Manage Jenkins JTE section, which has a global configuration file and a global library source. That can be exported to Jenkins Configuration as Code, but when you make it a folder property or you put it on a GitHub organization job, now that's a job DSL problem, right? Because that's a configuration on a job, not on the Jenkins server. So it really takes a dual strategy approach. That's actually one of the things I've been thinking about how to improve. It would be great if you could create a hierarchical configuration in JCasC and have that mapped to different jobs on the instance.
But as of right now, like I said, it takes a dual approach of JCasC and job DSL. Well, but the fact that you've described JCasC and job DSL tells me you have experience doing both, that you've used job DSL to define the job properties and JCasC to define the system configuration of Jenkins. Did I understand correctly? That is correct. I probably know a little more Jenkins than I should. So I use job DSL less than I just used the Jenkins API to create jobs. Sorry if you flinched a little bit, but sometimes I prefer to just create jobs and configure things through the Groovy API of Jenkins instead of job DSL, but it also works with job DSL. I've seen projects do it both ways. Great, well, so which means you're not limited in the style of automation you're using to address job definition. You've just described two perfectly valid ways to do job definition. That's great, thank you. Yep, and you always have the option of saving your XML, but I prefer the as-code solutions. Okay, and you just mentioned the technique I personally use, save my XML. So thank you, you validated my bad behavior. That's excellent, thank you very much, Stephen. I'm always here for you, Mark. All right, I have not seen other, oh, Bitbucket or Stash support. You mentioned it earlier, Stephen, could you reiterate it again? The question was just asked, could you answer what the current situation is with Bitbucket and any pending features? So like with GitLab and GitHub, like with all our libraries rather, it's a function of the first time someone needs it is when it gets implemented. So at this point, it's on the roadmap. No one's implemented the library yet. That being said, we largely rely on the Git CLI itself for our implementation of those libraries. There's really only one step that requires integration with the specific Git server's API. So if you open a pull request and you wanna know which source branch the pull request was opened from, like from feature-1 to master.
To get the name of the branch, Git doesn't have a way through the CLI that I'm aware of to get that information. So we rely on the server's API to get that. So for GitHub public versus enterprise, we use the GitHub SDK to connect to GitHub and get that information. For GitLab, we hit the RESTful API endpoint to get that information. For Bitbucket, we'd have to do the same. So the work for Bitbucket, if I had to venture an estimate, is probably 97% done, we would just need to find that API call to figure out the source branch of a pull request. Well, and since that API is already being used by the branch sources to do that same kind of thing or things like it, that seems very reasonable. Thank you, thanks very much. Thank you. Steven, you've been absolutely wonderful. Thanks for being so good at this. Thank you very much for sharing the work that Booz Allen Hamilton and that your team has done to make pipeline templating so powerful. That's really elegant. Is this open source? Can people contribute? People definitely can contribute. Please join the Gitter channel. JTE follows the same contribution guidelines as the Jenkins project itself. Our documentation has all the information needed to get started. If there's something in particular someone's interested in working on, feel free to open a GitHub issue with feature requests or join the Gitter channel to talk about it. I watch that channel very actively, so I'd love to talk to you all. If you do end up thinking that the templating engine meets some of your use cases and would be a good tool for your organization, we've got an adopters file. So if you wanna share your use of the Jenkins Templating Engine, just open a pull request to our adopters.md file and it will be automatically pulled in and listed as part of our documentation. Excellent, thank you, thank you. I think that covered all the topics that were on my mind.
Is there anything else you wanted to say in conclusion, Steven, before I end the recording and put this on the archive, I will post a link to the Jenkins Gitter channel and to the templating engine plugin Gitter channel once I've got the recording saved to YouTube. Steven, anything else? Just thank you very much for the opportunity to present. I look forward to hearing some feedback. I hope this resonated for everybody and please feel free to join the Gitter channel. Thanks very much, Steven. Thank you to everyone. We're gonna end the meeting now. Thanks a bunch.