Good afternoon, good evening, everybody. Before we start, a quick question: can you raise your hand if you know what CI/CD is? Anybody know what Jenkins is? Good, we're in the right room. Welcome. We are the last session of the day between you and all the parties happening this evening, so we hopefully won't keep you too long. We're talking today about the OpenStack plugin for Jenkins. A decent number of the sessions we've seen so far, and many of the discussions in the user summit, the operations summit, and the design summit, have been about how to use a CI/CD environment to build OpenStack and to continuously integrate and deploy against OpenStack. We took a slightly different view of the picture: we use OpenStack as our compute resource to run a regular CI/CD workflow, which is developing applications. So, please. My name is Naama. I work in the CI infrastructure team at Cisco. For the past few years I've been working with multiple cloud environments, lately concentrating on OpenStack. With my team, we build tools that ease or enable deployment, configuration, and testing in an OpenStack environment. The rest of my team, whose pictures and emails you can see here, in case you use the plugin we'll show and you have questions or issues. So, you have the emails here. And my name is Meish. I am a platform architect for Cisco in Israel. We both came into Cisco through the acquisition of a software company called NDS, approximately two and a half to three years ago. We are primarily a development shop, a relatively big one, doing media applications for service providers, providing content, and so on and so forth. My email, of course, is on the front page. A tiny bit about Cisco: Cisco has a very significant commitment to OpenStack, to cloud, and to the community.
We have three main verticals, or areas, where we focus on helping organizations and people use OpenStack or connect with OpenStack. Number one is building OpenStack: the contributions we make to the infrastructure for building an OpenStack environment itself. The second is making things easier to use, with Cisco equipment and with OpenStack in general, which has meant contributing to Horizon, to monitoring, and to use cases like those. And the last part, which is Cisco-specific, is connecting clouds to one another, be it from your private cloud to a service provider cloud in the public realm, or between two different private clouds, with our technology. But today we're going to talk about the left block over there: building OpenStack and using that OpenStack as part of your infrastructure. Okay. Our environment is made up of a lot of RPMs and a lot of components. People continuously build RPMs and components; a number of different teams, geographically dispersed all over the globe in different time zones, keep providing their code, which is checked in continuously through a Jenkins workflow into a repository. Eventually, after the whole thing goes through a number of iterations and a process that we have, we hope to end up with a very well-oiled machine, one that works very well together and that has to be timed and pieced together in a certain way for things to work properly. It's not a very generic product.
Our products are very tightly coupled and very specific to what they need to be doing. And we had a certain requirement: we wanted to build a CI/CD pipeline using OpenStack as our compute resource. Why specifically OpenStack? Because we need to deploy our applications on OpenStack, either as a service we provide to the customer or into an environment the customer already owns. It needs to be deployed natively on OpenStack. In other words, we don't want to spin up instances and then do the configuration and adaptation afterwards; we wanted to do everything through the native OpenStack tools. And with our CI/CD process, we wanted what we call the minimum viable product: the minimal product which we knew would work, which our customers could use, or at least which we could use for our testing purposes, and say this is something we know works, in a highly available fashion, in a scalable fashion. We provide smoke tests, or acceptance tests, after the fact to verify that everything has been deployed. And it's not only deploying one or two RPMs; as you'll see further on, there is quite a process to go through. Of course, if all those acceptance tests passed (sometimes they do, sometimes they don't), we would pass the result on to the next stage in our pipeline, so somebody else could take what we built, as a product or a suite of products, and use it as a fully packaged product to continuously deliver or integrate against at the next stage of the pipeline. Some terminology, so that we're all more or less on the same page: we talk about components, and we talk about RPMs, applications which are packaged. We work with RPMs.
Everything is built on a continuous basis: whenever a developer checks in a line of code, we run our stage-zero and stage-one testing to build the RPMs and verify everything passes. We were collecting all these RPMs on a continuous basis, several times a day and sometimes several times an hour, to gather the products the developers were creating, fixing, and maintaining, and package them into what we call a component. Each product was made up of multiple components. In other words, we had components A, B, and C which made up product A, and a number of products which made up a whole solution that we provided to our customers. As a fully packaged solution, we wanted to deliver this, as we said, in a workflow. So we provision products using Heat. Going back a minute to the column in the middle: there are products which are combined from multiple components, and this is one YAML file that hopefully deploys the product. Having multiple teams create components that are joined into one YAML file, one Heat template, creates a few challenges. If it's one huge YAML file, one monolithic stack, it's very hard to maintain. If multiple teams make changes, it's hard to track which changes were made, which changes we want, and which we don't. Versioning is also a problem: if we want to roll back one component by one version, we're stuck with a template that carries changes from multiple components. Git and git merge are both really great, but it's still a problem. So what we proposed is a modular approach. In the modular approach, we expect the Heat template to include only the deployment model: which resources (networks, floating IPs, and of course virtual machines) we want to create, and which components we want to install on each VM. The code for installing and configuring each VM does not sit inside the YAML file, inside the Heat template.
Instead, it is put in the Maven repository (we use Nexus) as a sibling to the RPM, together with the RPM. Then we collect it and orchestrate everything so that when we actually execute the Heat template, we know what code to execute. We gained two things from this approach. First, ownership of the deployment code moves to the team that develops the component: whenever a developer makes a change to the component that requires a change in the configuration, they are responsible for changing the configuration code too. Second, we gain versioning. The installation code sits together with the component, so it's versioned with it. If we roll back a component, we roll back that component's provisioning code as well, and we can use the same Heat template no matter which version we use. We do not force this modularity: the plugin we're going to show also supports one monolithic YAML. And if you use Puppet for provisioning and want to put all your Puppet scripts in one Git repository, that is also supported; but if you have the Puppet code sitting together with your components, we support that too. This does make things more complicated, but we had a very complicated system. We needed to put a number of things together, and as Naama said, we also needed the modularity. So we looked at a number of solutions, given the tools we were already using: Jenkins for our workflow, Nexus as our Maven repository, Git for some of our source control, and, the last requirement, deploying everything on OpenStack. We looked for something already available, and at the time this whole project started, approximately six to nine months ago, there was no OpenStack plugin for Jenkins.
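As a rough illustration of the sibling-artifact idea, publishing an RPM and its Puppet scripts under the same Maven coordinates might look like the sketch below. All coordinates and the Nexus URL are invented placeholders, and the commands are only printed, not executed:

```shell
#!/bin/sh
# Sketch: publish an RPM and its Puppet scripts side by side in Nexus.
# Group/artifact IDs, version, and the repository URL are hypothetical.
GROUP=com.example.product
ARTIFACT=nginx-component
VERSION=1.0.0
REPO_URL=http://nexus.example.com/content/repositories/releases

# The RPM itself:
DEPLOY_RPM="mvn deploy:deploy-file -DgroupId=$GROUP -DartifactId=$ARTIFACT \
 -Dversion=$VERSION -Dpackaging=rpm -Dfile=$ARTIFACT.rpm -Durl=$REPO_URL"

# The Puppet scripts as a sibling artifact with the SAME coordinates, so
# rolling back the component automatically rolls back its provisioning code:
DEPLOY_PUPPET="mvn deploy:deploy-file -DgroupId=$GROUP -DartifactId=$ARTIFACT \
 -Dversion=$VERSION -Dpackaging=tar.gz -Dclassifier=puppet \
 -Dfile=$ARTIFACT-puppet.tar.gz -Durl=$REPO_URL"

# Printed rather than run, since this is only a sketch:
echo "$DEPLOY_RPM"
echo "$DEPLOY_PUPPET"
```

Keeping the two artifacts under identical coordinates is what makes the "versioned together" property fall out for free.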
There was no OpenStack plugin for Jenkins, or at least nothing very viable. There were a number of other options which were semi-supported or at a better stage, for example the jclouds library, or a Java library called OpenStack4j which was being developed and which we worked with. But the biggest problem with all of these solutions was that none of them supported Heat. Everybody would do a nova boot and spin up an instance, but nobody supported Heat, and certainly not Heat with nested stacks underneath, which is what we have in our environment. We also needed an end-to-end solution that handles the packaging and putting everything together, in addition to the stack creation. As we said, the plugin at the moment comes out of the box with support for our Nexus Maven repository. It can also be adapted and extended to support Artifactory, which is one of the things we're going to do very soon; and since it's going to be an open source project, if anybody would like to contribute, you're more than welcome to extend it to whatever kind of repository you want, and to extend the project as well. The plugin includes two build steps that we use to define jobs. In the first, we define what our product is: which components it includes, what other dependencies it has, and what the Heat template is. When we execute this job, when we build it, we generate a product artifact. Then we define the deployment, and when we execute the deployment job, the product is deployed on OpenStack. Let's see what happens in each step. When we define the product, we define, as I said, which RPMs are required and which products it depends on.
Maybe we have two products: one that deploys a database, which may be used by multiple products, and another product that uses this database. So we define the database as a dependency of, let's say, the website product, and when we deploy, we deploy both products. We also define which Heat template is used and where to take the Puppet scripts from (the Puppet scripts which are not in Nexus together with the RPMs). When we build this product, here is what happens in Jenkins. First, we resolve the versions. When we define which components we want, especially in CI scenarios, we usually say "get latest" or "get latest snapshot", so first we resolve what the latest version is. We can also use fixed versions, of course. Then we download all the dependencies, the ones that are not in Nexus. We generate three artifacts: a POM file that describes the product; a tar file that includes the Heat template and also the external dependencies, which YUM will use if there are RPM dependencies; and an RPM which includes all the Puppet scripts used for the installation. Then Jenkins uploads these three artifacts to Nexus. We'll see all of this in a demo in a few minutes. We then define the deployment job. When we configure the deployment, we configure three main things: what we want to deploy, where to deploy it to, and which YUM repository to use for the deployment. What to deploy: the product and the version we want to deploy now. Where to deploy: we put in all the OpenStack credentials and all the parameters the Heat stack requires. And then the YUM repository: in order to do the deployment, we put all the RPMs in a YUM repository and generate the metadata, and all the VMs which are created have their YUM installer configured to point to this YUM repository and install from it. When we deploy to OpenStack, we first check whether the stack already exists in OpenStack, and if so we delete it, using REST calls.
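The "get latest" resolution works off the repository's Maven metadata. A rough sketch, with invented sample metadata parsed from a local file instead of fetched from Nexus over HTTP as the plugin would do:

```shell
#!/bin/sh
# Sketch of resolving "latest" for a component. The metadata content below
# is invented for illustration; real metadata comes from Nexus at
# <repo>/<group-path>/<artifact>/maven-metadata.xml.
cat > maven-metadata.xml <<'EOF'
<metadata>
  <groupId>com.example.product</groupId>
  <artifactId>nginx-component</artifactId>
  <versioning>
    <latest>1.6.0-1</latest>
    <release>1.6.0-1</release>
    <versions>
      <version>1.5.0-3</version>
      <version>1.6.0-1</version>
    </versions>
  </versioning>
</metadata>
EOF

# Extract the <latest> element; a real implementation would use an XML parser.
LATEST=$(sed -n 's:.*<latest>\(.*\)</latest>.*:\1:p' maven-metadata.xml)
echo "resolved version: $LATEST"
```

Once resolved, the concrete version (never "latest") is what gets written into the generated product POM, so the packaged product stays redeployable.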
We download all the artifacts from Nexus. We take the dependencies from the tar file as well, move all the RPMs to the YUM repository, load the stack, and set the parameters, the ones we have set in Jenkins. Then we make an OpenStack REST call to create the stack, and wait until the stack is complete. For stack creation, we have a few limitations, a few conventions we require from the stack in order for us to be able to load it. (Again, this is a part where you may contribute and make it more generic.) When we clone the Heat templates from a Git repository, we allow multiple Heat stacks in the same repository, so we expect a folder named after the product, with two or three files in that directory. If the product is called xx, as in the example, we expect the main YAML file to be xx.yaml, the environment variables to be in xx.env.yaml, and then another directory, the xx.yaml dependencies, with all the rest of the YAML and bash files and whatever else is needed. In our proposal, the modular approach, these dependencies include a few bash files. One of them is generic and is used in all our stacks; it is the bash file called for every VM we create. What that script does is install Puppet and all its dependencies, then install the RPM containing all the Puppet scripts, and then call puppet apply to apply all the Puppet classes we want installed on this virtual machine. So these are the requirements on the structure of the Heat stack. When we create the stack, we load these files, parse the YAML, replace the parameters according to the parameters defined in the Jenkins job, and then make the create-stack REST call to OpenStack.
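That generic per-VM bash file might look roughly like the sketch below, assuming a RHEL/CentOS guest. The RPM name and the class variable are placeholders, not the plugin's real names, and to keep the example self-contained we only generate the script here rather than run it:

```shell
#!/bin/sh
# Sketch: generate (not execute) the generic bootstrap script described in
# the talk. "product-puppet-scripts" and $PUPPET_CLASSES are hypothetical
# placeholders that Heat would substitute per VM.
cat > bootstrap.sh <<'EOF'
#!/bin/bash
set -e
# 1. Install Puppet and its dependencies from the deployment YUM repository.
yum install -y puppet

# 2. Install the RPM that carries all the Puppet scripts for this product.
yum install -y product-puppet-scripts

# 3. Apply the Puppet classes this VM should run, masterless, with the
#    class list passed in by Heat (e.g. via user data / Hiera).
puppet apply -e "include $PUPPET_CLASSES"
EOF
chmod +x bootstrap.sh
echo "generated bootstrap.sh"
```

Because every VM runs the same generic script, adding a component only means shipping its Puppet classes in the RPM and naming them in the template.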
This might all sound a bit like Chinese or Japanese (I don't know how many of you speak those), but when we get to the demo it will become a lot clearer. As we said, we have a number of components which make up multiple products, which together form a solution. Each product, or really each component (it was divided in different ways), could have its own Heat template, so we would have multiple Heat templates that were dependent on one another. So, as we said, we needed to nest stacks, in other words to pass information from one specific stack to another, in order to deploy a full solution and test it with a continuous workflow, to see whether we were doing things correctly. For example, one of our applications might be a web application and another a database application. We need to pass the parameters between the two different stacks to allow them to talk to each other and to test the full product in the end. That's something you'll be able to see in the demo. In the product-create step you can define product dependencies in the workflow, so it knows exactly in which order the products should be deployed, one after the other, in order to get the stack built correctly. And of course, the thing that manages the whole workflow is Jenkins: it deploys them one after the other and passes parameters between the two different stacks, the two different Heat templates; in this case, two different stacks specifically. We take output and input parameters, and one of the requirements for this to work is that the output parameters of one stack are named identically to the input parameters of the next stack.
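In CLI terms, that output-to-input wiring might be sketched as below. This is a dry run: all stack, output, and parameter names are invented, and the openstack CLI is stubbed out so the sketch runs anywhere (the plugin itself uses the equivalent Heat REST calls):

```shell
#!/bin/sh
# Dry-run sketch of wiring two Heat stacks together.
openstack() {
  case "$1 $2" in
    "stack output") echo "10.0.0.42" ;;               # pretend DB address
    *)              echo "would run: openstack $*" ;;
  esac
}

# 1. Read the database stack's output, db_ip...
DB_IP=$(openstack stack output show db-stack db_ip -c output_value -f value)

# 2. ...and feed it to the web stack's input parameter OF THE SAME NAME.
#    Matching names are what let Jenkins wire one stack into the next.
CREATE=$(openstack stack create web-stack -t web.yaml --parameter "db_ip=$DB_IP")
echo "$CREATE"
```

The naming convention replaces any per-pair mapping configuration: Jenkins can simply copy every output of stack N into the identically named input of stack N+1.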
So, for example, if I had an IP address variable that was output from my first stack, I would pass that same variable, and its value, into the next stack and continue the deployment. That lets us do the wiring between two different stacks. Now, we pre-recorded the demo because we're afraid of network issues, not because we're unsure of the stability of OpenStack and our environment. This is almost the only cheating we've done, and I'll be very frank about the cheating. Let me get the right file up; can everybody see? Okay. Even when you record the demo, there are issues. So, in this demo we'll define a very simple product that has only one component, which is NGINX. We'll start by looking at NGINX in the Maven repository. You can see here that we have two artifacts: in addition to the RPM, we also have a tar.gz file that includes the Puppet scripts. By the way, we have a Puppet Maven plugin that generates this Puppet artifact; we'll upload it to open source shortly. Let's look at the Puppet file. It's a very simple class that just installs the package and then changes the landing page, index.html, a little. It uses a parameter, nginx_user, which we'll set; we'll see how we pass it from Jenkins to the Heat template, to Puppet, to the deployed application. So this is the RPM and its Puppet script.
Let's now go to Jenkins. First we create a folder, just to keep things nice and clear; we'll call it demo. Then we define our job, which will be called create, a freestyle project. Here we add a build step, the one the plugin provides, which is "create product", and we can see what parameters it needs. First we give it a name, which is the artifact ID (we'll call it demo-product) and a group ID; we set a group ID that Maven and Nexus know. Then there are three Git repositories. One is for the Heat template, which is required, and it's expected to follow the structure we've seen. One is for additional deployment scripts; we don't need it right now, but if you use bash, you can use this repository. And one is for Puppet scripts: you can put all your Puppet scripts here if you're not modular. I'll pause for a minute: you can also put here only the shared libraries, the Puppet classes which are shared between components, and then put your product-specific classes in Nexus. We'll now continue. We can add dependent products, but we don't need any. Now we add artifacts. We need only one artifact, so we set the group ID and artifact ID, and we set the version to "latest"; we could have set a specific version too. And if we had more dependencies which are not in Nexus, we could add them there; they need to be accessible over HTTP, and they'd be packaged into the same tar file as the Heat template. We don't have any dependencies here, so that's all we need. Let's now build this job and look at Jenkins's console output. First, we find what the last version was, and then we increase the version; you can define which part of the version to increase.
The next thing we do, as you can see there, is resolve the version of NGINX; it's 1601, as we've seen. We then download the Git repository that we defined and take the tar file. We don't download the NGINX component itself here (that happens during deploy), but in order to create the Puppet RPM, we collect all the Puppet scripts from the components, which in this case is only this one. We then generate the three artifacts: the tar file with the external dependencies, which includes the Heat template (and if we had had dependencies, they would have been there too); the POM file that describes the dependencies; and the RPM, which includes all the Puppet scripts, the ones from the Git repositories and the ones collected from the components. And that finishes this job. Let's now look at Nexus to see the product we created. We can see the three artifacts uploaded to Nexus: the POM file, the RPM, and the tar. The POM file defines the product (group ID, artifact ID, version), and the important part here is the dependencies. We have only one dependency, NGINX, and we can see the exact version, not "latest", because this is the product that we want to deploy. Everything has to be specific, not latest: we should be able to redeploy it even when we have newer versions. The next thing is the tar.gz file, which includes the Heat template. Let's open it now; it will take a few clicks. So this is the Heat template; let's open the main YAML file. You can see the naming pattern that we talked about: demo.yaml, demo.env.yaml, and the dependencies directory. In the YAML file, we can see the standard parameters, the resource group and the floating IP that we define, and then the web-service virtual machine we create.
And we can see here that the nginx_user parameter, which we've seen in the Puppet script, is passed from the Heat template to the bootstrap script, which puts it into Hiera data to be used by Puppet later. If we had more dependencies, they would also be in this tar file, in a repo folder, to be used later in the deployment, but that's all we have here. The RPM file includes the Puppet scripts. Let's open it too, just to see our beloved NGINX Puppet class; we'll need fewer clicks here. We can see in the modules that this is our NGINX module, along with some more classes that we had in the Puppet base Git repository. So this is what we have in Nexus. Let's go now and create the deployment job. We have a packaged product now, and we can always go back and deploy it. So we define a new job called deploy, another freestyle job. We need a build parameter here, because we don't want the version to be part of the configuration: each time you deploy, you want to deploy a different version. So we define a build parameter, a string parameter we'll call version, and then we add the build step, which is "deploy product". There are three things we have to define: what to deploy, where to deploy it to, and which YUM repository we want to use. What to deploy: we put the group ID and artifact ID here; the version will be passed in as a build parameter, as we've seen a minute ago. Where to deploy to: you can see our credentials here, and you can also test them. We have the DNS server; Jenkins also creates the DNS records if required. Then we can add advanced parameters. We can see here the standard parameters, the subnet ID and private network ID, and we can add our own user-defined parameters. In our case, as you remember, we had the nginx_user parameter, so we add it here and set the value to "Meish", in honor of my colleague.
We need another user-defined parameter: since we use a floating IP, we need the public network ID, so we set that here too. In addition to the user-defined parameters, we now have a few more definitions: which image we want to use, which flavor, and, if we have a scaling group, the min and max. We'll add an availability zone now. The names are added as a prefix to the resource names, a requirement of our environment, but we don't need it here. And we set the YUM repository: as part of the deployment, all the RPMs are moved to a YUM repository, the metadata is generated, and all the VMs we create use the artifacts from there. In order to push the artifacts to the YUM repository we need the private key, so we put the private key here; you can read it really fast and then break into our system. Okay, that was our mistake. Let's now build. When we build, we have to give the version number (it's a build parameter), and we set it to 25, the version we just created. Now let's look at what happens in the console. The first thing we do is delete the stack if it exists; in this case it didn't exist, so we see the status is undefined. We download the product artifacts, the tar file, the RPM, and the POM. We then extract the tar file to get the Heat template and the dependencies. We download the RPMs; we have only one RPM, the NGINX one, so we download it now from Nexus. Then we move all the RPMs, the Puppet-scripts RPM, the NGINX RPM, and the dependencies if we had them, to the YUM repository, and we configure the metadata. We load the stack, override the parameters (the user-defined ones and all the parameters we've set in the job), and then we call the OpenStack create-stack API. We can see that it has started, and now we're polling OpenStack every minute to see when it's done. Let's go now to the YUM repository.
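That wait-until-complete step can be sketched as a simple polling loop. Here the status check is a stub that pretends the stack finishes on the third poll; the plugin queries the Heat API for the real status, and a real loop would sleep about a minute between polls:

```shell
#!/bin/sh
# Sketch of the "wait until the stack is complete" loop with a stubbed
# status check (a real one would call the Heat API and sleep ~60s).
POLLS=0
poll_stack() {
  POLLS=$((POLLS + 1))
  if [ "$POLLS" -ge 3 ]; then
    STATUS=CREATE_COMPLETE
  else
    STATUS=CREATE_IN_PROGRESS
  fi
}

STATUS=CREATE_IN_PROGRESS
while [ "$STATUS" = "CREATE_IN_PROGRESS" ]; do
  poll_stack
  echo "poll $POLLS: $STATUS"
done

# Any terminal status other than CREATE_COMPLETE fails the Jenkins job.
[ "$STATUS" = "CREATE_COMPLETE" ] && echo "stack complete - job succeeds"
```

Mapping the terminal stack status directly onto the Jenkins job result is what lets the rest of the pipeline gate on a simple pass/fail.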
We can see here the new RPM, version 25, which includes the Puppet scripts, and also the NGINX RPM that we moved to the YUM repository. So when we do yum install, after we configure the VMs, they'll pull the right components. We'll now go to Horizon to see the stack; I know it's an old version. We can see the demo product, now in progress. We go back to Jenkins, and this is the other bit of cheating that we did: I skipped about two minutes here, but that's it. So in a few seconds we'll see that it succeeded, though it actually took about three and a half minutes. Of course, if the stack fails, then the job fails. We can see here that the stack is complete; therefore, the job succeeds. We'll move now to OpenStack, where we can see that the stack is complete. If we check the output parameters: I set an output parameter to the URL of NGINX, and when we go to this URL, we see "Hello, Meish", the value we passed from Jenkins to Heat to the Puppet scripts. And this is it. Thank you very much, Naama. And of course, as we said, we preferred not to do this live, given the success record of the demos this morning in the keynotes; I don't think we wanted to risk it so much. (Which we knew in advance, of course.) Going back to our presentation for one second: you can find the plugin over there. It's open source; download it, and please contribute. And of course, these links and the presentation will be posted after the session. It usually takes approximately a day, but they'll be available, and the presentation has additional screenshots describing all the processes without the actual video, so you can see more or less what is going on. The second link is Cisco's general GitHub repository, where we have a decent amount of work going on around OpenStack and a number of other projects as well.
I would advise you, if you have the time, to go through it and see if there's something you can find useful, and also, since it's open source, perhaps contribute back as well. There's one fix I want to push, so wait one day before you pull, or before you send me bugs. One of the things we hope to add to the plugin is support for Artifactory. This will mean a few changes in the UI too, because Artifactory can act as both a Maven repository and a YUM repository; so if you use Artifactory, you won't have to define both, and it will be a little bit easier. We hope to do that in a few weeks. So we have a few more minutes left before everybody goes to have a nice evening. If you have any questions, please feel free; it would be better if you can come up to the microphone for a second so everybody else can hear, and it will be caught on the recording as well. Thank you.

Hi. I've been working on trying to put together something similar to this. In particular, we have Nexus also, and I noticed that recent versions at least advertise that they can host YUM repositories as well. I was wondering if you had actually tried that or seen it.

No, I didn't know that, but it's a very good idea. It would make the Artifactory work even better, because it would be the same implementation for both.

And I had another question. Is there a particular reason why you packaged the Puppet code directly rather than using... I'm assuming you're using masterless Puppet?

Yes. That was a requirement of our organization. People did not want to support a Puppet master; they said that within your VM you can do whatever you want, but no other infrastructure would be supported. And this masterless method really worked well for us. You can do whatever you want on every node, and you can install multiple instances with different versions. So I think it worked well. I know that the WebEx project also used masterless Puppet.
And one more clarification: the requirement was actually that we were not allowed to assume there would be a web server we could pull files from, through curl or any kind of file download. Everything has to be packaged in a repository as an RPM, and you install it as an RPM. This is why we packaged the Puppet code as an RPM and not just a tar.gz file. We could do a git clone inside the VM, but the assumption is that from the VM you don't have access to any internal repository. The rule in our organization was that the only thing you could use is the RPM install; no other access.

Yes, you had a question, sir? I'm just curious. Now that you have Jenkins and you're able to effectively package your application, create a new Heat stack and VMs, and deploy onto them, I assume you're going to the next step, where you run scripts or some kind of automation to do testing on it. And I'm curious whether there's a follow-up where, if the testing succeeds, you delete the VM. Is that something you'll be doing, not for this particular plugin, but something you might be doing already?

Right. For the tests which are being written right now to run against the new stack, we decided not to kill the stack immediately after the tests, but to leave it alive until the next run. So if any test fails, the developer will be able to access that same instance, check the logs, see what's wrong, and rerun the same tests. So there are two options: either kill the stack at the end of the tests, or kill it at the start of the next run. That's why, at the beginning of the deployment job, we check for the existence of the stack and then delete it.

Yeah, I actually suggested this exact same thing at our company, where we're trying to do something similar. I'm definitely going to... I'm glad. Any more questions? You can actually use Jenkins to trigger the deletion at the end of a successful build of the packaged products.
So that's one way of validating that whatever we have been testing is actually good to go and ready to deploy to any other environment. Right. What we didn't show here is that we have a workflow job. There's a workflow plugin for Jenkins that runs all of the jobs in order: it first creates the product, then deploys it, then runs the tests, and then, if the tests succeed, it promotes the product. When we upload to Nexus, we first upload to a staging repository; then, if the tests fail, we drop the artifact, and if they succeed, we release it. So these are other build steps which exist in the Jenkins plugin; we just didn't show them now, as it would have been a little too much detail. But yes, we do the same thing. I don't know if it's good or bad; I'm glad that we do the same thing, though it's a pity that we repeat the work. That's a surprise, yeah. So thank you, everybody, for joining us. I hope it was useful; enjoy the rest of your evening.