OK. Good morning to everyone who just woke up, like probably me; good evening or good afternoon to the rest of you. My name is Miguel Cinega, and I'm going to be talking about continuous integration and deployment using OpenStack. In December, eBay recruited me from EA to join the cloud team, so I've been working on this for the past four and a half months. Today we're going to go through how we're planning on doing it. The presentation is mostly about an application that I'm developing to deploy, well, pretty much any application. At the beginning this was for internal use: we needed something that could deploy OpenStack, do continuous integration and everything, modeled on the OpenStack CI. But after that, some other teams got a little bit interested, so we tried to make it more generic. So let's get started. You know the marketing slide, same as always; I'm pretty new to the company, so what you see there is what I know about these guys. Let's just skip it, since I have a lot of slides and don't want to run out of time. The agenda is pretty much what you see there: an intro to Fluo, which is the name of the application, the Fluo configuration file, code replication, packaging and artifacts, distribution of packages, infrastructure as code, deployment to production, screenshots, and a little bit of the roadmap. So, first of all, the flow from beginning to end. From what I've seen in my experience, this is what most people have, and that was true until Google released Gerrit. Before that, it was just Jenkins, and you pretty much deployed stuff onto a web server or a Nexus repository, something like that. As you know, Jenkins is really tied to Java projects, so a lot of its options are Java-only, although there are plenty of plugins out there. So what we needed came down to a few requirements. The first was a single interface.
That way developers are not jumping from one system to another trying to figure out where or how to use it. It also had to be scalable, simple to use, developer friendly, and generic, so that most other developers could jump in without any problems. So we decided to build Fluo, which is pretty much a web UI and an API, a single pane of glass. It's a cloud instance provisioner and decommissioner, it allows users to configure the system on the fly, and it has role-based access configuration. How it works: you can see Fluo as the management piece sitting between Gerrit, Jenkins, and a bunch of other things you'll see in the middle. It uses ZeroMQ to talk to the components that don't have an HTTP API. It has a supervisor model in the back end that it uses to coordinate all these services. Any time a developer says, you know what, I want to create a new job in Jenkins, or I want to add another step to the flow I'm using, and the component involved isn't API-accessible, we basically just tell ZeroMQ to put it in place: the workers go and do whatever is needed on the server that runs that component and reconfigure everything on the fly. The other thing is that Fluo provisions and recycles the servers. Why are we doing this? One thing I've seen is that most people have their development infrastructure, their testing infrastructure, and then production or QA on top of that. Three environments is a lot of money for some companies, so a lot of them don't even bother; they just go straight from code to production.
Or, as soon as something in development becomes useful, out of nowhere that server becomes production. So why are we using OpenStack? Because we can just provision a VM, do everything on it, run the continuous integration, run the unit testing, run everything and pull the results out of it, and as soon as it's done, destroy the VM. That's what this piece is doing: telling Jenkins, you know what, I've got a new node here that I need you to plug in, and after you're done, go ahead and destroy it. Now, back to the configuration file I'm showing there. If you don't already know it, there's a service that works with GitHub called Travis CI, which I love; it basically lets you create a YAML file, put it in your code repository, and it does everything for you: runs your unit tests and all that. But what happens when you need to do that somewhere that's not on the internet, or somewhere private, or where you don't have internet access? Well, that's the feature we're putting in its place. So what is the configuration file? It's just a specification-of-requirements file: through it, you instruct the system on what needs to happen. We'll see an example right now. So this is an example of the configuration file. As you can see, you can specify which language you're going to be using. If you're just going to run scripts, and the scripts are wrappers around whatever you're really running, you can just tell it bash. Or you can tell it, OK, I'm using Java 1.7, or I'm using Python, or something else. Any language works, as long as the interpreter is in the VMs; they're going to be set up for it. Now, the packages section.
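To make this concrete, here's a sketch of what such a configuration file might contain, written as a Python dict purely for illustration. The real file is YAML, and the key names here (package_install, before, script) are my assumptions, not Fluo's actual schema.

```python
# Hypothetical shape of a Fluo-style flow configuration file, shown as a
# Python dict for illustration only.  Key names are assumptions.
flow_config = {
    "language": "python",                               # or "bash", "java-1.7", ...
    "package_install": ["mysql-server", "python-pip"],  # installed before the job runs
    "steps": {
        "review": {
            "before": "scripts/start_mysql.sh",         # preparation for this step
            "script": "tox -e py27",                    # what the step actually runs
        },
    },
}

def commands_for(config, step):
    """Order of execution for one step: packages first, then the before
    script, then the step's own script."""
    cmds = ["install " + p for p in config.get("package_install", [])]
    s = config["steps"][step]
    if s.get("before"):
        cmds.append("run " + s["before"])
    cmds.append("run " + s["script"])
    return cmds

print(commands_for(flow_config, "review"))
```

The ordering function just encodes the sequence described in the talk: dependencies go in first, preparation runs next, and the actual test command runs last.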
Basically, that covers everything: you can tell it apt, yum, gem, or pip. It supports RPMs, Debian packages, gems, and pip at the moment. All the things you specify under package install get deployed before your job executes. Now, for each of the steps in your flow, you can have a before-&lt;step name&gt; script as well as the step's script itself. What the before script does is tell the system, before you execute anything at all, go ahead and, I don't know, start MySQL, or load something into the database that the unit tests are going to need afterwards. After that, the system goes through the step's script, which is basically what you're going to run, and you specify it there. OK, if I'm developing OpenStack: tox -e py27, something like that. If you're developing something in Rails or in Ruby, run rake. In any other language, just specify it there, and the system will execute the scripts one by one in that order. At the same time, it's not restricted to running serially: if the system sees, OK, this guy is running unit tests and this other guy just committed something to the repository, it will just spin up another VM and start executing the other job in parallel. And at the end we have the notifications section, which just sends something out afterwards; right now I have email and HTTP hooks, and we'll probably wire up some other kind of notification system as well. But that's pretty much the flow configuration file. So let's keep moving, I'm mindful of the time. This is the basic architecture of the application. I thought it was going to be a little bigger, but it's not. As you can see, we have the components right there.
There's the Fluo application, and pretty much cloud-provisioned instances. We have Gerrit, we have Jenkins. Right now we're relying on Zuul, from the OpenStack CI, to scale out the way we're executing all the jobs. We're hosting everything on Galera MySQL, a basic three-node configuration; if something dies, you still have two of them. It has integration with Zabbix, so you can monitor anything you want in there. And we're using Puppet at the moment; the reason we went with Puppet is that it was already there when I came in. It wasn't a choice between Puppet and Chef or something like that; the company was using Puppet. And for the repositories, Debian packages and RPMs, we're using mrepo and pretty much rsync to sync things around. So let's go through the flows. What are these things? By default the system provides generic flows, but you can go ahead and create your own. What is a flow, basically? The basic flow has five, well, kind of six steps. There's review, which gets activated as soon as somebody commits a new change into Git, into the repository. There's approval, which gets activated when somebody clicks "I approve this change" in Gerrit. There's integration, which you can call at any time just by putting a comment into your own Git history, or by setting up a comment in Gerrit and triggering it that way. And we have release and periodic. What the system does at each of those five steps is this: it builds a cloud instance, reads the configuration file, executes the scripts defined there, reports back, and destroys the instance. How are we doing this? We're pretty much just talking to Nova. That's it; there's nothing fancier down there, no magic trick.
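That per-step lifecycle (boot an instance, run the configured scripts on it, destroy the instance no matter what) can be sketched roughly like this. The client here is a stand-in I made up for illustration; the real system talks to the actual Nova API.

```python
# Sketch of the per-step lifecycle described above.  "nova" is any object
# with boot()/destroy(); in the real system that would be the Nova API.
def run_step_on_instance(nova, image, run_scripts):
    """Boot a VM, execute the step's scripts on it, always tear it down."""
    server = nova.boot(image)
    try:
        return run_scripts(server)   # report results back
    finally:
        nova.destroy(server)         # the VM is disposable: destroy when done

class FakeNova:
    """Stand-in client so the sketch is runnable without a cloud."""
    def __init__(self):
        self.alive = set()
    def boot(self, image):
        self.alive.add(image)
        return image
    def destroy(self, server):
        self.alive.discard(server)

nova = FakeNova()
result = run_step_on_instance(nova, "ubuntu-vm", lambda s: "tests passed on " + s)
print(result, nova.alive)  # tests passed on ubuntu-vm set()
```

The try/finally is the point: whether the unit tests pass or blow up, the instance gets destroyed, which is what makes three permanent environments unnecessary.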
It's basically just telling Nova through the API: go ahead, build this thing, keep it running for some time, and there you go. On the right side you can see which actions trigger each of those steps. Now, one thing I've noticed a lot is that a lot of people don't actually do reviews. And I've got to be honest: before jumping to eBay, I was with Electronic Arts, supporting the application teams, which were around 150 different teams all working together to come up with games, trying to get things standardized. When you have an environment where this team is doing its own stuff, and this team is doing its own stuff, and this team is doing its own stuff, the best and easiest way to get control of everything, well, not control exactly, to come out with quality code, is to have a review system. I know some people will say, that's going to take me a lot of time, having somebody specifically saying, yeah, I approve this thing, I approve this thing, I approve this thing. But trust me, it's going to help you out a lot, especially if you have a huge environment. So, about having quality code with this basic workflow: like I mentioned, you can go into the system and create your own. If you don't want review or approval or whatever, just remove it, or just create a new flow. You define the name of the step or stage, which is basically its name inside the application, and the system will say, OK, this user is using a three-step review process; that's fine, I don't care. In the configuration file you just specify it, like I said before, the way you see it here in the example.
If you have three steps, put three in there; if you want to skip one of the steps, just don't put anything. It's simple enough to skip steps or not. One thing is that, since Gerrit is behind everything, as soon as the report comes back from Jenkins, it will basically go ahead and say, OK, I approve this change, or I don't. So in order to have quality code, at least following the whole basic flow, this is what you need. You need a review step, which gets triggered as soon as the developer or somebody else commits or revises a change in Gerrit. The approval step requires somebody, probably the project lead, to actually go and approve it. And then the build step. Why the build? I'm putting it there because the build step gets activated as soon as Gerrit says, you know what, we're cool with this, let's merge it. Most of the time you can use the build step for, I don't know, creating packages, or sending out notifications, or something else. The rest of the flow, like I said, integration, release, and periodic, you can trigger any time you want. The periodic step is basically just like a cron job, executing at, say, 2 a.m. every day. The way we're using the periodic step at the moment is to pull down changes from the OpenStack projects every single day and merge them. What happens is that as soon as the periodic step gets activated, we pull down the source code from OpenStack and try to merge it with our repositories, and if something goes wrong, that step sends out a notification saying, OK, you know what, developer John, go ahead and fix this; I couldn't merge last night.
So like I said, it's a bunch of tools, but in order to have quality code, at least the three steps up there are needed. One thing I've come across a lot before is that when you put a Gerrit review system in place and somebody is in a rush, they'll probably just go ahead and click approve, approve, approve, approve. That's not really the way to go, you know? The system here will basically tell you: on the review step, you have Jenkins running the jobs, and it will come back and say, I give this a plus one, this is OK. However, to get into the approval step, the way we configured the system, you need to pass Jenkins as well, Jenkins has to give its approval, and you also need a human, like the project lead, to go ahead and say, OK, I approve this thing. That way nobody can just click approve and push it through; it actually has to go through Jenkins first and execute all the unit tests, integration tests, or whatever you have specified in the configuration file. So, how are we replicating things, whether it's one project or multiple projects? We set up the system to be able to replicate anywhere you want to put your code. Inside Fluo, when you create a new project, we tell the developer: OK, this is your repository; you can activate replicas. If you want to activate replicas, we just ask the developer: tell me where you're storing them, whether it's github.com, your private GitHub, or a local Git server; just give me the path and which user I'm going to be working with. One thing that's kind of a best practice: you need to categorize your code.
First of all: will it be open source, or is it private to a team, or private to the company? Do we need mirrors? Because a lot of the time you just end up deploying everything into one place, and out of nowhere you've got legal the next day asking, why did you put this out there? That's not even allowed. So in companies where you have a lot of restrictions, or where you're working with a really strict legal team that doesn't let you open source everything you want, the best approach is to categorize and say: OK, for this project, this branch goes here; for this other project, this branch goes here and that branch goes there. It will help you a lot, and it will keep your code organized. That's the whole reason I put it here: in Fluo, any project can have as many replicas as you want. You can tell it, I'm going to replicate to GitHub, to these two, three, four different Git repositories; I'm going to replicate internally; and I'm going to replicate to my laptop. That's fine. The other thing you need to keep in mind is that this works better if you just have one user. Say you have a user on GitHub; make that user a contributor to every project you're planning to replicate to. That way you have one single user, and if you need to audit something, everything is going to be there. It's not like, where did this change come from, or who is actually pushing into that? The system will not rewrite the commit history as that user. Literally, the system just grabs all of the Git history you're pushing, including users, authors, and everything else, and puts it inside your replica.
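A minimal sketch of that kind of replica push, assuming plain git is available on the replication host. The repository path and remote URLs are made up for illustration; `git push --mirror` is the standard git mechanism for copying a repository's full history, authors and tags included.

```python
# Sketch of replica handling: for each replica the developer registered,
# push the full history with git's mirror mode, so the replica looks exactly
# like a clone.  The runner is injected so the commands can be inspected
# (or swapped for subprocess.run in real use).
def replicate(run, repo_path, replicas):
    for url in replicas.values():
        run(["git", "-C", repo_path, "push", "--mirror", url])

issued = []
replicate(issued.append, "/srv/git/fluo.git", {
    "github":   "git@github.com:example/fluo.git",
    "internal": "git@git.internal:mirror/fluo.git",
})
print(len(issued))  # one push per replica
```

Because mirror mode pushes refs verbatim, the commit authorship survives intact, which is exactly the audit property described above.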
It's not a situation where you say, OK, the system is committing everything as itself and we don't know who actually made this change. No, that's not how it works. It basically grabs every single author from the Git history and puts it in there, just like if you were replicating or cloning your Git repository. So let's move on to packages; I'm already halfway through. OK, packages, artifacts, whatever you want to call them. One thing that will make your operations teams really happy: don't deploy code from source. Don't ever do it. It's just wrong, you know? Package managers exist on these systems for a reason. It's not supposed to be, oh yeah, git pull, I have the new code there, there you go. No, trust me, your life will be easier if you say, OK, I created my package. So where and when do I create it? One thing to consider, if you're using this application, is that Jenkins will be building the packages. So if you need something in the build environment, you'll have to put it in your configuration file. If I'm building an RPM, I put in all the dependencies it needs, all the BuildRequires, everything the RPM requires; same deal with Debian packages. As I mentioned, you need to make sure every single dependency is in your configuration file under whatever you call the step where you're building your package. You can use the package install section or the before-&lt;step name&gt; script for that. Then, once you have that section in place, you can build the actual package in the script section.
Why are we doing it this way? Because sometimes you say, OK, I'm going to build the package and have everything in the same section, and then something fails; how do you know which part failed? It's easier if you separate things: the before part prepares the system, and the actual script of the build step builds the package itself. Another thing that is really helpful, especially in a huge environment, is to define a versioning standard. Don't say, OK, I'll put something like beta-1234567 and commit that. No. Define something that is understandable for everyone. The most common convention I've seen at other companies, and something we used at my previous job, is putting all the test packages in the format 0.0.x, and that's pretty much it; those are all the test packages. Nothing in testing gets a name like beta or dev or anything like that. If you really need to mark something as "this is for testing" or "this is beta," use the iteration for it, not the actual version. The two main stages we're using in Fluo to build right now start with the build stage, which uses the commit number plus the time in seconds for the package version. So in this case it would be 0.0, then a bunch of seconds, and then the actual package version. Why are we using time here? Because it makes deployment easier: when you deploy through a package manager, the package manager will just say, OK, I need to install this new version of the test package, which is telling me it's newer than the previous one I have here.
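The build-stage scheme just described might look like this in a sketch. The exact field layout (0.0, epoch seconds, commit number) is my reading of the talk, so treat the format as illustrative.

```python
import time

# Sketch of build-stage versioning: test packages live under 0.0.x, and the
# version embeds the epoch seconds so every rebuild sorts strictly after the
# previous one.  The exact field layout here is illustrative.
def build_version(commit_number, now=None):
    seconds = int(now if now is not None else time.time())
    return (0, 0, seconds, commit_number)

def version_string(v):
    return ".".join(str(part) for part in v)

v1 = build_version(41, now=1_700_000_000)
v2 = build_version(42, now=1_700_000_060)   # rebuilt a minute later
assert v2 > v1   # tuple comparison: a later build always wins
print(version_string(v2))  # 0.0.1700000060.42
```

The monotonic ordering is what lets a package manager's "install latest" do continuous deployment for you, which is the next point.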
If you put some crazy name in there and try to update, the package manager won't see it. That's the reason we're putting in the timestamp; you can use a timestamp or whatever you want, but you need to keep the versioning of the packages incremental. That way you can achieve continuous deployment right away: if you have something that is not really critical, say you're just deploying a library, you can tell your Puppet or Chef or whatever, always push this package to latest. As soon as a new package lands whose version is an increment over the previous one, it will go ahead and deploy it without any problems, and you won't have to figure out what the hell is going on in there. For the release stage, before releasing things into production, use Git tags. Git tags are there for a reason, and it's way easier if you just use them. Why? Because tags are precisely for marking something that is really important. A lot of people out there will say, OK, I'm just going to create a new branch, and from now on, oh yeah, by the way, this is the production branch. No. It's easier, and a little more organized, if you use tags, and you can actually figure out where you are at any point: even looking at the Git history, you'll see, OK, now we're on this tag, and this one, and this other one. Also, don't commit tags with names in them; use the same versioning standard I mentioned, which in this case would be something like 1.1.0. So the next step toward achieving all of this is distribution of packages.
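As a quick aside before moving on: the release-tag convention described above (plain numeric versions like 1.1.0, no names like beta or dev) is simple enough to check mechanically. The pattern below is my own encoding of that convention, not something from Fluo.

```python
import re

# Release tags follow the plain numeric standard (e.g. 1.1.0); named tags
# like "beta-1" or prefixed ones like "v1.1.0" are rejected.
RELEASE_TAG = re.compile(r"^\d+\.\d+\.\d+$")

def is_release_tag(tag):
    return bool(RELEASE_TAG.match(tag))

print([t for t in ["1.1.0", "2.0.3", "beta-1", "v1.1.0"] if is_release_tag(t)])
# ['1.1.0', '2.0.3']
```

A check like this could sit in the release step so a badly named tag never produces a production package.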
Shipping the code to different locations. Now that you have everything set up in your really pretty package, how do we get it onto the servers themselves? First of all, use simple things. Use secure object storage if you have it: if you're on Amazon, you can put it in S3; if you're on OpenStack, put it in Swift; if you don't have anything, just do a simple rsync. The way we have it set up right now, all the packages get uploaded to a specific repository within the system itself. After that, if something needs to get replicated, we basically use mrepo to say: there you go, all the data centers need a mirror, grab the latest packages. And we're syncing, or will be syncing, between the data centers; that part is up to you. One good thing is that if you have a really fast development cycle, you can sync every hour or so, so you can pull the packages from pretty much anywhere. Now, where that slide says replication is done every five minutes: that's not really recommended when you're creating a repository of packages. The reason is that if you have five packages, it's not a problem, it will do it on the fly. But if you have 100 or 200 packages, every single time you commit something, a new package gets created and deployed, and the repository has to rebuild the package manager's index. So, like I mentioned, if you have just a few packages, you can run it every five minutes.
But you will soon see: OK, I have this system replicating, and it's creating the index for the package manager, and then another index, and another index, and another, because creating the index inside the repository takes, I don't know, 20 minutes with the number of packages you have. Still, one thing you should do is replicate everything to every single place you have. An example: you have data centers with cloud, you have multiple clouds. Again, this always depends on whether it's secure enough for you, whether it's even allowed, because sometimes you'll say, OK, I want to replicate to S3 out there in the cloud, and then: oh, by the way, I have intellectual property in there, why are you replicating everything? That goes back to what I mentioned before: categorize your code so you know where you're putting everything. The same principle applies here: categorize your packages. You say, OK, this one is PCI, don't throw it out there; mark it PCI and tell it to replicate only to this specific repository. So let's move on: now we have the packages set up in the repository, and here's the cool part, infrastructure as code. This part is a little more for the operations people, but still: at some companies, at most companies really, controlling Puppet, or Chef, or even Salt, requires a little knowledge of the specific language running behind it. One thing is, at the moment we're using Puppet, but it doesn't matter whether it's Puppet or something else: your configuration management should go through the same system itself. Or if you have another review system, put it through that.
Don't let anyone say, oh, let's just change things on the fly. No, that's really bad; it will hurt you. So one recommendation is: put it through the system itself, and let the operations teams say, OK, we're going to push this change. Also put it in Git, or Subversion, or whatever you have; put it in a source control management tool. The reason is that sometimes, in a huge environment, you're trying to figure out what happened. The most common case is that somebody says, OK, you know what, we had downtime last night and it was caused by Puppet; Puppet pushed something. How are you going to debug that? If you want to figure out exactly what happened, one thing that will help you a lot is having the Git history there, which basically tells you: somebody, at or just before midnight, committed a change that deployed, I don't know, a package to a hundred servers. So put it in Git, or whatever source control management you use. Some guidelines we're giving our Puppet coders: write your configuration management code in a service-oriented way. What does this mean? As you know, Puppet kind of applies resources in a random order. I think that's changing in the latest versions, but if you're still stuck on something like 2.6 or 2.7, it will apply them randomly. Writing it service-oriented means you can either use stages in Puppet, or just chained classes, and have three or four main classes: one that installs your packages, one that configures your application, one that does some post-config work, and one that starts the actual service.
So as you can see right there, that's the structure of the module: we have the main classes, then a directory with classes common to everyone, and then the modules themselves, which are the same deal, and we're splitting them up by OS. That way we're building for both of the OSes we have there; the whole idea is to write the code for both. If you're using another OS, just create another directory and put the same code in there. I know that most of the time the Puppet folks will say you only need one manifest to deploy across any OS, and yes, most of the resources are available to work that way, but in cases like Ubuntu versus Red Hat, where it really falls down, the package names change. This way you can keep everything really structured. The other thing: create your common modules and use parameterized classes. That gives you at least a bit of structure when you're deploying, and makes it easier to understand: OK, I have this node, I'm going to deploy this class, which is going to install JBoss, and these are all the parameters I'm passing into it; in the back end, Puppet does everything else. And always test your code. On Chef, for all the Chef guys, go with MiniTest, I think that was the name, I don't remember exactly; anyway, create tests for everything. On Puppet, validate the syntax, and try to keep the syntax clean; there are a lot of links out there you can use. And also run it in no-operations mode. What that last part does is basically tell Puppet: execute everything as if you were actually going to do it, but instead of doing it, just simulate it.
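A minimal sketch of that validate-then-simulate check, assuming the standard Puppet CLI. The runner is injected so the command sequence can be inspected here (or replaced with subprocess.run against a real manifest); `puppet parser validate` and `puppet apply --noop` are the standard subcommands for syntax checking and no-op simulation.

```python
# Sketch of the "always test your configuration management" advice:
# first a syntax-only validation, then a no-op run that simulates the
# catalog without changing the system.
def check_manifest(run, manifest):
    run(["puppet", "parser", "validate", manifest])  # syntax check only
    run(["puppet", "apply", "--noop", manifest])     # simulate the full run

issued = []
check_manifest(issued.append, "site.pp")
print([cmd[:2] for cmd in issued])
```

In real use you'd pass something like `lambda cmd: subprocess.run(cmd, check=True)` as the runner, so a failed validation stops the pipeline before anything touches production.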
Running it that way, you see not only syntax errors but also dependency errors, so you can catch a package that requires something that isn't there at the moment Puppet tries to put it in place. I'm almost out of time, so let's go to deployment now. It's go time. First of all, centralize your configuration management. Put it in a central place and replicate it across the different data centers, clouds, or wherever you have infrastructure. Have mirrors of the configuration management, but don't modify the mirrors; a mirror is just a mirror. Put it everywhere you want, but don't modify the copies. That's the reason you're putting everything in Git: you commit to Git, and you let Puppet git pull the changes from there. Don't say, OK, I have this environment with these specific options, let me edit it in place there; that's just going to give you more problems down the road. Keep it in sync. Usually keeping it in sync via a git pull doesn't take much time at all, so keep it in sync as fast as you can. Otherwise you're going to drift, and you'll have this data center with some specific things in its configuration management and that data center with other specific things, when you probably needed both, because you have an application here and here that require the same package or the same changes at the same time. So keep them in sync, and pull every two to five minutes. Tag the releases in chronological order and keep to the standard. Like I mentioned before, use things like 1.1 and 1.2; don't use things like 0.1-beta or beta-1.1.2.3. That makes it easier for your ops team to say, oh, I've got the latest tag now, instead of staring at 1.13 here and something up there that says 1.beta-production.
This is the final one. OK, so keep it simple, pretty much, and keep in mind that the tag goes into the name of the package. When you build the package itself and you're using tags to create it, just remember the tag is going to be in there, and that's going to help the configuration management tool, and the package manager too, figure out what it needs to go and put in place.

And lastly, but not least: automatic deployment and scheduling. This depends on your business needs. If you have a really tight change-management window, you can create a pre-approved ticket or something like that and let Puppet run on that specific night or at that specific time of day: tell it, "Puppet, on this day, just pull everything from Git and deploy it." Or, if you just need somebody to go and click a button, you can always wake up in the middle of the night and make a git pull inside of Puppet so you can deploy it right away. So it's up to the business needs, whether you like it or not.

One thing that is really helpful: if you have multiple environments, like a testing environment or QA, just set up Puppet to pull the latest. Why do this there and not in production? Because with "latest", using the same versioning standard I mentioned before, it doesn't matter if you push a new package right now and another one in five minutes; Puppet will always go and pull it down, because it's looking for the latest one. That's one of the reasons it's important to keep everything chronological.

And like I said, here are some of the application screens. That's roughly what we have right now.
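The "latest in QA, pinned in production" split described above could be sketched in Puppet roughly like this (the package name, version, and environment names are all hypothetical placeholders):

```puppet
# Hypothetical sketch: 'myapp' and the environment names are placeholders.
# QA always tracks whatever was packaged last; production pins an
# explicit, pre-approved tag.
if $environment == 'qa' {
  package { 'myapp':
    ensure => latest,   # always pull the newest chronological version
  }
} else {
  package { 'myapp':
    ensure => '1.13',   # explicit version for the change-managed rollout
  }
}
```

With `ensure => latest`, every scheduled Puppet run in QA picks up whatever the CI pipeline packaged most recently, which is exactly why the chronological tagging standard matters.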
On the right side, you see all the projects we have set up. The application basically takes all the output from the systems and lets you know: that's OK, that's OK, that's OK. If something failed, it will just say, "OK, it failed at this test." It also lets you see the servers you have and the latest jobs that are running on them. To be honest, that screen is huge; what you see there is just about half of it, but you can actually see a lot of things.

The next screen is pretty much the same deal, except that at the top you start seeing a lot of graphics: how much time it took to run a specific job, how much time it took to run X, Y, or Z for this project or that project. Each developer has their own screen, so they can figure out what's going on when they're running the unit tests.

Another screen is the code review. Like I mentioned, we created the application to make it easier for the developer, a single pane of glass, so everything you get in Gerrit, you get here. You don't need to go into Gerrit and figure out what's going on there; just go into the application and you can review, see the changes, and actually see the files. If you click on one of those files (I wanted to put that in the slide, but I couldn't, because some code was sitting there), you'll see the difference: this is what I had before the commit, this is what I had after it. And as I mentioned, the comments are all the way down there. This is the review part, so right down there you can just click +2 "I approve", +1, 0 "I don't care", -1, -2, whatever, and submit it. So you don't have to be chasing everyone with "OK, I need to go into Gerrit"; just get into the application and do it.
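Behind a single-pane-of-glass review screen like that, those votes map onto Gerrit's REST API. Here is a hedged sketch of what such a call could look like; the server URL, change number, and authentication setup are placeholders, and the endpoint shape follows Gerrit's documented `/changes/` API:

```python
# Sketch of submitting a Code-Review vote through Gerrit's REST API,
# the kind of call a single-pane-of-glass UI would make on your behalf.
# Server URL, change ID, and auth are hypothetical.
import json
from urllib.request import Request, urlopen

GERRIT = "http://gerrit.example.com"   # hypothetical Gerrit server

def review_payload(score, message=""):
    """Build the JSON body for a Code-Review vote (+2, +1, 0, -1, -2)."""
    return {"labels": {"Code-Review": score}, "message": message}

def post_review(change_id, score, message=""):
    # POST /changes/<id>/revisions/current/review records the vote.
    url = f"{GERRIT}/a/changes/{change_id}/revisions/current/review"
    body = json.dumps(review_payload(score, message)).encode()
    req = Request(url, data=body,
                  headers={"Content-Type": "application/json"})
    return urlopen(req)   # real use would need HTTP auth configured

print(review_payload(2, "I approve"))
```

The UI just wires each button to a payload like that, so the developer never has to open Gerrit directly.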
Then there's the listing of templates. What we're doing there is providing the developers something like basic flows. What I mean by a flow is: remember all those stages I mentioned, where you have approval, review, build, and that kind of stuff. We have multiple, well, five now, basic flows that we're putting in there. Besides giving developers the capability of creating their own, we provide some basic ones they can start using right away.

We also expose the output of every single Jenkins job that passes through the system, and, on top of that, a way to jump into the server itself. So in the case where you're saying, "OK, I need to debug something, I need to jump into the server," we basically tell the developers: go here, there's the output of everything that ran, and you'll see the errors at that specific part. And if you need to jump into the server itself, just click on the link. That spins up a web-UI kind of thing. It's not VNC, for the people out there wondering; it's a JavaScript application whose name I don't even remember right now, but it basically allows the developer to jump into the system itself, pretty much a kind of web terminal. And they can go and do whatever they want; we don't care, because in the end the server will just get recycled in the middle of the night. That link is really only helpful if you're not using the automatic provisioning and decommissioning we're putting out there, because otherwise it won't give you access to the server; it's already gone by that point.

As for the roadmap we have right now: like I mentioned, the team is me, and just me. Let's see if we can put out CI/CD as a service, so you can put it in place and let everyone just run with it.
Don't worry about things like, "Oh, I need to put out something specific for this application." I mean, if you're using OpenStack, there are a lot of projects that basically let you blueprint everything you're going to put in the cloud. This application is not going to try to blueprint things: if you already have a server set up, you just specify the IP and the credentials it should use to jump in, and the system will start using it.

Integrate with other tools like Chef and Salt. The reason for this is that not everyone likes Puppet, not everyone likes Chef, and not everyone likes Salt, so we're going to try to give support for all of them. And when I say support for all of them, it's not just "I'm going to create the file for you so you can deploy." It's more than that: right now we're taking information out of the Puppet servers through the API itself, so we can run queries like "where is this node right now," or "give me the latest catalog for this node," or "when was the last run," or "give me X, Y, or Z fact" out of Puppet. So that's what we're going to be looking into.

The other thing is container support. I know that the cloud is fast, really fast, but it's not as fast as containers. Sometimes there were problems where the cloud takes a little more time to spin up the VM, and the problem is that this increases the build time you're working with, or at least the build time Jenkins is working with, because it's waiting until the VM is ready to register as a slave before it actually starts putting things in there. So the other thing we're going to be using is OpenVZ and Docker.
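The container-based approach just described might look something like the image below; this is a hedged sketch, and the base image, package list, and test command are all assumptions rather than the actual setup:

```dockerfile
# Hypothetical sketch of a container image for running unit tests,
# instead of waiting for a full VM to boot and register as a slave.
FROM ubuntu:14.04

# Tools a typical build/test job needs inside the container.
RUN apt-get update && apt-get install -y git build-essential python-dev

# The job mounts or clones the project here and runs its own script.
WORKDIR /workspace
CMD ["./run_tests.sh"]
```

A job would then run something like `docker run --rm -v $(pwd):/workspace build-slave`, which starts in seconds instead of the minutes a VM needs to boot and register with Jenkins.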
So instead of actually creating a VM for every job, you can pretty much have your VMs sitting there and just deploy containers onto them, and inside the container that's sitting on the VM, run the actual unit tests or whatever your script is doing.

Add block storage as an option, so you can actually use it for that kind of workload, and integrate with Ironic, Cobbler, Foreman, and Razor. The reason for this is that sometimes you say, "OK, I have a huge instance out there, but I'm running my MySQL database tests in there," and that's just not going to work. Whether you like it or not, however fast the cloud is, sometimes you need physical hardware. So we're going to allow users to provision systems, physical hardware, by themselves. And the reason I'm putting all of those tools in there is that a lot of the people here probably don't have OpenStack set up, I mean, not yet. But we wanted to put the application out there so you can use it anywhere: whether you're using AWS, OpenStack, CloudStack, or you just have physical servers, you can go ahead and use it.

Now, the other thing: Jenkins and Zuul. I'm not sure we're going to keep using them. I've got to be honest, Zuul is a really cool project, it's really awesome, and it allows you to pretty much scale Jenkins. But there are things that are already built into Erlang: Erlang and the OTP framework provide pretty much the same deal, or at least something similar to what Zuul is trying to do. So instead of deploying Zuul and deploying Jenkins, we're looking into having just one pretty much huge Erlang application that will be creating VMs, destroying them, and doing everything in the backend. Jenkins is also really difficult to scale, at least horizontally.
Even though you have masters... OK, I'm actually running out of time here. If you have masters, you can just go ahead and add more slaves, but even with that, it's kind of difficult. And as you see, improve the UI; I'm not really a good designer, so that's something I still need to work on. So with that, I'm running out of time here. If you have any questions, go ahead, and thank you, everyone.