Hello everyone, I'm glad to see a big crowd. Quick survey — I know some of you were asked this earlier, but I wasn't watching at the time: how many of you are in the federal space? Cool. How many in the commercial space? And how many are just checking out GitLab and not sure where you fit? Cool. We're going to tell a story about a federal agency — about five years of a GitLab DevOps transformation — and walk you through this agency we partnered with, a bit of our story, the history, and go from there. Quick introductions: my name is Scott Jaffa, I'm an architect focusing on digital transformation with the Validatex Solution Engineering and Transformation Team. We work on modernization with clients, and my focus is a bit more on how people work — Agile, DevOps, SRE, and teaching people how they can deliver better, faster, cheaper. — My name is Zwei, I'm a site reliability engineer at Validatex. My main responsibilities are systems and application deployment, configuration management, and monitoring and performance stability. — Awesome. A little bit about Validatex: we're a mature small business working across the federal sector on mission-critical IT systems — large areas that need reliability and also transformation. We work in all areas of the federal government, from civilian to military to some of the IC; we've got a little bit of everything, and we're also a GitLab partner, so we're happy to talk to any of you about that as well. Quick agenda: we're going to go through a brief history of the project, a little about the approach we took, and where we are today, and then Zwei is going to dive into some of the features, including CI/CD — which Harold talked about in the last presentation — and what CI/CD looks like for Validatex and our customer.
So, 2014: we're starting a new project, and we go in and the client says, hey, build us an enterprise Linux environment. Cool — what do you have now? Here's a Linux CD. Great: greenfield, we can propose what we want to do. They asked us to, so we said there are a lot of DevOps practices we think will be very valuable. Two key ones were infrastructure as code, where all the work we do to build out the infrastructure follows software development processes, and continuous integration, to give them speed in testing. Referencing back to Harold, there are a lot of government controls he just talked about, so I'll skip over that, but you get the idea of the level of work we needed to do. We gave our client our proposal, they said, cool, go for it, so we got started. We were a small team just getting going, and GitLab at the time was also a small company — about 14 employees when we started. So when I meet people from GitLab, they're like, oh hey, did you know so-and-so was the one person working in the US at the time? We've been working with GitLab a long time and grown with them, and we'll talk to that. We decided to go with GitLab for a couple of reasons. One, open source: there was the flexibility to modify — dare I say hack — any little changes we needed at the time, and they really were flexible. And also the price: the cost of entry at the time was very low. So even if it didn't work out, it was a rounding error in our agency's budget, particularly for the small team we were at the time.
A little about what we were looking for. We said infrastructure as code, and we wanted that merge request workflow, but first and foremost we needed source code management — a place to store all our repositories. That was one of the questions we asked the client: they had a version control system, but it wasn't going to work for us. So we needed source code management where we could do a Git workflow with merge requests. A couple of other features became valuable too — we didn't turn them on on day one, but the wiki feature for documentation and some of the task functionality. We've since moved to the Atlassian suite for those two, for a couple of reasons we can get into in questions if people are interested, but at the time, for a small group, they were very valuable. Then obviously CI/CD, which we've talked about already, and finally Mattermost — ChatOps is very valuable. We're only going to touch on that briefly here, but Zwei and I are both happy to talk ChatOps afterward with anyone who's interested. A little about this team: I founded it and worked there for about four years, and Zwei can speak to it today. They do system tools and monitoring, so you can quickly get the idea — systems spread across the environment, thousands of servers — where having code and a deployment model is very important.
Here are a few of the applications they run on Linux to give you an idea: Remedy, Oracle software, Siebel, Confluence. There are management tools — Satellite, CloudForms, OpenShift, and obviously GitLab — and then Puppet is what our infrastructure as code is managed in, all backed by Git. And then for monitoring tools, you can see the list. Like I said, this is software that needs to get deployed everywhere. If you're logging into a machine to install your monitoring agent by hand and repeating that a thousand times, you're never going to keep it updated, it's never going to be consistently managed, and there will always be errors. With an infrastructure-as-code model and that central code, they write it once, it gets deployed across the environment, and away we go — it enables a lot more speed and agility. So, quite a few tools. This team has grown a little, and GitLab has grown too: I checked with Leslie and team yesterday, and they have 825 employees now. The team running this environment went from one person to about 10 engineers who run the infrastructure, monitoring, and some of the core systems management for this federal agency. I know I went quickly through the overview of the environment; the reason is I want to turn it over to Zwei, who's going to talk about the things you're really interested in, which is how we do these things. — Okay, thank you. Like Scott mentioned, managing all the complexity of this environment would be very hard without infrastructure as code and the other automation. But having this automation means we have to manage a lot of code, and GitLab gives us the tooling to manage that workflow. First, we use GitLab as a code repository and for version control; second, we use GitLab CI to run verification, validation, and automated tests.
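As a hypothetical illustration of that infrastructure-as-code model — write the agent deployment once, let Puppet roll it out everywhere — here is a minimal Puppet manifest sketch. The package, file, and service names are placeholders, not the agency's actual code:

```puppet
# Hypothetical sketch: 'monitor-agent' and the config path are made-up names.
# Declaring this class once in the control repo applies it to every node in
# scope, instead of installing the agent by hand on thousands of servers.
class profile::monitoring_agent {
  package { 'monitor-agent':
    ensure => installed,
  }

  file { '/etc/monitor-agent/agent.conf':
    ensure  => file,
    content => "server = monitoring.example.internal\n",
    require => Package['monitor-agent'],  # install before configuring
    notify  => Service['monitor-agent'],  # restart the agent on config change
  }

  service { 'monitor-agent':
    ensure => running,
    enable => true,
  }
}
```

Because the manifest lives in Git, a one-line change to the config file rolls out consistently across the environment on the next Puppet run.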
We also use GitLab CD to deploy to every environment we have — development, testing, staging, and production — everything from GitLab. I'll go over how we actually implement those features in our environment. Our team has a group of GitLab projects where we store and track various files for our applications and servers, along with all the modules and code libraries for our configuration management, as well as hardening scripts, firewall rules, and code libraries. Since we use Puppet as our configuration tool, it's essentially the code repo that feeds our Puppet master — but even though we use Puppet, the same processes should work with any configuration management tool. This is an example of the GitLab repo layout we use — not exactly ours, just a simplified version. So we have GitLab repos and GitLab projects; how do we actually use them? We use a simple GitLab flow: a production branch for the production environment, master for the staging environment, and new feature branches created for developing and testing new work. We started with different workflow variations, but we came to the conclusion that keeping it simple makes it easy to manage, easy to maintain, easy to troubleshoot, and easy for the users who work with our GitLab. I'll give a brief overview of our GitLab flow, starting from a feature branch and going all the way to deploying to production.
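That simple GitLab flow can be sketched with plain Git commands. This is a toy, self-contained example — the branch and file names are made up, and local merges stand in for the merge requests that, in real use, carry the peer review:

```shell
set -e
# Hypothetical sketch of the simple GitLab flow: master tracks the staging
# environment, production tracks prod, feature branches hold new work.
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email 'dev@example.com'
git config user.name 'dev'

echo 'include base' > site.pp
git add site.pp && git commit -qm 'initial commit'
git branch production                    # production mirrors the prod environment

git checkout -qb feature/firewall-rule   # new work branches off master
echo 'include firewall' >> site.pp
git add site.pp && git commit -qm 'open required firewall port'

# In real use these merges happen through GitLab merge requests after peer
# review; locally they just illustrate the promotion path.
git checkout -q master
git merge -q --no-ff -m 'Merge feature into master (staging)' feature/firewall-rule
git checkout -q production
git merge -q --no-ff -m 'Merge master into production' master
git log --oneline -1
```

The promotion is always one-directional — feature branch to master (staging), then master to production — which keeps the history easy to reason about.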
First we clone the master branch, then create and check out a new feature branch from master and work on the code: we add the changes and commit early and often, just like good Git practice, and keep iterating until we're satisfied. When we're ready, we push the feature branch back to the origin, and that's where we can start running tests in our test or dev environment. Once we're good in the test environment, we merge that feature branch over to master, where we can run it in the staging environment: we submit a merge request from the feature branch to master, somebody from another team does the peer review, and once the code is reviewed and approved, it's merged and deployed to the staging environment. Then, after we finish all the testing and verification on the master branch in staging, we repeat the same steps, but this time the merge request goes from master to production. We follow the same process — somebody reviews the changes, makes sure nothing is wrong with the code and nothing will affect production, then approves it, and eventually it gets deployed. For deploying to production, we push the code manually, because we want to control when and exactly how it goes out to production — so we don't deploy that automatically.
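That promotion path — automatic deploys for staging, a manual button for production — might look roughly like this in a `.gitlab-ci.yml`. This is a hypothetical sketch, not the team's actual pipeline; the scripts, commands, and file paths are placeholders:

```yaml
# Hypothetical sketch: stage names mirror the pipeline described in this talk,
# but the scripts and paths are illustrative placeholders.
stages:
  - validate
  - test
  - deploy

validate_syntax:
  stage: validate
  script:
    - puppet parser validate manifests/   # catch syntax errors before testing

unit_tests:
  stage: test
  script:
    - rake spec                           # e.g. rspec-puppet unit tests

deploy_staging:
  stage: deploy
  script:
    - ./deploy.sh staging
  only:
    - master              # master maps to the staging environment

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh production
  only:
    - production
  when: manual            # waits until someone presses the button after approval
```

The `when: manual` keyword is what makes the production job pause at the deploy stage until a person clicks play, exactly matching the manual push described above.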
So we've gone over how we use the code repo and deploy to the various stages — but how do we actually test it? What if we fix an error, another error comes up, we fix it again, and it keeps repeating until it becomes a nightmare? And once we've pushed all the way to production, how do we check that the code will actually work and won't break anything? That brings us to our next point: GitLab CI/CD and the GitLab pipeline. How do we use that? I got this picture from the GitLab website. In the first stage, whenever we push code from a local repo to a feature branch, master, or production, that push triggers the GitLab CI pipeline, which runs through various tests like unit testing and integration testing. Once all the tests succeed and pass without any issue, it moves on to the CD stage, which runs its own checks and finally deploys to staging, production, or whichever test branch we want to run the code in. Without GitLab CI and CD, these would be jobs and tasks done manually by people like us; by defining those jobs in the GitLab pipeline, we automate all of those tasks, which helps us avoid mistakes, reduce human error, and improve our processes over time. So how do we actually set up a GitLab pipeline? We create a file called .gitlab-ci.yml in the root of the repo — it's a hidden file — and in it we set up the various stages and jobs we're going to run. We can define scripts or commands to run in each job, along with variables, before and after scripts, and anything else the jobs need. We can also specify tags, which I'll cover in a later slide. In this example we have three stages: validation, testing, and deployment. The first stage validates our code: it contains a job to make sure the code doesn't have any typos or incorrect spelling, and it runs a syntax check to make sure the code follows the coding standard. Once everything there is green, it runs the next stage, testing, where we run the unit tests. In that second stage we can define tests that make sure the code creates the specified users and groups, opens the right firewall ports, starts the services properly, creates the files and directories, and lays down the configuration. If all of those succeed, it jumps to the deployment stage, and based on which deployment we choose, it deploys either to test/staging or to the production environment. I have examples of two completed pipelines. On the left you can see all the jobs in each stage; they ran successfully without any issues and eventually got deployed. For this we use continuous deployment, which deploys the master branch — or any feature branch you want to test — automatically, with no manual intervention required. On the right, everything is almost exactly the same as the left image; the only difference is that this is a pipeline deploying to production. Once everything gets pushed, goes through all the tests, and everything is good, it waits at the deploy stage; then we go through all the approvals we need, check everything, and then we can finally
someone can manually push the button to deploy the code over to production. The previous slide was all green; this slide shows what happens when you run into issues. Here you can see we hit errors in various stages. If you have an error in the first stage, the pipeline stops there and doesn't continue to the later stages. When we run into an issue, we click on the failed job, go through the logs, fix whatever needs fixing, and push back into the pipeline; it runs through again and eventually passes, and then we can merge the code and deploy to production or whatever environment we need. We wouldn't be able to do any of this GitLab CI/CD without GitLab runners. Runners are like programmable robots: you define the tests and tasks you want done, you write them down, and the runners go and run them for you. The runner follows those instructions and works through all the tasks in .gitlab-ci.yml. We can add tags for runners to define which specific stages or jobs we want them to run; the runner executes the jobs defined in .gitlab-ci.yml and sends success or failure results back to GitLab CI, and that's how the pipeline moves on to run the additional stages. If you don't want to share a runner with other projects or tasks, you also have the option of using a specific runner with specific tags. There's a lot of flexibility: you can run a runner on a physical machine, set it up on a VM or in a container, and it will do the same job. We can use GitLab runners with multiple projects or with specific projects; we can have dedicated runners for specific tasks or use them as shared resources across projects; and they can run tasks in parallel, so you can speed up your pipelines. Once a runner finishes testing a job, it can also auto-deploy to whatever environment you specify. GitLab runners replace manual jobs we would otherwise have to do and perform all the required tests without us having to do anything; once all the required tests have been checked and verified, the runner deploys to production. The image on the right is tiny, but it shows the stages — whether they passed green or failed — and, highlighted in red, the tags we associated with each runner for each specific task. Some runners have one job associated with them; some have three tags and three jobs. Next up is Mattermost. We use Mattermost as our communication tool: we chat with other team members, work on code revisions, and brainstorm. We have different chat channels for different groups, and we can jump into any channel to discuss configuration management or deployment plans with another team. We also have GitLab integrated with Mattermost using webhooks, so we can define which events we want sent to a Mattermost channel for immediate attention — that way we don't have to chase someone down to approve or review our code. This works very well for sending notifications when a merge request is opened, when a pipeline fails, or when someone approves or merges a merge request. This concludes my
overview of our current GitLab implementation, and I'll go over some of our future plans with GitLab. Since we're deploying an OpenShift platform in our environment, we'll be leaning heavily on the GitLab CI/CD pipeline to build, test, and deploy OpenShift containers, and we can also leverage OpenShift to deploy and run GitLab runners on demand, instead of building separate VMs for separate jobs and projects. Another new feature we're excited about is the Web IDE. Currently we have to remote into a Linux VM with a Git client to do development work; the new Web IDE will let us start working on code in the browser, directly from our workstations. As a final note, I'd like to say we learned a lot over the years. CI/CD is definitely not plug-and-play — it's an iterative process. There was a lot of trial and error: as you can see, we tried different pipelines and various tests, and there were more than what's shown here. But in the end we were always able to overcome those issues, and we're ready for the next challenges and the new features from GitLab. I'd also like to mention that it takes people who are willing to take on the challenges, along with the new tools and technologies available, as well as management who understand and lead the way to achieve the goal. That concludes my presentation, and I'm happy to discuss and share our experience with GitLab if anyone is interested. Thank you.

[Audience question: did production deploys go through a change control board, or was it up to you to decide when to push the button for production?] Yeah — for continuous deployment, since we're not actually pushing to production, only to the staging branch, we leave it automatic. For the delivery to production, it goes through the change board, and once that approval comes, it's just a simple hit-play to push it out. But that gives us future flexibility: when the change board process becomes more automated, that approval could hook back into GitLab.

[Audience question: can you talk a little more about any integrations you've done with Mattermost and what that was like?] Okay, so for Mattermost, we define which projects we want notifications for. You go in, there's an option to enable the Mattermost integration; you get a key, set up the secrets on the Mattermost side, and then define which kinds of events you want to be notified on. You check those, and once it's set up, you pretty much just get the notifications. Anything that can use webhooks you can integrate there — there are monitoring tools that can send messages directly into the channels and such — so there are a lot of integrations you can enable and configure as you need.

[Audience question: once you've set up CI/CD, how much of the setup and scripts can you reuse for the next new application?] Yes — so for a new application we have to write new code, and since we use Puppet, we follow that method. For any application we write a similar script: for this type of application we need these packages and this configuration, so we'll
configure everything in the control repo, and we also write test cases: this package should exist, this user must exist for this type of service. We define all of that, it goes through the various tests defined for that specific application, and then we push it out to the test environment and make sure everything is installed, configured, and running as you'd expect in the lower environment. Any type of application goes through the same process, and the majority of things already exist, so we just have to modify the application-specific configuration and integrate it into the CI/CD pipeline. — Yeah, from a practical perspective, it depends how similar your projects are. If they're very similar, it's going to be cut and paste. If it's infrastructure, that's one language; if I'm moving from .NET on Linux to Java on Windows, you're probably going to have a lot less commonality in the individual CI scripts that need to get modified. So it's variable. Anything else? Anyone else? [Audience question: do you have any plans to leverage groups and subgroups so you can do cross-group pipelines?] Yeah, I think that was one of the things we debated and actually cut out. The infrastructure code is actually a group and subgroups in GitLab — there are probably 150 repositories that make up that environment — so a lot of what you saw is at the group or subgroup level. But for keeping it simpler here, I'm sure Zwei, or Jonathan, who's the team lead on this team now, can get into that in a little more detail afterwards.
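As a footnote on that last question: cross-project (and cross-group) pipelines can be wired up with the `trigger` keyword in `.gitlab-ci.yml`. This is a hypothetical sketch with made-up project paths, and multi-project pipelines were a paid-tier GitLab feature at the time:

```yaml
# Hypothetical sketch: when this project's pipeline reaches the deploy stage,
# it kicks off a downstream pipeline in another project (possibly in another
# group). The project path below is a placeholder.
trigger_infra_pipeline:
  stage: deploy
  trigger:
    project: infra-group/monitoring-agents
    branch: master
```

A downstream trigger like this lets one repository's changes fan out into the pipelines of the other repositories that make up the environment, rather than duplicating jobs across 150 projects.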