Welcome to the final part in our series on CI/CD. I'm Suri and I work on the content team here at GitLab. We're thrilled you're joining us to learn how to use CI/CD to fuel innovation. If you have questions during the presentation, please use the Q&A box at the bottom of your screen. We'll also dedicate some time to answering your questions at the end of the presentation. If you have any technical difficulties, please use the chat function and I'll do my best to assist you. Today, David Astor, a solutions architect, and John Woods, a technical account manager, will help us learn how to use CI/CD to build better software. So let's get started. Welcome, David and John. Thanks, Suri, I appreciate that. Good morning, good afternoon, good day to you all. As Suri mentioned, my name is David Astor and I am a solutions architect with GitLab. Thank you very much for spending some time with us today to learn about GitLab's CI/CD solution. On this call we probably run the gamut of GitLab experience. There's been a lot of excitement lately around GitLab, so some of you may be brand new, some may be seasoned veterans, and others may be using GitLab but not to its full capacity. So I'm very much looking forward to discussing, and of course eventually showing, how we are more than just an open source code repo. You'd be surprised, or maybe not, by how many times we talk to organizations about GitLab and hear things like, "I didn't know you did that too," or, "Are you telling me I don't have to use these three other applications we've been paying for?" We're going to talk about a lot of that today. John, you can switch slides, please. Keep what I just said in mind as we go through the demonstration today, and we also want to make sure we answer any other questions that you have.
So as Suri mentioned, please make liberal use of that question and answer dialog box and submit your questions, so we can address them at the end of the demonstration. We'll definitely have some time set aside for those. All right, John, one more slide, please. Now, to back up what I mentioned a moment ago, I want to present the Forrester Wave from 2017 that ranked GitLab CI as the best in breed among some fairly strong contenders; you can see us at the top right there. The interesting thing about this is that we didn't even have a CI solution 18 or 19 months before that. But as you can see on the slide, through the power of contribution across our user base, constantly working with them (again, open source, having contributions from them) and continuously releasing on the 22nd of each month, we were able to become the recognized leader in the space. So before we actually jump into the demonstration, it occurred to me that not everyone may know what CI/CD actually means. As I mentioned, there may be some new folks on the call, so I'd like to take a few moments to cover some of those concepts. John, one more slide, please. So there we go, that gives away the mystery, if you hadn't already solved it: the CI stands for continuous integration. Now, in its broadest sense, it means you are continuously testing what's being checked in. You'll see that we've highlighted the testing area of the GitLab SDLC, the software development lifecycle, to show where that occurs. As a side note, all of the stages you see here can be represented and completed within GitLab. That goes a little beyond the scope of this particular webinar; we'll be talking about some of these, but I certainly encourage you to check into it at some point, and of course ask us any questions about it during this demonstration so that we can address them. John, one more slide.
Now, continuous integration was really born out of the necessity to reduce time wasted fixing problems in large merges. As organizations started making multiple small-batch merges per day, it became important to have an automated way to verify what's being checked in. It essentially eliminates the whole "it worked on my machine" problem; I'm sure we've all said that at some point in our careers. And because GitLab is already the best-in-breed source code management solution, it only made sense to build the CI functionality into it to assist with your builds and automated tests. It's simply included in what we offer; there's nothing new you need to add on, it's just part of the application. So you can have multiple developers checking in code from their local repos, and you can have confidence that tests are automatically being run against that code, right next to your repository in the application. It also has the benefit of letting you find errors and issues earlier in the cycle, where they are, quote unquote, cheaper to fix. You don't want them to get into production, only find out about them later, and then have to re-engineer to get a fix back in; you want to catch them as soon as possible. Additionally, the GitLab CI system helps offload a lot of the work from the developer. You can test across different operating systems, set up new environments on the fly, and integrate with tools like Docker and Kubernetes to really take advantage of the cloud. Next slide, please. Now, the CD portion of our discussion can mean two things: continuous delivery and/or continuous deployment, both of which we'll talk about. You typically don't hear people referring to it as CI/CD/CD, though.
My theory is that we took a lean approach, eliminated the extra waste, and narrowed it down to just one overloaded CD. Continuous delivery, which you can see here, focuses on the next two stages after continuous integration; you'll see it represented in our workflow. John, next slide. So what is continuous delivery, actually? Before this, you had your automated build, right? You did that in continuous integration, so you've got that running constantly, passing all the tests every time a developer checks code in. That's great, but you really shouldn't stop there, and we don't either. With continuous delivery, you essentially have the capability to take that build and have it at the ready to deploy out to an environment. Now, you may want to automatically push to a staging environment, say, for final testing before it goes into production. You may want your stakeholders to participate in reviewing and checking everything there. Want to run integration, UAT, or performance testing in this environment? Absolutely, do so. And within GitLab, once it's all tried and tested, you can configure it so there is one button to press to move that into, well, let's say production. Now, there are two things I want to mention here that GitLab helps with. First, the configuration for these interactions is kept directly in the GitLab code repo, and it's versioned accordingly; John is going to spend a good portion of the demonstration talking about that. Secondly, GitLab can spin up ephemeral environments as needed. We call them review apps; they're there to verify what's been checked in at that time, and they only exist for the time needed. So this means there's no more waiting for integrated testing environments to be spun up.
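As a rough sketch of how a review app can be wired up in a pipeline definition (the deploy and teardown scripts and the example URL here are hypothetical placeholders, not the exact configuration from the webinar):

```yaml
# Hypothetical review-app jobs: deploy each branch to a short-lived
# environment, and tear it down manually or when the branch goes away.
deploy_review:
  stage: deploy
  script:
    - ./deploy.sh "review/$CI_COMMIT_REF_NAME"   # hypothetical deploy script
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_NAME.example.com # hypothetical review URL
    on_stop: stop_review
  only:
    - branches
  except:
    - master

stop_review:
  stage: deploy
  script:
    - ./teardown.sh "review/$CI_COMMIT_REF_NAME" # hypothetical teardown script
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual          # a person (or branch deletion) stops the environment
```

The `environment` keyword is what makes GitLab track these deployments as a named, disposable environment per branch.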
And that really plays well with the whole shift-left movement, allowing your developers to run tests, review security vulnerabilities, and get feedback as soon as possible. Next slide, John. Lastly, we come to continuous deployment, the second of the CDs, if you will. It's really just an evolution of continuous delivery, and as you can see here, we've moved one stage over to the production area of the SDLC. And last slide, John. The key thing to know is that with continuous deployment, you essentially remove the manual checkpoint before pushing to production. It's really the pinnacle that highly effective organizations are striving for. Companies like Spotify and Netflix are doing this to effectively shorten the time to fix issues and get product improvements out. The cycle of development to testing to deployment is highly efficient, and it frees up your team to focus on creating value rather than repetitive tasks. Again, not everybody is doing this, but it's a very effective way to focus on that creation of value, and it's what we eventually like to see organizations do. But I've talked long enough, and hopefully given you a bit of background on what CI/CD means and why it's important. So I'm going to kick it over to John now to see how we actually do that in GitLab. John, over to you, sir. Thanks, David. As David was saying, GitLab CI/CD is built right into GitLab, and its goal is to provide an integrated, scalable, flexible, and self-service tool. It's easy to set up and maintain, and it requires little intervention from an administrator in order to perform the tasks you need for your project. For those who are new to GitLab CI/CD, or to CI/CD in general, the concepts of integrated and self-service are worth highlighting. Most CI/CD tools are standalone.
They're third-party tools that require their own machine or virtual machine, and their own installation, upgrading, licensing, and ongoing maintenance. Additionally, there are usually gatekeepers or administrators who restrict access to these tools. GitLab removes those barriers and gives developers access to the tools to do the work they need to do. GitLab CI is made up of two main parts: the .gitlab-ci.yml file, which is sort of the brains of the operation, and the GitLab Runner, the body that does all the work. We're going to start with the .gitlab-ci.yml file. This is the pipeline definition file, which specifies the stages, jobs, and actions that we want to perform. As you can see here, this file is checked into our repository, and that provides us with a number of benefits. First of all, the file is versioned, which means pipeline changes can be tested in branches and can accompany any changes to your application's code. Similarly, if you need to go back to an older version, the associated pipeline will be exactly how you left it for that particular release. And because it's under version control, it's easy to diff the file between versions for easy troubleshooting. It also means there's a large number of ways to work with this file. Nearly all IDEs have a direct integration with Git, if not GitLab itself, so you can use your favorite editor; the classic command line interface works too, as does our integrated web-based editor that I'll be showing you here today. Finally, when the .gitlab-ci.yml file is committed, we run automatic lint checks to confirm the syntax of the file. So let's take a closer look at this file and how you can define the pipeline and integrate with a wide variety of tools. At the top of the file, we define a few global defaults. First, a Docker image to run our commands within, in this case the official Maven image. Next, a few environment variables to help our Maven image run smoother.
And some cache settings, which allow folders to be persisted between jobs to help increase the performance of your pipeline. Now, this particular YAML file uses a lot of containerization, but as we'll see later when we talk about the runner, you don't have to use containers; GitLab CI/CD works just as well if you have a dedicated bare-metal machine or virtual machine that you want to use for your builds and tasks. Next up in our file, we define our stages and jobs. In GitLab CI, a pipeline is made up of a series of stages, and each stage contains one or more jobs; there's no limit on how many jobs a single stage can have. The stages keyword itself defines the order our stages execute in. Regardless of the order in which you define your individual jobs, your pipeline will run based on the stages listed here. So our flow today is going to be: build, test, generate some docs, deploy, and then trigger. You can have as many stages as you'd like, and you can call them anything you'd like. Next, we start to define our jobs, the actions we want our pipeline to perform. Our first job is to build our library, and we'll use our default Maven image for this. You can see here that we simply invoke Maven as if we were running it on our own machine. That's because the script you see here is really just a bash script. This provides a great amount of flexibility, because you can now automate anything you would normally do on your machine via the command line. So again, don't worry if you don't know how to use containers; think about how you're doing some of your tests right now manually. The commands you're typing in or the scripts you're running can be translated directly into the .gitlab-ci.yml file. Additionally, if you have scripts that you use, you can include them in your repo and reference them directly from the YAML file. So after we've built our library, we're going to test it.
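A minimal sketch of what the top of such a file might look like (the image tag, variable values, and Maven goals here are illustrative assumptions, not the exact file from the demo):

```yaml
# Global defaults applied to every job in the pipeline.
image: maven:3-jdk-8           # official Maven image from Docker Hub

variables:
  # keep Maven's local repository inside the project dir so it can be cached
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository           # persisted between jobs for performance

stages:                        # defines execution order;
  - build                      # jobs within one stage run in parallel
  - test
  - docs
  - deploy
  - trigger

build:
  stage: build
  script:
    - mvn compile              # plain shell: anything you'd run locally works
```

Note that the `build` job inherits the global `image` and `cache` settings without restating them.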
Our test stage includes four jobs: our unit tests, some static analysis with Code Climate, and a couple of security tests. Again, we're going to leverage Maven to run our first test, but now we're going to include JaCoCo to generate our code coverage reports. We then output the code coverage percentage into the build log, and the last step in this job is to take our code coverage reports and persist them with GitLab's integrated artifact repository, so the results can be used by other jobs or downloaded directly through a browser. We do this by simply specifying the folders we want to save. While that's happening, we'll also be running static analysis with Code Climate. Here we override the default image and utilize the official Docker image instead. We use that to run Docker-in-Docker, and then we execute the Code Climate image to analyze our source. Once that's done, we retrieve our JSON report and persist it as an artifact. Next up, we have our two security tests. For the first one, we're going to utilize our default Maven image again to run our app through SonarQube. The important thing to note about this particular job is that the test only runs when branches are merged into the master branch, not on every commit. For security products that charge by lines of code scanned, this can be used as a cost-saving measure. Next up is GitLab's integrated static application security test, or SAST for short. Again, we're going to be using the Docker image and Docker-in-Docker. We download GitLab's custom SAST container, which analyzes our code, detects the language it's written in, and performs the associated tests for that language. In this case my app is written in Java, so we'll be using the FindSecBugs test. The results will be saved as an artifact in a JSON report, and then displayed directly in the merge request.
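As a rough sketch of two of those test jobs (the JaCoCo report path, the grep pattern, and the SonarQube invocation are assumptions for illustration, not the demo's exact file):

```yaml
unit_test:
  stage: test
  script:
    - mvn test jacoco:report
    # echo the coverage percentage into the job log so GitLab can pick it up
    - grep -o 'Total[^%]*%' target/site/jacoco/index.html || true
  artifacts:
    paths:
      - target/site/jacoco/    # persisted in GitLab's artifact store

sonarqube:
  stage: test
  script:
    - mvn sonar:sonar
  only:
    - master                   # scan only on merges to master,
                               # not on every commit
```

The `only` keyword is what implements the cost-saving behavior David and John describe for per-line-scanned security products.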
So you can see if your application has any security flaws. Next up is our documentation stage, where we simply generate our Javadocs and again retain them as an artifact. Then comes our deploy stage: now that we've tested our library, we're ready to release it, and we use Maven to publish it to our packagecloud server. Now, if you're paying really close attention, you'll notice that we're using a variable here that we didn't define above. That's because it's a credential token, which shouldn't be checked into the repository for everybody to see. Instead, we've added it as a protected variable in our project settings, which only administrators can view but developers can use. Next, our pages job is slightly unique. This special job works in tandem with GitLab Pages, our static site hosting feature. With Pages, deploying a static site is as easy as creating an artifact. We do that here by specifying that we're going to utilize the artifacts of two previous jobs, our unit tests and our Javadocs. This job copies those into a single directory structure and persists it as an artifact. GitLab Pages then takes this and deploys it out to the integrated hosting service, providing an easy and automated way of hosting, in our case, our code coverage and documentation. Finally, we have our last stage, trigger. Our Java app is actually made up of two components: this backend library and a front-end service that uses it, in another project altogether. At this point in our pipeline, we've confirmed that all of our tests pass, which is a great start, but what about downstream projects that utilize this backend? Well, this job kicks off what we refer to as a cross-project pipeline. We take a stock Alpine image, install curl, and then use it to trigger the API webhook that starts a new front-end service pipeline in the other project.
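A sketch of how those last two jobs might look (the artifact paths, the instance URL, and the `FRONTEND_PROJECT_ID` and `TRIGGER_TOKEN` variables are hypothetical):

```yaml
pages:
  stage: deploy
  dependencies:
    - unit_test                # pull in artifacts from an earlier job
  script:
    - mkdir -p public/coverage
    - cp -r target/site/jacoco/* public/coverage/
  artifacts:
    paths:
      - public                 # anything under public/ is published
                               # by GitLab Pages

trigger_frontend:
  stage: trigger
  image: alpine:latest
  script:
    - apk add --no-cache curl
    # FRONTEND_PROJECT_ID and TRIGGER_TOKEN are hypothetical secret variables
    - curl -X POST -F "token=$TRIGGER_TOKEN" -F "ref=master" "https://gitlab.example.com/api/v4/projects/$FRONTEND_PROJECT_ID/trigger/pipeline"
```

The `pages` job name itself is what tells GitLab to publish the `public` artifact; the trigger job is just a plain POST to the pipeline-trigger API endpoint.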
So confirming that downstream projects aren't negatively affected by upstream changes is as easy as a few lines of YAML. Let's take a look at how all of that looks once it runs. I'm going to jump to the pipelines page in my project, and here we can see a pipeline in action. I have a previously run pipeline from our merge into master, and we can see from the green checks that all my tests passed. We can even click into a particular job to view the log of how it ran and, if there were any errors, exactly what those might be. So let me click into my code coverage test, and we have an entire job log of everything that ran. Additionally, there are manual retry buttons beside each job, in case a job failed and you just want to try running that particular job again. Because all of this is built in and integrated with GitLab, we get a lot of extra information as well, including the merge request that generated this pipeline and any GitLab issues associated with that merge request or pipeline. We see the number of jobs that ran and their total elapsed time, and finally the particular Git commit associated with this pipeline. And within the merge request itself, we're able to see the results of our code quality, code coverage, and security tests. Of course, GitLab can integrate with a large number of third-party CI tools, and you can receive a pass/fail from those pipelines, but you won't get the depth of information that's provided by using GitLab's integrated CI/CD tool. So as you can see, the .gitlab-ci.yml file supports GitLab's larger goals: scalability, flexibility, self-service, and ease of use. A developer can integrate with any tool they need without having to worry about installing plug-ins or involving the administrators.
They simply provide the required container, or a virtual machine or bare-metal machine to run the script on, or install their own runner on, say, a bare-metal machine with the associated requirements. Integrating with static analysis and unit testing frameworks is just a few lines of YAML. We also have a collection of templates that can help users get started. So here in my repository, when I add a YAML file, I have templates available. For those who develop on Android, for example, you can select the Android template and fill it out as needed; we provide the basis of the template, and you can add to it or remove from it as needed. Now, at the beginning of this demonstration, I mentioned that there were two parts to GitLab CI, and we've spent all of our time so far looking at the YAML file. So let's take a few minutes to talk about the GitLab Runner, the other important part of GitLab CI. The GitLab Runner itself is a small, portable application written in Go, which we build for a wide range of platforms. It's essentially the worker bee that picks up and executes the jobs you specify in your pipeline. On our CI/CD settings page, your project owner or master can take a look at the runner configuration for their project. You'll see we have two categories of runners: shared and specific. Shared runners are runners that have been provided by the administrators of the GitLab instance; I have seven here, provided by my administrators. Allowing administrators to provide a shared pool has a number of benefits. First of all, consolidation of infrastructure: whether on cloud or on premise, clusters and credentials can be centrally managed by the administrators. It also reduces the effort required to set up CI/CD for each team and each project. But there are some cases where an administrator hasn't provided a shared pool, or those shared runners don't meet your needs.
For those cases, any development team has the ability to connect their own runners. They simply download the runner application, enter the GitLab instance URL and the project-specific registration token, and they're on their way. So the benefits of specific runners are many. First of all, self-service: instead of needing to file a request for a new piece of hardware, wait for the response, justify the change and the cost, and eventually have a PO filed, the dev team grabs an old machine from the cabinet, takes two minutes to register it, and off they go. The dev team is happy and more productive, and the infrastructure team is happy because they don't have to worry about managing a flock of Mac minis for the iOS team in their data center. Second, it provides a lot of flexibility. If you need to run jobs on an ARM device, perhaps for Android or deep learning, it's as easy as installing the runner on that device. Need to run something on a mainframe, like Linux on Z? Build the runner and away you go. And if we don't support an operating system, there's an SSH executor that can log in and run bash commands. Managing hardware and software dependencies, when something like a container isn't possible, could not get any easier. Last is scalability. A handful of auto-scaling runners on GitLab.com routinely process over a thousand concurrent CI jobs, and you can install an auto-scaling runner yourself, as you can see right here: if you have a Kubernetes cluster associated with your project, you just install a runner on Kubernetes and it will auto-scale as needed. So to summarize: GitLab CI is flexible; if you can bash it, you can automate it in the YAML file. You have self-service runners with no external plugins to manage and maintain. And you have SaaS-scale CI with auto-scaling runners. However, even this can be intimidating to users. David, I feel like there must be an even easier way to get started. What do you think?
Yeah, that's great stuff there, John, and absolutely there is. To answer your question, we do have a way to make that easier: it's called Auto DevOps. I like to refer to Auto DevOps as essentially the set-it-and-forget-it capability within GitLab. You know, if you don't know your SAML from your YAML, or you think Kubernetes is a character on, I don't know, Game of Thrones, the Auto DevOps feature attempts to make things as easy as possible for you. It allows you to establish best practices for building and testing, apply them as you move to cloud native, and really helps clients move faster without steep learning curves. Actually, John, if you can jump to the next slide, I'll show a little more about this. The left-hand side of this slide shows the technologies that many organizations are working with today; some of you on this call may recognize ones you're using. But with Auto DevOps enabled, you don't need to configure any of that. It's just ready to go: there's a one-button configuration you set for a particular project, and you're off and running. You'll see on the right-hand side that all of those technologies are realized by the functionality listed there. You get things like Auto Build and Auto Test, which detect the language your project is built in, run the appropriate Herokuish buildpacks against it, and run the tests. What's especially useful to point out here are things like the Auto SAST and Auto DAST scanning abilities, which make sure your code isn't being deployed with any known security vulnerabilities. John talked about that in the SAST job in the YAML file he created; with Auto DevOps, these are just automatically part of the system, there for you. We've all read too many stories about organizations releasing vulnerable code.
So GitLab is really committed to making sure you're not in tomorrow's headlines for releasing code like that, right? Quite frankly, nobody else in the industry is doing this. So we want you to get a good sense of how you can utilize the CI/CD capabilities within GitLab, and Auto DevOps is just a great way to start if you don't have everything set up right away; definitely turn it on. So, coming to the end here, we hope we've opened some eyes and shown you that GitLab goes way beyond just source code management, and how easy it can be to start using our solution for your CI/CD needs. I'm going to stop talking now. I know John has been answering some questions throughout, but we'd really like to see if there are any additional questions that have come in. So we'll turn it back over to you, Suri. Thank you so much, David and John. Let's see if we have any questions from the audience; we also have a few that have already come in. Our first question is: I don't use containers. Can I just use VMs and bare-metal machines to run my jobs? Yeah, so as I was showing during my demonstration, all you need to do is install a runner on that VM or bare-metal machine. You can also use tags, which I didn't talk about much; they're a little more advanced, but you can specify tags for your runners and within your jobs, so a specific job will be run by a specific runner that has the appropriate tag. We have another question here from Matt: how should an encrypted file, a .pfx for instance, be used with CI variables? I might need a little more information on that. With an encrypted file, I suspect you could decrypt it within the YAML file as needed by including the encryption key as a secret variable; that can decrypt it, and then you can probably use it in that fashion. I'm not familiar with that particular file type. David or Lee, I'm not sure if you have anything to add.
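As a quick sketch of the tag mechanism John mentions (the tag and job names here are hypothetical): a runner registered with the tag `windows` would pick up only jobs that request it:

```yaml
windows_build:
  stage: build
  tags:
    - windows        # only runners registered with this tag
                     # will pick up this job
  script:
    - echo "running on the tagged runner"
```

This is how you'd route jobs to a specific VM or bare-metal machine rather than the shared container-based pool.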
Hey, this is Lee here. I'm also not familiar with that, but I'd imagine if you have an encryption key, you could store the key as a secret variable and use that to decrypt. You could also potentially build a specific Docker image that could decrypt it. There are five or six different ways, depending on your threat model and what the permissions need to be, so that's something you'd want to explore once you understand more about who should have access and who should be able to decrypt. Those are my thoughts there. David, if you have anything else, feel free to add. I appreciate it, Lee; I think that covers what we were going to say. Yeah, one thing that actually rang a bell: GitLab has an integrated container registry. So you could certainly build your own custom Docker container that includes that file, perhaps already decrypted, and reference that container from the integrated GitLab container registry. Right, or you could have that container include the means to get the decryption key from some kind of vault or other key store and go from there. So, like I said, there's always a way to work around it; it depends on your threat model and things like that. Yeah, I would consider a secret CI variable first, and if not, you can build the Docker image, go that route, and see what it can do for you. Our next question is: can I reference jobs from other projects' YAML files? David, do you want to start with that one? Yeah, absolutely, I can take that one. Good question. Oftentimes organizations, if I understand correctly, may want to have a set of standard jobs that all projects can take advantage of. So yes: in the YAML file that John showed earlier, one of the keywords you can use is include, and what that does is allow you to call out to another YAML file.
Let's say you've got a YAML file in a larger group project or another publicly accessible area; you can set the jobs up in that YAML file, and then in your internal projects you can use the include keyword to reference that file and utilize those jobs. Our next question is: what happens when my job or test fails? Yeah, so normally the pipeline will simply stop at that point. Generally, a stage won't start until all jobs from the previous stage have passed, so you'll see on your pipeline page that your pipeline has stopped with a big red X. I think I might even have an example of this back in my pipelines page. Yeah, we can see here down below that somewhere along the line my code quality test failed; it didn't run for whatever reason, and that stopped my pipeline right at that point. However, there are configuration options that allow a job to fail without stopping the pipeline. That's slightly more advanced usage, and it's all covered in our documentation as well. Great. We have a question here: can I use GitLab CI to execute an automation suite which is developed in Java with the Selenium/TestNG framework? Essentially, yes. You can run it from the command line; you can install the GitLab Runner on that particular machine if needed, and it will run the commands you have in the script. So the short answer is yes; the long answer is, with a bit of work and some configuration, you can absolutely make it work. When using a secret variable containing a password that cannot be committed to the repository, what is the recommended way to have GitLab CI insert that password? Right, so we talked about secret variables a little bit, and let me show you where those exist.
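A short sketch of both of those ideas (the template URL, job name, and lint goal are hypothetical): an `include` pulling shared jobs from another file, and a job marked `allow_failure` so it can't block the pipeline:

```yaml
# Pull shared job definitions from a publicly accessible YAML file.
include: 'https://gitlab.example.com/group/ci-templates/raw/master/common-jobs.yml'

experimental_lint:
  stage: test
  script:
    - mvn checkstyle:check
  allow_failure: true   # record the result, but keep the pipeline
                        # moving even if this job fails
```

Jobs defined in the included file behave as if they were written directly in the project's own .gitlab-ci.yml.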
So within your project settings, which only a project master or owner has access to, under the CI/CD settings you have access to secret variables. As you can see here, that packagecloud token I was referencing before is here, and it's hidden; I can reveal it, and there it is. It can also be protected and only run on specific environments. But yeah, this is the way to hide credentials: you make them available for your developers to use within the YAML file, but you hide the actual value of the variable within the project settings. Our last question: my GitLab instance is live and accessible over the internet on HTTPS with LDAP auth and two-factor. Is it secure to deploy to production machines from that instance? Yes. I'm just forming the answer in my mind here. So the server is available on the internet through HTTPS, you use LDAP, and what was the last part, Suri? And there's two-factor. Oh, two-factor, right, so even more secure; that's great. That certainly restricts who can access your server from the internet. And yes, you can certainly deploy from that server to a packagecloud server, to Artifactory, to a local Kubernetes cluster, or to a Kubernetes cluster running in the cloud somewhere. You can absolutely do that. It doesn't really have a lot to do with how you authenticate with the GitLab server; that just restricts who can access your GitLab server. And you can make your deployments a manual task, so they do require human intervention. So instead of doing continuous deployment, you're doing continuous delivery, perhaps to a staging environment, and then maybe somebody manually deploys that to wherever your production environment is. Thank you so much for all of your questions. If you would like to learn more about DevOps, please visit about.gitlab.com/devops. Thank you so much for joining us.
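To make that split concrete, here's a hypothetical deploy job referencing such a variable (the name `PACKAGECLOUD_TOKEN` and the Maven property are assumptions for illustration): the value lives only in the project's CI/CD settings, never in the repo:

```yaml
deploy_library:
  stage: deploy
  script:
    # PACKAGECLOUD_TOKEN is defined under Settings > CI/CD > Secret variables;
    # its value never appears anywhere in the repository
    - mvn deploy -Dpackagecloud.token=$PACKAGECLOUD_TOKEN
```

The runner injects the variable into the job's environment at execution time, so developers can use it without ever being able to read it.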
If you missed any of the webcasts in the CI/CD series, please visit about.gitlab.com/resources. Thank you so much again. Thank you, David, John, and Lee. Thanks, everybody.