Welcome to the third part in our series on CI/CD. I'm Suri and I work on the content team here at GitLab. We're thrilled you're joining us to learn how to remove barriers between developers and operations. If you have questions during the presentation, please use the Q&A box at the bottom of your screen. We'll also dedicate some time to answer your questions at the end of the presentation. If you have any technical difficulties, please use the chat function and I'll do my best to help you. On this next slide, you'll see our agenda. Today, Latsi Vademsky, an implementation engineer, will help us unite development and operations teams. So let's get started. Welcome everybody. Thank you, Suri, for the introduction. I hope you're having a great day in whichever time zone you're in. My name is Latsi. I've added a pronunciation there for your reading needs, and I'm an implementation engineer with GitLab. What that means is I help our customers run GitLab in their environments. I also do training around Git workflows, GitLab workflows, and continuous integration and continuous delivery/deployment, and I provide migration assistance for teams coming from legacy source control systems. Today I'll be talking about some of the foundations of CI/CD, defining those terms, discussing some of the barriers to adopting these workflows or practices, and then covering some of the solutions that GitLab offers. So if we go to the next slide, CI/CD is kind of funny territory. There are many definitions, academic papers, and talks, and they blur the lines in terms of where one practice begins and another ends. But they do have a few common attributes, and I'm going to try to address some of these. And just so you know, I will be speaking from experience. Prior to GitLab, I was a software engineer, a Rails developer; I did quite a bit of front-end work, as well as DevOps. So I've touched a lot of different parts of the software delivery cycle.
I came from the startup world, so most of my experience has been with small to medium-sized teams, and I'm going to speak very much from that experience. When we define these terms, or these practices or disciplines as some people call them, one common attribute is this concept of smaller. How do we do the work we do in smaller pieces so that we can reduce risk? If I were to rename all of this CI/CD, I would call it continuous risk reduction. There's a reason that hasn't stuck, but it's another way to think of it: how can we make things smaller so we can reduce risk on a number of different levels? Another common attribute is automation. What can we automate? There are plenty of necessary human interventions, but apart from those, can we automate many of these pieces? And why do we do this? Why do we go smaller, reduce risk, and automate? It really has to do with the fact that we're done doing deployments on evenings and weekends. I've been there. Many companies are still there, and there's no reason for it. We can change our workflows so that folks can close their workstations, go home, get a night of sleep, and come back refreshed to make meaningful changes the next day, and the next day, and the next day. So: no more weekends and evenings. Another reason we're doing this is that it's great to run experiments in production. I know that sounds crazy and risky in and of itself, but some of the most meaningful lessons we get in building software and deploying it into the wild come from feedback. So how can we capture feedback in our production environment? Can we also capture feedback in, say, a staging environment? How do we close that feedback loop with our customers and continue to mitigate risk even at the feature level? So if we go to the next slide, we're going to begin defining some terms. What is continuous integration? Let's start there, and move to the next slide again.
Here is the entire software delivery lifecycle in one graphic. On the left side we have the ideation phase. This is where we begin to size features and/or user stories and start to attach various milestones or issues to those, all the way to the right, where we're doing monitoring, deployments, and also gathering feedback. CI really fits in the middle. And if we again go to the next slide, it encompasses, yes, automated testing, but also the code-and-commit workflow. And why is that the case? Let's go to the next slide, because MPJ of Fun Fun Function fame, one of my favorite YouTubers out there, characterizes this commit workflow as seeking to avoid the monster merge. The monster merge is, you know, from the old days: you were handed an assignment, or you accepted an assignment, you would retreat to your coding cave and start developing, start building, and you would emerge days, weeks, maybe even a month later with your pristine solution, beautiful, all packaged up. You deploy it into production and all of a sudden everything's broken. You've broken the deploy; you've maybe broken some of the work on your colleagues' local machines as they've incorporated your changes; and you're at an impasse. There are merge conflicts to resolve. And why does this happen? It happens because, as you've been gathering these changes over time, the code has changed beneath your feet. So how do we avoid this monster merge? Well, one way, and this is what CI purports to tackle, is we make our merges smaller. And why do we make them smaller? If we go to the next slide: the goal is really to have smaller haystacks. When a problem does occur, which inevitably happens even when we make things smaller, the proverbial needle can be found quicker in a smaller haystack. So the goal of CI is to create more frequent, ideally daily, commits in smaller chunks, backed by an automated testing suite.
That way, the work that is merged is sound for others to iterate upon. So smaller haystacks, smaller units of change, allow problems to be found faster. Let's continue on to continuous delivery, on the next slide, and then the next slide again. If we look again at the continuous delivery lifecycle, we're now to the right of CI, testing, and committing. This is really code review and deploying to a staging environment. Now, if we go to the next slide: continuous delivery is very much about delivering minimum viable change. And why do we do this? Well, the idea is that sometimes we can't fully forecast the effectiveness of a feature, or the desire for a feature. We might have customers coming to us, you know, asking for this or that to address a business challenge. But as we all know, customers are not always right. Even our most innovative minds are not always right. So how can we mitigate, basically, premature optimization? Jumping way too far ahead and building a car when maybe the car is not exactly the answer. And if we go to the next slide, Google Glass is a great example. For those of you following tech news, that is Robert Scoble, the Scobleizer. How can we incubate these innovations and get feedback earlier? This is where continuous delivery provides an opportunity to, let's say, deploy something to staging, where we can begin to share a feature both internally and externally and begin to iterate immediately based on real-world feedback. So again, small iterations allow us to validate, and frequent delivery allows us to shorten the feedback cycle. Next slide. This one I borrowed from Jez Humble's CI/CD presentation, and I found it very sobering: basically, when you sit down to tackle a new feature, one could flip a coin to determine whether the work will be in vain or not. Something to think about as you sit down and work on new features.
You know, will this actually be useful? Use continuous delivery as a way to continuously check those assumptions. So, to continue defining what we're talking about here, let's move on to the next slide, which is continuous deployment. If we go to the following slide, we're back to our software delivery timeline, and this is now where we begin to check the efficacy of a particular deployment and approximate production as closely as possible. The reasoning for this, if we go to the next slide, is to make sure our software is always production ready. We can easily break production with a configuration change. With a code change, it can sometimes take weeks to find out: if, let's say, a database update happens only on a weekly cadence, we won't know for up to a week. But with a configuration change, I can break production immediately on deploy. So frequent deployments to a system that aligns as closely with production as possible are a way to smoke-test and discover those problems earlier. Really, this is to avoid a release gone wrong; smaller batches of change reduce the risk of releases gone wrong. This also allows you to roll the system back. At the end of the day, when you check in your code, if you break the production deployment in your deployment pipeline: close the laptop, revert your changes, come back tomorrow, and do it again. If you're doing this continuously and daily, this averts any kind of weekend or evening work, and that is really the point of continuous deployment. So, next slide. What are some of the challenges or barriers to adoption? I would say they fall into two camps. One is cultural, and the other is technical, or technology related. Culture is really the hardest thing to address. As organizations, we are at various points on the scale of adopting some of these practices, based on our institutional structures. For a 10-person or 20-person startup,
it's easier to change culture than for a larger enterprise organization. So this is really the hardest one to address, and we're all at various levels at any given point. When it comes to something like CI, one way CI is quantified, in terms of whether you're actually doing it, is a series of three questions: Is everyone contributing to the mainline repository at least once a day? Are you confident in your battery of tests to ensure functionality? And when production is down, is there any person whose job is more important than fixing production? If you're not addressing all three of those, then you're probably not doing CI. And again, this is in no way a ding on your organization; it's a series of questions that one needs to ask oneself when subscribing to the CI definition. Then, when we talk about the challenges to adopting continuous delivery, it's really about the context and to what degree or extent CD is implemented, and that can vary. One question you can ask yourself, in terms of how continuous delivery is defined in your organization, is: how long does it take your organization to deploy a single line of code to production? So again, these are all questions one can ask as a barometer of how CI or CD is characterized or implemented in your organization. And, you know, coming from my background, I've seen a number of scenarios play out when it comes to CI/CD. The adventurous CEO who loves to demo half-baked features on a dev machine, which then takes the dev machine hostage: as a software development team, we no longer have access to it, we weren't quick enough to give them a demo URL or demo box, and now our workflow is on hold. Or the DevOps team is slow to respond, or wants a formalized handoff from development.
So there are a number of blockers or scenarios, barriers that cut across culture and technology. But let's drill into some of the specifics around culture and technology, so next slide. Some of these cultural barriers might be: do you have a culture of testing, and how vibrant is that culture? You potentially have only partial test coverage. What is your branching strategy? Are you working on long-lived branches, or not committing to the mainline? This is very particular to Git. In some instances we prescribe the git-flow paradigm, where master is protected, everyone works on feature branches that eventually get pulled into a develop branch, and a release branch is created and merged back into production. So codifying a branching strategy with more frequent, daily merges is one way to address that; having a codified branching strategy is key to achieving a continuous integration paradigm. Also, what is the division of labor? Do we have a rigid dev-versus-ops organizational structure? And how do we mitigate the weekend warriors, those tasked with deployments, and free up their evenings and weekends? That's another cultural barrier. On the next slide, technical barriers: disparate tooling. Maybe we have a lot of different tools in the toolchain, and these come with their licensing costs and their maintenance costs. Potentially we have a number of different legacy version control systems that we're running in parallel, which also require maintenance and integration. And then there's the fuzzy distribution of concerns in deployment environments: is staging truly staging, or does dev sometimes take on the staging responsibility? So: being able to really define and create clear buckets of functionality that encapsulate their own concerns. So what are some of the solutions to address some of these challenges? I'm going to move on to the next slide here. So here at GitLab,
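To make the "short-lived branches merged daily" idea concrete, here is a minimal sketch in plain Git commands. The branch and file names are illustrative, the sketch builds a throwaway repository so it can run anywhere, and it leaves out the merge request and CI steps that GitLab layers on top:

```shell
# Minimal sketch of a short-lived-branch workflow (names are illustrative).
# Assumes git >= 2.28 for `init -b`.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"
echo "v1" > app.txt
git add app.txt
git commit -qm "Initial mainline"

# One small daily change on a short-lived feature branch:
git checkout -qb feature/tiny-change
echo "v2" > app.txt
git commit -qam "One small, reviewable change"

# Merge back the same day, keeping the haystack small:
git checkout -q main
git merge -q --no-ff feature/tiny-change -m "Merge feature/tiny-change"
git branch -dq feature/tiny-change
```

The point of the pattern is that each merge is small enough to review and revert in minutes, which is what keeps the haystack small when something does break.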
if we look at the slide showing the software delivery cycle again, now with a lot of tools you've probably worked with or currently work with: this is crazy. I've done this. I've been here many, many times. I've wired many of these tools together, made sure they can talk to each other, so that when an issue is opened over here and I commit, it gets, you know, closed. It requires a lot of work to keep all of these applications happy, talking to each other, and working. This is actually, in some cases, the norm, and it's tough. It's tough to maintain organizationally. If we move to the next slide, GitLab is really trying to bring all of the tools in that software delivery workflow under one roof. And you might be saying to yourself, oh, who can do all of these things well at any given step? Or, this looks like vendor lock-in. Trust me: if you're a gearhead like myself, there are so many other tools you still need to monitor even after all of this is taken care of. So give yourself a break and try, at least for the continuous integration and continuous delivery workflow, to use something that ties it all together and avoids all of these disparate tools needing to be maintained and paid for. If we dive right into how GitLab addresses CI/CD from a feature standpoint, it all really begins with the .gitlab-ci.yml file. This is really the heart, or the brains, of the CI/CD workflow. You define the stages of your build, and then jobs within those stages. Jobs can be your testing suite; jobs can do a number of different validations. The way these stack up, and let's move to the next slide, is with the GitLab Runner, which executes those jobs. The GitLab Runner ideally runs on a separate server and is responsible for the workload that the jobs define. If we move to the next slide: at GitLab, we offer pipelines as a way to visualize and track the progress of your builds.
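As a rough illustration of the stages-and-jobs idea, a minimal .gitlab-ci.yml might look like this. The job names, the test command, and the deploy script are assumptions for the sketch, not something from the talk:

```yaml
stages:
  - test
  - deploy

unit_tests:
  stage: test
  script:
    - bundle exec rspec        # hypothetical Rails test suite

deploy_staging:
  stage: deploy
  script:
    - ./deploy.sh staging      # hypothetical deploy script
  only:
    - master
```

Jobs within the same stage can run in parallel on available runners, and a later stage starts only once the earlier stage has succeeded, which is how the pipeline enforces "tests pass before anything deploys."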
This is what the runner hooks into. If we move to the next slide, another feature we offer, as a way to address something like continuous delivery, is review apps. If configured, for any given branch a review app gives you a quick way to validate the feature you're currently developing. It basically gives you a URL for the application, and once you've committed, you can share that URL with your colleagues or with a customer, so they can navigate to your app and check out whatever feature or piece of business logic you're trying to iterate on. Moving along, we also offer something called canary deployments. This is similar to review apps in the sense that we're trying to shorten the feedback loop. Where review apps let you quickly share an iteration, canary deployments, and here's the caveat: your organization needs to be working with Kubernetes, or container orchestration, let you deploy a production version of your changes to part of your customer base. So if you have 10 servers, you could designate five as canary deployments, and the load balancer would send part of your customer base to those five with the new changes. Then you can run A/B tests, or any number of other evaluations of those new changes in the wild. Again, these are just quick vignettes of what GitLab offers in the way of CI/CD and frequent changes and iterations. And if we move to the next slide: cycle analytics. Cycle analytics ties it all together; it's a way to measure how your organization is progressing through the software delivery lifecycle. If set up correctly, the idea, or the issue, that's created around a feature or a subset of features basically kicks off a stopwatch.
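For a sense of how a review app is wired up, here is a hedged sketch of a job using GitLab's dynamic environments. The deploy script and domain are illustrative; `$CI_COMMIT_REF_NAME` and `$CI_COMMIT_REF_SLUG` are variables GitLab provides to every job:

```yaml
review:
  stage: deploy
  script:
    - ./deploy_review.sh       # hypothetical script that provisions the branch's app
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
  only:
    - branches
  except:
    - master
```

Because the environment name includes the branch name, each branch gets its own environment, and GitLab surfaces the environment URL in the merge request, which is the link you'd share with a colleague or customer.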
So when the first commit is submitted for that feature, we can see the time elapsed from the idea, the issue being opened, to the first commit. When a branch is merged, we can see the time elapsed between that first commit and the branch being merged. And from staging to deployment to monitoring, at all of those stages or steps, we can see the elapsed time. So this is a way to benchmark how you're doing in that workflow at any given interval. So that's a quick overview.