Alright, so welcome everyone. Thank you for joining today's webinar. We're very excited to have you on. My name is Timothy Tran and I'm a field marketer here at GitLab. Today's webinar will cover GitOps and cloud native CI/CD best practices and will be presented by Rob Williams. He's a DevOps engineering solutions architect here at GitLab. Just a little housekeeping before we get started. This webinar is being recorded. The presentation and slides will be emailed to you after the webinar has finished. If you've got any questions during the presentation, please just type them into the chat box in the Zoom control panel below. I'll compile them during the presentation and we'll give some time for Rob to answer the questions at the end. We'll also have a polling question in the session. That'll pop up on your screen. All you need to do is pick the answer that is most applicable to you. Without further ado, let me introduce today's presenter, Rob Williams. Rob is an experienced DevOps engineering consultant who in the past has worked with large enterprise organizations to build and deploy web applications, as well as roll out digital workflow transformations, before joining us at GitLab, where he's been helping to bring modern DevOps practices to even more companies. Rob, take it away. Thanks for that, Tim. And welcome to everyone who's joined us here today. Now, the main topic of this discussion is going to be around operations and GitOps. However, the software development methodology that's used by the development teams is often a hugely important factor and heavily impacts how you develop the different systems. So, by way of a brief introduction, I want to talk about the nature of software development in the modern software factory. Software development has sped up. Developers are moving with agility to deliver software, and this concept has been summed up really well for me by this quote from the founder of Netscape.
He said, cycle time compression may be the most underestimated force in determining winners and losers in tech. Now, to me, this is because each iteration that you can get gives you the opportunity to deliver real value to your customers. If you're off track, you learn about it quickly, before you have to make a significant investment that takes you down the wrong path. Competitiveness often depends on how fast teams can deliver code, which is to say how quickly we can create value. Software is accelerating the rate of innovation in all industries, increasing competitiveness, and competitors are emerging faster and faster every day. Product development teams are adopting new methodologies to help them iterate faster. And to get an idea of these different methodologies, first I want to look at where we've come from. So we started with waterfall development, and the biggest similarity here is to how we developed hardware, but this is for software. We saw a lot of design and requirements happening upfront, and we had long cycles where ideas could start to get stale because they weren't being enacted fast enough. In order to enable more digital transformation, we saw developers adopting the agile methodology, where you develop software with smaller and more frequent changes, allowing more and more iteration to achieve a more optimal and adaptive solution. We got more and more comfortable with these smaller iterative changes. The silos between the operations and the development teams needed to be broken down. And growing out of the agile movement, we see DevOps emphasizing the need to consider not only the code, but the delivered service that the customers actually use. We need to start treating hardware like software, and having the ability to leverage code to deliver smaller changes in an automated fashion. And this is where we really see companies start to become cloud native as a concept.
When your development team can make small changes to the code and work with the operations team to dynamically provision infrastructure as required, in order to enable automatic tests and deployment of changes, then we start to see exponential increases in the amount of changes that teams can implement. So that's sort of the progression here, right? We had waterfall as the standard, and it still is the methodology that's used by a lot of teams: sequential, with long build cycles. After that, we saw agile building smaller changes with fast iteration, which led to other issues where development goes fast and ops was not able to keep up. So bringing automation into that process and breaking down the silos, that automation and collaboration allowed us to take a great step forward with more stable, faster, and accelerated build times. Now, what we're seeing is containerization, microservices, and Kubernetes, cloud native, and the extra layer of complexity that comes with that. But it's not only complexity, because it also enables iteration at a speed and scale that's been previously impossible for most companies. And having talked about that, I just want to give some brief time here for you all to answer the polling question. We're going to answer this question and then we'll compare the results that we see to those of a broader audience. So the question here is: where are you at with GitOps? Haven't explored it yet, looked and not committed, planning to implement, or using GitOps today. So seeing a few responses come in, we're already at 75%, so I'm just going to jump over and look at the next slide. So these are the global results from a poll that we posted at the end of June. And you can see that, of the 400-ish people that responded to the Twitter poll, about 54% of them aren't using GitOps.
Now, that's a little bit off from what we're seeing here, where about 34% haven't explored it yet, but that's to be expected given that you're here at a GitOps talk. The key here is that there's a fair amount of room to leverage GitOps to create more flexible environments for you and your teams. And with that, I really want to start talking about GitOps. There are really two main concepts that I want to cover here: the what and the how of GitOps. To start with the what: like a lot of tech terms, GitOps has many definitions. But I really like how GitLab defines GitOps, and I think that it's a good basis for us to be working from on a level playing field. GitLab defines it as an operational framework incorporating DevOps best practices and applying those to infrastructure automation, which is a big long fancy way of saying it's DevOps for infrastructure. And it's really bringing this concept of automation, and all the benefits that come along with it, to infrastructure. So there are three main components that provide the value in DevOps, and these are at the core of DevOps best practices. These are the components that we're going to be taking into our operations practice in order to enable this concept of GitOps. First, infrastructure as code: that's storing infrastructure as code in your Git repository, in your source code management. We're going to talk about merge requests: these are the change agents and essentially the audit trail of when, where, and what changed. And CI/CD, where you're looking at the automation and the reconciliation loop, and how we ensure that the infrastructure is kept up to date with our source of truth at any given time.
To start with infrastructure as code, you really have to look at everything beyond the infrastructure, because we can write the infrastructure as code, but then there are a lot of other things that need to be set up for each environment, for each different version of the application that gets launched. So you need to be able to look at things beyond the infrastructure and implement configuration, policies, and other operations tasks as code as well. Using the declarative nature of this code, you can define a desired state for your infrastructure, store it using Git, and leverage the advantages of version control. You get to see a history of your changes, you get localized testing and branching, and the ability to leverage the same extensively tested and utilized tooling that your development teams use, to manage operations as well as the development of software. The next advantage that I want to cover is merge requests, and this is very tightly related to infrastructure as code. Merge requests are the change agents for code updates. They allow you to merge branches into other branches to change the source of truth for any given environment. Oftentimes you'll have the master branch for your production environment, and then you can branch off there and merge back in, allowing changes to be reviewed and approval to be given. It allows you to have discussions and collaborate with your team about the change, commenting directly on changed lines, which relate directly to your infrastructure and configuration policies for that environment. Merge requests are often seen as a gate on the change, and what that gate allows you to do is enable a small number of people to enact the changes while promoting collaboration with a wider group, enabling you to get changes from everyone while restricting who can enact them to a few. The next thing I want to talk about is CI/CD, the automated reconciliation loop.
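As a concrete illustration of that declarative desired state, a minimal Kubernetes manifest stored in Git might look like the following. The names, image, and path here are hypothetical, just a sketch of the kind of file you'd version:

```yaml
# deploy/web-app.yaml -- hypothetical desired state, versioned in Git
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # declares the desired count, not a command to run
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.4.2   # pinned version, changed via merge request
```

Because this declares an end state rather than a sequence of steps, any drift can be reconciled by re-applying the file, and every change to it flows through the same review process as application code.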
For every change that gets run, you have CI/CD pipelines, and these are specific to the branch that you're running. You can spin up new environments whenever new branches are made, when that's required, or restrict the pipelines to validating code and only deploying changes when they're merged to the master branch. This flexibility allows you to integrate your operations environments with applications, allowing dynamic environments to be created and destroyed, enabling testing on live environments as well as collaboration by moving these dynamic environments into your review process. When building the integration between the application platform and the CI/CD tool, you can utilize either push- or pull-based agents in order to sync the state of the cluster. The CI/CD removes the need for your infrastructure to be manually managed. So, to integrate these concepts into your operations workflow, you need to adopt not only these principles of DevOps but also the methodologies of utilizing them in the GitOps workflow. In order to manage your infrastructure, you need to have a master environment branch, so typically you'll have the master branch for your production environment. And when you need to change that environment, you create an issue. You describe and document the change that you need there, and why it's needed. Then from there you create a merge request. That allows your team to collaborate and document the audit trail of the change, including testing: dynamic environments get created to test the end state and enable developers and operators to see what the end state looks like right in the change workflow, allowing for collaboration, peer review, discussion, and ultimately that gate of approval. Once you pass this review gate, you merge back into the branch, the changes get enacted automatically onto the infrastructure, and you move back into that business-as-usual state that we all really want to be in.
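That validate-everywhere, deploy-only-from-master pattern can be sketched as a minimal `.gitlab-ci.yml`. The two shell scripts are hypothetical placeholders; the `stages:` and `rules:` keywords are standard GitLab CI syntax:

```yaml
stages:
  - validate
  - deploy

validate:
  stage: validate
  script:
    - ./scripts/validate-config.sh   # hypothetical lint/validation step
  # no rules: runs for every branch, so every merge request gets validated

deploy:
  stage: deploy
  script:
    - ./scripts/apply-config.sh      # hypothetical script that enacts the change
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'   # reconcile only from the source of truth
  environment:
    name: production
```

The point of the `rules:` clause is that the infrastructure is only ever changed from the branch that represents the environment, which is the reconciliation loop described above.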
And now we're really starting to stray into how we do GitOps. So I just wanted to bring up the banner slide to remind us where we're at: we've looked at what GitOps is, and now we're looking at how to do GitOps. When you really think about that workflow, there are lots of different stages, and at each one you need a different kind of tooling to enable your team to work collaboratively. By utilizing a single platform for all these tools, you provide a single language for people from all different personas to work in, right? By using the Git management tool your developers work on, you enable your operators to speak the same language and to be working on a single pane of glass. And oftentimes this is what that single pane of glass looks like. This is the implementation of that GitOps workflow that we saw a little bit earlier. By utilizing this single, transparent, and unifying tool for creating and delivering applications and the platform they run on, you get a software factory that observes and defines adaptations that can be built and deployed automatically, by configuring a dynamic, scalable environment which incorporates each team's needs and allows for collaboration between developers and operators. This allows for the ability to implement changes based on real-world feedback. Although infrastructure as code is a large part of how to leverage this workflow, by creating the single pane of glass for developers and operators to work on, there are some tasks managed by the operations team that don't really belong in Git. GitLab, however, does integrate tightly with a lot of these platforms, like Kubernetes, to provide cluster management options.
And by utilizing the managed Kubernetes apps and the CI/CD pipelines, GitLab enables continuous deployment of applications, feature flags, certificate management, application monitoring, security testing, and integrated environments as well as dynamic environments, all sitting within the same interface as your version control, your merge requests, and your CI pipelines. So now what I'm going to do is quickly go over some of the managed apps that GitLab offers that allow you to integrate your application platform with your version control and your merge requests. The first managed app that GitLab offers is Prometheus. Prometheus is an open source monitoring and alerting toolkit that was originally built by SoundCloud. But since then it's been adopted by many, many companies and has a very active user community. And it's actually the second hosted project in the Cloud Native Computing Foundation, after Kubernetes. By utilizing this managed app, we enable automated collection and reporting of system metrics for each application that's deployed in the cluster, directly within GitLab, so that you have that context between your source code and your metrics. Context is also important when you're looking at logs. Logs are a universal tool used by operators and developers alike to diagnose, find, and fix errors in the platform as well as the applications. So GitLab implements Elasticsearch to collect, sort, and display the context for each of the different pods within your Kubernetes cluster, and groups them with the associated environment, which is also associated with the appropriate source code. Again, maintaining that log context so that you have all of the information that you need in the correct spot for your developers and your operators to work together to provide a cloud native solution. Then, when you get into tracing user sessions, we implement Jaeger. This is a distributed tracing system developed by Uber.
And it's actually the seventh project hosted by the Cloud Native Computing Foundation, joining Kubernetes and Prometheus, which we've already talked about. It's often used for things like monitoring and troubleshooting microservices-based distributed systems. Now, as far as managed apps go, Jaeger is still somewhat of a manual setup, so you can see these orange features are the ones that are still on our roadmap. But once you have Jaeger deployed into your cluster, GitLab then brings the tracing metrics back into GitLab so that you can display them with the context alongside your other metrics and your logs and your error tracking. Error tracking at GitLab is based on integration with Sentry, which aggregates errors. What GitLab does is aggregate the errors that get found by Sentry, surfacing them inside the GitLab UI and providing tools to triage and respond to any critical issues. GitLab really leverages Sentry's intelligence to provide pertinent information such as user impact or the commit that caused that bug. Throughout the triage process, users have the option of creating GitLab issues on critical errors to track the work required to fix them, all without leaving GitLab. Then, once you have these errors, you start to see incidents. And processing alerts during a firefight often requires responders to coordinate across multiple different tools and to evaluate different data sources. This is often time consuming, because every time a responder context switches to a new tool, they're confronted with a new interface and different interactions. It's often disorienting and slows down investigation, collaboration, and the sharing of findings with teammates. Actionable alerts and incidents accelerate the firefight by enabling efficient knowledge sharing, providing guidelines for resolution, and minimizing the number of tools you need to check before finding the problem.
The reality is that although some or all of your teams may adopt the DevOps or GitOps workflow, the tools, the processes, and the culture oftentimes optimize locally but still work sequentially, handing off work from one team to another, where you have different teams working in different tools. And this really prevents organizations from realizing the value of GitOps, where teams need to work together and have a single conversation from inception to production around smaller deliverables. What's really needed is a way to get teams to work together at the same time, to collaborate on a single pane of glass, to design, build, test, deliver, and monitor code changes. They need to work concurrently instead of sequentially on the product or service that they're going to be delivering to the customer. Full stack developers are responsible for front end and back end code, but they also need to be able to monitor and deploy the changes. Operators need to be able to monitor their operations, but they also need to enable the developers to work with them to get applications deployed quickly and continuously. And that's really what GitLab offers. GitLab drives these radically faster cycle times by helping GitOps and DevOps teams to achieve higher levels of efficiency across all stages of the lifecycle. What concurrent DevOps really makes possible is for the product, development, QA, security, and operations teams to work at the same time instead of waiting for handoffs between teams. You can work concurrently and review changes together before pushing any sort of configuration or infrastructure changes to production. And everyone can really contribute to a single conversation across every stage of the software development lifecycle.
Only GitLab really eliminates the need to manually configure and integrate multiple tools across each project, allowing teams to start work immediately and work concurrently to radically compress the cycle time across every stage of the lifecycle. So I hope that you have a bit more of an understanding of GitOps and where it fits into the lifecycle. We'll wrap up there fairly quickly. I believe that there have been some questions that have come through, so I'll just hand off to Tim to vocalise them and I'll put together some answers for them. Thanks for that, Rob. Yeah, we've had a couple of questions come through. First one is: how can I deploy on an AWS EC2 server when a private key is required to SSH into the server? No containers involved. Yeah, definitely. So the GitLab CI/CD jobs can utilise environment variables, so you can either set them directly within the CI/CD jobs themselves, or you can integrate with a third party secret provider such as Vault, or you can store it within GitLab itself. Wherever you'd like to get that secret from, you can store it wherever you like. We integrate with a few different third party secret providers. Cool, thanks for that. Another question from the attendees: when the amount of data being stored in Elasticsearch grows, there are problems with scaling up ES. Just curious to know if you folks have faced similar issues and if there are any specific ways you used to handle it. I haven't seen too many issues, but of course as any sort of data store grows, you're going to experience some issues like that. Archiving logs, moving them into secondary storage, cold storage, is often what we see customers doing with logs. So after you get past your one or two months, you start moving them out of the immediate GitLab storage, and then they're only available through more complicated access means.
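To sketch that first answer: assuming the private key has been stored as a file-type CI/CD variable named `SSH_PRIVATE_KEY` at the project level (the variable name, host, and deploy command below are all hypothetical), a non-container deploy job might look something like this:

```yaml
deploy_ec2:
  stage: deploy
  before_script:
    - eval $(ssh-agent -s)            # start an SSH agent for this job
    - chmod 600 "$SSH_PRIVATE_KEY"    # file-type variable: GitLab exposes it as a file path
    - ssh-add "$SSH_PRIVATE_KEY"      # load the key without ever writing it into the repo
  script:
    # hypothetical host and deploy command; no containers involved
    - ssh -o StrictHostKeyChecking=no deploy@ec2.example.com "cd /srv/app && ./deploy.sh"
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
```

The same shape works if the key comes from Vault or another secret provider instead; only the `before_script` lines that fetch the key would change.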
And that's oftentimes cheaper for customers as well, because they aren't paying for the hot storage that GitLab uses for the high frequency, highly read data. Great, thanks for that. Another question is: when it comes to deployments, one of the problems they have faced is with storing credentials securely. Do you have any tips to help secure this? Yeah, definitely. So in GitLab, you can protect variables so that they're only available on certain branches, your production or master branches, so that you can have separate variables for your master and production environments as well as the development environments, where it's not as secure. And GitLab actually offers the ability to mask those in logs, as well as, again going back to the third party secret providers, Vault from HashiCorp is one that we work with very closely. Vault is a program which offers secret management across the board and has very close integrations with GitLab, so that you can pull secrets directly out of Vault into all of your CI/CD jobs that run through GitLab. Great, thanks for that. What is the best practice or standard way to deploy a Kubernetes cluster using GitLab CI/CD pipelines? So, say you wanted to use Terraform: by defining your Terraform files, you can deploy it by writing Terraform code and then writing your GitLab CI/CD pipeline to validate, plan, and apply that within the cluster. Oftentimes, if you're using a public cloud, GitLab has very tight integrations, specifically with Google Cloud and Azure, but also AWS and other public clouds, to deploy a GitLab-managed Kubernetes instance directly from the GitLab user interface without the use of the CI/CD pipelines, and then you use the GitLab CI/CD pipelines to deploy all of your applications to that GitLab-managed Kubernetes cluster. And by utilizing a GitLab-managed cluster like that, you also get access to all of the GitLab managed apps that I was talking about.
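That validate/plan/apply Terraform pipeline can be sketched like this. It assumes the Terraform files live in the repository root and that cloud credentials are supplied as protected CI/CD variables; the manual gate on `apply` is one reasonable design choice, not the only one:

```yaml
image: hashicorp/terraform:light   # any image with the terraform binary on the PATH

stages:
  - validate
  - plan
  - apply

validate:
  stage: validate
  script:
    - terraform init
    - terraform validate            # catch syntax and config errors early

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=tfplan    # write the plan to a file
  artifacts:
    paths:
      - tfplan                      # hand the exact reviewed plan to the apply job

apply:
  stage: apply
  script:
    - terraform init
    - terraform apply tfplan        # apply exactly what was planned and reviewed
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: manual                  # human approval gate before touching infrastructure
```

Passing the saved `tfplan` artifact between stages guarantees that what gets applied is the plan that was reviewed, not a fresh plan computed against possibly drifted state.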
So that's the Prometheus and your Jaeger and Elasticsearch and all that sort of thing. Cool, thanks for that. Is it possible for public projects' pipelines to be hidden from users? So I believe that if you have access to a project, then the pipelines will be presented to you, right? Because that's part of the purpose, to create visibility. But if you're concerned about the security, then you can mask variables and set up the masking within there, as well as limiting the amount of logs that can be generated by the CI/CD jobs, so that you can prevent certain kinds of information being displayed within pipelines. Cool. Is it possible for the GitOps implementation concept to be on-prem with server and infrastructure? Yeah, definitely. So you might have to have your GitLab instance managed separately. But then after you have that, you can use GitOps to manage an on-premise server and infrastructure very easily, in the same way that you would use it to manage cloud-based infrastructure. The way that we manage the cloud-based infrastructure is by running the cloud-based commands; you can run similar commands for on-premise infrastructure. Does GitOps offer error logs from development environments or production environments? Yep. So GitLab integrates with Sentry to access some of the error logs. I had a slide on that earlier, and I believe these slides will be getting sent out after this, so there should be a bit more information in there about the specifics of the Sentry integration. But we do integrate there and utilize the intelligence to pull out a lot of the interesting information from those error logs and display them directly within GitLab for each environment. Cool. What kind of security tools do you support out of the box? DAST, SAST, mobile testing, what do we have out of the box? Yeah, definitely. So from an application security standpoint, we do have SAST and DAST, as well as container scanning, dependency scanning, and license compliance.
On top of that, we are branching out into infrastructure security scanning, so scanning for very specific languages such as Terraform. We do have SAST for Terraform, but that's the extent of it. Part of the advantage of GitOps, and part of GitLab, is the Auto DevOps experience, which enables DAST, which is a very big infrastructure problem in a lot of cases, so we implement DAST on every single code change that gets made. The Auto DevOps pipeline in a GitLab-managed Kubernetes instance can spin up environments for applications for every branch that gets made in a web application and run DAST on that application on every single commit, so that you're always up to date on the current DAST scanning results. Great. Next question. We've been using GitLab and GitOps, however, we're finding it difficult to track failed Kubernetes deployments. Any suggestions on how to manage or get better visibility? So I'm going to assume that that's a failed deployment from the managed GitLab state. A lot of this will have to do with the logging and the specific scenario. It's pretty hard to suggest something without knowing exactly how they're currently tracking and their level of logging and that sort of thing, but how I would go about it is to examine the logs, and that's really where you're going to find a lot of the information. Great. Thanks for that. Do people commonly use the Auto DevOps pipelines or do they roll their own? I'm thinking about container pipelines specifically. So we see a lot of customers using the Auto DevOps pipelines. The best part about the Auto DevOps pipelines is that they're templates, GitLab CI template files, and you can include any one of those jobs in your pipeline to mix and match with what you need for your environment. Specifically with container pipelines, the Auto Build functionality to build up these Docker images and automatically apply the security scanning is very, very valuable to a lot of our customers.
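That mix-and-match approach can be sketched with GitLab's `include:` keyword. The template paths below follow GitLab's bundled template naming, though the exact set available depends on your GitLab version, and the custom deploy job is a hypothetical placeholder:

```yaml
include:
  - template: Jobs/Build.gitlab-ci.yml              # Auto Build: containerize the app
  - template: Security/SAST.gitlab-ci.yml           # static analysis on every commit
  - template: Security/Container-Scanning.gitlab-ci.yml

stages:
  - build
  - test
  - deploy

# Your own jobs can sit alongside the included template jobs,
# so you take Auto Build and scanning but keep a custom deployment.
deploy_custom:
  stage: deploy
  script:
    - ./scripts/deploy.sh    # hypothetical custom deployment step
```

This is the pattern described above: customers who don't want the whole Auto DevOps pipeline include only the template jobs they need and write the rest themselves.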
And even if they're not using the entire Auto DevOps pipeline, we'll see them using specific jobs, like the environment spin-up jobs to run DAST, or the automatic build jobs to build their projects, before they go on and do their own custom pipelines. Okay. Next question. Does GitLab provide any tools to review the actual changes in the infrastructure that are being made in a particular change set? Speaking of Terraform, you keep Terraform sources in Git, but what about the actual execution plan that is going to be executed when a feature is merged? Well, this comes back to the dynamic environments. So when you open a merge request, you can utilize a CI/CD pipeline on the branch that's related to that merge request and create a dynamic environment that gives an idea of what that infrastructure is going to look like. By using Terraform, you can deploy these test environments within your infrastructure and have a review of them, so that you can have people going forward and looking at and comparing the changes and commenting on them by looking at this dynamically generated test environment. Okay. Yeah. The next question is: does GitLab provide a way to avoid hard coding secrets? Let's say, is there a solution that supports retrieving secrets from SSM Parameter Store or something from AWS? Yeah, definitely. So, again, this goes to the secret management that we were talking about earlier. We integrate closely with Vault from HashiCorp, and for storing in AWS, we can run AWS commands from CI/CD jobs at any given time. As well, you can store the secrets as CI/CD variables at the GitLab project level, so that you're not hard coding those secrets into your code, but you're writing a variable in the project that then gets loaded into every CI job, so that you have the secrets loaded that way. Another question is: we've been seeing some awesome security tooling around GitLab.
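The dynamic-environment piece of that answer follows GitLab's review-app pattern with the `environment:` keyword. The deploy and teardown scripts below are hypothetical; `review/$CI_COMMIT_REF_SLUG` and `on_stop` are the standard per-branch environment idioms:

```yaml
deploy_review:
  stage: deploy
  script:
    - ./scripts/terraform-deploy-review.sh     # hypothetical: applies a per-branch plan
  environment:
    name: review/$CI_COMMIT_REF_SLUG           # one named environment per branch
    url: https://$CI_COMMIT_REF_SLUG.review.example.com   # hypothetical review URL
    on_stop: stop_review                       # ties teardown to this environment
  rules:
    - if: '$CI_MERGE_REQUEST_ID'               # only run on merge request pipelines

stop_review:
  stage: deploy
  script:
    - ./scripts/terraform-destroy-review.sh    # hypothetical teardown of the test infra
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: '$CI_MERGE_REQUEST_ID'
      when: manual                             # reviewers tear down when done
```

The environment's URL then appears right in the merge request, which is what lets reviewers inspect the dynamically generated test environment as part of the change workflow.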
On similar lines, are there any plans to have IaC vetting tooling, something like SAST for IaC, on similar lines to the config validator from Google's Forseti? Yeah, definitely. So SAST for IaC is definitely on the roadmap. I'm following the issue at the moment for a couple of my customers who are looking for Terraform SAST scanning, and it's definitely on the roadmap. It's coming up very, very shortly, most likely. Great, that sounds good. Another question is: I have the impression that GitLab develops services which are already available from cloud providers. Isn't it risky to dedicate resources in that space instead of filling the gaps? I assume that if you use GitLab, it would anyway not be possible to set up everything, and you can easily move from one cloud provider to another. So people may prefer to stick to the tools of the cloud provider anyway. What do you think? Yeah, definitely. So what GitLab provides is a way to integrate a lot of open source tools into your pipeline, integrating with the cloud providers' options that are already there. So rather than offering things that some cloud providers offer, we're integrating with those cloud providers' offerings. For Kubernetes, GitLab doesn't have a cloud Kubernetes offering of its own, but we integrate with the cloud Kubernetes offerings. So a lot of the services that we have aren't actually available from the cloud provider; instead we're offering these DevOps solutions that go along with your cloud infrastructure. Cool. For on-prem CD deployment, would you recommend Ansible slash Chef or some other system? Yeah, Ansible and Chef are both my top two, actually. So you've definitely hit the nail on the head. You're on the right track. GitLab as well has a bit of functionality around the Ansible and Chef movement, and that's definitely where we're looking in the future for our configuration roadmap. So that's where we'll also be seeing some movement in the future. That's great.
If configurations are maintained in a file rather than in env variables and the file is ignored in Git, can you suggest the best practice for this scenario? So, if it's maintained in a file, you should still store it in Git. If you're maintaining configurations in files rather than environment variables, the best practice really is to store them in Git. That's really the whole point. You've got your GitOps, and the advantage there is that you have the source control. You can see the history and when configuration changes. You can see who's approved those changes, so that you get the traceability, so that you get this audit trail as well as the version history, so that if something does go wrong with your configuration, it's easy to revert back and you have that safety net. Great. Next question. How can we design upstream and downstream pipelines in a better way? He's going to create a project, add the upstream as a dependency, and define the downstream as YAML. He feels that it's not fulfilling the purpose of the design of the data ops feature. He wants to design it to run some projects in a sequential order. How can he do that? From that, what I'm taking is that you have one project where you want to run your CI pipelines, and then based off that you can have another one running other CI pipelines. The CI pipelines really fulfill that push architecture, where you have your source of truth in your Git repository and then the CI pipelines take that source of truth and apply it into the environments. So what GitLab offers as well is multi-project pipelines. If you have these multiple projects where you need to have one pipeline run and then after that you have another one, you can do that from within GitLab by utilizing multi-project pipelines, and trigger another pipeline to run on the finish of the first. So you can get some pretty complicated pipelines in GitLab now, with directed acyclic graphs. It's pretty funky, the level of customization that you can do in the pipeline part.
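A minimal sketch of that multi-project setup, using GitLab's `trigger:` keyword in the upstream project's `.gitlab-ci.yml`. The downstream project path and build script are hypothetical:

```yaml
stages:
  - build
  - downstream

build:
  stage: build
  script:
    - ./build.sh                             # hypothetical upstream work

trigger_downstream:
  stage: downstream
  trigger:
    project: my-group/downstream-project     # hypothetical downstream project path
    branch: master                           # which downstream branch to run
    strategy: depend                         # upstream job mirrors the downstream result
```

With `strategy: depend`, the upstream pipeline waits for the triggered pipeline and fails if it fails, which gives you the strict sequential ordering the question was asking about.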
Yeah, it's quite endless; there's a lot you can do. So coming up to the last few questions: if an organization isn't yet cloud native, is it recommended to follow the ecosystem or stack that GitLab supports, cloud native or on-prem? Is it easy to port? So moving from non-cloud-native to cloud native isn't an easy transformation for any business, but the ecosystem or stack that GitLab provides works just as well for waterfall-based or earlier monolithic applications. To implement GitOps you need source code management, you need your merge requests, and you need your CI/CD. So even if you're not in a cloud native environment, you can implement these DevOps principles to gain the benefit of compressed cycle time without implementing things like microservices and cloud native infrastructure. You can still gain the advantage by managing everything as code, getting the traceability and auditability from your merge requests and the automation from your CI/CD, even without going into these complicated cloud native application infrastructure discussions. Great. Can we simply use scripts in the CI/CD pipelines to run Kubernetes commands to deploy to the cluster? Is that recommended? Yeah, effectively. How I like to think of each CI/CD pipeline is as a set of scripts: each job basically acts like a script, executing a bunch of commands on the machine that it's running on. So based on that, you can run your Kubernetes commands, your AWS commands, your Google Cloud commands, your Bash commands. There's really a lot of flexibility in how you want to execute these things. But to get back to the question, it's definitely recommended. I like to think of each individual CI job as a script, so you can definitely execute scripts from within there. Great. 
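As a minimal sketch of that idea (the image, manifest directory, and deployment name here are assumptions for illustration, not from the talk), a CI job that deploys to a cluster by running Kubernetes commands might look like this:

```yaml
# Illustrative deploy job (sketch; the image tag, k8s/ manifest
# directory, and deployment name are hypothetical placeholders)
deploy-to-cluster:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # Apply the manifests stored in the repository, GitOps-style:
    # the Git repo is the source of truth being pushed to the cluster
    - kubectl apply -f k8s/
    # Wait for the rollout to complete so the job fails on errors
    - kubectl rollout status deployment/my-app
  environment:
    name: production
```

The job is just a script, as described above, so the same pattern works for AWS, Google Cloud, or plain Bash commands.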
Does GitLab provide analytics where we can see metrics like productivity? Yeah, definitely. So within GitLab issues, you get milestones and burndown charts; those are some very basic ones that also come with other products like Jira. As well, you get time tracking on individual issues, and you can see contributions from the different developers as they contribute to different issues, different epics, and that sort of thing. So by utilizing the single tool, we actually feed a lot of the productivity metrics directly back into GitLab. Great, that's awesome. Next, a fairly basic question: GitLab has features similar to Jenkins pipelines, and right now they're using two different tools for deployment. Does GitLab support other software stacks, for example .NET Core, for deployment? Yeah, definitely. You can deploy any sort of code using GitLab CI. GitLab CI and Jenkins are competitors, and Gartner has been rating GitLab slightly above, so I like to keep quoting that one whenever someone starts talking about these Jenkins features. And yes, we do integrate with lots of different languages, .NET Core definitely included, so you can deploy all those different languages to various machines in different ways through GitLab CI/CD. Great. Next question: is there a way to prevent previously executed CI pipelines from rerunning, to avoid undoing or changing the existing infrastructure? That's interesting. So previously executed pipelines shouldn't run again on their own. How you would set it up is with individual users having the ability to execute pipelines, and those would be the users that you're allowing to enact changes within your infrastructure. So if they're the only people who can run them, then we still want to be able to rerun pipelines in case we need to revert something, right? Say that you deploy a change; you would then need a way to undo that change. 
So as long as you correctly restrict who can execute those pipelines, the ability to rerun them and undo a change is actually a required feature from an auditability perspective. Great. Last couple of questions for you, Rob. What level of support is there for mono repos, particularly with respect to Auto DevOps? By mono repo, they mean multiple related project components or images in the same repo that need to be deployed to the same cluster, not different projects. So with respect to Auto DevOps, it does start getting a bit harder when you have multiple different project components or images all in the same repository, in the same source code. In that case, what we often see is our customers utilizing GitLab CI but building their own custom CI scripts. The Auto DevOps experience also requires either a single image, or a single language with a buildpack associated with it. So if you can design a Heroku buildpack that will work with your application, then GitLab can by all means utilize that for Auto DevOps. But out of the box, these mono repos, these projects with many images or components within them, are generally a bit harder to support with Auto DevOps. Great. And the last question before we wrap up: how does GitLab support monitoring and alerts, for example with external targets like Opsgenie? So I don't know Opsgenie specifically, but in terms of monitoring and alerting, we went over it a little bit in the talk, right? We were talking about Prometheus monitoring the metrics, we've got Jaeger tracing the networking, and we've got Sentry managing error tracking, that sort of thing. So there are lots of different monitoring options within GitLab that feed all the way back into the GitLab user interface. But at the same time, GitLab doesn't stop you from exporting your monitoring to external targets as well. 
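As a rough sketch of the custom-CI approach for a mono repo mentioned above (the `frontend/` and `backend/` directories and image names are hypothetical), `rules: changes:` can be used so each component is only built when its own files change:

```yaml
# Illustrative mono-repo pipeline (sketch; the frontend/ and
# backend/ component directories are made-up placeholders)
stages:
  - build

build-frontend:
  stage: build
  script:
    - docker build -t registry.example.com/frontend:latest frontend/
  rules:
    # Only run this job when files under frontend/ change
    - changes:
        - frontend/**/*

build-backend:
  stage: build
  script:
    - docker build -t registry.example.com/backend:latest backend/
  rules:
    # Only run this job when files under backend/ change
    - changes:
        - backend/**/*
```

This is the kind of hand-written GitLab CI script that customers tend to build where Auto DevOps's single-image assumption doesn't fit.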
So with your cluster, even if it's a GitLab-managed cluster, you could still deploy Opsgenie within there to run the monitoring and alerting in addition to having GitLab manage the state. Great. Rob, thank you so much for the presentation and for answering all the questions on the spot as well. Once again, thanks everyone for tuning in today; we hope you found it valuable. A reminder that the presentation will be shared with you in an email after the webinar in the coming days. And I also just wanted to mention that we are running our APAC GitLab Connect event from the 28th of September to the 2nd of October, so please feel free to register and learn even more about cloud native. So thank you again, and hope to see you soon.