Thank you, Andy, for submitting the first question. So Andy asks, can I use this solution for multiple projects? So William, perhaps you want to jump in on that. Yes, the answer is absolutely. So there are several ways you could use multiple projects. Within GitLab, there is a construct called a project. And essentially, a project is one repository, one issue tracker. And so I'm not quite sure if you mean multiple projects in the GitLab sense, in the sense that you have multiple repos, or if you just mean, I'm working on a lot of different things and I want to manage them all in GitLab. The answer to "I'm working on a lot of different things and want to use them all in GitLab" is absolutely. But for multiple repos, the answer is yes as well. Let me go ahead and share my screen for a moment. Somebody who does this really well is the CNCF. The Cloud Native Computing Foundation is underneath the Linux Foundation; they're an open source foundation that has oversight over a lot of open source projects, including Kubernetes. And so you can see here is a dashboard of the CNCF CI working group, where they do cross-project, cross-cloud testing on a lot of different projects: Kubernetes, Prometheus, CoreDNS, Fluentd, Envoy, et cetera. This working group tests all of these projects, which are all in different code bases, and they want to make sure that they can deploy to AWS, Azure, GCP, bare metal, et cetera. And so this pipeline graph is showing you these cross-project, cross-cloud tests, and they are actually using GitLab to do so. This is the CNCF; they have their own instance of GitLab. And you can see this looks very similar to the demo screen, where this is a job that has been triggered here. So I can see, for example, here's one that succeeded, here's one that failed. And I can actually go in and look at the log from that CI test and figure out what it was that actually failed.
And so GitLab has some really powerful capabilities that allow you to run pipelines across multiple projects. This works really well for a use case like this with the CNCF, where you're essentially saying, I want to integrate multiple projects and see how they all perform together. Or, for example, if you have a microservices architecture where you have multiple microservices and you are running a pipeline per microservice, but then of course you want to have an overarching pipeline that can then kick off or trigger other pipelines. All of that is possible within GitLab. That's a bit of advanced functionality, and as you can see, folks like the CNCF have built some of that functionality. And if you're really interested in digging deeper into that, what I'd recommend is dropping a note in the chat, or following up after we send out an email with the video, and saying, hey, I would love to schedule a time to dig in and do a deeper demo on some specific functionality, so we can understand what your specific use case is. Looks like we have a lot of questions coming in. Thank you. So yeah, you have Python, C#, et cetera. Absolutely, Andy. So if you have one project that's in Python, you could even run, say, Auto DevOps, and it's going to detect that it's Python. Today... I have to think for a moment. For language detection, Auto DevOps uses Heroku buildpacks, and we support a certain set of languages. I'm seeing if I can pull it up here quickly in the docs. I'm not finding the link immediately that shows... oh, here we go, currently supported languages. The idea is that today we support Ruby, Node, Clojure, Python, Java, Scala, and PHP. Unfortunately, it looks like C# is not currently on the list, but we do have the ability to use a custom buildpack.
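To make that multi-project idea concrete, here is a minimal sketch of what an overarching .gitlab-ci.yml that triggers downstream project pipelines can look like; the group and project paths are made up for illustration:

```yaml
# Hypothetical "umbrella" pipeline that fans out to two other projects.
stages:
  - triggers

trigger_identity_service:
  stage: triggers
  trigger:
    project: my-group/identity-service   # example downstream project path
    branch: master

trigger_inventory_service:
  stage: triggers
  trigger:
    project: my-group/inventory-service  # example downstream project path
    strategy: depend                     # mirror the downstream pipeline's status
```

Each `trigger` job kicks off a full pipeline in the named project, so the umbrella pipeline can integrate results across repos, along the lines of what the CNCF setup does.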
So for example, if you wanted to have the Auto DevOps pipeline run on, say, a C# project: something that's really nice about Auto DevOps is that it's built in a modular fashion. The idea is, if there's no Dockerfile inside of your repository, it'll just go ahead and create one for you, but if you have a Dockerfile in your repo, then Auto DevOps is going to use that. Similarly, if there's no GitLab CI YAML, Auto DevOps creates the pipelines for you; if you have a repository with a GitLab CI YAML, it's going to use that GitLab CI YAML. So a lot of this is really composable and flexible. There is the click-button happy path in several supported languages, but if there's anything unsupported, if you want to tweak a little bit, you want to add your own Dockerfile, you want to modify the CI YAML and extend it, you want to add a custom Heroku buildpack, all of that is part of the system. And we have some pretty robust documentation that will walk you through how to do, for example, custom buildpacks and those sorts of things. So the next question says, how can we choose the coverage of these SAST scans? And Frederick, I am going to admit my ignorance here for a moment and say that some of these, like SANS 25, et cetera, I'm not familiar with those acronyms. What we have with GitLab is various built-in functionality that scans for you. Our built-in static application security testing uses certain frameworks as scan tools. So for example, if I have a Ruby on Rails app, it's going to use Brakeman; if I have JavaScript, it's going to use this ESLint security plugin. And you can see the links here. Normally, if you wanted all of the security scanning, you would have to go and integrate, troubleshoot, and keep these tools up to date, et cetera.
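As a sketch of that custom-buildpack escape hatch: Auto DevOps reads a BUILDPACK_URL variable, so a .gitlab-ci.yml along these lines could point it at a community C# buildpack. The buildpack repository URL below is a placeholder, not a recommendation:

```yaml
# Sketch: run the stock Auto DevOps pipeline, but override language
# detection with a custom Heroku buildpack via BUILDPACK_URL.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  BUILDPACK_URL: "https://github.com/example/heroku-buildpack-csharp.git"  # placeholder URL
```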
With GitLab SAST, it automatically does all of this for you. However, GitLab CI is completely composable. So for example, let me show you the GitLab CI YAML. This is what Auto DevOps is running; this is the file. And you can see it defines a certain number of stages here. This is the build stage. And in fact, I can even go in the file to the SAST job, and this is what the job is executing. So for example, if our SAST scanning doesn't cover something that you want, if you wanted, say, PCI DSS scanning, you can actually add it as a job to the GitLab CI YAML, and then it'll run that job every time you run your pipeline. So this is completely composable and completely upgradable, and it allows you to add multiple jobs for other types of testing. We'll start... Yeah, sorry, not to interrupt, but I know we're at the top of the hour, and there are some really good questions. So if you don't mind, and if folks have time, I think we can probably run a little bit over, maybe like 10 minutes. Okay. And then whatever questions we can't answer, we will get back to you, and we will send the recording for all the questions as well. Yeah, let me do this: I'll run through some more of the questions in the Q&A and chat for as long as I can, and then afterwards, even if folks can't stick around, we'll email you the recording of the Q&A so you can catch up with it as well. These are really good questions. So Brian is asking, my primary role is development, and I have heard of Kubernetes, but I'm not super familiar with it. So is the Kubernetes setup relevant to GitLab hosted services or only to the self-hosted version of GitLab? This is a terrific question, Brian. Kubernetes is relevant to both, whether you're using the self-managed, hosted-yourself version of GitLab or whether you're using gitlab.com. Both work with Kubernetes. At a super high level, you can think of Kubernetes as a hardware or infrastructure abstraction layer.
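For instance, a sketch of adding your own scanner alongside GitLab's SAST might look like the following; the extra job, its image, and its script are hypothetical placeholders for whatever compliance tool you need:

```yaml
# Sketch: keep GitLab's built-in SAST, then append a custom scanning job.
include:
  - template: SAST.gitlab-ci.yml

stages:
  - test

my_compliance_scan:
  stage: test
  image: alpine:latest          # stand-in image; use your scanner's image
  script:
    - echo "invoke your additional compliance scanner here"
  allow_failure: true           # report findings without blocking the pipeline
```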
So if you want to work with containers, for example, if you're using Docker and you put your workload in a container, this is a really nice thing because now your app is in a container, and that container is immutable. You can test it, you can deploy it in a repeatable way, and it allows you to scale your application because you can make duplicates of that image. But the problem is you need some kind of software that's going to schedule those containers and orchestrate them. And that's what Kubernetes does. So if you're not super familiar with Kubernetes, we have a lot of info on our website, for example, about.gitlab.com slash Kubernetes. This is a great resource to go and check out to learn a little bit more about what Kubernetes is and some of the related functionality in GitLab. The nutshell of it is that it's an infrastructure abstraction that allows you to really, really scale your applications. And if you're doing anything at all with containers, it's an absolute must-have. And so GitLab incorporates it both into its self-managed version and its GitLab.com SaaS service. Really good question there. The question from Nathan is, how does Auto DevOps work? I noticed in the video that there was no Docker Compose file, so I'm just trying to work out how it fits. Yeah, excellent question, Nathan. As I mentioned just a little bit earlier, the way that Auto DevOps works is that it actually runs a GitLab CI YAML file. The way that GitLab defines its pipelines is in this YAML file. It's a text-based file, so you can version it. This way, if you make changes to your pipeline, you can go back and see the versions, because it's all in Git itself. This becomes really nice because if you create tests in here, developers that are working in the repository can go and edit that file self-service, by adding tests or modifying flaky tests, et cetera.
So we found that this text-based approach to defining and building pipelines is really helpful to many folks. The way Auto DevOps works is it actually runs this text file in the background. So for example, here's a repo where I have Auto DevOps enabled and there's no CI YAML file. What it does is it goes and gets this template and just runs that template. But as I mentioned earlier, if I'm in this repo here and I want to, say, add a new file to the repo, I could add a CI YAML file, and there are a bunch of templates for a lot of different languages like Docker or Go, with some sample tests in there, and you can even see the Auto DevOps template itself. So this template is what GitLab is running, and you can actually dig into this file. For example, if we look at the build stage, what is essentially running here is: we have a build stage. It's using this docker:stable-git image that we have, pulling it out of the registry. It's going to call some services; think of services as, I need something like a database or an ancillary component that I don't want to run within the same container. That's what a service is. And then it's going to run this script. This script is actually in the file; it's a setup Docker script, and you can see it's called many times. And then here is where we're looking for some Docker info and a Kubernetes port. Then we say this is the setup file, and then we run the build script. And if I look at this actual build script, here are the commands that it runs. So you can see, here's the actual scripting. The way that GitLab Runner works is it has a shell-based executor, so anything that you can write in bash, you can execute from GitLab Runner. And the way this CI job runs is it says, hey, go run the build function, and here are all the Docker commands that you would normally script, but they're repeatable.
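A simplified sketch of that build-job anatomy, image plus services plus script, much abbreviated from the real Auto DevOps template:

```yaml
# Simplified sketch of an Auto-DevOps-style build job: a Docker image to run
# in, a Docker-in-Docker service container, and a bash script to execute.
build:
  stage: build
  image: docker:stable
  services:
    - docker:dind   # ancillary container, analogous to attaching a database service
  script:
    - docker info   # sanity-check that the Docker daemon is reachable
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```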
And you can actually examine how Auto DevOps does it in detail; that template's available there. Let's see here. I have a C# app that needs a licensing DLL to be added as part of the build. Can I do this as part of the cloud deploy, or do I need a custom build? So the way that you could do that is you can simply add a job to your CI YAML file. And if you look at, say, the GitLab CI YAML docs here, there should be a configuration reference. So again, I think our docs are pretty darn great. They can always be better, and quite frankly, there is an "edit this page" link at the bottom, so if you ever notice something missing in our docs, please submit a merge request and become a contributor to GitLab. But the idea here is, here's some very robust documentation on all of the language and how to create a CI YAML file. And that's all completely composable. If you wanted to add a specific type of check or a specific type of job, or let's say within your build script you wanted to modify it to do certain things or to add certain things to your Docker image, all of that's completely composable. So GitLab is an extremely flexible, really powerful CI tool. You can see that there are a lot of primitives here that let you do everything from caching, so that when you run pipelines a lot you have artifacts cached, to a built-in artifact repository that allows you to pass things between pipeline stages or between jobs and to download those artifacts after a build is complete. There's a lot of power here that I don't have time to go into now, but definitely check out the documentation. And if it's compelling and you want to dig in a bit deeper for your specific use case, we can set up a call with our team as a follow-up to the email that we'll send out. Does it work with other tracking systems, like JIRA? That is an excellent question. It does, in fact. GitLab has a built-in issue tracker that we believe is pretty darn powerful.
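Those caching and artifact primitives look roughly like this in a .gitlab-ci.yml; the Node.js paths here are just example choices:

```yaml
# Sketch of the cache and artifacts keywords mentioned above.
test:
  stage: test
  image: node:lts
  cache:
    key: "$CI_COMMIT_REF_SLUG"   # one cache per branch
    paths:
      - node_modules/            # reused across repeated pipeline runs
  script:
    - npm ci
    - npm test
  artifacts:
    paths:
      - coverage/                # passed to later jobs and downloadable afterwards
    expire_in: 1 week
```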
It comes with an issue tracker and issue boards. This is a solution that could completely replace JIRA for you, and many of our customers do, in fact, use GitLab issues alone and have completely replaced JIRA. However, if your organization already has a lot in JIRA and you want to keep using that tool, but you want to use GitLab as your source code repository and as your CI/CD tool, you can absolutely do that, and we integrate with several tools. For example, we have project services; you can see there are several of them, from Asana to Drone CI. We integrate with GitHub, so you could use GitHub as your repository and use GitLab as your CI/CD tool. And JIRA is, in fact, one of our integrations. So we do integrate with JIRA, and you can use GitLab and JIRA together. Let's see. With the review feature running code on GitLab before release, how does this work with internal apps? So I'm not quite sure if I know what all of the question is asking, and Nathan, if you're still on, feel free to clarify as I'm talking. What I think you're asking here is: when I create a merge request, it creates a review app, and that review app... let me see if I have an MR handy that has a review app on it. So that review app is running inside of an environment. For example, let me see what I have here within my environments tab. Essentially, you can have any number of environments you want to define, and you define in the CI YAML file what you want to deploy to. So for example, here in the YAML file I can have a production stage that deploys to a production environment; here I've set the environment to be production. I could also set the environment to be, for example, staging or canary.
And the idea is, for all of those environments, GitLab's extremely flexible; you can use this together with tools like Chef or Terraform or Puppet. If your infrastructure is bare metal, or your infrastructure is made up of VMs on, let's say, AWS, you can set those deploy targets: you can create an environment in GitLab and say this environment consists of this VM or this cluster of VMs, or I'm using Terraform to manage that, and you can call the Terraform scripts from the CI YAML; that's one way to do it. Or if you're using Kubernetes, then GitLab just integrates with Kubernetes, and you can set a cluster as a deploy target and it will spin up environments for you. That is in fact what this is: here I have a production environment, that's what's running in production, but this review app environment was created by GitLab on the fly for that merge request. When that merge request came in, it created an environment, which was essentially a namespace within the cluster that says, just run this app on my cluster. And so this is as production-like as possible. And then it exposes that via DNS. I think what you're asking about internal apps is something that you would want to be private, something that you would want to remain within your firewall, where you wouldn't want it exposed to the public internet. And the answer to that is, when you are configuring Auto DevOps in general, or just GitLab review apps, you can set the DNS. What I'm looking for here is settings; go to settings here, and then I need to go to CI/CD. And what you can see is, for Auto DevOps, I removed my cluster integration from this project, so it doesn't have what I wanted to show. But the idea here is, when you set up Auto DevOps or when you set up your review app, you create a DNS entry and you say, use this DNS entry as the way that I spin up a review app. And so for example, if that's a public domain name, then of course it can be publicly accessible.
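As a sketch, a dynamic review-app environment like the one being described can be declared in the CI YAML; the deploy scripts and the internal domain here are hypothetical:

```yaml
# Sketch: spin up a per-branch review app and tear it down manually.
review_app:
  stage: deploy
  script:
    - ./deploy-review.sh                  # hypothetical deploy script
  environment:
    name: review/$CI_COMMIT_REF_NAME      # one environment per branch
    url: https://$CI_COMMIT_REF_SLUG.review.example.internal  # internal-only DNS name
    on_stop: stop_review_app
  only:
    - branches
  except:
    - master

stop_review_app:
  stage: deploy
  script:
    - ./teardown-review.sh                # hypothetical teardown script
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
```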
But you can absolutely make that an internal name that only your internal domain name system resolves. So if I'm outside the firewall and I try to go to that domain, it doesn't exist; but if I'm inside the firewall, the internal DNS will resolve that URL and I can get to that review app. So hopefully that answers the question about using review apps inside of a firewall or inside of a private network. Really good question there. So there's one more, I think, before we cut it off here, while we still have time on the recording: is it a good practice to create a new branch to modify the CI YAML and then merge it into the master branch? And that is a really good question. And the answer is absolutely; in fact, I don't know if there's another way to do it. I guess there is another way, which is that you could commit directly to master, but I highly recommend against that. Just in general, I think most folks, unless it's a very, very small project, unless it's just you by yourself, you really never want to commit to master. So in general, for any type of code change, best practice is to create a branch for that change. You can go and modify in that branch, and you can even then test out what that pipeline is going to look like within that branch. And you can even do some other more sophisticated things. For example, if you wanted to test out the production deploy, you could create another environment to go and test your pipelines. But yeah, that's a really good question. I definitely recommend creating branches for any type of feature change. And we actually have a page called GitLab Flow, and it's a good one; let me pull up that page. This documentation on GitLab Flow, I think, is pretty helpful. There are a lot of different ways to use Git; it is a super flexible tool. Some workflows, like GitFlow, are pretty complex.
And so GitHub has a very simple flow, and GitLab Flow is pretty similar, like a simplified version of that. So here's a page in our documentation that talks about when to branch and when to merge. And what's generally true for code is also going to be good if you're testing CI YAML. I would even go a step further: say you are testing a production pipeline, because in your YAML file you can have a review pipeline and a production pipeline, and maybe they have some differences for whatever reason. You could then create a separate environment, like a staging environment, to go and test that pipeline before it goes live. There are a lot of different strategies for it, but in general, branching: yes. So with that, Agnes, I think we got through many of the questions; really awesome questions. Thank you everyone for attending and hanging out. We'd love to keep the conversation going with everyone as we continue. Is there anything else, Agnes, from your end? If not, I'll let you wrap it up. Yeah, there are a couple more questions that came through the chat that are really good, but we can definitely email the answers to those questions back to you. So thank you so much. Basically, I will wrap up right now. So again, thank you for the questions, and William for answering all the questions. This demo and live Q&A session is something new that we're trying, so we hope to hear what your thoughts are on today's session and would really appreciate your responses to our survey, which I'll drop in the chat. So Agnes, I was pulling the questions out of the Q&A; if we can keep the recording going, I think I can answer a few of the questions in chat. Will that work for you? Yeah, that works. Okay, for sure. So there are a couple of really good pricing questions. The first asks, are the features shown here available at all pricing levels? And the answer is no.
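One way to sketch that "test the pipeline on a branch first" idea in a single CI YAML, with the deploy and test scripts as hypothetical placeholders:

```yaml
# Sketch: the same pipeline file behaves differently per branch, so CI
# changes can be exercised on a feature branch before merging to master.
test:
  stage: test
  script:
    - ./run-tests.sh            # hypothetical test script; runs on every branch

deploy_staging:
  stage: deploy
  script:
    - ./deploy.sh staging       # hypothetical; exercises the deploy path from branches
  environment:
    name: staging
  except:
    - master

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh production    # hypothetical; only runs once merged
  environment:
    name: production
  only:
    - master
```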
We are showing features here across all of our different pricing levels, but we have really good documentation, I think, on which features belong in which level. So you can see here, for our SaaS and also for our self-hosted offering, the pricing is pretty simplified; the tiers and pricing are the same for each. For example, our Gold tier for SaaS is the same as Ultimate for self-managed. And you can go here to see all the features. Here's a comparison chart: for example, something like issue board configuration does not exist in our Core product, our open source product, but it does exist at the Starter tier. Or, for example, something like milestone lists or Service Desk is not at our Starter level; you need to upgrade to the Premium level to get that. And something like epics, for example, promoting issues to epics and using epics, that is a feature in our Ultimate level. So right here on our pricing page, you can just go to pricing and then "see all features," and you can see a comparison of which features are in which plan. Are you able to build custom steps in the pipeline? Absolutely. I have shown that a couple of times; it's powerful and we like that. If I have separate OpenShift and AWS Kubernetes environments for different projects, can I manage the deployment pipeline in GitLab for these environments? And the answer, Kurt, is yes. This is something I love about GitLab: it is 100% multi-cloud. Our Kubernetes integration allows you to add a Kubernetes cluster regardless of where it is. That can be a cluster that's literally Minikube on your laptop; it can be an OpenShift or GKE cluster; it can be an AWS cluster on EC2 instances set up with kops. Whatever it is, you can add all of those clusters and manage them from your pipeline. And you can actually set some targeting.
So I think for the demo, Dan showed something like an asterisk, which means every deployment uses one cluster, but you can actually specify with a regex which cluster to use for which type of environment. It's pretty powerful functionality. So GitLab is fully multi-cloud and does support your Kubernetes cluster wherever you want to deploy it. Another question, from Noah, is: can access to the production environment be locked down to a subset of users? Absolutely. There's a pretty nice permission model within GitLab, and you can even set manual gates. So we've shown the very mature continuous deployment case: everything I ship automatically gets tested and automatically goes to production. But we know for a lot of enterprise environments, you want some manual gates in there. So you can add those manual gates, and you can even lock down who has access to which environment. For example, you might give your developers access to the dev environment and your staging environment, but only your SRE team or your ops team has access to push code to production. So you can lock that down. Looks like Kurt is saying that he has a number of business units with numerous projects in each. Can I allocate permissions for a business unit to create and manage its own repos, including the pipelines and deployments, and decide on permissions for other business units to interact with them? So this is a pretty complicated use case. I'm going to give that one a tentative yes: I believe the permission model exists within GitLab to do what you want. I would highly recommend, for that type of question, engaging with our sales team so that we can sit down and really understand your infrastructure and what you want to accomplish. But at a high level, GitLab offers a concept of groups. So for example, let's say you have an engineering group that's all of engineering.
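A minimal sketch of such a manual gate in the CI YAML; who may actually press the button is then restricted separately, for example via protected branches or environment permissions in GitLab's settings, and the deploy script is a placeholder:

```yaml
# Sketch: the production deploy waits for a human to click "play" in the UI.
deploy_production:
  stage: deploy
  script:
    - ./deploy.sh production   # hypothetical deploy script
  environment:
    name: production
  when: manual                 # the manual gate
  only:
    - master
```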
You could have a dev group and an ops group. Within the dev group, maybe you have one service that is your identity service, another that is, let's say, your customization module, and another that is maybe your inventory service. And each of those services lives in a different repo owned by a different team. So that team would have the inventory service group or the identity service group, and anybody in that group would have broader permission sets for that repository, whereas folks in the broader engineering group might have a different, more limited set of permissions. So there are some pretty sophisticated things you can do with GitLab groups, and our permission model allows you to manage a lot of those things; to make sure that we cover all the edge cases there, we should engage in a deeper conversation. All that's to say: Agnes put the URL here, so please do go fill out our survey if you enjoyed this. I really enjoyed all of your questions, and I think, Agnes, we can now wrap it up for real. Yeah, great questions, everybody. Thank you so much. What a great session today. So before I wrap up, I just wanted to invite everyone to sign up for a free trial of GitLab Ultimate; I'll chat that link as well. And finally, if you have any other questions, don't hesitate to reach us via our sales contact page, as William mentioned earlier, at about.gitlab.com slash sales. That's all for today. Thank you so much for joining us.