Welcome to the scalable app deployment webcast. I'm Suri and I work on the content team here at GitLab. We're thrilled you're joining us to learn more about the GitLab and Google Kubernetes Engine integration. If you have questions during the presentation, please use the Q&A function at the bottom of your screen. We'll also dedicate some time to answer your questions at the end of the presentation. If you have any technical difficulties, please use the chat function and I will do my best to help you. Today, William Chia, Senior Product Marketing Manager at GitLab, and William Denniss, Product Manager at Google, will present. I know you're all excited to learn more about scalable app deployment, so I won't keep you waiting any longer. Over to you, William Denniss. Hi, everyone. I'm William, a Product Manager on Google Kubernetes Engine. We have a lot to talk about today, and before we get to the really exciting new thing, which is the integration with GitLab's Auto DevOps, I thought I'd give you a quick crash course in Kubernetes, just in case you're not familiar with it. Before we talk about Kubernetes, though, we need to briefly touch on why you should use containers in the first place, because Kubernetes is a container orchestration engine. And when we look at containers, it helps to look at the history of app deployment. A while back, the typical app deployment method was using a shared machine, so you might have a bunch of different apps running on the same machine. In this scenario, there is no isolation between the apps, so they can potentially cause difficulty for each other. There is a common set of libraries on the machine, and there's a tight coupling between the apps and the operating system. And this is true not just for deployment, but for development.
So if you're developing a bunch of different apps, like a front end and a back end, they're all using the same libraries, which can potentially be a bit of a problem. To improve on that, we then saw VMs come onto the scene. VMs helped this problem by adding some level of isolation. You could now run each app with its libraries and kernel all separately, so apps wouldn't interfere with each other, and you wouldn't have dependencies on different versions of the libraries. However, this had a few costs, including fairly large overheads. It was expensive and inefficient, and a little bit hard to manage. Which brings us to containers. Containers are kind of like the perfect level here, where you get the isolation between the apps. There are no common libraries, so each app brings its own dependencies. But there's also much less overhead than with VMs, and less dependency on the host OS. And again, this is true for development. When you're developing with containers, you can be developing one app with one set of dependencies, switch over to a different one with a completely different set of dependencies, and not have to worry about what's actually installed on your local machine. Which brings us to Kubernetes. Kubernetes is a container-focused, workload-level abstraction. It will work on various public clouds, on premise, and on your local machine. It's incredibly portable, and it gives you this resource-optimized, microservice-friendly view of the world. I like to say that Kubernetes is at the right level of abstraction. At the lowest level of abstraction, you have infrastructure as a service: virtual machines, or even bare metal. At that level, you spend a lot of time doing operations. You care when a virtual machine crashes and becomes unavailable. You care if a physical machine completely breaks. On the other end of the spectrum, you have a PaaS.
I quite like PaaSes, as I developed with them for many years. They sit at a very high level, make it really easy for you to get started, and make it really easy for you to iterate on your apps. But they can potentially constrain you as your needs expand. What I mean by that is, if you need to express some kind of deployment philosophy on a PaaS, and the PaaS doesn't support what you're trying to do, it can be extremely daunting, because suddenly you have to work out how to do this thing that's actually not supported. Kubernetes, on the other hand, is just right. It gives you the flexibility to express all kinds of different types of deployments, from very simple ones to very complex ones, with the power to manage most of the heavy lifting for you. So using the example before of the machine that failed, Kubernetes can actually solve some of those problems while giving you this huge amount of flexibility to express what you need. And so I think as people grow from a small deployment and a small company into a larger one, Kubernetes can help you every step of the way. The old way of looking at things: if you wanted to avoid those limitations of the shared machines that I mentioned earlier, where there was no isolation, you would probably physically isolate your services and put them on different machines. This led to a kind of deployment density where you'd have, say, three machines running three different services, but those services are not necessarily utilizing the entire machine. Particularly when you scale: in this case, service C was actually utilizing a whole machine, but we had to add a bit of extra capacity, and now it's wasting capacity. The benefit of Kubernetes is you get this bin packing, so you can push all those services, all those containers, all the replicas of those services into a shared pool of resources.
What's really cool too, and William will be talking about this later, is you can actually run production workloads, continuous integration pipelines, and development-like review workloads all on that same cluster if you really want to get the full efficiency. And you can do that in a way where it's nicely isolated. And of course, the more you have, the more important this becomes. So Kubernetes helps you efficiently use compute resources, and maybe one day it can even save you a data center. Google has actually been running on containers for a long time, and we do believe that the efficiency from that has actually saved us a data center. So my hope is that your business will become big enough that this will be true for you as well. All right, so let's talk a little bit about the constructs of Kubernetes, because I think it's gonna help with the next demo that we do on GitLab to understand some of the composition of Kubernetes. This is a really high-level crash course. We start with containers, as I've been talking about, but then Kubernetes actually groups containers into what's called a pod. The pod is the smallest schedulable unit. It can contain a single container, which is quite often the case, or it can contain multiple. Sometimes, for example, you may have two tightly coupled containers that really need to be scheduled together. It could be the main app with some kind of logging sidecar, as we call it, or something else that's added to it. So the concept of a pod just represents the smallest schedulable unit, which can be one or many containers. Then we have a concept called nodes. Nodes are really just machines, whether they're VMs like on Google Cloud, or bare metal like in your own data center. It's just worth calling out that you'll see the word node everywhere, and when we say node, we really just mean a machine.
Then we have the really exciting thing with Kubernetes, I think at least, which is deployments. The real power of Kubernetes comes from these deployments. A deployment is a statement of the desired number of pods that you want. You give this to Kubernetes and it will schedule them for you. So in this example here, we have two deployments, A and B. Deployment A has said, hey, I need two pods, and deployment B has said, I want one. And those have been scheduled onto the nodes. How Kubernetes works is called a declarative style. You actually just declare to Kubernetes what you want: in this case, I want two of pod A and one of pod B. Kubernetes will then schedule that for you, deploy it, and then observe the state. What we're looking at here is an example where that has already happened, and Kubernetes is observing that there do exist two of pod A and one of pod B, which is what I wanted. Now, what would happen if a node were to suddenly become unhealthy and have to be removed? Well, Kubernetes would very quickly observe that there's only one of pod A now and none of pod B. It would then bring up new containers on healthy nodes to satisfy your desired state, after which it would observe that there are now again two of pod A and one of pod B, so the desired state is satisfied. The great thing is all this can happen while you're sleeping. You don't have to worry about this. You have simply declared to Kubernetes the state that you want your app to be in, and it will do its best to achieve that. The final object that I wanna mention today is called a service. A service is simply a group of identical pods; it could host an HTTP front end or a back end or something like that, what you would typically consider a web service. As those actual pods can be anywhere in your cluster, the service abstraction gives you a single point that you can reference, and then Kubernetes will figure out where to route traffic.
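As a concrete sketch of the declarative style just described, here is roughly what a deployment and service for a hypothetical "pod B" would look like in YAML. The names and container image here are illustrative, not from the demo:

```yaml
# Deployment: declare the desired number of identical pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-b
spec:
  replicas: 1               # "I want one of pod B"
  selector:
    matchLabels:
      app: pod-b
  template:
    metadata:
      labels:
        app: pod-b
    spec:
      containers:
      - name: app
        image: gcr.io/example-project/app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
---
# Service: a single stable endpoint in front of the pods,
# wherever in the cluster they happen to be scheduled.
apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: pod-b
  ports:
  - port: 80
    targetPort: 8080
```

If the pod dies, Kubernetes notices the observed state no longer matches `replicas: 1` and brings a replacement up on a healthy node; the service keeps routing traffic to whichever pods currently match the `app: pod-b` label.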
Cool, so that's Kubernetes, and that's true everywhere you find Kubernetes. Now I wanna briefly talk about Google Kubernetes Engine and why I think this is the best place to run Kubernetes. As I mentioned earlier, Google has been running production workloads in containers for a long time; in fact, it's more than 12 years. So we really know what we're doing when it comes to containers in production. And the team building the best of what we've learned there into Kubernetes itself is the same team that brings you Kubernetes Engine. They're the ones that originally released Kubernetes itself as open source a number of years ago, and they're still, I think, the number one contributor to the open source project. So you're getting this hosted Kubernetes service from a team that is actually building a big chunk of the Kubernetes open source project as well. And Google has, I think, really the right philosophy when it comes to this. This is something that makes me honestly really proud to work here. This is a quote from one of our VPs last year at Google Next, where he says: we're gonna be the open cloud, we're gonna give you the freedom to join and leave, and I'll take my chances. What this is saying is that rather than trying to lock you in, we're giving you all these open tools that frankly run everywhere, and we just wanna build the best platform to run those, so that you'll choose us as the destination for your workloads. We wanna win your business through a really good product and nothing else. GKE has a lot of really cool features. I just wanna highlight three today that I think are really powerful, and they're all related to the concept of automated operations. I discussed earlier how Kubernetes is really nice in the sense that it can manage a whole bunch of stuff for you, so you don't have to wake up at 3 a.m.
and answer some production outage. GKE brings a bunch of automated operations to the cluster itself. One of them is cluster autoscaling. This is really nice because it means that you don't have to worry about allocating resources; you can just let us handle that for you. When you schedule a workload in Kubernetes, if there aren't enough resources available, GKE can automatically provision and attach nodes for you. At the same time, if you scale back some of your Kubernetes workloads, GKE can remove those extra nodes so that you're saving money. Next up is node auto-repair. This is a feature that will constantly monitor all of the nodes attached to your Kubernetes cluster, and if it detects one of several unhealthy conditions, it will actually remove that node from the cluster and replace it with a healthy one. So again, just taking these operations away from your concern. And finally, node auto-upgrade. Kubernetes is a fairly fast-moving project. We have a minor release, which is actually, to be honest, fairly major, every quarter, and various security patches throughout the year as well. So when you use GKE, the nice thing is, if you enable node auto-upgrade, we can manage upgrading the nodes for you, so you don't have to handle it yourself. What about Google Cloud in general? If you haven't looked into Google Cloud too much, let me just talk about some tech, because it's the thing I find the most awesome about Google Cloud, actually. This is a photo here of our network switches. When it comes to a lot of hardware, Google actually doesn't buy off-the-shelf hardware. We literally design and build our own hardware, and the reason is mostly that some of that off-the-shelf hardware just did not satisfy Google's needs. So this is a generation of network switches that are custom built for Google data centers, and these were built to handle traffic like Gmail and YouTube.
And your apps can take advantage of the exact same infrastructure. Then, when it comes to real innovation, this is a TPU chip that we're building to power the AI revolution. This is very unique to Google, and you can actually attach these TPUs to your nodes in GKE and run machine learning workloads on them. And finally, we're really building out our network. This is a photo of a ship laying undersea cable. All of that undersea cable has given us a network that looks like this: all the lines on this diagram illustrate Google's dedicated fiber links between its points of presence around the world. And this is a really unique thing amongst public clouds. One of the reasons why Google has such a fantastic fiber network is that we had to build it. We had to build it to handle YouTube, to be able to stream videos everywhere in the world. We had to build it to handle search. And now you get to use it, right? The packets traveling from your app to your customers are traveling on the same Google-owned fiber network, exiting our network only when they get very close to the user, in a point of presence that's local to them. So to recap: Kubernetes, I believe, is suitable for projects of all sizes and gives you the tools to grow and cater for every aspect of your growth. Great, and with that, I'd like to turn it over to William to talk about the new GitLab integration with Google Kubernetes Engine. Hey, thank you, William. My name is also William, William Chia at GitLab. And what's so exciting is that using GitLab together with GKE really creates an environment for you where you just need to merge your code and then GitLab does all the rest. So in a sense, Kubernetes is solving all of these problems for you, as William described, and GKE is going to run and manage Kubernetes for you. But of course, you need a way to actually deploy your application code to that Kubernetes cluster. And this is where GitLab CI/CD comes in.
I'll just touch on these briefly, because I wanna jump into some live screenshots and show you different parts of our interface and how things work. In a nutshell, the value of using GitLab CI/CD is, first, that it's very tightly integrated with your source control management. If your code is hosted in GitLab and you're using GitLab CI/CD, everything is very tightly integrated and seamless. For example, you can see your test results and jump into your deployment environments right from the same interface, without having to switch tools or move around. What GitLab CI/CD is gonna give you is a way to verify, essentially to run your tests, a way to package up your code, and a way to deploy it. Another benefit is that you configure GitLab CI/CD using a very simple YAML file. This is configuration as code, so you can actually version your CI/CD, which works quite nicely. And the thing I'll show today is Auto DevOps, which is really minimal configuration, or zero configuration and setup. So if you're someone who is new to CI/CD or just getting started, or essentially you want an environment like a PaaS, as William described earlier, where you just wanna ship your code and have everything else happen on the backend, this is what GitLab CI/CD and Auto DevOps are gonna allow you to do, but without the constraints of a traditional PaaS. Essentially, GitLab's gonna allow you to connect to GKE on the backend so it can run your Kubernetes clusters. So with that, William, if you wanna stop sharing for a moment, I'll kick over to my screen and we will jump into a bit of a demo here. The first thing I wanna show is that I am just starting with a really, really simple Rails app. I'm gonna use Ruby on Rails for this demo, but essentially all I did to create this was literally I ran Rails new.
I added a database, and then, because by default Rails doesn't have a route at the root, I just generated a welcome controller and added that as my root route. So this is a really, really basic app. There's nothing fancy here, and we're gonna actually add a Kubernetes cluster as a deployment environment and enable CI/CD pipelines with just a few clicks. Of course, giving a live demo, if there are any hiccups or some things take more time than others, I have a bunch of tabs open and I can jump to different states to show you what happens. So the first thing we're gonna do here is go to the CI/CD section of our interface and go to Kubernetes. And here is where we can add a Kubernetes cluster. Now, as William was talking about, a really nice part about Kubernetes is that it's portable and you're not locked in. So today I'm gonna show you how to create a Kubernetes cluster on GKE, but you can actually connect any Kubernetes cluster running anywhere into GitLab. The integration with GKE is really the nicest, though; this is the one that is push-button. So that's the one I wanna show off. What we're gonna do is sign in with a Google Cloud Platform account. If you don't have one, you can go create an account. And there are just a few caveats here. In a nutshell, if you are signing in with a net-new Google Cloud Platform account, you should have all of the access and the permissions to be able to create clusters. If you're part of a larger organization where you're using your business's Google Cloud Platform account, you may or may not have access to all of the APIs or all of the sections. So if there are some challenges here and you're part of a larger organization, you may wanna check in and make sure you have all the correct permissions. But if you're using your own Google Cloud Platform account that you just signed up for, by default this should all work.
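As an aside, the Rails scaffold just described boils down to a few commands. This is a sketch; the app name and database choice are illustrative, not taken from the demo:

```shell
# Create a new Rails app with a database
rails new my-app --database=postgresql
cd my-app

# Rails has no route at the root by default, so generate a
# welcome controller with an index action...
rails generate controller welcome index

# ...then point the root route at it, in config/routes.rb:
#   root 'welcome#index'
```

That is the entire application; everything else in the demo comes from the cluster integration and the Auto DevOps pipeline.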
So the first thing we're gonna do is give our cluster a name. I'm just gonna call this my production cluster. You could have clusters for different purposes, but in a nutshell, one of the values of a cluster is that if you have one cluster that you deploy everything to, for example if you had 50 development teams all working on different stuff and all deploying into a single cluster, then you start to reap those bin packing benefits that William was talking about. And I don't know if you wanna chat for a moment, William, on when you would wanna have one cluster and when you might have different clusters. Yeah, absolutely. It's really kind of personal preference, I guess. Certainly we see some organizations that really wanna firewall everything and keep everything separate, so you could have a dev/test/staging cluster, and then a production cluster that's completely separate. Kubernetes does have fairly powerful controls, though. It even has an in-cluster permission model, so you can actually have someone deploying workloads into a particular namespace, which is really just an environment in the cluster, and not have the ability to accidentally delete production, for example. But if you really, really wanna have that kind of complete separation, then that might be where we'd recommend multiple clusters. Certainly GKE makes it very easy if you do have multiple clusters. We don't actually charge on a per-cluster basis, only for the nodes. But of course, like you were saying before about the bin packing, you'll definitely get a better bin packing effect the fewer clusters you have. So for our purposes today, I'm essentially gonna create a production cluster. This is where I'm gonna install everything I'm working on. And then for GitLab, we have the environment scope. Now, the concept of an environment in GitLab is a deployment environment.
So you could create dev, stage, and prod environments, and environments are also created for review apps, which I'll show later on in the demo as well. For now, I'm basically gonna say I wanna deploy everything to this cluster. This is why there's an asterisk here, which is the default; it basically says, for everything that I wanna deploy, deploy to this cluster. But I could scope this environment scope down to a specific environment if I wanted more granular control. The next thing I'll add in is a Google Cloud Platform project ID. Of course, we're always updating and improving GitLab, so in the future we'll make this a dropdown that just shows you your project IDs. But for now, you can click on this link here to go over to the Google Cloud Platform console, and your project ID will pop up basically on the main dashboard. So I'm gonna take that project ID; if you have multiple projects, you might wanna choose which one, and add it in there. The next thing we'll select for our cluster is a zone. We can see here that there are several regions and zones offered by Google Kubernetes Engine and Google Cloud Platform, and my recommendation is essentially to choose one that is close to you and close to your customers. This is gonna lower your latency. So I am just going to take the default here. The next step is the number of nodes. As William was talking about, you can think of Kubernetes nodes essentially as the number of machines, or the number of VMs, that are powering your cluster. It sets a default of three, but I'm actually gonna shrink this down to just one node, because this is maybe just a very simple app. In the future, I might wanna add more nodes, and I'll actually show you how you can enable Kubernetes Engine autoscaling, so that, as William mentioned, as you add more pods to your deployment, which I'll show after I have this set up, the nodes will auto scale.
So for now, I'll just set one node. And there are many machine types, but I'm just gonna choose the lightest machine type; for a sample application, this is gonna be the most cost-effective way to create a cluster. So I'm gonna click to create the Kubernetes cluster, and it's gonna kick things off. Now, this process takes maybe three to five minutes, so I'm just gonna flip over to another tab here where essentially I already have the cluster installed. While that's kicking off on my other tab: when your cluster is finished installing, you'll get a success message, and then you'll wanna install some of these applications. The nutshell here is that if you wanna use Auto DevOps with GitLab, you need Kubernetes, and it needs a few of these applications: Helm Tiller, Ingress, Prometheus. We wanted to make it very simple to set these up, so we've added these as a one-click install. To touch briefly on these: Helm you can think of as a package manager of sorts for Kubernetes. It allows you to install and manage other applications on your cluster. So in a nutshell, we wanted to install this one first, and then we're gonna install our other three applications once Helm is installed. Again, these should only take a few minutes, but I am gonna flip over to another tab where I have this set up, just to move along quickly. So here's another cluster I've already set up, where the applications have finished installing. And I'll just touch briefly on what all of these do. Like I said, Helm is a package manager that's gonna allow us to install other applications. Ingress is essentially a way to access services within your cluster; in a nutshell, this is gonna spit out an IP address where you can access your application. And what you're gonna wanna do is create a wildcard DNS entry for that IP address that Ingress gives you. The wildcard DNS entry is gonna allow you to create review apps.
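To make the wildcard DNS step concrete: suppose Ingress hands back the IP 35.190.10.5 (an illustrative address, not from the demo). A wildcard record in a BIND-style zone file for a domain you own would look something like this:

```
; Route every subdomain of apps.example.com to the Ingress IP
*.apps.example.com.   300   IN   A   35.190.10.5
```

With the xip.io approach mentioned below, no DNS setup is needed at all: a hostname like my-app.35.190.10.5.xip.io already resolves to 35.190.10.5 automatically.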
So within GitLab, a review app, which I'll show in a moment, is this: with every merge request, it spins up a unique environment just for that merge request, where you can run and test that code live. It's really nice. And the wildcard DNS entry allows you to spin up those review apps. For the purposes of this demo, I'm gonna be using something called xip.io, which is essentially a free wildcard DNS service: you take your IP address, append .xip.io, and it will automatically resolve that URL over the public internet to your specific IP address. When you are setting this up for yourself, though, you would wanna create a wildcard DNS entry in whatever your DNS solution is, using this IP address. Of course, if you just wanted to follow along, you could also use a tool like xip.io. Prometheus is a monitoring solution. This is gonna give you automatic monitoring, and it's really powerful. Out of the box, you get some default metrics, which, if we have some time, I'll show. And then it's customizable, so you can actually add additional monitoring; this is a powerful solution. The last one is GitLab Runner, which you can install with one click. Now, since I'm running on gitlab.com, gitlab.com comes with a pool of shared runners for free. But I can also install an additional runner on my Kubernetes cluster, so if I've exhausted my gitlab.com free minutes, I can actually use that runner. This is gonna run my CI jobs: when I have a pipeline that's executing a specific job, that runner's gonna run the job. Essentially, the more jobs I have to execute, for example if I have a large organization with hundreds of developers all committing code at the same time, and all of those tests are trying to run at the same time, they could get backed up, so I would wanna have more runners to execute those jobs. So in a nutshell, that's the setup of the cluster. The next thing we're gonna wanna do is enable Auto DevOps.
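For a sense of what enabling Auto DevOps amounts to under the hood: GitLab applies a predefined pipeline template when the project has no CI configuration of its own. In later GitLab versions, the same template can also be pulled in explicitly from a .gitlab-ci.yml file, which is how you'd start customizing it. This is a sketch, and the `include:` syntax is from those later versions, not necessarily the UI flow shown in this demo:

```yaml
# .gitlab-ci.yml — opt into the Auto DevOps pipeline explicitly,
# leaving room to override individual jobs or variables.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Scale the production deployment to four pods, the same
  # knob the demo sets through an environment variable.
  PRODUCTION_REPLICAS: "4"
```

The zero-configuration path shown in the demo is exactly equivalent to this template being applied implicitly.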
And what I'll do is I'll click back here through to a tab to show a default state, and then I'll show you the finished state. So essentially we have our settings here, and we're gonna go to CI/CD in the GitLab interface. It might take just a moment. This is our CI/CD settings. So for Auto DevOps, which is in beta but is soon moving towards GA, all we really need to do is click on enable Auto DevOps. And this is gonna do a few things. If you have a .gitlab-ci.yml file, it's gonna use that file and will actually execute whatever you put in there. But as long as there's no .gitlab-ci.yml file, it's gonna use the Auto DevOps template, which is gonna spin these things up for you. The other thing to put in is your domain. This is gonna give you review apps and deploy your application to that domain. And those are the only two settings: you enable Auto DevOps, you put in your domain, and you scroll down and click save. That is then automatically gonna spin up some pipelines for you. So let's take a look at one that I've previously run. This is an example of what I'm talking about. This is a production pipeline. As soon as you enable Auto DevOps, it's gonna automatically create this for you without you having to do anything. And there are four stages to this pipeline. It's gonna have a build stage, where it's actually gonna go and create a container image for your application, store that in our built-in container registry, and build your application code. It's gonna then take that code and verify it using a test stage, and you get a lot of automatic tests out of the box. For example, you get a code quality test. There is dependency scanning: this will go in and look at your open source dependencies. This is built on the Gemnasium engine, which we recently acquired. It's gonna tell you if there are vulnerabilities deep within your stack. You get static application security testing.
So this is gonna take your application code and check it for security vulnerabilities. It's also gonna scan your container, and then it's gonna run whatever tests you have in whatever test framework (I forget the one that Rails has out of the box), but you can essentially use whatever kind of testing framework, add your tests in there, and the Auto DevOps template will run whatever tests you have in your app. Finally, after all of this completes (and you can select these different jobs to be either mandatory, meaning they must pass or we don't deploy, or optional, meaning if it fails, just tell me the status), it's gonna deploy to production. And what's really nice is that even after that's running in production, the CI is still going to run a performance analysis. So for example, for a web application, this is gonna tell you if the code that you deployed takes a performance hit: maybe it's using more memory than it used in the previous iteration, or there's a runaway process. It'll actually run that performance analysis after the code is deployed. So that's Auto DevOps, and then I also wanted to show you what merge requests look like, which I might actually have open. Essentially, what I've done here in my code is I've made a change. This is my welcome page, deployed to Kubernetes Engine, but I'm so excited about it I had to add an exclamation point. So as soon as I submitted a merge request to merge in that code, it opened up this process for me, and you can see that there are a lot of nice things here in GitLab. Of course, we can look at what pipelines are running on this merge request, we can see what the commits were and if there are more commits added, and I can have discussion, where my peers can review this code and comment on it. They can go into a line and say, you know, looks good, and it'll tell me what line that was in the code. And of course it has created a pipeline here.
So let's take a quick look at the pipeline for the merge request you can see that this one is a little bit different this ran my build stage this ran my test stage those ones are similar to the production pipeline but then it deployed to a review app and a review app as I mentioned is a live running example of your application so here I can see it says welcome to auto DevOps deployed on Google Kubernetes engine which is very excited with exclamation point and you can imagine that this can be helpful in a lot of ways to see a live running version of your code this is where folks that are perhaps non-technical can go and do user acceptance testing and test out a new UX flow or design changes to see your app actually running live and how the different components interact without ever looking at the code and can use that review app and can comment back to you and of course this review app is running in our Kubernetes cluster on GKE there are of course after the review app is running it's gonna run not only a static security application test but a dynamic security application test where it's gonna run a set of security tests on the running application and then of course it's gonna give us a performance output which we see in the merge request is told us hey our memory usage increased in this case it's an initial commit so it's showing or it's an initial change where it wasn't running before so it went from zero to the full thing but you could see if I had another commit then it would show me perhaps a smaller delta or it's gonna give me that performance information there's no changes to code quality which is good these sorts of things are all gonna show up in my pipeline there so the next thing I want to talk about is our environments and so I'll show an example of what that looks like here so as I mentioned in GitLab you could have multiple environments I have a production environment and then this is actually my review app environment so when I merge that merge 
request, it will actually have a cleanup stage in the pipeline that deletes that review app. So every time I have a merge request, it spins up a review app, I can review my code and see it running live, and if it looks good I can merge the code, and then of course my production pipeline runs and automatically deploys to production. This flow is really continuous delivery at its best. You don't have to run it that way: you can put manual checks and stops in there to say, okay, I want to go to a staging environment first, and then I want a manual deploy to my production environment. You can do those sorts of things as well. But what GitLab enables, once you've added that Kubernetes cluster, is that you can essentially create a merge request, it tests everything for you, and when that looks good you can merge, and it can automatically deploy to production. So, looking at my production environment, you can see I actually have four instances. These instances represent Kubernetes pods on Google Kubernetes Engine. As William mentioned, you can think of a pod as your container for the most part; it's a container or a set of containers, but those are the horizontally scaled instances that you can add. Today, the way you update these in GitLab is you add an environment variable called PRODUCTION_REPLICAS. You can see I've set PRODUCTION_REPLICAS equal to four, and that's added four pods. My review app is just running on a single pod, but my production app has four pods for when I'm hitting more scale and more load. I can even go over to CI/CD and Environments and do some things there: I can look at monitoring, I can get a terminal into that environment, and I can open it up and see what's running live. You can see in my production environment I don't have that change merged yet, but here's a terminal where I can look at what's 
running live in my environment. This is really nice because you can introspect and debug and see what's going on without having the problem of "well, it runs on my machine"; this is actually the production environment. Of course, you wouldn't want to give just anyone a shell into your production environment, and you can lock this down, but you have the same capabilities with your review apps, so this could be very powerful: you could get a shell on a review app and go debug, all in the environment that it's running in. And then of course there's monitoring, which isn't showing anything too exciting today, but the fact is this comes right out of the box. I can see my CPU and memory load, and that can tell me, okay, is it time to increase and add more pods. Now, what I do want to show you is that in the future we'll be able to add more pods not just by adding an environment variable. This is an issue on GitLab that's currently open, and it's currently scheduled to ship in 10.8. Instead of having to add that secret variable or environment variable, you'll click on the scalable deployment and it will just give you a box where you can increase the number of instances. So that's a bit of a step forward; at GitLab we like to iterate step by step. And not only do we want to ship that iteration, but also coming in the future will be auto-scaling apps. You can get auto-scaling today using GitLab and Google Kubernetes Engine, but it requires you to use the Kubernetes command-line tool, kubectl. If you ran this autoscale command on your command line against your cluster, you could set a minimum and a maximum number of pods, and it will spin them up as it detects load. In the future, GitLab will essentially just make that a checkbox. So today you can actually run this and get auto-scaling, or you can hard-code it via an environment variable, and soon 
we'll be able to let you click to increase or decrease, and then coming after that we'll add auto-scaling as just a checkbox. But you can get that auto-scaling today. Now I basically just want to go over into the Google Cloud Platform console to tie the loop back around and poke into one of my clusters here. Of course, if I enable auto-scaling, like I said, today you do it via the kubectl command, and in the future you'll be able to just check a box and have GitLab do it, and that's going to add more pods. But as I add more pods, I'm going to need more power to run them. So I can go here, I believe it's Edit, to my node pools, and I can see that there's auto-scaling. I've already enabled horizontal pod auto-scaling via the kubectl command, but this is a different kind, scaling the cluster itself: I turn this on and it's going to add more nodes. I can say only use one node, only use one VM, if that's all I need, but if I really need a lot more horsepower, it will spin up more nodes as necessary. When I save that, essentially, to bring it full circle and come back to this one slide: these are our pods, and as we're adding more pods, more replicas of our application, to handle more scale, Kubernetes is going to pack them in, and then as Kubernetes needs more nodes, Google Kubernetes Engine is going to auto-scale those nodes for us. So you can see Kubernetes is really, really powerful, because you can start very, very simple: you can add just one node and one pod with a few clicks, enable Auto DevOps, and have everything configured for you. Essentially, you commit your code and it deploys to production, and then as you need to scale you can take advantage of this auto-scaling functionality. So with that, I'd love to take it over to questions, but maybe one note before questions. I think, to kind of 
pull it full circle: I know, William, you and I were chatting the other day about some of the conventional wisdom of having a startup, which is maybe, hey, I have a startup, don't plan for scale, wait till things explode and blow up and then think about scaling. Right. I wonder if you want to close the circle here and talk about how this lets you start and scale. Yeah, absolutely. So I think, going back to what I was talking about earlier about Kubernetes being the right level of abstraction, and expanding that to talk about scalability: the thing about Kubernetes is that if you start with it, it gives you a really good start. There is a bit of a learning curve, but it works great for fairly small deployments. And the satisfaction you get from actually starting with Kubernetes, even though it might be slight overkill for an early-stage project, is that you can basically hit a button, or you can just configure GKE to do it for you, and scale massively when you need to. So it really sets you up for success. I don't believe anyone creates a startup or a company thinking, oh yeah, we're going to stagnate at a certain user count, we're going to fail, even if the hard truth is that a lot of ideas don't make it big time. You want to be ready for that success when it comes your way, and I think Kubernetes will set you up really well for that. The last thing you want, at that very moment at the cusp of your success, when all of a sudden you have a million users beating down your door trying to get in, is to suddenly think, oh gee, we're going to have to re-architect this application because it's not going to work at scale. That is the very worst time to be thinking about that. So, yeah, that would just be my advice generally, is 
that Kubernetes is just really great for that. And I don't want to scare people, because it's actually quite easy to get started, and I personally believe GitLab is a really great way to get started, because it sets all these things up nicely for you in an automated way. So get up and running with Kubernetes, spend a little bit of time looking into some of the concepts. I know I did only a very brief crash course; there are certainly some great books. I recommend Kubernetes in Action, a really fantastic book, and just the Kubernetes docs, and you'll be set for success. Well, with that, we do have a few more minutes left, and I would love to jump into a couple of questions. We'll answer some questions live; please do continue to add your questions, and when we follow up, we'll follow up with a video of this presentation, and we can also add some more answers to that email as well. One of the questions, of course, is: hey, can I get a recording? Yes, we'll be providing this later on. There are some other questions folks have added, like: does GitLab support other cloud platforms? And of course, yes, it does. You can use GitLab to deploy to any environment: to bare metal, to virtual machines, to any type of cloud provider or platform. The thing to keep in mind is that the tight integration we showed today, where you literally can just install with a few clicks, is only available on GCP. So if you're using a different cloud provider, and many folks do, you can still use GitLab to deploy your app to any cloud platform; we've just built some of the niceties specifically with Google Cloud Platform. There's a question here on controlling costs. So, I think you showed earlier the Google Kubernetes Engine configuration page. What I would recommend from a cost point of view is to set a good limit on your cluster size, and this 
is assuming you have auto-scaling enabled. One thing is you don't have to have auto-scaling enabled; if you just want a fixed number of machines, you can absolutely configure that. Auto-scaling is good because it can actually save you money by removing machines you don't need, particularly if you're looking at developer workflows, unless people are working 24/7 around the world. I guess if you have a company like GitLab itself, which does that... but if you have periods of time where it's quiet, or at least surely the weekend, the auto-scaler can probably save you money. But yeah, if you're worried about costs, just make sure you have a low maximum on the auto-scaler and that should control it for you. I don't know, William, if you want to talk about the number of review apps in particular. I guess if you try to run too many review apps and there's not enough capacity, it would probably just put them in a pending state anyway. Yeah, basically, review apps are going to cause load if you're using a lot of them, but once you merge the code they spin down and you're no longer using them, so it's not like they run in perpetuity, and I also showed that you can manually shut them down as well. And actually, one of the nice things, I think, about Kubernetes and GKE is that you can also overcommit your resources. If you look at traditional pricing models for a PaaS, they typically charge based on the actual deployments, whereas GKE charges based on the number of nodes. So you can actually overcommit a node: depending on literally how many CPU cycles the review apps need, you could be running a lot of apps on just that one node, which is something that's a bit harder to do with per-app or per-deployment pricing models. So I definitely think GKE is really great for saving money, and it's definitely something that 
we want to see you do as well. Even though we make our money from the number of nodes, we don't want to charge you for nodes you don't need. I see that Jorge put in the chat that RSpec and Minitest are what I was thinking of as the defaults for Rails; I think RSpec is what's in my app, but I appreciate you sharing that with the group. There is another question here on pricing as well: of the features we showed today, are they all available in the open source version of GitLab, or do I need the commercial version? Do I need a particular paid plan to access these features? I did want to touch on that briefly: today I showed both freely available and paid features. The majority of what I showed is available in GitLab Community Edition, which is our free and open source version of GitLab, or you can actually run Enterprise Edition without a license and it works the same way, where you can run it for free and get those features. So Auto DevOps works in our free version, and the GKE integration works in our free version. What I showed, where you can click to enable a Kubernetes cluster and create one, works on the free version, and you can enable Auto DevOps and get those pipelines right off the bat; that is also in our free version. Now, what's in some of our paid tiers is how robust those pipelines are. So for example, the automatic security scanning features are part of our paid tiers, and another component is the deploy boards I showed, where you can look at an environment, see how many pods are part of it, and, as I'm doing a deploy, it will show you live how many of the pods are running the new code, so you can track a deployment. 
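For reference, the two levels of auto-scaling walked through earlier in the demo can be sketched as commands, assuming a Deployment named `production` and a GKE cluster named `my-cluster` (both names are hypothetical placeholders):

```shell
# Pod-level: the kubectl autoscale command from the demo. Kubernetes
# keeps between 1 and 10 replicas, adding pods as CPU load rises.
kubectl autoscale deployment production --min=1 --max=10 --cpu-percent=80

# Node-level: the GKE node-pool auto-scaling checkbox, as a command.
# GKE adds or removes VMs so the scheduled pods always fit.
gcloud container clusters update my-cluster \
  --enable-autoscaling --min-nodes=1 --max-nodes=5 \
  --node-pool=default-pool
```

Together these give the full picture from the slide: the horizontal pod autoscaler packs more replicas in, and the cluster autoscaler grows the node pool underneath when they no longer fit.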
The deploy boards, that type of feature, tend to be for larger organizations that are running in production at a larger scale, so that's also part of our paid plans. So that is a really good question. You can check out gitlab.com/pricing, and there's a "see all features" link where you can see which features are part of our free tier and which are part of our commercial tiers. There was a clarifying question there: you said that you can run the Enterprise Edition without a license, did you want to expand on what that means? Yeah, absolutely. In a nutshell, in the past there was a bit of a pain point where folks would install the Community Edition, start out with it, and later on want to upgrade, and now you'd need to do a software migration in order to upgrade, or even just to do a free trial. So it was really painful for people. So essentially what we've done is we've architected it such that you can take the Enterprise Edition software and just install it and run it like it's the Community Edition. Of course, they're different bits of software: if you're thinking about contributing to GitLab, or you want to open source, modify, or adapt GitLab, then the Community Edition is the open source version; the Enterprise Edition is proprietary. But if you're just concerned with running GitLab and you just want to run a free version of GitLab, we recommend running the Enterprise Edition for free: essentially, without a paid license you can install Enterprise Edition and it will actually run with all of the same features as Community Edition. 
We call that our Core plan, so you'll have access to the core features that are all open source, and then in the event that you want to do a free trial, or want to upgrade in the future, you can do that very simply by adding a license key, without needing to do a software migration. There's a page off of our installation page that shows you the differences between the two, but the recommendation when you're downloading and installing GitLab is to use the Enterprise Edition: you can actually just run it for free and get access to most of the features there. There's a really good question I did want to touch on here: is GitLab itself running on GKE? The answer is not today, but very soon. We are actually in the process of migrating GitLab.com over to Google Cloud Platform, and specifically to run it on GKE, and there are lots of issues you can find on that. If you watch our blog for the announcements, we'll be talking more and more about it. So we're very excited about that migration; we're excited to be running GitLab itself as a cloud-native version of GitLab on Kubernetes. That migration is currently in process, and very soon GitLab.com will be running on GKE. Very exciting. Now, I see there's another question related to price: is there a free version of GKE? We do have a free trial, so you can actually use GKE for one year with $300 of credits to spend over that year. That's what I'd recommend if you want to try it out for testing and learning. And, a good question: yeah, I should point out as well, you can actually go to about.gitlab.com slash Google Cloud Platform, and there's information there. Everyone who signs up for a new Google Cloud Platform account gets a $300 credit to start, and then you can also get an additional $200 as a partnership credit from GitLab. 
And so there are links on our website; if you just go to our homepage, there's a banner at the top where you can go through and learn more about that additional $200 in credit. So we're excited about this integration, excited about this partnership, and that's another bit to help you get going and get started. Yeah, that's fantastic. So, $500 to try it all out. There's a good question here actually regarding the requirements for other applications: the user has a web product which requires J2EE containers and other third-party packages. I guess that's relating to the Auto DevOps component. Certainly, when it comes to containers and Docker, I would think anything you can do in Docker, any dependency you need to bring in, you can just typically bring in in the Dockerfile. But did you have any comments about dependencies and Auto DevOps in particular? I do. You will want to check out our documentation for Auto DevOps; we can share some doc links in the follow-up email. Certain frameworks are supported more than others, and especially for our security testing there are specific languages and frameworks that are supported. You can find a full list of our supported languages and frameworks, and of course we're continually adding more, all in our documentation. Thank you both so much for this wonderful presentation, and thank you to all of you who joined us. We hope you learned a little more about the integration's versatility to speed up software development and delivery while also maintaining security and scale. I'm sorry we weren't able to get to all of your questions, but we will send you a recording of the presentation so you can take a closer look, and if you have additional questions, please feel free to respond to that email and we will answer them. Thank you again, we're so happy you joined us. Bye. Bye, Bruce.