So yeah, I'll start from those questions, but I hope to have more questions as we go, and I'll also take some liberties and discuss whatever I think might be relevant. From what I understand, the basic idea is that you all have done the I2P demos and the exec luncheon demos, which are kind of the same, and you've probably seen lots of other demos, but there's a whole bunch of stuff we don't explain, which is very, very true. The demo is purposely trying to be as short as possible: in 10 minutes it's supposed to wow you, but it doesn't explain how anything works. And there are so many pieces there. There's the Kubernetes part, there's auto deploy, there are, you know, more sort of normal questions about runner architecture and stuff like that, which I hope some of you totally know, but some of you joining might not. So I wonder if I actually want to share a screen and do the demo and talk about things piece by piece. Let me... I should have probably set that up ahead of time. Let me pull up a screen. "It's okay, we know how to set it up while talking as well." Exactly. All right, where did my window go? Hopefully you can see that. So, wow, there are so many different levels of this too, because there's the installation part, which is a whole other interesting thing, but most of the time you don't demo the installation, so maybe I'll leave that for later if we get to it. But starting with just the regular project stuff: if I recall correctly, the first step is basically to create a group. This is not that big of a deal. The one interesting thing here is this "Create Mattermost team for this group" option. And this works because this instance has been configured with Mattermost, which is not actually obvious, because on GitLab.com, of course, we don't have Mattermost configured. But with Omnibus, we ship Mattermost with every GitLab.
A lot of folks won't use it, but it's there; the binaries are there, whether they're running or not. And this instance has been configured to have Mattermost installed. And because we know that Mattermost is installed, we can just automatically offer this choice. There's a whole bunch of little things under the hood that actually make this work, because we've set up OAuth, basically bi-directional, between the two automatically. And by the way, as an admin, I didn't have to do anything either. Maybe I should have shown you the setup part, but literally all you have to do is install the Helm chart and everything's there. Actually, I will quickly show you that, because it's new. This is, I don't know, a couple of weeks old. We used to have instructions for installing on Kubernetes, and they only referenced how to install this GitLab chart and this runner chart. But the GitLab chart there was like a core chart: it was just the GitLab instance, and it didn't enable a whole bunch of stuff by default. We just made this official, the GitLab Omnibus chart, which, you know, Reb and others built; it's the exact same chart we have been using for our demo, but it used to be called kubernetes-gitlab-demo. It's been modified since then, but it is based on that. So it is everything that is in I2P, but now available for our customers. If you install using this chart, you will get... well, heck, I'll click through and see what you get. You will get Mattermost, container registry, Prometheus. You'll get an auto-scaling runner, Redis, Postgres. You'll get NGINX ingress with Let's Encrypt. We should probably add that to the documentation. And this will work on at least these platforms listed... well, listed somewhere... oh, right up there. Google and Azure are where we've officially supported this. It might work elsewhere, but we just haven't tested everything. So again, if you install this, Mattermost just works. And this option is enabled, and it just works.
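For reference, that install really is about one line plus a couple of values. Here's a sketch; the value keys and the exact command are from memory and should be treated as assumptions, with the chart's own README being authoritative:

```yaml
# values.yaml for the gitlab-omnibus chart (illustrative; verify the
# key names against the chart's published values.yaml)
baseDomain: example.com          # your wildcard DNS domain
legoEmail: admin@example.com     # email used for Let's Encrypt registration
baseIP: 203.0.113.10             # static IP the wildcard DNS record points at

# Installed with something like (Helm v2 era syntax):
#   helm repo add gitlab https://charts.gitlab.io
#   helm install --name gitlab -f values.yaml gitlab/gitlab-omnibus
```

Everything else described below, Mattermost, the OAuth wiring, the registry, Prometheus, the runner, falls out of those few values.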
I frankly don't even know how you'd enable it manually if you didn't use this chart. I'm sure it's possible, but I don't know how to do it. And obviously, on .com or other places where Mattermost is not installed, you will just not see this option. Anyway, because it's enabled, we've got OAuth going both ways. And there are some funky things in there about having admin control, so that when I click on Create Group, we can actually use the admin account of Mattermost to go and generate what they call a... team, sorry. They call it a team, we call it a group, same thing. Anyway, it creates that team for you automatically. And as soon as that's done, I can actually log into Mattermost, and I should be able to see it there under the hood. I already had another team, so there are two of them there. But anyway, now this team is created. Let me know at any time if anybody needs more details on that, but hopefully that's not for now. So then we create a project. And I do this often enough that I have it in a note. By the way, this page is changing in 9.5; this is really interesting stuff. In 9.5, so actually just like on GitLab.com, if I click on New Project here, I can now pick from templates or import. I have this option to pick from a template. It won't change the I2P demo, because we're still going to start from an even simpler app than one of these templates. But this is a cool thing that might make it into a demo at some point. Anyway, sorry, back to this page. So nothing fancy here yet. Oops, I didn't give it a name. All right. I think I'm going to skip through the issue boards and stuff like that, because there's not much under the hood there; hopefully it's obvious how issue boards work. I don't know, I suppose I'll dive in a little bit here. Actually, one thing I didn't do yet is go into the settings, integrations, and add the Mattermost integration. This is again using that same OAuth thing.
So it makes it super easy, and there's not really much to talk about there. But there are other integrations in here. So another thing that's behind the scenes, and might not be obvious, is that Kubernetes was enabled automatically, and Prometheus was enabled automatically, for this project. And that's not a given. Again, because I installed the GitLab Omnibus, or Omnibus GitLab, chart, it actually configured these things for me. To see that, I need to go into the admin area, where there's this service templates section. It lists all the same services, but it lets you specify the exact same content of a service, like the Kubernetes integration, at an instance level. Oh, something's flashing. Does that mean there's a chat? Oh, it does. "What is a Helm chart?" Good question. I'll get to that in a second. So this instance-level integration: basically what it means is that the install mechanism just went in and configured all this stuff for you. If you didn't use that automatic thing, I could, as an admin of the instance, go in and set this up for all of the people on my team. Or you could just not do that, and then, at a project level, you would have to go and copy and paste all these things. And these credentials are not obvious. But because we did the installation all at one time, we knew the Kubernetes cluster, we knew the IP address of the Kubernetes cluster, we knew the tokens, whatever, so we can set all that stuff up automatically for you. Earlier versions of the I2P demo actually had you copy and paste all this stuff manually, until we added this automation. So that's the Kubernetes stuff. And then similarly for Prometheus: we've enabled that. And this one is a lot less scary; it's just a static configuration. But still, that was done for us by the Helm chart.
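For reference, the Kubernetes and Prometheus integrations the chart fills in, whether per project or via the service template, boil down to a handful of fields. Sketched here as YAML for readability (in the UI these are just form fields; the values are illustrative):

```yaml
# Roughly the fields behind GitLab's Kubernetes service integration.
kubernetes:
  api_url: https://203.0.113.10        # the cluster's API endpoint (illustrative IP)
  token: <service-account-token>       # credential GitLab uses to talk to the cluster
  ca_certificate: <cluster CA PEM>     # so GitLab trusts that API endpoint
  namespace: my-project                # where deployments land (optional)

# The Prometheus integration really is just a static endpoint.
prometheus:
  api_url: http://prometheus.example.internal:9090   # hypothetical in-cluster URL
```

These are exactly the "not obvious" credentials mentioned above; the chart knows them at install time, which is why it can fill them in for you.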
And then it was configured at the instance level, which meant that, going back to my project, the project had those automatically. So again, at a project level, or if I'm using .com, for example, or anything else where you're not installed on Kubernetes, if you wanted to get the same cool stuff, you'd have to enable Kubernetes manually and enable Prometheus manually. It should be noted that you don't need Kubernetes in order to enable Prometheus. Prometheus does work in other circumstances. It works better with Kubernetes, but it does work. But in order to get any of the auto deploy stuff that we've done, you need Kubernetes. We only support Kubernetes. Having said that, everything that we do in auto deploy, you can still do on any other non-Kubernetes thing. It's just that it's not auto. You have to do it yourself. You have to write it. You have to figure it out. And documentation will help, blog posts will help, whatever. Somebody have a question? I heard an unmuted mic. But all right. Okay. So, Helm charts. Yeah. We're stepping back; may as well go there. If I go back to this documentation... what this does is... this is going to be a long sidetrack, but it may be worthwhile. Kubernetes. I don't know how much you know about Kubernetes, but Kubernetes is the future, as far as we're concerned. We believe that Docker and containers are a huge part of the future, and that Kubernetes is the best way to do Docker in production. It is not entirely mature, especially in terms of usability. It's just hard to use. It's hard to get started. It's hard to understand. It's pretty complicated. Docker itself is hard enough to understand, but there's enough momentum there that you can't stop it anyway. And even if you know Docker, that doesn't mean you know anything about Kubernetes. There are lots of ways you can go from Docker to using it locally, using it in test toy apps and whatever, and then all the way up to production. Helm is a...
Basically, it's a small, specific tool that is intended to just make Kubernetes slightly better. I'm not going to say it changes everything. It doesn't make everything easy, but it does make it a little bit better. So if you've never played with Kubernetes, this is, like, so scary I don't even want to show you, but actually, let me... I'm going to do this. Give me one second. I am going to try to pull up the Kubernetes dashboard for this. "Mark, it sort of sounds like the Helm chart is just a template you can use to set up Kubernetes." Yes, that's basically what it is. It's just a set of configuration files; it's a templating system that lets you say, I'm going to install these 12 services all at once. That's really all that Helm is. I suppose I should go to the Helm project and see what they define it as. It does not look like it. That is... come on, seriously. Helm.io, maybe? No, helm.sh, yeah. The good names always get taken. All right. It's a package manager. In some ways it's just like installing Debian packages with apt-get, or any of that kind of stuff on a Unix box. It's a way to install something that somebody else bundled up, onto your platform, in this case your Kubernetes cluster, without worrying about it. It's a little bit more than that; people are using it for their own packages, it's not just about third-party stuff. But at its heart, that's what it is. One of the reasons this is necessary... I'm just going to try and skip over the details of what I'm doing here for a second, but I'm looking at the GitLab app, and I'm just going to show you what the YAML looks like for the declaration of this, of how to deploy GitLab. It's got some meta stuff, but then it's got declarations. Okay, how many replicas? That makes sense. But then these selectors and match labels, what the hell is that stuff? All these other template things, then there's a spec, and it just goes on and on and it becomes this huge thing.
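To give a flavor of what's being scrolled through there, a Kubernetes Deployment manifest looks roughly like this. This is a generic minimal sketch with made-up names, not the actual chart output:

```yaml
# A minimal Kubernetes Deployment, showing the pieces mentioned above:
# replicas, the selector/matchLabels plumbing, and the pod template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab              # illustrative name
spec:
  replicas: 1               # "how many replicas? That makes sense"
  selector:
    matchLabels:
      app: gitlab           # the Deployment manages pods carrying this label
  template:                 # the pod template the Deployment stamps out
    metadata:
      labels:
        app: gitlab         # must match the selector above
    spec:
      containers:
        - name: gitlab
          image: gitlab/gitlab-ce:latest
          ports:
            - containerPort: 80
```

The selector/matchLabels pair is the "what the hell is that stuff" part: it's how the Deployment finds and owns its pods, and it has to agree with the labels in the template, which is exactly the kind of cross-referencing that makes raw manifests tedious.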
In our case, it's particularly bad, just because there's a lot of configuration that needs to be done, and there are so many levels of indirection. Here I've got this external hostname, but instead of just saying what the hostname is, I point to a config map where the hostname is actually stored under a different variable. So you've got this environment variable, GITLAB_EXTERNAL_HOSTNAME, that is then referenced by... and I'll even pull it up... config maps. Somewhere in here is the definition for the external hostname, and that's where it's actually set. So Kubernetes uses these multiple layers of indirection, blah, blah, blah. It's complicated. It's hard enough, even if you're a technical person, to know which value goes where. It's just too much crap to deal with. And so Helm at least means that one person figures out that crap, and then somebody else can make use of it much more nicely. So if I go back to our instructions for configuring everything: outside of the preamble, we basically just have this one line, and you're done. Maybe you haven't seen it if you haven't done the I2P setup from scratch, but basically, with one line, I am going to install GitLab. Oh, is that gitlab-omnibus or omnibus-gitlab? We flip-flop on that name; it's not consistent. So this greatly simplifies how people need to configure stuff. And in our case, we're actually one of the most complicated charts out there. Most charts are simpler. They're like, how do we install just Postgres? That's one binary, or well, several binaries, but one process that runs. But in this case, when we're installing Omnibus, we're installing several processes, several pods. Not a huge number. If I go back to... well, I'll look at the entire workload. This is, for the most part, what's installed with GitLab: the GitLab core thing, and then Postgres and Redis. But it's also got... well, there's a runner somewhere.
I think that's in the default namespace, maybe, on this instance. Yeah, so it installed the runner as well. But it also configured a whole bunch of other things. Like... there's nothing to show for Ingress, that's weird. But there is a bunch of Ingress configured. Oh, not the default namespace; go back to gitlab. There, you know, we've set up several different domain names and all this other kind of stuff. Anyway, it's complicated, so Helm makes it easier. Helm isn't awesome, but it is good and it is popular, and so we've chosen to use it, currently at least, until something else is better. And so all of our charts and everything else get published that way. Auto deploy is going to start using charts. We expect that developers themselves are going to start using charts. And it happens to be a convenient way: when I've got an application that I want to deploy somewhere, you write a chart that says, this is exactly how you deploy it. And if you put that chart into source control, we can actually detect that you've got that chart, and then we know how to deploy your app in a custom way, which is pretty cool. But now I'm getting ahead of myself a little bit. So anyway, hopefully, John and Joel, that answers your question about Helm charts. "Very much so. Thank you." All right. Okay. So again, the Helm charts are actually really interesting. If you want to see the under-the-hood stuff, and we have time, I'll actually show you what the Helm charts look like, because more and more you're going to run into this kind of stuff, and people will ask you about it. So it's good to at least know what it looks like. But for now, I'll keep moving on. So that's the admin stuff. Again, I'm not going to run through the demo, because you know the demo, but I will just do a couple of pieces here. All right. So the next major step is adding the auto deploy template.
There's nothing really too scary here, but just in case you're curious, we have a few templates. We've got .gitignore templates, Dockerfile templates; we've got a few things. Dockerfile actually is a good one that you all should know about too. But for now, the two main ones you care about are Kubernetes with canary and Kubernetes without canary. The OpenShift one is going to die pretty soon, I think. It happens to work, so it's fine, but we're completely revamping how we're doing stuff, and we're just no longer going to maintain anything specific to OpenShift. We're putting all our bets on Kubernetes proper. There's not much difference between these; the base is the same, but the canary one basically just adds an extra canary deploy step, which hopefully you've seen in the demo, so you know what that means. It's optional, though, and this is actually worth understanding, because if nothing else, every week or so I get pushed on, like, why aren't canaries just the default? There are a lot of great reasons to use canaries. There are also a lot of reasonable reasons not to use canaries. And if you're dealing with folks and you're encouraging them to use canaries, this is probably worthwhile knowing. Canary deploys, at least as implemented by default, specifically just add a pod with a new version of the code and route between all versions of your code equally, or on a pod-weighted basis. What that means, though... let's take an easy example. Say I have a version of my code that adds a form, subscribe to something, and you click on the subscribe button and it posts to a place where some action happens; the subscription happens. So imagine my version of the code added both the form and the recipient, the receiver of the form. If I hit the canary and I see this new form, great, I hit submit.
But then when I submit, I'm making a new call to the server, and I may or may not actually hit the canary again. I could hit the previous, older version. And then suddenly, because the older version doesn't know how to handle the form, I get a 404. That's kind of a shitty experience already. It actually could get worse, though, because when you load a page, you're not making a single request. You've got CSS that loads, that has all these other attributes, that then might load SVGs and other graphics and other things, so every one of those calls is then going to randomly hit maybe a canary and maybe not a canary. So if you made CSS changes that involved additional files, you might end up with a page where, like, every third graphic doesn't load properly, or something stupid like that. So hopefully that's enough to explain that this is complicated. "So it works really well with Hello World, but we maybe want to be careful with other things." Exactly. It works great with Hello World. It works well with back-end refactors. It works well with, I don't know, some performance tweak, whatever. It works well for lots of scenarios, but it does not work for every scenario. So blindly using canaries all the time is actually a mistake. It could bite you worse than if you just didn't use them in the first place. So it's a tough one, because as a company, where I want to make it easy to promote best practices and tell people this is how you should do things, I can't just blanket say you should do canaries. Now, to get a little bit more detailed, there are things you can do to make this stuff not suck. The easiest one to talk about is session affinity. Because in what I just described, the biggest problem was that when you hit one thing, you don't get stuck to it. Well, session affinity attempts to solve exactly that.
You hit a page, you get routed to the canary, and on the server side, it keeps track of the fact that you should keep hitting that canary. Technically, it's going to throw it back in a header or a cookie or some other way. But session affinity would mean that you're basically sticky, and all of your requests then go to the canary. That is one way to solve it. And folks who are used to Java are used to this; Java, by default, basically required session affinity, because sessions literally didn't cross between instances. If you hit one pod and then you go hit the next pod, you're logged out, right? So Java has, forever, basically assumed session affinity. But other, dare I say, modern languages, Ruby, et cetera, don't; they assume sessions are shared and that each pod is dispensable. It's not a pet, it's cattle, all this kind of stuff. Cloud native, for the most part, embraces that as well. So to some extent, session affinity works, but it's distasteful. To be honest, I don't know what the current cloud-native thinking is; maybe I overstepped saying that. But I know Ruby doesn't assume it, and it's twelve-factor. I used to work at Heroku, so I know this stuff very well. At Heroku, we believed very much that every instance should be totally independent, and Heroku fought session affinity for years, believing it was unnecessary. Eventually it finally gave in and offered it. But anyway, it is possible to configure your Kubernetes setup such that you have session affinity. We do not by default; nothing here has done that. But as a customer, you could choose to do so. And then, just to throw in some more arguments: if you're doing UI changes and stuff like that, canaries are probably not your best tool. Feature flags, actually, are a great tool for that.
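As an aside on that "you could choose to do so": the generic Kubernetes way to opt in to stickiness is on the Service. A sketch with a hypothetical service name (this is illustrative; as said, none of the demo charts enable it):

```yaml
# ClientIP session affinity on a Kubernetes Service: a given client IP
# keeps hitting the same pod for the duration of the timeout window.
apiVersion: v1
kind: Service
metadata:
  name: my-app                       # hypothetical app service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
  sessionAffinity: ClientIP          # default is "None", i.e. spread requests
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800          # stickiness window (3 hours is the default)
```

Cookie-based affinity at the ingress layer is the other common approach, which is closer to the header/cookie mechanism described above; either way, it's a deliberate choice, not something you get for free.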
At some point, I want to build feature flags into the product as a first-class thing, to make it easy for customers to make use of feature flags in their own products. Feature flags are awesome, but it's not something that we're prepared to really push right now. It is a complement to doing canaries, though. To some extent it solves the same thing, but it's kind of a Venn diagram where maybe there's 60% overlap: there's some stuff canaries do differently, and some stuff feature flags do differently. Anyway, the net of it is that this isn't a panacea where you just say, oh, we'll do this and it's awesome. So we still have to give people a choice. Going further, just as a heads-up: in 10.0, we are moving towards Auto DevOps, not just auto deploy, which will add a whole bunch of other stuff automatically, and it's really awesome and exciting. But we're actually going to make canaries not the default for Auto DevOps, for a lot of the reasons I just said: you have to know when to use them, and I don't want people to get bitten because we made it the default and they didn't know what they were doing. But there's actually another reason for that, which is tangential, but good stuff to know. Continuous delivery... by the way, are we recording this? I never checked. "Yes, you are good." Continuous delivery versus continuous deployment. Hopefully you're familiar with those terms, but semantically it's really confusing which means which, and for years, in fact, I got them backwards. So I'm going to go back to, sort of, Martin Fowler's definitions of continuous delivery and continuous deployment.
In those words, continuous delivery is all about being able to deliver at any given time, but not necessarily delivering continuously, not automatically and immediately. Continuously meaning, yeah, you should be able to push your code out once a day or something like that, but it doesn't mean that every single change gets pushed out immediately, whereas continuous deployment does. Continuous deployment says everything gets pushed out immediately. Practically speaking, what that means is that with continuous delivery, you would usually automatically deploy to staging or QA or some other non-production server, because being able to deploy immediately means you've actually tested that the deploy works, and deploying to staging usually counts as that. But continuous deployment means I'm going all the way to production. Once you're going all the way to production, some interesting things happen. One is: why have a staging server if I always deploy to production? So you might, in fact, just get rid of your staging server. But it also means, if I'm automatically deploying to production, how do canaries fit in that world? You could automatically deploy to canaries, I mean, you could, but then you're still not deploying to production unless there's some automatic way to do that. We don't yet have an automatic way. You could argue you deploy to canaries, then after 10 minutes check all your metrics, and if everything's good, automatically promote to production. That would be a pretty cool flow. We do not have a mechanism to do that automatic flow today. So therefore, if I want to do a continuous deployment flow, the only way to do it is to deploy directly to production. And specifically, my thought process on this was: with Auto DevOps, I want to be encouraging the best practices from day one. And the best practice, for me, would be continuous deployment.
And so my thought pattern was something like: I want to first start introducing stuff with auto deploy, like we've got in I2P. We're mostly talking to companies that already have existing systems and they're adding something to them. They've got apps, they've got all their systems in place, and we're just trying to support that. And most folks just do continuous delivery; they don't do continuous deployment. It's fairly rare. And so it's safer, it's conservative, right? So auto deploy is conservative. With Auto DevOps, we're getting a little bit more aggressive and saying: look, this is where you should be. You should just deploy to production. So if you're creating a new project, we're just going to deploy to production for you, because that's where you should start. And only if you actually run into problems or whatever do you back off from where you should be. Which, by the way, is an interesting philosophical thing, because a lot of teams, frankly, start the other way. Like, if you started developing software seven years ago, yeah, you're using Ruby on Rails and doing all this modern stuff, but you're probably still doing everything manually, and you probably don't even have CI. And so you incrementally add CI, then you add continuous delivery, then you add continuous deployment. This transformation takes years. And what we're kind of doing is, in some ways, turning that on its head and saying: we're just going to give you where you should get to. We're just going to make it really easy to get to the final destination, and then, only if you really need to, we'll back off from there. It might fail, we'll see how it all works, but I think it will encourage the best practices a little bit more.
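The delivery-versus-deployment distinction has a very concrete shape in .gitlab-ci.yml terms. This is a sketch with illustrative job names and a hypothetical deploy script, not the actual auto deploy template:

```yaml
# Continuous delivery keeps a human in the loop for production via
# `when: manual`; continuous deployment is the same pipeline with that
# line deleted, so every change on master ships automatically.
stages:
  - deploy

deploy_staging:
  stage: deploy
  script: ./deploy.sh staging        # hypothetical deploy script
  environment:
    name: staging
  only:
    - master

deploy_production:
  stage: deploy
  script: ./deploy.sh production
  environment:
    name: production
  when: manual                       # remove this line for continuous deployment
  only:
    - master
```

That one line is the whole difference: with it, you can always deploy; without it, you always do.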
But anyway, the point of all that was: if I'm going to do continuous deployment, I have to deploy directly to production, in which case there's no room for canaries anymore. Not currently. All right. That was a long-winded tangent. Part of the reason this is going to take an hour and a half. Is this too much crap? Do you want me to skip over these things? "I think this is the type of conversation we need to have with our prospects and customers. So I appreciate it. It's all good." All right, cool. So, going back to the demo itself. I guess the first obvious thing is KUBE_DOMAIN. This is an awesome, interesting topic all on its own. The reason you need to specify KUBE_DOMAIN is that we manage a whole bunch of the DNS and stuff for you, and we need to know your base domain so that we can then construct all the app domains on top of it. When I actually deploy this, it's going to be like under.itp.online, and under-staging, and under-review, whatever. It's going to create all these things, and they're all going to end in itp.online. "So I presume you use a wildcard DNS entry for that?" Yes. "And then you can figure out what's coming in and just route the request that way locally. Just like with Pages." Yes. It's actually even cooler than Pages, so I'm going to go into a little more detail. Cloud DNS, just to prove the point: itp.online. I have a wildcard DNS entry for basically the entire cluster. And it's interesting that way too, because there are multiple machines involved, but it's just a single IP address, right? Part of the beauty of cloud. "So that's all you need to do configuration-wise ahead of time, and then everything else is just magic after that." Close. Because it's a wildcard, star dot, we don't actually have to manage any DNS. What we have to manage, though, is Ingress.
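Concretely, the split is one wildcard DNS record managed once, plus per-app Ingress rules that route by hostname. A sketch with hypothetical names and a documentation IP (shown with the modern Ingress API version, which may differ from what the demo cluster ran):

```yaml
# DNS side: a single record, created once, shown here as a comment:
#   *.itp.online.  300  IN  A  203.0.113.10    <- the cluster's one ingress IP
#
# Kubernetes side: each deploy creates an Ingress rule keyed on hostname,
# and the NGINX ingress controller routes on the Host header.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: under-staging                # hypothetical staging app
spec:
  rules:
    - host: under-staging.itp.online
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: under-staging  # the app's Service
                port:
                  number: 80
```

So DNS never changes after setup; new apps and review apps only ever add Ingress rules behind that same IP.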
And this is a term that you may not be comfortable throwing around yet, but it's going to come up a lot in the Kubernetes world. In the old days, whatever, in the old days, when you created an app, that app would sit on a box, and that box would have an IP address. It may have an internal IP address, but it has to eventually be routable from outside, so it has an external IP address. And you'd have a DNS entry specifically for, you know, my production app, like under-production.itp.online, let's say. And you'd have to have a DNS entry, because it would be pointing to an IP address that was allocated to you by your hosting provider. And then you'd create a staging app, and you'd create a review app, and whatever, and you'd have different IP addresses for all these things. At some point, you might end up saying, well, maybe I can actually have all these review apps served by one server, just running different things on it. Like, if you ever did shared hosting using cPanel or whatever, you can have one IP address serve multiple projects, as long as you have different DNS names pointing at it. That one box detects what the hostname is, and then routes, based on the hostname, to a different application. Well, the natural extension of that is that you just give one IP address to the entire thing, all the wildcards: anything.itp.online gets routed to this one IP address, and there's this thing that just routes to wherever it needs to go. All that stuff is done for you automatically, but it is really awesome, interesting, deep magic, and it's worth understanding a little more how it works. It's hard. Let me think. How can I demo what is actually going on? Well, several things are happening. If I scroll down to a deploy, a staging deploy: here I declare what I want my URL to be.
And it's based on KUBE_DOMAIN, obviously; it ends there. And I've got some other thing in the star portion. First off, I just have to get this out of my head: because I only have a wildcard for star dot, I cannot use multiple subdomain levels. I can't say under.review.12.itp.online. That will actually fail, because there's no DNS for that. This might be obvious, but I'm going to go through it anyway. Because there is only a DNS entry for star dot, I can only use one string that does not have a dot in it. So if you look at these examples: the canary has some slug, then a dot. Production has another slug, then a dot. This one has a slug plus dash staging. This one has another slug and then two slugs, the path name and the environment slug. Everything is separated by dashes or underscores or something, but not dots. That's actually really important. If you tried to create something otherwise, it would just fail. If, as a customer, you needed to have multiple levels, then you would need DNS changes. If I wanted to make staging.under and production.under, I could do it, but then I'd have to go back to my DNS settings and add another record, which wouldn't be the end of the world. I'd have to go and manually add *.under.itp.online and point it to the same IP address. I'm not actually going to do it, but you could, though boy, that's a pain in the ass, which is why I prefer this other mechanism, because it's all automatic. There's more to it than that, though, because we automatically get SSL encryption, HTTPS. I haven't actually deployed this yet, so I can't test it, but you've seen the demos. It works for... in fact, actually, let me see. Oh, no, we didn't. That's funny. We only use HTTP in these, but HTTPS would work as well. And that happens magically because of a couple of things. One is called Let's Encrypt, if you aren't familiar with that. Let's Encrypt is awesome magic.
Basically, it is awesome magic. It's free; they ask for donations, but other than that, it's free. It creates certificates for you on demand, as needed, and just automates the entire damn process. If you've ever tried to create a cert yourself, again using cPanel or whatever the hell, you know it's a pain-in-the-ass process. Even though the creation itself is sort of okay, it's then going to email the whois contact for the domain name you're using, to verify that you in fact have the right permissions. And once you verify the email, you click through some stuff, and then it'll go and generate things. Who's got four hours to make a new cert for everything you're going to do? Now, sometimes you might just make a wildcard cert, and that, to some extent, solves the problem. But even so, there are a few other certs involved, and it's just a pain in the ass. Anyway, Let's Encrypt does all this for you, and we have made use of it automatically, again, in the Helm chart. It's not something you would necessarily have in your Kubernetes configuration; it's not baked into Kubernetes, so you still have to make sure you configure it that way. There's also another piece, a component I think is called kube-lego, which ties all this together. The way it works is: every time you set up Ingress for a domain, this little thing silently goes and says, oh, that's a new one, you don't have a cert for this, I'm just going to go create a cert for you and throw it in there, which is awesome. You do nothing and the magic starts working. The downside is there's sometimes a slight delay, and there are a couple of other downsides. Like, if you create too many domains too quickly, you'll get rate-limited, and that can be a real pain.
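For reference, this is roughly the shape of an Ingress that kube-lego reacts to: it watches for the `kubernetes.io/tls-acme` annotation and provisions a Let's Encrypt cert into the named secret. The hostnames, service name, and port below are examples, not the real config:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: staging
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"       # kube-lego sees this and requests a cert
spec:
  tls:
  - hosts:
    - under-staging.itb.online
    secretName: staging-tls              # the issued cert lands in this secret
  rules:
  - host: under-staging.itb.online
    http:
      paths:
      - path: /
        backend:
          serviceName: staging
          servicePort: 5000
```

Declare the Ingress and the cert shows up on its own a short while later; that's the "slight delay" mentioned above.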
So if you've got a large organization or whatever, you probably need to contact them and get them to bump up your limits. But anyway, I feel like there's more there, and if I want to dive into the Helm charts, I can show you how this stuff is actually configured. It's important to at least know that it's there. I'm going to dive in a little bit more on the configuration itself; again, some of you definitely know all this stuff, but I don't know where the rest of you are at. So this template includes a bunch of definitions of custom stages. By default, if I didn't specify any stages, there would be build, test, and deploy; those are the three that are built in. If I specify any, though, that overrides the default. But there's always a default, so you don't have to bother specifying stages. In this case, it's actually fairly complicated; there are a bunch of different stages. But this is what I would recommend every customer basically has. So even though auto deploy did this stuff automatically for you, this is what I would say is the best practice with canaries. If you look at the one without canaries, that's fine too. But basically, there's a bunch of stuff you're going to build somehow. Skipping canaries for a moment, you're going to have a bunch of different deployment options: canary, production, staging, and review apps. And then you also have to be able to destroy review apps, and that's basically it. Actually, what's funny is, I said this was best practice, but it's not: I defined a test stage, but I did not define a test job in here. One of the downsides of auto deploy is that it's just about deploy; we did not auto-test. In 10.0, we should be shipping auto test as well, and a few other things, and that's why we're wrapping it up under Auto DevOps. Auto DevOps will automatically test, but will also automatically check your code quality.
And then after you deploy, it will automatically do auto monitoring, although that's in this as well. I think in 9.5 we improved auto monitoring quite a bit; it's always been there, but we improved it a lot in 9.5. So anyway, this is not actually my best practice: the stages are, but there's no test job in here, and having a test job is obviously a best practice. Now, how this stuff actually works is pretty interesting, because this build job, there's not a lot to it, right? We declare the stage, okay, no big deal. We declare a script, but all the script says is basically "command build". And then we have this one little thing that just says only do this for branches. In case this isn't obvious: for me, the terminology of "branch" is a little weird, because I think of master versus branches, but in this case master is a branch. So what this "only" means is basically everything except tags. Because in Git, you can have an individual commit, and you can have a ref like a branch name or a tag name. We don't currently do anything with tags here, so we're saying only do this for branches. If I pushed a tag to this repo, nothing would happen: no CI would run, because there are no jobs defined for tags. Tags can be really useful for CI, though, so it's good to know about. In particular, if you're doing continuous delivery, one of the best ways to do it is, instead of making production a manual deploy, you would actually deploy to production on a tag. The idea there is: I merge a bunch of things into master, that automatically goes to staging. When I'm ready, I tag it as release 1.0 or whatever, and when I push up that tag, it says, oh, this is a tagged release, great, now I'm going to deploy that to production. That's a perfectly valid flow that involves tags. This flow happens to not involve tags, and that's fine.
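The shape being described, sketched as a trimmed-down `.gitlab-ci.yml` (job bodies are elided, and the tag-triggered production job is the alternative continuous-delivery flow mentioned above, not the demo's actual file):

```yaml
stages:            # overrides the built-in default of build, test, deploy
  - build
  - test
  - review
  - staging
  - canary
  - production

build:
  stage: build
  script:
    - command build
  only:
    - branches     # every branch ref, master included; tags are excluded

production:
  stage: production
  script:
    - command deploy
  only:
    - tags         # alternative flow: pushing a release tag deploys to production
```

With `only: branches` alone, pushing a tag runs nothing at all; adding a job with `only: tags` is what turns tags into a deploy trigger.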
But anyway, that's all this does. So really there's magic under there, which is something we need to dig into. But just to keep going through the rest of this file, it's basically the same for everything else: we've got this magic that does a build, this magic that does a deploy, this magic that does a canary deploy, and then this magic that destroys something. So where the hell is all this magic defined? It all comes from this line, where we've said that when you run any of these steps, the first thing you do is load this image, named kubernetes-deploy. For some reason, we put it under gitlab-examples; it probably should have been put somewhere else, but we're going to kill it soon, so it doesn't really matter. This is a Docker image, and it basically has all the magic in it, and that's worth diving into. It has these commands. When you say "command build", "command" is just a bash way of saying run this executable; so "command build" says run this bash script. The script isn't that long, but it's fairly long. Imagine if you were writing this template from scratch and actually wanted to do all this stuff: instead of a single line that says "command build", you'd have to have this 80-or-90-line file in there. It would make the .gitlab-ci.yml incredibly hard to understand, so we've basically hidden all this stuff. And from a don't-repeat-yourself perspective, we call the deploy command multiple times: staging, production, and review apps all call deploy, so it's better to wrap that up in a single bash script. I don't like this, because it hides all the information, it doesn't teach you anything about how to do it, and it makes growing out of auto deploy really hard, because the next step is to suddenly jump into fully custom configuration.
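The wrapper trick itself is easy to reproduce: ship the long scripts inside the image on the PATH, and each job's script shrinks to one line. A self-contained sketch — the script name and its contents are invented here, standing in for the real ones in the kubernetes-deploy image:

```shell
# Simulate an image that ships a long "build" script on the PATH.
bindir=$(mktemp -d)
cat > "$bindir/build" <<'EOF'
#!/bin/sh
# ...imagine 80-odd lines of real build logic here...
echo "running the long build script"
EOF
chmod +x "$bindir/build"
PATH="$bindir:$PATH"

# What .gitlab-ci.yml actually runs: "command build" is the shell's way of
# saying "run the executable named build", instead of inlining the whole script.
command build
```

One line in the CI file, all the complexity in the image; the cost, as noted, is that the CI file no longer teaches you anything.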
And so we have some intentions for improving this in 10.0, but for today, that's where we're at. I just realized there are a couple of questions I haven't answered. How common or uncommon is it for companies to do continuous deployment? There are a few. I remember Pivotal Labs, the consulting company: all of their engagements did continuous deployment. There are a few companies that do it, and a lot more that get really close but don't. Even when I was at Heroku — things may have changed since — we didn't do continuous deployment for many things. It would be like, oh, we'll deploy a couple of times a day, but it still wasn't just automatic. Yeah, we have a lot of options for configuration, all documented. "I haven't found it yet, but it's my first week. Is there a repository of samples or a reference, by chance?" Unfortunately, I think I lost the context; I'm not sure which options you're talking about there. "You're just looking at all these YAML files, and there's a whole lot of variables that can be inserted here, right?" You're starting to jump into it with the examples here, so you're ahead of me. So there are a few things. GitLab CI variables is the page I would start with; that's one I go to a lot. The other one is the GitLab CI YAML reference. This is probably the most viewed page in our entire documentation, and it describes every keyword you can have in that file. Unfortunately, it's sort of like describing a programming language: what you do with the keywords matters, and that's where best practices come in. Unfortunately, we don't have a lot of good examples. We end up doing this when customers ask us questions, and we have to walk them through it. But if you show people examples like the ones in the demo, that's actually pretty good for them.
Yeah, it's funny, because — and I'd love more of your feedback as you talk to folks — I end up hearing some people say, oh my god, this is so easy to use and the documentation is awesome, and then somebody else says, this is horrible, I can't figure it out, there's no documentation. My only guess is that if we've documented it, we do a really good job; but if you need something we didn't think you'd need, or just didn't document, then it's really horrible, because it's really hard to figure out how the hell the stuff works. "More than the docs, people want an example of: this is how you do it. We document the stuff, and you look at it and go, I have no idea how to implement that. But if there's the doc and then even a three-line example, you go, oh, that's it." Right, exactly. So we need to improve there. We do have a list of examples — there's one per language — but that's not enough. Okay, so getting back to this script, I won't dive into exactly what it does. Some of it is just error checking, because the script could be run in all sorts of environments. If you were writing this stuff yourself as a customer, you wouldn't make it this verbose: you wouldn't check whether the Docker engine is running; you'd make sure the Docker engine is running and leave it at that. But nonetheless, there is some interesting stuff in here. When we actually do the build, if you've got a Dockerfile in your repo, we will just build your Dockerfile and push it up to some unique tag. If you don't have a Dockerfile, we do something really cool: we run this little thing called Herokuish, which is basically a wrapper for Heroku buildpacks. And Heroku buildpacks are basically a way of saying, for a whole bunch of different languages, this is how you construct the app on top of some Ubuntu-based image.
So if you've got a Java app, we know how to compile that. If you've got a Ruby app, we know how to compile that. If you've got, I don't know, Meteor — we may not know how to compile it directly, but there's a Meteor buildpack that does. So it's a really extensible system. What we've done is said: we're going to take this Herokuish, and since you didn't give us a Dockerfile, we don't know how to build your thing, but Herokuish hopefully does, so we'll let Herokuish try. Herokuish does the build, and after it's built, we do this docker commit, which is really cool: we just take a snapshot after the build is done and store that into the registry. And then that's your build; you're done. There are, I guess, a couple of other little steps, but that's basically it. Then all we do is log into the Docker registry and push the image up. When you get to the deploy, however the Docker image was created — whether by Herokuish or by your Dockerfile — we just take that Docker image and deploy it. So there's some interesting stuff in here, again some safety-net things, but this is the meat of it. We first create a secret so that Kubernetes can actually go and grab the Docker image, in case it's a private image, for example. And then, wow, there's some handling for whether it's a canary or not, which I'll gloss over. Oh, and there's some interesting stuff about figuring out how many replicas to set. If you set up a project-level variable like production replicas equals 10, which our demo does, we will interpolate that and set the number of replicas accordingly. We set up a few other things — Postgres, blah, blah, blah. And then here, actually, is a good example of doing this without Helm charts.
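The build dispatch described above boils down to one check. This hypothetical sketch mirrors just that decision; the real script in the kubernetes-deploy image also handles registry login, image tagging, and the docker commit step:

```shell
# Decide how to build, the way auto build does:
# a Dockerfile wins, Herokuish buildpacks are the fallback.
build_app() {
  if [ -f "$1/Dockerfile" ]; then
    echo "docker build from your Dockerfile"
  else
    echo "herokuish buildpack build, then docker commit"
  fi
}

repo=$(mktemp -d)
build_app "$repo"              # no Dockerfile yet: fall back to buildpacks
touch "$repo/Dockerfile"
build_app "$repo"              # Dockerfile present: use it
```

Either way, the result is a Docker image in the registry, and the deploy stage doesn't care which path produced it.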
This is what the Kubernetes configuration looks like for the application — from here down to, I guess, the end of the file. It's fairly long. It's fairly ugly. It's got all sorts of weird redundancy; like, how many times have I mentioned the Postgres app name, for example? But nonetheless, this is what you would do: you would create this YAML file, and once it exists, you would deploy it. Oh, right, it's inside the cat itself. So we actually take the contents of all that and pass it directly into kubectl apply, which takes whatever is piped into it and constructs all the objects in Kubernetes. I don't want to go into the details; it's ugly. In the next version of this, we'll get rid of all that and use Helm charts, so we'll just reference the Helm chart and say deploy that instead. But for now, this is what it does. So this is all pretty complicated crap, which is a good reason not to have it all in your .gitlab-ci.yml — but again, it's hidden in this wrapper, and that's where all the magic really happens. There are a few other bits that do destroy, and there's also this common thing, a library that all these pieces use for some of the safety features. So it's a fairly complicated thing. The one other thing it has is a Dockerfile, because it is a Docker image itself, and it loads up certain things for you: it adds Git and Ruby and kubectl, and just sets stuff up for you. So it's fairly complicated. But first off, it's awesome that we have this capability, because this isn't in code. That's one of the funny things about auto deploy: it's just a Docker image that knows how to do deploys plus a template; it's not built into GitLab CE. Which is just a really interesting fact, I suppose. So, any questions on kubernetes-deploy and how all that magic works?
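For a sense of what that generated YAML contains, here is a heavily trimmed sketch of the Deployment-plus-Service pair of the kind the script cats into `kubectl apply`. All names, image references, and ports are made-up examples:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: staging
  labels:
    app: staging
spec:
  replicas: 1                  # or interpolated from a variable like PRODUCTION_REPLICAS
  template:
    metadata:
      labels:
        app: staging
    spec:
      containers:
      - name: app
        image: registry.itb.online/under/app:abc123   # whatever tag the build pushed
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: staging
spec:
  selector:
    app: staging               # routes traffic to the pods the Deployment creates
  ports:
  - port: 5000
```

Note how `app: staging` appears three times just in this fragment; that's the redundancy being complained about, and a big part of why the next version moves to Helm charts.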
Hey, Reb, in your experience, how often are you diving into your customers' YAML files? How deep have you gone? "You're asking me? I haven't really done that much at all. I mean, I show them the YAML files that we have, and that's pretty much it. And we have some fairly complex ones; the Kubernetes one is fairly complex. But if I start off with a basic example, it's all pretty easy to explain. And even with the variable names we've chosen, if you step down through the file, you can sort of see how we build things up, even in the Kubernetes example." From the limited experience I've had talking with prospects and customers, the ones interested in Kubernetes either are interested but don't know anything — in which case they don't have any YAML files to look at; there's nothing, they're still just looking at the brochure — or they've already done it, figured it out, know what they're doing, and don't tend to ask for help. I assume over time there will be more people in the middle zone: as Kubernetes gets more approachable, they'll know they want to do it, they'll start trying, and they might even make some progress because it'll be easy, but then get stuck. For now, though, you need a certain level of experience before you even get a basic hello world working on Kubernetes. And if you can figure out how to make Kubernetes work, you can easily figure out how GitLab CI works; that's the least of your problems. That's another way to phrase it. It's not entirely true — people will still be looking for inspiration and recommendations and blah, blah, blah — but nonetheless, there it is. Okay, so that's a whole bunch of stuff on that. Looking back at the agenda, there's a question about runners, which is worth talking about.
I think we answered the mysterious build and deploy scripts. And then there's how review apps work and how they're exposed — oh, the URLs. That question about exposing apps is basically about the Ingress and the Let's Encrypt piece, so hopefully I already talked about that. It's worth showing maybe some of the details so you understand it, but at this point I hope you get it. So, Kubernetes — let me just start simpler here for a second. Kubernetes has multiple kinds of objects, and this is actually one of its most annoying aspects. In a VM world, when you deploy, it's like: I've got an app. There's one little object; it is an app; that's all there is to it. But in Kubernetes, you've got deployments and replica sets and pods, and when to use each is just not intuitive. In the early days of Kubernetes, people would create pods directly. A pod is basically the equivalent of a virtual machine for a Docker container, depending on how you want to look at it. It's not technically a single Docker container; there's this weird thing where a pod can actually contain multiple Docker containers. I've never yet seen an example of why you would, but it is certainly theoretically possible, which is why they called it a pod instead of a container. But for all intents and purposes, these are three different containers, and if I were to scale up, there would be multiples of them here, and you'd see them. So mentally, think of them as containers or VMs or instances or whatever. But you don't create these directly. What you do instead is create a deployment, which generally has the same name. You create a deployment, and that deployment has an associated replica set — just to confuse you further — and that replica set then defines how many of these things you have.
And then, of course, when you click on the replica set, it's like: okay, this replica set has this one pod. And also, by the way, a service, which is another thing that's mentioned up there, but shows up a little lower in the menu. So for every given thing in GitLab, there's a deployment, a replica set, a pod, and a service — and an Ingress and persistent volume claims. Thinking about all these objects can get weirdly confusing. You don't create the services directly, and usually you don't create the replica sets directly either; the thing you create is the deployment. I don't know if it's worth going deep into this, but the deployment basically has all the information necessary to create the replica sets and the pods and the services. If you look back at kubernetes-deploy, at the actual deploy part, and ask what objects I actually created here: I created this deployment and a service, and that's it. So I guess you do create two things — I said you only create one, but that was a lie. I think the service could be implied, but for whatever reason we created the service explicitly. Those are the only two things we created, and the replica sets and everything else were created from there. That comes from this spec, and it's this weird thing where you're not declaring the number of replicas directly, you're declaring a specification for how many replicas there should be, and then this other thing goes and keeps track of it — anyway, complex Kubernetes stuff you don't necessarily need to know too much about. What you do need to be aware of, though, is how this connects with everything else. When I create a review app, for example, I'm basically creating a brand-new deployment with a new name. Where's the name declared? Actually, no, this one's Postgres — that's why it's further down. It's in here.
So it's the combination of the namespace and the environment slug. The namespace, in my case, would be "under" plus some numeric thing, and then it's production or staging or whatever. The combination of the namespace and the app label gives me a unique deployment of that item. So when I'm creating review apps, all I basically do is create another one, and then, because of Kubernetes, it just spins up another one. And then there's a whole bunch of other magic that gets pulled in, like the URL — where is that even specified? Well, here are some ports that are declared. Oh, there's the Ingress. So that's the third thing: Postgres didn't have this because it didn't need Ingress explicitly, but Ingress is how you get from the outside world into these individual pods. Here's where you specify the hostnames, and all you do is declare this; then, again, something else magically solves it all. There's this NGINX ingress installed on the cluster that looks at this declaration and sets up everything for you. That's one of the awesome things about Kubernetes; it's actually really elegant. "That's some of the magic we sort of wanted to understand. But if there are no user-serviceable parts inside, I totally get it. I just want to know what the moving parts we can configure are, and I guess these are a lot of them." Yeah, you generally don't need to configure these; you just need to know this exists, and that's about it. But this is how it works. So if anybody asks you how the hell it works: it's because we spin up these services and configure Ingress, and because you've got NGINX ingress configured, everything just gets managed for you.
And then because you've got kube-lego, which enables Let's Encrypt, it will automatically add the security certificates for you. So those are the components. It's the combination of the auto deploy feature and the assumptions about what's in the Kubernetes cluster that makes all this magic happen. "Are all these variables and things defined somewhere in the Kubernetes stuff, not in our stuff? Or do we document them?" Well, it depends on what you mean. You see this variable, the environment hostname, whatever — okay, that's our stuff. What you have to put in for Kubernetes is defined and documented in our documentation. "Right. So what we have to put in for Let's Encrypt — is that in our docs? Is that in the Helm chart thing?" If you install the Omnibus GitLab Helm chart, everything is just done for you magically; you don't have to configure anything. Dmitriy, our CTO, just wrote this quick start guide for if you didn't do it that way. It gives you a few bits of instructions, and then somewhere down here it's like: okay, now you've got Kubernetes, but you need Ingress, and this is how you set up Ingress — again using a Helm chart, because that's the best way to do it. And then how do you set up your DNS? You grab the IP address and set up your DNS. There's not a lot in here; it's actually really straightforward, by the magic of this Ingress. But yeah, we've got instructions for how to do that for your cluster. Later on, by the way, in 10.2 or something like that, we're actually just going to have a little button: once you've connected your Kubernetes cluster, set up Ingress, set up Prometheus, set up runners, and we'll just do it for you.
But these are the instructions if you need to do it yourself. Okay, there's a little bit more I want to show you on the Kubernetes side, because I've shown you the namespace for GitLab, but let's go further. I did just do a deploy of something that I called "under", right? Oh, I think I have to refresh. So that's the name. "Setting up this interface is kind of interesting as well." You mean this dashboard? "Right." Yeah, it's a pain. This is just more Kubernetes stuff. I mean, I just encourage all of you to become Kubernetes experts, which kind of sucks, but we've got to suck it up. I can help people through this, but take a look at the address here: it's localhost, because this is a proxy crossing back out to the Kubernetes cluster, which is running somewhere else. Can you see this if I drag this over here? Can you see that window? "No, I didn't see any window." All right, fair enough; I wasn't sure how Zoom shared. Anyway, yeah, I have a terminal open with the proxy working. "If you share the entire screen instead of just the application, then we should be able to see your terminal window." Yeah, you would, but I'll just do this so you can focus on it. So in this terminal window — ignore the other stuff — I have run this gcloud command, which goes and grabs credentials and sets up which cluster I'm going to connect to. And then I ran kube proxy, which connected to that cluster and port-tunneled from it over to my localhost, and that allows me to hit the dashboard in the browser on localhost. It's dumb in a lot of ways from a usability perspective, but it's better from a security perspective, because it means nobody on the broad internet can hit your admin interface by accident; it's just not open. You literally cannot hit that IP address. And once it's configured, you just type that one command and it pops up. It's actually really...
"It's a pain in the ass to get the prerequisite packages installed and stuff, but once you get it configured, easy peasy, this works." Right, exactly. It's not that bad, just annoying. And you've got to keep swapping back and forth. For a while with Google, I used to be able to type in the email — sorry, the IP address — and then just do some security stuff to get to it, but they shut that down a little while ago, and now the only way in is through the proxy. I'm sure it's secure; it's just annoying. All right, back to it. So the only things I want to show you here are for the "under" namespace, for this app I just created. It created deployments: one for the app itself and a Postgres pod, just in case — I'm not actually using Postgres, but it kept it there anyway. Replica sets, obviously, and then the pods. If I were to scale the pods, you'd see more pods, but you'd still only see one replica set and one deployment. Just to show you that that's where the stuff lives. And then, to go further, the default namespace has the runner. Interestingly — this is a little bit weird — the autoscaling runner lives there, but as I run jobs, all the individual jobs will be created in the gitlab namespace. We're eventually going to clean that up in the next version of this stuff; actually, in fact, if you install it as of today, I think that's already cleaned up. But this itb.online instance was set up before, and it's not upgrade-compatible; I'd have to tear it down and rebuild it to do it right. If you did the demo today from scratch, everything would be in the gitlab namespace. So if somebody asks, where are the runners? Well, that's where they are. In fact, to be a little bit more specific, there is one runner, which is set up as a Kubernetes executor. It's sort of the master Kubernetes runner — I'm not going to go deep into that — and when you have multiple jobs, it will go and spin up other pods to run each of those jobs.
And so if you've got five jobs that need to run, it will go and spin up five pods, and when they all finish, they just disappear. That all happens within Kubernetes for you. There are a couple of weird things — like if you run out of resources, it might end up triggering autoscaling of the cluster, and blah, blah, blah — but that answers "where are the runners?" All right. I think there are 15 minutes left. Any questions as a result of all this, or anything I didn't cover sufficiently? "I have a question. How often is this setup something that we would do in, like, a POC or trial? Reb, have you been involved in any of that?" "I've never set this up. I've demoed it and shown people various portions of it. But first, I wanted to know what was going on behind the covers, more than I already knew. And second, occasionally people have asked about it, at which point you just sort of say, yeah, well, it's Kubernetes, it's sort of beyond the purview of our discussion, and hope that they go away." So you won't say, "I have no idea, it's magic." We are going to have to start doing this, because this is the way things are going. This is the way our stuff is going; we're going to start deploying our stuff into Google Cloud — but I'm not allowed to say that, right? To date, I don't think I've ever been involved in a proof of concept, production or whatever, where we've been involved in setting up their Kubernetes. Again, if you're smart enough to get Kubernetes working, you've got all this stuff going for you already. But where I want to get to is that this is actually the easiest way to get started with GitLab. Downloading Omnibus is actually painful now, because you then still have to manually configure the container registry, Mattermost, runners. It's actually in some ways really ridiculous that you install GitLab and you think you've got it, but all these things are turned off. You don't even have a runner.
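The runner arrangement just described is what the Kubernetes executor in GitLab Runner does. A minimal sketch of the relevant `config.toml` section — the URL, token, and values here are examples, not the demo instance's real config:

```toml
concurrent = 10                    # up to 10 job pods at once

[[runners]]
  name = "kubernetes-runner"
  url = "https://gitlab.itb.online/"
  token = "RUNNER_TOKEN_HERE"      # placeholder registration token
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab"           # job pods are created in this namespace
    image = "ruby:2.4"             # default image when a job doesn't specify one
```

The long-lived runner pod holds this config; each CI job becomes a short-lived pod in the configured namespace and disappears when the job finishes.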
And to some extent that comes from the days when GitLab was primarily an SCM and all this other DevOps stuff was considered separate; CI was a thing you might want to consider down the road or something. So we can think of Omnibus as being everything, but it doesn't have a runner. There's a very good reason for that: we don't actually want the runner running on the same box as your main instance, because you don't want the runner to be able to take your instance down. So we still won't do it, and we won't put it into Omnibus. But the point is that we're going to have a few one-click ways to do it. I do believe that people still like AWS, raw AWS, and we're going to have a nice one-click button to install on AWS, using CloudFormation to do all this stuff magically for you. It will include a runner, set up in auto-scaling mode, and it'll be really easy. But installing on Kubernetes will be the second button: just click here, and it'll install on Google GKE, because that's the best Kubernetes platform currently, and it will just do everything for you. Today, this is by far the easiest. If I were ever giving a demo of anything, I would install it on Kubernetes; it's so much easier. So if I were doing a program for somebody, I would seriously consider saying: we're just going to go and set something up on Google for you. Especially since, when you set up a new Google account, it's free; you can get at least a couple of weeks of compute out of it before you run out. Heck, even on Azure you can get, I don't know, a week or so on a Kubernetes cluster with the free credits. So it's just super easy, and it's complete. "I'm going to set up a new one." Yeah, just make a new account, go ahead, whatever. But as a demo for people — and Sid definitely had this vision — you'd go in, give the demo, and then walk away and say: oh, and here you go.
Here's the cluster we created for you. Keep playing, because in fact we just used your login or something like that. Have them log into Google and click the button and watch it on their account, because again, it's free, or within the free trial levels. And, I don't know if we're there yet, but that's an interesting vision: you give the demo and you just leave it with them, and they can go log in themselves right afterwards. Any other questions? My chat keeps sliding back. I'm sorry. We've got 10 more minutes. Do you want me to talk about the future of I2P or the future of Auto DevOps? Or do you want me to call it a day?

I wouldn't mind hearing about Auto DevOps.

Yeah. All right. Cool. Auto DevOps. Let's see if it even shows up in Google search. Oh, it's auto slash integrated. Oh, that's not even it. All right. We're back to square one. So Auto DevOps is a big issue; there's lots of stuff in there. The thing to focus on is this iteration. Oh man, that is a weird piece of new UX. The first iteration is going to include Auto Build, Auto Code Quality, Auto Deploy, Auto Review Apps, and Auto Monitoring. And actually Auto CI; I hope that's going to make it in there. Really, in a lot of ways, what we call Auto Deploy today is also Auto Build. You saw there's a job that says build and uses Herokuish if you don't have a Dockerfile; that is Auto Build. Auto Deploy is the next piece. Auto Review Apps is also part of Auto Deploy already. So we really already had three of these. And the Auto Monitoring we had, but prior to 9.5, Auto Monitoring monitored the system metrics that Kubernetes would give us, so the CPU and memory. And then in 9.5 we now have, we'll go there, yeah, there's one: NGINX output. So it's got error rate, latency, and throughput. And it means that when you deploy an app (I didn't show this in the I2P demo, but you've seen this), after you do a deploy, you can actually watch metrics on the app.
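Those error rate, latency, and throughput numbers come from NGINX ingress metrics scraped by Prometheus. As an illustrative sketch only (the exact metric names depend on the ingress controller and exporter version, so treat these as assumptions), the three panels map to queries roughly like:

```promql
# Throughput: requests per second over the last two minutes
sum(rate(nginx_requests_total[2m]))

# Error rate: share of responses that were 5xx
sum(rate(nginx_upstream_responses_total{status_code="5xx"}[2m]))
  / sum(rate(nginx_upstream_responses_total[2m]))

# Latency: average upstream response time, converted to seconds
avg(nginx_upstream_response_msecs_avg) / 1000
```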
And you're going to get now, like, throughput and latency numbers, which is awesome. So that basically already works. So what we're really adding then is Code Quality, which we shipped as a feature a little while ago but didn't enable by default. And Auto CI, which again we never had, but we will in 10.0. And what that will do is take the same buildpacks that we use from Heroku. Herokuish now has a test capability. So for those same buildpacks, like Java and Ruby and whatnot, they have a default way of testing each of those languages. And we will just leverage that to say, okay, let's just run the test pack and do whatever it says, to test Java or whatever. It may not be perfect, but it's a really, really great start, and it'll work for a lot of languages. So we're really excited about that.

But Auto DevOps goes further and doesn't just have these features. What it has, technically, is a feature called the implied gitlab-ci.yml. What that really means is that if you have a project on GitLab.com or wherever, and Auto DevOps is enabled at the instance level, we will say: oh, you've got CI/CD enabled when you created the project, but you don't have a .gitlab-ci.yml file. Well, hey, we're just going to use an implied one for you. And that .gitlab-ci.yml file is going to be very similar to the Auto DevOps pipeline we just went through, but it's going to be based on Helm charts and whatever, and it's going to add in the Code Quality and the Auto CI. But it is going to do all of these things for you automatically, for every single project on your instance. So in the extreme, looking at .com, it means potentially, if we do it this way, we could literally turn it on one day, and every single project, on the next push, will suddenly get CI run and Code Quality. And there's a little bit of a trick there, because not everybody has Kubernetes enabled, so we can't actually do Auto Deploy for you until you enable Kubernetes.
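To make the "implied gitlab-ci.yml" idea concrete, here's an illustrative sketch of the kind of pipeline it implies. This is not the actual template; the job names, scripts, and helpers are assumptions pieced together from the features named above (Herokuish build, buildpack tests, code quality, review apps, deploy):

```yaml
# Hypothetical sketch of an implied .gitlab-ci.yml.
# Scripts marked "placeholder" stand in for real template logic.
stages:
  - build
  - test
  - review
  - production

build:            # Auto Build: Herokuish buildpacks when there's no Dockerfile
  stage: build
  script: build_herokuish_image   # placeholder helper

test:             # Auto CI: run the buildpack's default test pack
  stage: test
  script: herokuish buildpack test

codequality:      # Auto Code Quality
  stage: test
  script: run_code_quality        # placeholder helper

review:           # Auto Review Apps: one environment per branch
  stage: review
  script: deploy_via_helm         # placeholder helper
  environment:
    name: review/$CI_COMMIT_REF_NAME
  except: [master]

production:       # Auto Deploy to production from master
  stage: production
  script: deploy_via_helm
  environment:
    name: production
  only: [master]
```

The point is exactly what's described next: a project with no .gitlab-ci.yml at all would behave as if it contained something like this.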
But if you had Kubernetes enabled, then we would just automatically do deploys for you as well. We'd do review apps for you. So this is actually really, really significant. We may not, in fact, turn it on at 10.0 for everybody, because it might just flood our runners. We just haven't done the math yet, so we're probably going to take it a little slowly. But certainly at your own instance level, you could do that. So the idea now is we've taken this somewhat complicated and magic .gitlab-ci.yml and said: you don't even need to look at it. We're going to do build, CI, deploy, code quality. We're going to do monitoring. We're going to do everything for you, and you don't even have to do anything. Now, if you want, you can take a look at what the implied .gitlab-ci.yml is, and you'll be able to commit it, edit it, make changes, and grow and learn from it. And we're going to do all sorts of things to make the .gitlab-ci.yml easier to understand, so it won't look like magic. Nonetheless, we're just going to turn all this stuff on for you. And that's actually where the "auto" comes from. So it's a little bit funny, because we use "auto" to refer to all these sorts of things, but you still have to turn them on. What we're really saying now is it's just going to be on for everybody.

Now, given that Kubernetes is an important part of that, we're also going to go further. In 10.1, 10.2, we plan on adding ways to make it really easy to connect to Kubernetes, but also to spin up a whole new cluster for you. So on .com, for example, there'd just be a button that says create a cluster for me on Google GCP, and you'd answer, like, two questions, maybe. And we'll just go and, obviously, do the OAuth dance, because we need your permissions, because you're going to be paying for the cluster. We're not going to do it on our dime. But you click a button, and then it'll create a cluster for you. And you had asked earlier about, well, how do you set up Prometheus?
How do you set up Ingress? How do you set up whatever? Well, you don't. You just click the button. You know, maybe there are four buttons in the early version of it. But basically you'd click, it would create the cluster, then you'd say, okay, add Ingress, add Prometheus, add whatever, and it would just do all that stuff for you. And so then, what we hope anyway, is that if you look back at the .com example, immediately everybody will get build and CI and code quality, but they won't get deploys. But there'll be a nice prompt like, hey, you've got these things, but you don't have deploys; consider adding a cluster, and then you get review apps. And maybe they're not ready to do production deploys yet, but they can easily still turn on just review apps. And suddenly, with a few clicks, they've now got an auto-build thing running. They don't even need a Dockerfile; we'll figure it out for them. And then we will go and just do review apps for them. So we're hoping that this is going to be not only an awesome change to our demo and stuff, but a really genuinely awesome thing for people to get started with CI. And that could really help us out too.

Cool. That'd be great.

Yeah. And we're really excited about it.
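Roughly speaking, those buttons would be doing on your behalf what you can already do by hand today. As a hedged sketch (cluster name, zone, and release names are placeholders, and the chart names assume the old Helm stable repository):

```shell
# What a "create a cluster" button would amount to on GCP:
gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3

# And "Add Ingress" / "Add Prometheus" amount to installing the usual charts:
helm install stable/nginx-ingress --name ingress
helm install stable/prometheus --name prometheus
```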