I'm just going to talk about Kubernetes cluster bootstrap at my company on AWS. So I'm not covering everything, I'm covering a very small slice of that huge pie. There are different scenarios: you might need to bootstrap locally, or you could use hosted clusters. For local, a great example is Minikube. Docker also recently added, in the edge version, the ability to run Kubernetes within Docker for Mac. I haven't played around with it much; it just messed up my Kubernetes client and I was not happy. Then there's hosted: Google Container Engine, Azure's offering, and Amazon announced EKS, which I guess is not generally available yet, right? So those are the options; I'm not going to endorse anything. Then there are turnkey cloud solutions: a lot of projects that let you run Kubernetes very quickly on one of these clouds. One of them is AWS, which is the one we are using and basically the focus of this talk. Then there are on-prem solutions, which I'm not going to cover at all. Actually, who here runs Kubernetes on bare metal? Okay, several people, so that might be an interesting follow-up talk, because you face different problems there. So when we started to look for bootstrapping solutions, we knew that we liked CoreOS and Terraform, so that's ideally what we wanted to use. The first thing I looked at was kube-aws, which was a project from CoreOS. I found some very old discussions about it. From the top of my head, without looking at the latest status of that project, it was a lot of custom YAML, in a way that at the time I thought was not appropriate (it has probably evolved since), and it was not Terraform. So it was nice, but not great. Then we started with a project called Tack. Tack is a term in sailing, for when you tack from one side to the other, just for your information. I wrote an internal Confluence article about this with pictures of ships tacking, and my colleague asked what that was about, so that's what it means. Anyway, Tack uses CoreOS and Terraform, so we loved that. The problem was it was built against Terraform back when it was version 0.7. I don't know if you've used Terraform for that long, but the remote state support wasn't really great then. At the time, when you downloaded Terraform you also got one huge binary with all of the providers compiled in: AWS, GitHub, everything in a single Terraform binary. And as far as I remember we didn't use any TLS certificate generation or templating with it; the local providers were not as advanced as they are right now. We were having a lot of trouble with it, there was a lot of bash scripting around it, and we didn't maintain it that much, so we were doing a lot of manual things outside of the project, which was not ideal. There was also the way we ran etcd: we didn't do a proper etcd setup, so we lost cluster state a few times. Managing and doing rolling upgrades of etcd was complicated, and upgrading Kubernetes versions was manual and slow. Still, it helped us get things running on Kubernetes initially. Then in 2016, a lot of exciting projects appeared.
I mean, after Docker announced Swarm, it kind of highlighted the issue a lot of people were facing with the bootstrapping, the actual initialization part of Kubernetes: with Swarm you just had to run docker swarm init and there you go. That was picked up by the Kubernetes community as well, to build additional tooling such as kubeadm, which is the topic here. So kubeadm was announced not long after docker swarm init appeared, to automate a lot of the bootstrap process, managing TLS certificates and things like that. However, when we were looking at revamping our bootstrapping process, kubeadm had no support for high availability and was still in alpha, so we had to discard that option. There was also kops, which was originally called upup and which was brought under the main Kubernetes organization, so it became an incubated project, I believe. Under the Kubernetes organization you have the kops repository, and its tagline is to manage clusters "the Kubernetes way"; I'll try to highlight what that means in the remaining slides. Who is familiar with kops? Who has used kops? Nobody? Great. Then there's also an exciting proposal from CoreOS about self-hosting: creating a temporary control plane in order to create a cluster. This self-hosting proposal is actually a core component of CoreOS Tectonic. Tectonic is the commercial offering from CoreOS to manage Kubernetes clusters, and the idea is to provide enterprise-ready Kubernetes clusters. It's based on Container Linux, the original operating system CoreOS released, which is great for creating a fleet of servers that self-update and coordinate reboots across the cluster, so your services stay up while your nodes get refreshed and keep up with the latest security updates. It's integrated with Dex, an identity service they open-sourced, to provide authentication to the Kubernetes cluster. So you can integrate with LDAP, which is great for the enterprise, or with any identity provider supported by Dex, such as GitHub or Google. Actually, for Google you don't need Dex, because you can integrate Google domains directly with your Kubernetes cluster. Tectonic also comes with a management console, and it's beautiful: you click a few buttons, you upgrade your Kubernetes cluster version, everything. It's wonderful and I really wanted it. Unfortunately, we negotiated with CoreOS for a while but never got to the point where we were able to adopt it. Still, it's a very exciting framework; that was another advertisement, I think. One of the interesting parts when you start playing with CoreOS Tectonic is that it's fully built in Terraform. They initially created a patched version of Terraform (around 0.10) to achieve what they needed, so they built their own Terraform binary and bundled it with the Tectonic installer. And when you look into the Tectonic source code, the Terraform configurations use very interesting features, and there's a lot to learn from how they use Terraform.
So I did spend a lot of time studying it and playing with the vanilla mode, the free version. Also, once you engage CoreOS to get Tectonic, they will work together with you and dedicate engineers to certain components; Ticketmaster is one of their customers, and together they open-sourced an application load balancer controller for AWS, which is also very interesting. It has also grown a lot since, and we haven't played with the latest version. Sorry, that point is actually about Terraform itself: it now has better state backend support, and there's a module registry you can push to and pull from, so Terraform itself has improved a lot since we last used it. More praise for Tectonic; maybe I should stop talking about it, but overall it's a very interesting project and I learned a lot by going through it. The next set of slides was meant to talk a bit more about the self-hosted control plane proposal, but given that I basically have ten minutes left, I'm not going to go into the self-hosted concept of Kubernetes, so I'm skipping those slides. I do want to highlight a couple of very interesting projects which use this advanced way of bootstrapping a cluster. We couldn't use them because they were basically very young by the time we started working on this, but if I were to do it again, I would definitely look at these. One is Typhoon, which is a minimal and free Kubernetes distribution; it uses Terraform and has custom Terraform modules that prepare Bootkube. And there's Archon, which is an operator, as we talked about earlier: an operator is like an expert system that manages another system. For example, we talked about an Elasticsearch operator that manages the nodes and the whole cluster creation. In the same way, the idea of Archon, I believe, is that you give it a Kubernetes cluster definition and Archon goes ahead and creates it. So it's kind of an operator for clusters, based on the Bootkube project. It sounds very cool; unfortunately we didn't have time to play with it. Oh, and there was one big problem with Tectonic, and the reason we didn't go with it: at the time, you had to create a separate Tectonic installation for each cluster. I wanted to use Terraform workspaces, which are basically a way to isolate state: you keep the same Terraform configuration and just point it at different state files, and with that you're able to create identical clusters for different environments. So you can create a production cluster, a staging cluster, or another production cluster if you want. That was something I felt was lacking in Tectonic, and because I wanted to use the vanilla mode, we would have had to find replacements for a lot of components and study a lot, and it took too much time. So we let go of that idea and decided to go with kops. Kops is a project that is now more or less endorsed by Kubernetes, and it basically allows you to define clusters in code. You give it a cluster spec manifest and a couple of instance group definitions, and it goes out and creates the cloud components needed for those clusters. It manages secrets for you as well; we haven't played with that too much.
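Just to sketch roughly what that workspace idea looks like — the cluster names and variable here are made up, not our actual configuration:

```bash
# One Terraform configuration, one isolated state per environment.
terraform workspace new staging        # create and switch to the "staging" state
terraform workspace new production

terraform workspace select staging
terraform plan  -var "cluster_name=staging.k8s.example.com"
terraform apply -var "cluster_name=staging.k8s.example.com"

terraform workspace select production
terraform apply -var "cluster_name=production.k8s.example.com"
```

Same code, different state files, so the clusters come out identical except for whatever you parameterize.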
It manages the node boot sequence, and it manages etcd in a high-availability mode. The recovery story for our old clusters was pretty bad; we didn't have an easy way to recover etcd. With kops, it creates an auto scaling group with a single instance per master, so if the machine goes away it gets automatically replaced, the etcd data volume is reattached to the new node, and the node joins the cluster again. So it has an auto-recovery mode that's much better than what we had before; we didn't have a good story for that. And then we played a lot with add-on channels, which is what I hope to demo if I have some time. You know, Kubernetes is like a base system and you have a couple of add-ons on top, such as the dashboard or DNS; even the DNS, which every Kubernetes cluster uses as far as I know, is an add-on, you don't have to use it, you can use something else. Channels is a tool to manage those. So here I'd like to illustrate why they say "manage clusters the Kubernetes way". If you look at Kubernetes on one side: we have the client, we have etcd, which is basically a database or state storage, and we send manifests through an API server, which stores them into that storage. Controllers watch for those manifests, like an operator controller or the replication controllers, and they talk to the cloud provider, which then creates cloud resources: if we create a Service manifest, we get a load balancer, things like that. So that's how I see Kubernetes. Now let's look at how kops is designed; I'm ignoring the scheduler and so on. With kops we again have a client app that we use on our laptop, and we have a state store, except it's not etcd, it's S3 or Google Cloud Storage. We send manifests, but there's no API server; we store them directly into the bucket. And there is cloudup, which is the part of kops that talks to the cloud provider; it's actually a library within kops. There's no API server. Cloudup talks either to the cloud provider directly or it can target Terraform, so you can use kops to generate Terraform and then create your resources with that. Once all of the cloud resources are created, the nodes that start up follow a certain boot sequence as designed by kops. The first thing that runs on your node is the nodeup component, which downloads your cluster manifest, your cluster definition, and also prepares things: if you have any assets that you need to copy onto your nodes, or you want to install packages, you can customize that; there are hooks you can specify for your cluster. Nodeup then sets up Protokube, which manages the etcd volumes, so that's the auto-recovery for etcd. And it sets up everything the kubelet needs: it renders all of the manifests to run the Kubernetes components, kube-proxy and things like that. Before, what we used to do was write systemd units ourselves and copy them onto CoreOS; now we just define a manifest, and the nodeup component takes care of all of that.
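Just to sketch what such a nodeup hook can look like in the cluster spec — the unit name, the command, and the URL are invented, and you would normally add this through kops edit cluster rather than a standalone file:

```bash
# Hypothetical hook snippet for the kops cluster spec (sketch only).
cat <<'EOF' >> cluster-spec-snippet.yaml
spec:
  hooks:
  - name: prepare-assets.service
    before:
    - kubelet.service
    manifest: |
      [Unit]
      Description=Copy extra assets onto the node before the kubelet starts
      [Service]
      Type=oneshot
      ExecStart=/usr/bin/curl -fsSL -o /opt/assets/mytool https://example.com/mytool
EOF
```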
Okay, after the kubelet comes up, it will say: I need a subnet for my containers, for my pods. The kube-controller-manager notices that request and allocates a pod CIDR for the node, and then the kubelet starts the container runtime, which can be Docker or something else (we're actually using Docker), with that particular CIDR, that subnet, allocated. From there the kubelet handles and reports the rest; it plays its normal role as the node agent. So that covers two parts of kops: how do we create the cloud resources, and how do we provision the nodes and join them to the cluster. Then finally, how do we manage add-ons on top of Kubernetes? How do we install the DNS, the dashboard, the autoscaler configuration, things like that? For this, kops comes with an embedded component called channels, but you can also build it separately and play with it yourself. That's what we did at ONUSB: we defined our own channel, I call it beekeeper, and in there we specify, for example, that whenever our cluster comes online we want to have Tiller, which is a core component of Helm. So as soon as our bootstrap completes, we can use Helm to install the rest of our software on top of the cluster. Apart from that, we also provision namespaces: every development team or project may have its own namespace, and we isolate and control access by namespace. We use strict RBAC policies that only give teams access to what they need, and only within their namespace, and we integrate that with our GitHub authentication. So if a developer logs in, they can only access the namespaces that belong to their GitHub team. We also bootstrap other things with channels, like the autoscaler. Okay, so kops channels is basically there to bootstrap and manage core Kubernetes add-ons, that's its purpose. The documentation is very, very limited, but if you spend some time playing with it, I found it actually quite powerful. The way we do it, we just apply a certain channel, which is our own add-ons channel, and it tells us: currently you have namespaces version 1.1.1, and the version in the upstream or stable channel you're watching is 1.1.2, so we're going to update. So you can manage updates. It's kind of like Helm-templated manifests, except it's a bulk apply, easy for a bootstrap. I thought it was great, so we used it. It also lets us list all the add-ons that are installed. This is one of our clusters at the time: there are a couple of add-ons that come from the kops state store, the S3 bucket, via the bootstrap channel that kops itself uses; then there are a couple we bootstrapped on top of that; and then the ones that are part of our beekeeper channel, like kube-state-metrics and things like that. You could use Helm to install kube-state-metrics, but because it required some customization, or we needed it before Helm, we added it to the channel. At some point you have to decide what goes in as a channel add-on and what gets installed through Helm after bootstrap; as soon as you have Helm, basically as soon as you have Tiller, you could use Helm for everything else.
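Just to make that namespace-plus-RBAC idea concrete, here's a rough sketch — the namespace, the group name, and the role granted are made up, and the real policies we apply are more fine-grained:

```bash
# Sketch: a namespace owned by one GitHub team, with admin rights only inside it.
# The GitHub team arrives as a group through the authentication layer.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: frontend-team-admin
  namespace: frontend
subjects:
- kind: Group
  name: my-org:frontend-team        # hypothetical group mapped from the GitHub team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                       # built-in role, scoped to the namespace by this RoleBinding
  apiGroup: rbac.authorization.k8s.io
EOF
```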
So as part of onboarding in our team, I created a Terraform workshop, which is a public repository, and in there I have a subdirectory that goes over what kops is. That whole directory is a Terraform plan to create nodes to play with, and within the node you get kops and everything ready, so you can just go ahead and execute the commands directly. I've created one of the nodes here; let me make it bigger, is it visible everywhere? Right now I have a b01 cluster, an SVC cluster, and a couple of auto scaling groups. So the first thing is to run kops to create a cluster. Here I'm passing flags, and this is an imperative command, the same way kubectl run creates a deployment, which creates a replica set, and pods start. With kops you can do imperative commands, and in this case it takes care of defining the cluster manifest, generating secrets for it, uploading everything to your state store in S3, and so on. It didn't actually create the cluster, though: it only defined it, because in my command I didn't tell it to create the cluster. If I had said yes, it would have gone ahead and created it. Everything in Kubernetes you can do imperatively or declaratively, and ideally you use declarative statements: you define what your memory limits are, you don't just tweak things without storing them, because how do you recreate them if they're not in source control? So what we do is fetch the cluster definition generated by kops. I can get that and then cat my b03 cluster, and it's a YAML document that defines all of the settings. A lot of these are defaults; some of the parameters were customized by me through the flags. It says I'm going to use an etcd cluster, API access is going to be available publicly, SSH is also publicly open, this is the subnet allocated, and so on. Defaults are not always good. There's an interesting QCon talk by a researcher from Red Hat about the security of clusters; one of his projects was to address insecure defaults in a lot of these projects. Kops 1.8, the latest version, has changed a lot to be more secure by default. Still, you will need to review a lot of the defaults; you need to be aware of what you're doing when you create your own clusters. So that was the cluster manifest, and there's also an instance group manifest. If you want to read more about it, the repository is open source and the links are there, so you can read up on it quickly; I'm not going to go into all the details. The next thing we do, since as I said several times we love Terraform: we use kops to read the manifest and generate a Terraform configuration for it. It reads all of our AWS configuration and goes ahead and creates — yes, so we have a module and we have a cluster. The way my command was set up was to output it into a module directory, under a cluster directory, under its cluster name. So that's where it was created.
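Roughly, the imperative-then-declarative flow looks like this — the cluster name, the bucket, the zone, and the sizes here are made up for illustration:

```bash
# Point kops at the S3 state store (hypothetical bucket name).
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Imperative: define the cluster in the state store. Without --yes nothing is
# created in AWS yet; kops only writes the manifest and generates the secrets.
kops create cluster b03.k8s.example.com \
  --zones ap-southeast-1a \
  --node-count 2 \
  --node-size t2.medium

# Declarative: pull the generated spec back out so it can be reviewed, checked
# into source control, and re-applied later (e.g. with `kops replace -f`).
kops get cluster b03.k8s.example.com -o yaml > clusters/b03.yaml
```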
This is the cluster name, and within this particular directory we have the whole Terraform config to create this cluster. After that, I create a Terraform configuration file to import this module, and I can use that same configuration to create multiple clusters: I just use kops to generate the Terraform configurations, create multiple modules, import them here, and then I can create as many clusters as I want with a terraform apply. Well, I'm lucky because I ran this earlier; I need to initialize and then plan. The plan isn't important, but the init was, and luckily I ran it before, so no error. This is going to take a while. The actual resources don't take that long; it's waiting for the API server to create its DNS records and become available that takes a bit, so I'm just going to move on with the slides. Right, one of the big issues we had: we just used the defaults, and one of the defaults is the 172.20 subnet. Let me go to the workshop kops folder and cat the manifest, okay. So one of the defaults is to use this subnet, 172.20.0.0/16. The funny thing is, Docker allocated this subnet too; the Docker bridge was using this range. So suddenly some nodes were working, some weren't, and we didn't know what was going on. So don't just take the defaults; be careful. That kept us busy for at least a week recreating things. Then, as I advised, use declarative manifests, don't just use the imperative kops CLI. I think the strength of kops is that it gives you the ability to check in the manifests, track changes, and manage them through source control. Also, make sure to reserve resources for your kubelet, Docker, and the system. As York was highlighting earlier, we had problems with Elasticsearch taking up all the memory and basically killing the Docker runtime, and then all of the pods died. By default, a lot of cluster bootstraps don't allocate resources for the kubelet itself, the node agent, or for the system. Say you're running a node with 16 gigabytes of memory: a lot of these setups tell the Kubernetes scheduler it has 16 gigabytes available, and if you have pods that don't have a limit set, or that simply use all of the memory, they will use all of it and kill everything else. So you need to make sure there's enough memory and CPU left for the other components running on that node. Okay, I wanted to highlight some of these; where are they, here I think. This is one we did not too long ago, which is precisely that setup. We use kops 1.8, which allows us to upload assets onto our nodes. As part of nodeup, as I mentioned, one of the bootstrap components of kops reads the manifest and you can pull in systemd units through it. In this case I'm defining a slice for the pod runtime: I'm saying the resources used by the Kubernetes services will be this much. The kubelet and all of the Kubernetes services, anything running in a container as part of the Kubernetes core system, is what I'm calling the pod runtime here. So I define the slice, and after that you need to define the cgroup, the control group.
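The kops-to-Terraform step is roughly this — the paths and cluster name are made up; and if the default 172.20.0.0/16 clashes with something in your environment, kops create cluster also accepts a --network-cidr flag to override it:

```bash
# Generate a Terraform configuration from the kops manifest instead of letting
# kops create the AWS resources directly.
kops update cluster b03.k8s.example.com \
  --target=terraform \
  --out=modules/clusters/b03.k8s.example.com

# Then apply it from a small wrapper configuration that imports that module.
cd environments/b03
terraform init    # fetch providers and modules; required before plan/apply
terraform plan
terraform apply
```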
And another one is the system slice, but we're not allocating a system slice here; this alone already solved a lot of our issues. So what do we do? We tell the master kubelet, and therefore the masters, to use this cgroup; the kubelet runs in that cgroup. And we say that if there's not enough memory available, pods will get evicted. By default, kops sets the eviction threshold at 100 megabytes of free memory, but if you have 16 gigabytes of memory and eviction only starts when 100 megabytes are left, it's too late, the node is already dying. They advise you to use a certain percentage of your total memory, so we adjusted it based on our nodes, which have a lot more memory: we reserve 750 megabytes, and if we cross that barrier, the kubelet starts evicting pods to make sure the node doesn't die. Next, kube-reserved: this only works if you also create a cgroup for it, and by default there is none, so you have to create the cgroup using systemd slice definitions. I'm pretty sure I'm going to do a blog post to explain this in more detail; there are also a couple of kops issues I can link to that explain it more deeply. Even while I was working on this, I could see that pull requests for Azure Container Service addressing the same thing were just a couple of days old; it looks like almost nobody either faces this issue or sets these defaults, I don't know. So it was interesting, and that's why I wanted to highlight it. You can hint to Kubernetes how you'd prefer eviction to happen: if you set the resource request exactly the same as the limit, the pod goes into the guaranteed class, which means it'll be at the bottom of the stack when it comes to eviction. But I suppose if the node locks up then... Yeah, in our case the problem was really that our container runtime was dying. We hadn't allocated any memory or CPU for the container runtime or the Kubernetes components; that was the problem. So this is what we did: we created a cgroup allocated to Docker and the kubelet, and we give it a minimum of this much memory, which is then not available for Kubernetes to schedule. So that's how we basically give those system components a guarantee. So, we use kops, and on top of that we built a whole bunch of Terraform around it, because kops by itself requires a bunch of things: an S3 bucket, a subdomain, and other things. We're on AWS, so we can get all of that easily with Terraform: we set up a subdomain in Route 53, we set up S3 buckets, we set up TLS keys. Everything kops needs before it starts is created by a Terraform module. When that Terraform module is done, it has created a VPC and everything else, and then we run kops to create the cluster inside that VPC. Then we set up VPC peering and a bunch of other things. I'm maybe skipping ahead, but this is basically how our design looks: we create a Kubernetes cluster per availability zone. Initially we were running across availability zones, but that's not really recommended: if you're running etcd with three nodes across two zones, you always have a majority within one availability zone, so if that zone goes down, you lose your quorum and need to recover anyway.
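A rough sketch of what this looks like with kops, using our approximate numbers — the slice name and the exact spec field names reflect how we set it up and may differ in your kops version:

```bash
# 1. A systemd slice for the kubelet and container runtime, shipped to the node
#    (for example through a kops hook or file asset).
cat <<'EOF' > podruntime.slice
[Unit]
Description=Resource accounting for the kubelet and container runtime
Before=slices.target
EOF

# 2. Matching kubelet settings in the kops cluster spec: run the kubelet and
#    Docker in that cgroup, reserve resources for them, and evict earlier than
#    the 100Mi default.
cat <<'EOF' >> kubelet-spec-snippet.yaml
spec:
  kubelet:
    kubeletCgroups: /podruntime.slice
    runtimeCgroups: /podruntime.slice
    kubeReserved:
      cpu: "500m"
      memory: "1Gi"
    evictionHard: memory.available<750Mi
EOF
```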
So that's not really high availability, definitely not in Singapore, because we only have two availability zones there. In another region we could have three availability zones, run one etcd node per availability zone, and maybe have fewer concerns. But in our case we were running a lot of etcd nodes and masters, so we decided we might as well run a cluster per availability zone and then look at federation. Right now we're not using Kubernetes federation; I've talked to a lot of people about it and I can tell you why we're not using it, but I'm not going to go into that now. What we're doing instead is setting up ingress, an NGINX ingress controller or whatever ingress controller you want to use, behind a load balancer, and then we set up Route 53 weighted records across the two clusters, with health checks and things like that. That works great for stateless services, but Elasticsearch will be a bit more complicated, as York told me: dude, you're not thinking about Elasticsearch. Also, part of our provisioning is that it sets up a bastion for us; the kops tool itself doesn't, but our Terraform module does. Our Terraform module creates the bucket, the bastion, the VPC, the subnets, utilities, things like that. So we wrap all of that around kops to create our infrastructure. We also provide some utility databases which are used as shared services: for microservices we have dedicated storage, obviously, but for staging environments maybe we just share an RDS instance, things like that, to save on resources. This next slide is very detailed about our setup; I thought I'd removed it, and I've covered everything about the kops manifests already. Here are a couple of examples, and then I'm pretty much done; we can go back to the cluster I created. So, this is an example of why you want to check in your kops manifests: we're changing the machine type just by committing and then applying, that's it. We're adding tags so our autoscaler can automatically discover the node groups and scale our auto scaling groups up and down; in the kops manifest you attach cloud labels, so we just set this information and, boom, the autoscaler discovers it and automatically scales the auto scaling group. Awesome. Did I say auto a lot? Then we have the alpha API flags enabled here, which I think is the same as what you'd do with systemd unit files, but I kind of like this kind of setup. I haven't covered channels yet, okay. So the prerequisites, like I mentioned: the S3 bucket, SSH keys, the VPC, the bastion host, and a hosted zone. That's everything I mentioned; before we run kops, we need to have these things, and we use Terraform to provision them. Then the bootstrap module is what we use to bootstrap with the channel: we bootstrap Tiller, which is the server-side component of Helm. When you use Helm and want to install, for example, Datadog (we use Datadog and there's a public chart), you just do a Helm install of Datadog, and that needs Tiller to be available, so we need to bootstrap it before that. So we bootstrap a bunch of fundamentals, and after that we bootstrap charts using a simple bash script, to set up Datadog, like I said.
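Putting the ordering together, very roughly — the directory names, bucket, and domains are invented; the point is just the sequence: prerequisites, then kops, then add-ons:

```bash
# 1. Prerequisites with Terraform: state bucket, Route 53 hosted zone, VPC,
#    subnets, bastion, SSH/TLS keys.
(cd terraform/prereqs && terraform init && terraform apply)

# 2. Create the cluster inside the existing VPC with kops (one per AZ).
#    (Instance groups and the SSH key secret also need to be registered.)
export KOPS_STATE_STORE=s3://my-kops-state-bucket
kops create -f clusters/b01.k8s.example.com.yaml
kops update cluster b01.k8s.example.com --yes

# 3. Post-cluster wiring: VPC peering, weighted Route 53 records, then the
#    add-ons channel so Tiller, namespaces, etc. are in place for Helm.
(cd terraform/peering && terraform init && terraform apply)
channels apply channel ./addons/beekeeper.yaml --yes
```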
Okay, so I didn't really highlight it, but the logo I showed you is actually part of a competition they're currently running; you can vote, and if you don't like it maybe you can change it to something else, but I think it's pretty much finalized. That was just something I saw; soon I'll have swag, because I've become a huge fan. Okay, so that's pretty much everything I have, so I'll just see if the cluster came up. We're about eight minutes over time, but we're kind of all right. I think the proxy is running in a container as well; I'm not 100% sure, I don't remember off the top of my head. The kubelet is not self-hosted; the kubelet is a systemd unit that you need on the host, I guess. And Docker is not self-hosted, obviously. Okay. One more thing, because it's so cool: I'm going to use this channels tool. So this is how to use it; this is really small on screen. Our beekeeper channel has the following add-ons: we have a namespaces add-on, and as part of the channel definition we have the name, which is tiller, and then the actual location of the manifest, which is the Kubernetes 1.7 YAML manifest, plus which Kubernetes versions it applies to. So if you maintain multiple clusters, your channel can say: oh, we have a 1.8 cluster now, and then you maintain the same channel but maybe make some changes for 1.8, and just run channels apply. Our namespaces add-on looks something like this (not really the one we use at the moment, it's outdated): we create a namespace, we create an RBAC role, for example for the front-end namespace there will be a manager role, and people who belong to the correct GitHub team adopt this role when they log in and are able to do pretty much everything inside this namespace. Then we bind it, this is basically RBAC, and the same for the backend. So this is just the namespace and RBAC setup, but we also bootstrap Tiller in the namespace. This is the old setup where we bootstrapped Tiller at the kube-system level; that's not really good, because it gives anybody with Tiller access full access across all namespaces, so this is not how we do it anymore. But basically, using this channels tool, I can just do a channels apply. Say I apply the dashboard: it says the dashboard manifest is located under an upstream githubusercontent URL in the Kubernetes project, on master, so that's an upstream dashboard, I'm not maintaining that. And if I want, I can apply the monitoring one; you need to bootstrap monitoring to have Heapster and things like that, but let's not go into that right now. And I'm going to bootstrap all of the OSB add-ons, which is just one channels apply. When I do this, it says: okay, I'm installing all of the namespaces that you need, Tiller, kube-state-metrics, whatever we want. This is only a fraction of the add-ons we use; I just copy-pasted a few. And interesting to know: channels uses annotations on the namespace to keep track of your versions.
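For reference, the shape of such a channel manifest is roughly this — the add-on names, versions, and paths are made up, and the exact schema is best checked against the channels code in the kops repository:

```bash
cat <<'EOF' > beekeeper.yaml
kind: Addons
metadata:
  name: beekeeper
spec:
  addons:
  - name: namespaces
    version: 1.1.2
    manifest: namespaces/v1.1.2.yaml     # relative to where the channel is hosted
  - name: tiller
    version: 2.7.0
    manifest: tiller/k8s-1.7.yaml
    kubernetesVersion: ">=1.7.0"          # only applied on matching clusters
EOF

# Apply (or re-apply) everything in the channel in one go.
channels apply channel ./beekeeper.yaml --yes
```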
So if I describe, if I get the kube-system namespace and look at the annotations on it, selecting the ones that contain "addons", there's a JSON blob inside each one, so I'm extracting that JSON blob and pulling out just the version. That shows me the key and the version of each add-on that's installed. Obviously I don't need to do it that way; I can just do channels get addons, I think, and it gives me the same information, plus where each add-on comes from. So channels is basically a way to manage that, and it's not very well documented within the kops repository, so I will probably try to share more of this information upstream. I don't have anything else to say, really. So, questions? Yeah, it runs only on AWS, right? Sorry? Whether kops is only for AWS: I think they support multiple clouds, definitely in the last version. Oh, there was also an interesting, how do you say it, when they do an interview and it's just audio... yeah, a podcast. There was a podcast with the core maintainers of kops about the future, and they're actually looking at building kops out to also manage hosted solutions: use kops to create an Azure Container Service cluster, and use kops to create other clusters. So that's one of the directions they mentioned they'd like to take it. If I go to the releases, the last one, 1.8, they have better support for GCE and early support for DigitalOcean; lots of interesting stuff. I think the advantage of using this, I mean, we did try to run things very hands-on and manage things manually, but after a while you start to realize it becomes too complicated, right? This is kind of like going from compiling your own kernel to choosing a distribution: we're choosing the kops distribution, so a lot of our problems are taken care of. Still, the easiest way would be to run a fully hosted solution, if you can do that. Any other questions? In terms of bare metal, I don't think kops is targeting bare metal at all; Tectonic is, right? Yeah, of course, Tectonic, I need to give them that money. Like I said, Typhoon is the unbranded version of Tectonic; it does feel like that, like Tectonic vanilla but without the extras. It's the same author, the same developer. I was surprised to see that, that this CoreOS engineer who worked on Tectonic, the commercial offering, then came out with, I mean, not really a competing, but an open source version of it. I found that strange, but it's awesome. Typhoon. Any other questions? I think the most important part here for me was: even if you don't use kops to provision the cluster, you can still use channels to bootstrap the initial part. What I wanted to highlight, apart from just how we create a cluster, is: once you've created a cluster, how do you bootstrap the rest? That's what we did with channels. I'm not saying it's the best tool, but it's better than what Tectonic did back in August, at least when I was looking at it: Tectonic had a bash script that basically bootstrapped everything in a loop, and I thought channels was way better than that. It picks up all of the YAML manifests, renders them, fires them at the API, and you manage them like that.
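The version check I was doing looks roughly like this — the annotation prefix is what kops channels used at the time and may have changed, and jq is just one way to pick the JSON blob apart:

```bash
# Inspect the add-on version annotations that channels stores on kube-system.
kubectl get namespace kube-system -o json \
  | jq '.metadata.annotations
        | with_entries(select(.key | startswith("addons.k8s.io")))
        | map_values(fromjson | {channel, version})'

# Or just let the tool summarize it.
channels get addons
```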
I thought that was an improvement on top of Tectonic, so that's why I found it awesome. I was actually even thinking of taking channels and using it with Tectonic vanilla, or something like that. Anyway, okay. All right. You're welcome.