So, hi guys, my name is Mark. It's my first time speaking in front of a very large crowd and in front of Google. My background is software engineering, and I took up the responsibility of handling the infrastructure at Zumata. What I'm about to share tonight is our Kubernetes journey: how we came to be, how to get a simple cluster running, and what the platform is capable of. This is grounded in actual experience, because we started with Kubernetes from zero — from zero traffic to 300 requests per hour now. At that scale, it definitely goes to show that it can take load and it's quite reliable.

So, the agenda — sorry, that's it. Let's get started with the prerequisites: some knowledge of Docker and Google Cloud Platform, and a basic high-level understanding of the essential Kubernetes components, because the terms are quite technical and specific to the platform. Also some familiarity with the CLI tooling: gcloud, kubectl, and Docker.

How we came to use Kubernetes: we started Dockerizing our different projects because we were getting to a point where we were spread pretty thin — each engineer was handling his own full stack. Docker gave us speed in development, because we would just agree on Docker conventions and each engineer would take their own project and containerize it. The next problem was: how do we orchestrate all of this? One of our colleagues, Jonathan — he's in Australia now — started the idea: let's try this new tech from Google, Kubernetes. From then on, we had quite good reliability in our systems.

We did some exploration as well, because once we had Docker we figured we could bounce around other cloud providers, since everything was containerized. So we tried to launch a Kubernetes cluster on other cloud providers at the time. Early on, it wasn't automated like it is now, where on Google Cloud you just click a button and it creates the entire cluster for you. You had to actually check out the scripts and follow the README on how to run them. We hit problems on Azure, where they had completely deprecated the project; I wasn't aware of that, so I was trying to fix something that wasn't supported anymore. That was about two years ago. On SoftLayer, we tried bare metal with Kubernetes. We got it running, but the problem was the firewall and all those hurdles. So we said, OK, let's stick with GCP.

To give you some background on how big our cluster is right now: we have 32 nodes running, eight node pools, and 188 deployments. These terms are quite technical, but later I'll show you what each one means and how it relates to your components.

Sorry, Mark, I have a question — people don't know what Zumata is or what you do.

Oh, OK, I think I missed that part. Zumata does hotel aggregation. We're actually a B2B company: we distribute hotel supply, and we aggregate from different suppliers as well. So yeah, it's kind of an ouroboros thing. OK, good. All right, to continue. We have two regions, Singapore and US. That might give the impression that we have two HA clusters set up, but it's actually not — they're for different projects, just to let you know. And we have gone as far as having 180-plus nodes.
The reason for that is we were doing TensorFlow. There's actually a machine learning component on GCP, but we couldn't figure out how to make it run; we always ended up with problems. So we decided: let's just Dockerize our process, segment our data, and deploy it horizontally on Kubernetes. In terms of scale, we had over 1,200 CPUs at that point, with our production stuff running on the same cluster, and it was pretty reliable. There was no slowdown or anything. It can take a beating.

On to the next thing: the good parts of Kubernetes, specifically on the GKE platform, the one Devan presented earlier. Number one, it lessens the amount of plumbing work. Previously we had independent instances where you had to manage everything yourself, so that's a time saver in development. Turnaround time for setting up new clusters is very fast in the event of a cluster failure. We have actually experienced one cluster failure, about a year ago, with approximately 30 minutes of downtime. That was bad. But the saving grace is that once you configure your stuff in Kubernetes, it's all in code, so you can just redeploy everything. You just say: I want the proxy servers — OK, spin it up, spin it up, spin it up — and we were back up and running again.

High availability and auto-scaling are very easy to set up, which is otherwise one of the biggest headaches. In Kubernetes it's just a one-line command: hey, I want 10 pods, I want 100 instances. You can easily do it. You don't have to worry about swapping, failing over, or draining connections — it handles that for you. It also reduces infra cost dramatically; I'll show what I mean in a diagram later on. And deployments are a breeze. People on our team have different projects, everything is Dockerized, so they can just keep on deploying through a single command. They don't have to go into the machine and change code there. It's pretty reliable.

OK, so this is the slide on how it saves you money: a Kubernetes setup versus a non-Kubernetes setup. Usually, if you're a small to mid-sized company, you want to just run with your application development, right? You shouldn't be digging down to a very low level trying to figure out the exact CPU requirements for your stuff. With the traditional approach, you estimate by feel: oh, I think my application server needs this. So you deploy on different machines, and that ends up costing you, say, $90, with each machine costing you $30. Whereas with a Kubernetes setup, you're aware of the machine — say it's a 1 GB server — and you can actually split it up and specify, for each application, how much of the pie it gets to take. And you can cap it. In terms of figures, about two years ago we cut our expenses by 60%: we were running at around a 10k monthly cost, and it became about 4k after we moved everything to Kubernetes. That actually matters.

To continue with the good parts of GKE: deployment and setup configurations are preserved as YAML and JSON files. Basically it's code — you can track who changed it, you can check it out, and then just run the commands.
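As a taste of what those YAML files look like — and of the pie-splitting caps mentioned a moment ago — here's a minimal deployment sketch. The names, image, and numbers are made up for illustration; the API group matches the 1.3–1.5-era clusters discussed in this talk:

```yaml
apiVersion: extensions/v1beta1   # the Deployment API group at these Kubernetes versions
kind: Deployment
metadata:
  name: web-app                  # hypothetical application name
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: gcr.io/my-project/web-app:1.0   # hypothetical image in the private GCR repo
        resources:
          requests:              # what the scheduler reserves for this pod
            cpu: 100m            # 100 millicores, a tenth of a CPU
            memory: 128Mi
          limits:                # the cap, so one app can't hog the machine
            cpu: 200m
            memory: 256Mi
```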
Next, there's a concept in GKE called node pools. Earlier, when Devan was demonstrating cluster deployment, you could see it had three nodes. Above that, there's a level called the node pool, which is a logical grouping of nodes. The reason for it: say you start out with one CPU and 1 GB per node. If in the future you want to scale up to a bigger application requirement, you can create another node pool where the nodes have, say, five CPUs and 5 GB of RAM. Also, you get your own private Docker image repo — that's a big bonus.

OK, some bad things we encountered with GKE. Because we're using an older version of the cluster — we're currently on 1.3.7 through 1.5.7 — I haven't looked through the newer developments in the latest versions. We couldn't capture the source IP, meaning the external client IPs coming through. We had a workaround, though: set up a proxy as an external machine on GCP, then proxy across your cluster with the right configuration trusting the X-Forwarded-For headers, and you can get the source IP. Another issue we tackled: there are local OS limits, and I don't know whether it's possible to override them — I'm not too sure about that. The problem was we were trying to deploy something that required bigger OS limits, say open file descriptors and things like that. That was for Elasticsearch and similar, so we couldn't containerize it. And cross-region setup is tricky due to NAT limitations. As you heard earlier, there's a term, federation: basically, you can have multiple clusters interacting with each other, so you can do HA cross-region cluster setups. I think federation is the solution for it, but I haven't looked into it. Also, the kube-system namespace components are tricky to manage. If you spin up your own cluster, you get these — what do you call them — cluster components that manage your worker nodes. For example, earlier Devan showed you a dashboard; that's one of the kube-system components. The other issue we had: GKE component versions move quite fast, and you can't pin them. If you want to enforce, say, 1.3.5, you can't — whenever you spin up a new cluster, you'll be on the latest, which was 1.5.6 then, and now I think it's 1.6 or 1.7.

With that done, I'll give a brief overview of the basic components. With GKE clusters, Google manages your Kubernetes master, so you don't have to worry about that part. Below the master, as I mentioned, is the node pool level. As you can see, this node pool has 2 CPUs and 2 GB of RAM, and the other one has 1 CPU and 1 GB of RAM. You can actively deploy multiple node pools, depending on your software requirements. A node represents the physical machine: when you start your cluster, if you say three worker nodes, that means three GCP instances in the cloud.
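Adding the bigger pool described above is one command. A sketch, assuming the gcloud CLI — the cluster name, pool name, and machine type here are made up:

```sh
# Hypothetical: add a beefier pool alongside the existing ones.
# n1-standard-4 is one of the larger GCE machine types (4 vCPUs, 15 GB RAM).
gcloud container node-pools create big-pool \
  --cluster=zumata-demo \
  --machine-type=n1-standard-4 \
  --num-nodes=3

# List the pools on the cluster to confirm.
gcloud container node-pools list --cluster=zumata-demo
```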
To go further, let's go down a level: we're looking at the node level now. Nodes contain deployments — well, a deployment doesn't necessarily reside on a single node; sorry, that's the wrong diagram, there's some disconnect there. But conceptually, this is what sits next to a node: deployments. Deployments are your different applications. For example, enclosed in parentheses here: this is your web application, you have another deployment for your caching server, another one for the proxy servers.

Let's go down to the deployment level. A deployment consists of replica sets. What are replica sets? They're there to give you version control on your deployments. Whenever you deploy, it preserves a replica set, and whenever you deploy again, another replica set. For example, in this diagram we're currently on release 200, with its pods running. The good thing is that rollbacks are a breeze. You can say: hey, I want the previous version, because this version is breaking. You just issue one command, it brings the previous replica set back up, and it wipes out the other pods. And you're guaranteed to have no code volatility, because it just pulls the other Docker image version and deploys it — everything is pinned to your image.

The next level down is the pod, the last thing you can scale. A single pod can contain multiple containers, but I'd recommend having only one container per pod — unless you really want to understand how Kubernetes decides whether the entire pod is failing or not. Once you have multiple containers, you get states like two out of three containers healthy on your pod, and what happens then is something I don't know in detail. To simplify things, I suggest your deployments have one container per pod.

OK, the next thing: how does a request reach your pods after you've deployed your stuff? Requests go through a component called a service. A service creates a routing path that goes to your pods. You create a service, and the service automatically creates an endpoint, depending on your service's configuration, which points to the pods from the replica set we saw earlier. So any time you want to do a rollback, you don't have to tinker with any of this: you just say you want to roll back to the previous replica set, and all the routing is handled. That's why it provides a lot of convenience.

All right, I'm on the path to the demo. These are static images of the Container Engine creation flow, as Devan demoed earlier. You go to GKE and you can create the clusters. One thing to note is the option to turn the GCP services for cluster logging and Stackdriver monitoring on or off; I'll explain that further in the slides. Also, one suggestion: when you're initially creating your cluster, crank the size all the way down. The reason is that whenever you create a cluster on GKE, it defaults to a node pool named default-pool, and I don't think you want to use a default pool in your project — as a readability thing, you want to name it more meaningfully. So crank it down. To go further, this is what your cluster should look like after you've created it: here you can see the cluster detail and the initial node pool I was mentioning. There's some very fine print, but it shows this node pool is named default-pool.
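The cleanup I'm about to suggest looks roughly like this — a sketch, assuming the gcloud CLI of that era (zumata-demo is our demo cluster name; default-pool is what GKE auto-creates):

```sh
# Hypothetical cleanup: shrink the auto-created default pool to zero nodes
# so only meaningfully named pools carry workloads.
gcloud container clusters resize zumata-demo \
  --node-pool=default-pool \
  --size=0
```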
Yeah, so as such, try to scale it down to zero, then create another node pool for, say, your caching servers or your application servers. Now I'm about to show you the stuff it created. Sorry, let me just transition here. OK, if you go to Container Engine, to container clusters — from the slide, we actually created a cluster here called zumata-demo. This assumes I've already scaled down the default pool. As you can see down here — I don't know if it's zoomed in — oh yeah, all right, as you can see down here, I've reduced the size of that pool to zero. Then — sorry, it's right here — I've created another node pool, let's say it's called text-processing. This is what I've been mentioning: at least it gives meaning to what you're doing.

What is a node pool? Sorry? What exactly is a node pool?

A node pool is a logical grouping of your nodes.

Is it tagging — do you take some VMs and tag them into the same group?

You don't need to tag it. It's just a grouping; the purpose of node pools is to have a heterogeneous mix of machines in your cluster. Because — sorry? Homogeneous? Is it? I think it's heterogeneous across pools, right? OK: inside a pool, the machines are homogeneous.

But just to clarify, this is not a Kubernetes community concept — this is specific to Google GKE, a node pool?

A node pool? Yeah, I think so. I think so — sorry, I'm not too sure on that. But as a user, I've been using GKE, and this was a concept on it.

All right, to continue. One of the pitfalls is Docker image prep — the importance of the ENTRYPOINT keyword. You don't want to keep using CMD and the non-exec, shell form for starting up your image. The reason is the kill signal: when Kubernetes wants to terminate your pods on the cluster, it sends a SIGTERM specifically to process ID 1. I can show you on the next slide. This is the exec form of ENTRYPOINT — this is how you have to do it, an array of strings. What happens is that when you spin up your container, your process runs as PID 1. It's a dumb application here that just tails a text file. Compare that to the shell form: when you start your container, your process gets wrapped in a shell command. The problem is that when Docker tries to kill your pod, it sends SIGTERM to PID 1; if that hits the shell wrapper process, there's nothing handling the signal, and nothing reaches your app. That was one of our early pitfalls: we'd just deploy things and wonder, why isn't this dying when Kubernetes is shutting it down?

You said you had this problem — how did you solve it? The entry point thing, right?

I just had to revisit all the Dockerfiles and fix them.

So are you using some kind of init system within Docker?

No. I just had to re-review all of the Dockerfiles, look at whether they were using the non-exec form, and replace it with the exec form.
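To make the exec-versus-shell distinction concrete, here's a minimal Dockerfile sketch — the tail-a-file toy from the slide; the base image and log path are made up:

```dockerfile
FROM ubuntu:16.04

# Exec form: JSON-array syntax. tail runs directly as PID 1,
# so the SIGTERM Kubernetes sends on shutdown reaches the process.
ENTRYPOINT ["tail", "-f", "/var/log/app.log"]

# Shell form (avoid): the process is wrapped as
#   /bin/sh -c "tail -f /var/log/app.log"
# so PID 1 is the shell, which does not forward SIGTERM to your app.
# ENTRYPOINT tail -f /var/log/app.log
```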
And OK, a live walkthrough of the capabilities. Earlier, we spun up a cluster. As you can see on the — sorry, let me zoom in further — OK. The cluster size here is one, so one worker node. What that means is that in your Google Cloud you can see there's a — oops, this is too big now — OK, this is the VM instances panel now. You can see it actually created a GCP instance. This is your node, a single node. If you want to scale and need more resources, you can come back here to the node pool. GKE actually creates an instance group for it, so you can quickly take a look at what it's doing from the instance groups page. If you want to scale, you can turn on the autoscaling you want and specify CPU metrics, which is pretty good. For now, let's simplify things and not use autoscaling. For example, I want to scale to three machines now — voilà, it's just doing that. It'll create three instances, and now you have three machines. That's how fast it can scale. Right now we're just onboarding clients, so any client that comes on board, we can just scale horizontally; we don't have to beat our heads trying to figure out how to do that.

So this VM is running as — sorry? The VMs: initially you showed they're running from the Kubernetes cluster, and now you're on the VM instances side. Are you scaling the Kubernetes cluster from the VM instances side?

To get back to that: there's the GKE page here, where you can see container clusters, right? Container clusters have the node pools, and the nodes here belong to the text-processing pool, as you can see. What's actually going on is that a node pool is backed by an instance group in GCP — that's how they manage it. So when you do the scaling from there, you can just go to the instance group and adjust how many instances you want. There you go. Does that answer the question? OK.

All right, so that's how you scale in terms of physical machines. We've actually run 180 instances, and it's pretty reliable. To show you the command line — you need to get familiar with the command line as well — there's a command called, sorry, I need to zoom this in, yeah, that should do — there's a command called kubectl get nodes. It lists the nodes of your cluster. We're anticipating three now; it takes a while to sync up the others, so right now you can still see just one of the nodes. It takes a while. Sorry — it should be the same project, but it takes a while. Oh, Cloud Shell — I've never used Cloud Shell. Oh, this one? Sorry. There's actually an indicator showing it's still spinning, which means it's still in progress; as you can see, there's a tooltip saying it's resizing from one to three. All right, there you go.

There are further commands to inspect your nodes — that's one of the nice things, you can see what a node you created looks like. There's a command called describe, and it displays a lot of information: for example, what's currently deployed on the machine. It tells you, oh, this machine has one CPU and 6.5 GB of memory, and how many pods it can take. And you have your memory and CPU limit stats here as well.
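Roughly what that looks like in the terminal — a sketch; the node name here is illustrative of GKE's generated names:

```sh
# List the worker nodes kubectl can see.
kubectl get nodes

# Inspect a single node: capacity, allocatable CPU/memory,
# how many pods fit, and what's currently scheduled on it.
kubectl describe node gke-zumata-demo-text-processing-1a2b3c4d-x9yz
```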
This relates to the pie from the earlier slides, where you divide the node among your pods and applications. Now, to demonstrate some of the configurations: there are a couple of different components here — I don't know if it's clear enough — but as you can see, there are a lot of YAML files. To start off, let me jump directly into an application, this thing called simple-app. This is actually a deployment configuration. You don't have to try to understand every line, but this is a full-fledged deployment configuration for controlling your stuff. You have certain settings, like resource control — I don't think this is readable, it's quite small — but here you can specify how much CPU to take from the pie, and the memory and so on. And you can actually cap it, limit it so it doesn't go over. Say you have five applications deployed on a node: if you limit each to, say, 200 millicores of CPU, it doesn't go over and hog or clog the machine.

To deploy that, I can show you something. For example, let me get my deployments — there's a command for that, kubectl get deployments. Currently, I'm going to remove a deployment to show you.

Is that container the NGINX proxy, or do you use the Kubernetes proxy?

It's a combination of both, because you need an external load balancer from GCP to connect outside traffic. It goes through our NGINX proxy, and from there it handles all the routing inside the cluster.

So we delete a deployment now. Oh, sorry. To deploy the application, we just need to do a create command pointing at the file — a command line where you just point it at your configuration. For example, it was in the deploy directory — sorry, this is the kubectl one — for example, here, simple-app. The app actually just returns a hostname. As you can see, it creates it. If we examine how many pods are running now, you can see it's creating the application you just deployed. It's very quick — a very easy way to deploy your stuff. After it's ready — we just have to wait a little, because it does its liveness checks and such — all right, it's running now. It's not ready... oh, there's a watch option. All right, it is running now.

So now we have an endpoint for it. What it does is just print the hostname, the hostname being the pod name itself. As you can see, it should be something like 6441RZ7. OK — oh, sorry, it needs to be the hostname path, sorry about that. There you go, it's actually hitting the servers. Let me zoom that in. There you go, it's hitting the server itself.

What that means is that your stuff can be horizontally scaled very quickly. As you can see, right now it's a single pod. Say your traffic increases: you can run commands like scale and say, I want this deployment to be three instances. These are the commands available through kubectl. That's how easy it is to scale quickly. Once it's scaled, if you do a get po, you can see there are three pods coming up now. So it's very easy to take on traffic.
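Pulling the walkthrough together, the commands look roughly like this — a sketch; simple-app and the file name are made up, and the last command is the one-command rollback mentioned earlier with replica sets:

```sh
# Create the deployment from its YAML file.
kubectl create -f simple-app-deployment.yaml

# Watch the pods come up (-w streams updates).
kubectl get pods -w

# Scale out when traffic grows.
kubectl scale deployment simple-app --replicas=3

# Roll back to the previous replica set if a release is breaking.
kubectl rollout undo deployment/simple-app
```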
And the nice thing — the other thing I want to show you — is there's a component called the horizontal pod autoscaler. It manages the auto-scaling of your pods, and you can target it at a deployment. For example, this configuration here is for the NGINX proxy. You can specify your minimum and maximum replicas: you can say, I want to start with two proxies, and scale up to five proxies when they hit 60% CPU load.

The question is: what we're scaling here is the number of pods across the cluster, on the VMs attached to the cluster, right? If you breach that limit — say the entire cluster is running at almost full capacity — is there any way to add capacity on the VM side, from the nodes?

There was an auto-scaling capability on GKE early on — oh, they have a warning on it. When I used this before, I hadn't scaled it from this side; you can scale it in the Container Engine UI. There's a way of increasing the nodes, and that's supported. I haven't seen anyone actually go into the compute side and scale the compute — I've only ever done it from the GKE platform. It is supported from the GKE UI, and it's beta, so you can add compute. It could be something you set when you launch your cluster for the first time, but over time, if you want to modify it, you go through the cluster side and do it from there.

I have another question, about the extra layer, the pool. You may be reaching the capacity of a pool — will this scaling spill over to another pool in the same cluster, or is it limited to one pool?

The auto-scaling configuration on the gcloud side is per-pool, because a pool translates to an instance group.

We're not talking about scaling the cluster itself, but from Kubernetes: you're scaling the number of pods. Are those pods limited to a pool, or will they go anywhere?

You can actually specify that. You can configure pods to only reside in certain pools, or let them go across node pools — all the pools. It intelligently distributes your application: for example, if you have five machines and you scale to seven, eight, or ten instances, it distributes them evenly.

But when you scale a pool, you don't scale the whole cluster, right? That's what I understood. Scaling at the pool level controls the physical machines.

Yes, yes. And at the pod level, you're at the cluster level, where you can see all the node pools. If you don't configure your pods to deploy to certain node pools, they can reside in any of the node pools.

So, going back — oops, sorry about that. On the scaling for NGINX, I have one already registered here: kubectl get hpa. As you can see — oopsie, that brings it down, sorry, let me bring it all the way up — this is an HPA resource. It displays the target CPU and your current usage — the table is misaligned, sorry about the tabbing here — and the minimum and maximum pods. Currently it has the configuration of two as the minimum.
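What such an HPA manifest looks like, roughly — a sketch assuming the autoscaling/v1 API of that era, with made-up names:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-proxy                      # hypothetical name
spec:
  scaleTargetRef:                        # the deployment this HPA watches
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx-proxy
  minReplicas: 2                         # start with two proxies
  maxReplicas: 5                         # scale up to five
  targetCPUUtilizationPercentage: 60     # add pods past 60% CPU
```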
I can actually update that with a single command as well: I apply the file from earlier with kubectl apply, and it just applies whatever you've updated in your configuration. It says it has been configured, and if you check the HPA again, you can see the changes are reflected. And — this is from the last time I read up on it; I don't know the current behavior — this also shows real-time CPU usage across all your pods, updated about every three minutes. That's something to note, because we also use the HPA information as a kind of alert.

The other thing is there's a lot of capability here: you can configure all your env vars in the deployment as well — sorry, what was that? Env var? OK, I don't have it here, but you can configure env vars on your deployments. And the other thing: you have to specify and configure the liveness and readiness probes. These make sure Kubernetes properly detects failures and properly does routing and failovers. You can configure them to hit an endpoint of your application, or to try to connect to a socket, to see if your pod is alive. Always remember to configure a liveness and a readiness probe: readiness means traffic is only routed to the pod when it's ready, and liveness means the pod is restarted if it stops responding. It gives you a hassle-free setup.

Also worth considering: the service, as I told you about, is the thing that routes requests to your components — and you can actually route services to external instances as well. That's possible for, say, databases that you don't want residing inside your cluster; you can configure what are called Endpoints components. I can show you a couple. If you do a get service, you get something like this. What this line means is this is the external IP for hitting the NGINX — we can try it; I think I've configured it to return a forbidden or not found. It says port 80 and 443 are open, routing to the respective ports in your cluster. Further down, there's something called endpoints, as I mentioned. From this component you can see that your NGINX has these endpoints, representing the different instances in your cluster. Whenever you deploy or scale, all of these are handled for you, so you don't have to worry about it.
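What that external routing looks like as manifests, roughly — a sketch; the database name, IP, and port are made up. The trick is a Service with no selector plus a hand-written Endpoints object that shares its name:

```yaml
# Hypothetical: expose an external database to pods as a normal service.
apiVersion: v1
kind: Service
metadata:
  name: external-db       # pods connect to this name
spec:
  ports:
  - port: 5432            # no selector, so no endpoints are auto-created
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db       # must match the service name
subsets:
- addresses:
  - ip: 10.240.0.42       # the external instance's address
  ports:
  - port: 5432
```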
And yeah, that is basically it. All right, thank you.

Good presentation. Just now you shared some problems you've experienced while using Kubernetes. I'd like to learn from you: when deploying a cluster, whether on-premise or cloud, a lot of the issues you face relate to very IO-intensive applications. With a cloud setup it's easy to scale up — add more CPU, add more RAM, especially with this kind of cluster setup. So have you experienced performance issues when certain applications require additional IOPS, meaning high disk IO? I see in the configuration that you can add additional SSD disks. Have you experienced any slow IO performance issues?

Yeah, actually, we have experienced that before. When we were doing the TensorFlow thing on the cluster, we were running an NFS server, mounted into the cluster, so we were doing NFS IO. And when that saturated the bandwidth, it started not responding — your stuff starts to get killed.

For that issue, would adding additional SSD disks in the configuration help? Or can you only add up to a certain number of SSDs, which also limits the IOPS?

I haven't tinkered with the additional-SSD feature, but if you do disk mounting — an actual volume in your pod — I think that would solve the issue. How we solved our problem was to separate the data, pull it down to the local file system, and run on that. We basically split it into smaller bits.

Okay, thanks a lot. Thank you. So we're handing out stickers for two more questions.

Just out of curiosity, what Docker runtime is installed on the nodes when Google creates them? The Docker runtime — it has to be there on the node, right?

Yeah. When you use Docker with gcloud, the gcloud CLI actually wraps the Docker functionality.

Do we have to install it separately on the node?

No. I think it's configurable, but, for example, as you can see here — gcloud, I can't remember the exact command line, but it displays, I think, was it version — in most cases I think it was 1.11-something, or — yeah, I can't remember all the parameters. Oh yeah, there you go: as you can see, it's 1.12.

Can it be upgraded separately?

You can — we've tried it before. But as long as it works, try not to, because last time we encountered an issue where nobody could push images; it had something to do with it needing to work in lockstep with gcloud.

Yeah, a similar situation to one I had. You have to roll back.

We tried that before — we tried to look into solutions, and it just caused more pain and trouble.

Basically, the issue I faced was that the kubelet doesn't communicate with the Docker runtime.

Yes, we've encountered that issue as well. So as long as it works, don't move it — try not to move it. I mean, you can upgrade when GKE starts to inform you: oh, your node pool is pretty old, you'd better start switching. That's the time to switch, yeah. Thank you.

Any questions for Hunter? Sure.

About stateful apps and storage volumes: when all this started — cloud-native apps, twelve-factor apps, keeping your state separate from your applications and all — what is the driving factor for having stateful apps running in containers? Why do we want to do stateful?

I think it's because, for real workloads, you still need to run the database. You still need to be able to store your data — in WordPress, or whatever you're running — and those things need to be stored somewhere, and you need them to still exist when a node fails or when you scale. That's what a stateful workload is.

Right, and what happens is that most of the time the database is not a small database. It grows with time and with the data, and as was explained earlier with the IO problems, you can see what kinds of problems come up. So a database is something you have to be careful with and plan for. If that is the requirement, why do we have to run it inside a container?

Well, as I said, if you can use a managed service, why not use a managed service? But as for the question — well, why not run it in a container?
The thing is that it helps the operations cycle, the maintenance cycle — it takes steps away. If you're running, say, MySQL, and your data is stored on a Google persistent disk, your data is stored there, but you're able to update the container, to move between versions, without having to migrate the whole thing or do anything else around it. It's a different way of doing it. If you run it on a bare-metal machine without anything else there, that's entirely up to you; you just need to factor in how you want to run these things. But in terms of the platform itself, you can run anything you want in containers, and you can manage it as a unit, with all of its dependencies contained together with the container — the container having its own migration strategy versus the data that sits with it. So it will be good to see how this develops in the future.

How do we wrap it up? I'm so sorry that we've gone over time by about an hour, but the topics were really interesting. Thank you very much, Hunter, for the talk on stateful apps and storage. Mark, thank you very much for the talk.