So my name is Yongkang He, very happy to be here to talk about how to automate EKS cluster creation and protection. Almost a year ago, when I joined the company, Kasten, I started to run demos to show customers how to protect containers, how to restore containers, how to run disaster recovery, and how to support portability and migration from one region to another region, or from another cloud or from on-premises into AWS EKS. Is that one? There is a display box that's blocking the screen. Ah. Okay, yes, okay. All right. Sorry. I don't know what's happening. Let me try again. Can we exit from it? Is that okay now? It's okay, right? All right. Let's continue. So the idea is, when I joined last year, I realized there was a very good opportunity. I needed automation to create the EKS cluster to demonstrate to customers, so I didn't have to keep it running 24x7. In the initial two weeks, I kept the EKS cluster running so I could demonstrate to customers, and then I realized I actually only used it one or two hours every day. So I created the automation. Every time before a customer demo, I just run the automation — one command — and it spins up the environment. In about 20 minutes, it is up and running. And after the customer demo, I just turn it off: run another command to destroy the whole environment. So, very happy to be here, and I can't wait to show you how it works. I will go straight to the demo, because it does take around 20 minutes to build up the environment. So I log into AWS CloudShell. The single command to run is the deploy. Where do you get the command and the scripts? They are actually on my GitHub page, which I will show you shortly. If I come back to the slide here, you can see the magic command is ./deploy. It's based on the bash shell.
And inside that command there are actually two scripts. One is to create the EKS cluster; the second one is to enable the container backups. In total we are looking at about 18, sometimes 20 minutes, and the whole environment is ready. At the end of the scripts, you will see a link to access the web console, and you can immediately see how to granularly recover containers. It could be that somebody deleted the ConfigMaps, secrets, or a service account — if any one of those spec artifacts is missing, you can recover it and restore the service. So you can immediately log into the console to show how the advanced backup and recovery options work for EKS. Where do you get the scripts? They are publicly available on my GitHub page. The bottom one of the three steps is basically a one-time job: once you've done it, the repository will be sitting there. So now, since we are already running the command, as you can see from CloudShell — I use CloudShell so you don't have to think about setting up your own terminal, but technically you can use your own terminal anyway. So right now we're building the cluster. I was using the eksctl command line to automate the whole environment. While it is still building, let me walk you through a little more detail here. I guess everybody knows containers have a lot of advantages compared to virtual machines, so I'm not going to spend too much time here. DevOps has really sped up the adoption of containers — they are literally everywhere. One of the big advantages is you build once and can run anywhere: public cloud, private cloud, virtual machine, physical machine, et cetera. But the problem is, when you initially have only dozens of containers, you can probably still manage them. When you launch hundreds or thousands of containers, it becomes a problem. That's where Kubernetes comes into play.
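The wrapper described above just chains the two scripts together. A minimal sketch of that pattern — the script names follow the talk, but the contents of the real repository may differ:

```shell
#!/usr/bin/env bash
# deploy - spin up the full demo environment in one command.
# Sketch only: assumes eks-deploy.sh and k10-deploy.sh exist alongside it.
set -euo pipefail

start=$SECONDS

./eks-deploy.sh   # 1. create the EKS cluster (~15 minutes)
./k10-deploy.sh   # 2. enable the container backups (~2-3 minutes)

echo "Total time: $(( (SECONDS - start) / 60 )) minutes"
```

The `set -euo pipefail` line makes the wrapper stop immediately if the cluster creation fails, so the backup script never runs against a half-built environment.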
So typically when you have a Kubernetes cluster, you normally have the control plane nodes, or master nodes, and then you have multiple worker nodes. But how do you set up the cluster? That's a big challenge. I don't know how many people have set up a Kubernetes cluster the hard way. That means you install the Linux machines, you install all the packages, you create the control plane nodes, then you add the worker nodes — it takes ages. That's why, according to the CNCF survey, people just hate self-managed clusters. They want an easy way to deploy, and that's where managed Kubernetes becomes so popular. Amazon EKS is definitely the clear leader, followed by some other cloud providers. The main reason is that hand-building your own cluster is so hard. I also did my own survey via a LinkedIn post: 51% chose Amazon EKS, definitely a clear leader. So how do you create an EKS cluster? There are a lot of ways to create an EKS cluster; I only listed a few here. You can create it from the AWS web console. You can use CloudFormation. You can use Terraform. You can use the AWS CLI or the EKS CLI. It could be kOps, or any number of other tools. But none of them is like the automation I created: just one command does everything for you. So what I'm going to focus on is how I created the automation using eksctl. Before I show you the automation, I want to show you, if you want to create a cluster from the web console, what steps you need to do. First of all, you need to create an Amazon EKS cluster role, using the policy listed here. Then you need to create an EKS node IAM role with three different policies. Once you have the roles available, you can create the control plane first, followed by creating the node group. It does take some time. Especially if I'm a developer, I just need a Kubernetes cluster to run my application.
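All of those console steps collapse into a single eksctl call, which creates the IAM roles, VPC, control plane, and node group for you. A hedged sketch — the cluster name, region, and node sizing here are illustrative, not necessarily what the talk's script uses:

```shell
# Create an EKS cluster in one command; eksctl provisions the IAM roles,
# VPC, control plane, and a managed node group. Values are illustrative.
eksctl create cluster \
  --name demo-eks \
  --region ap-southeast-1 \
  --nodegroup-name demo-nodes \
  --node-type t3.medium \
  --nodes 2

# Verify that the worker nodes joined the cluster.
kubectl get nodes
```

This requires valid AWS credentials in the environment and typically takes the 15 minutes or so mentioned in the talk, most of it waiting on the control plane.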
That is very challenging for them: how to create the IAM roles, which policies to use, et cetera. So that's why I created the automation here. You basically just run eks-deploy if you just need an EKS cluster ready to go. So eks-deploy will create the EKS cluster. The code is in the same area, on my GitHub page here. Has anyone thought about backups for Kubernetes containers? You still need a backup. Amazon and the other clouds have the same model, the shared responsibility model. The data is the customer's responsibility. It's the customer's responsibility to protect it, to make sure it is safe. So how do you enable the container backups? By the way, AWS Backup does not cover the EKS cluster; it does not have visibility into the Kubernetes containers. So typically, when you want to enable the container backups, here are the steps. First of all, you need to retrieve the EKS kubeconfig file to be able to access the cluster. Once you have access, you can verify the cluster is up and running, and then you can install the backup tool. There are a lot of different backup tools — free open-source tools, and commercial tools as well. Once you install the tool, you typically need to create a backup location target. Since you're running on Amazon, most likely you choose Amazon S3, but it can be any other S3-compatible storage as well. Once you have this ready, you can create a backup policy based on your settings: how often you want to do the backup, and where you are going to send the backups to. You can schedule the backup jobs to run automatically, or you can run on-demand backup jobs. But these steps are also pretty involved; it's not that straightforward. That's why I created the automation for the container protection, to enable the container backups in three minutes — three minutes to enable the container backups. You just run k10-deploy. So I created the automation based on Kasten K10.
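The enablement steps above map onto a handful of commands. A rough sketch, assuming the standard Kasten K10 Helm chart (the cluster name and region are illustrative):

```shell
# 1. Retrieve the kubeconfig so kubectl can reach the cluster.
aws eks update-kubeconfig --name demo-eks --region ap-southeast-1

# 2. Verify the cluster is up and running.
kubectl get nodes

# 3. Install the backup tool (Kasten K10) via Helm.
helm repo add kasten https://charts.kasten.io/
helm install k10 kasten/k10 --namespace kasten-io --create-namespace
```

The backup location target (the S3 bucket) and the backup policy can then be configured from the K10 dashboard or as Kubernetes custom resources, which is the part the k10-deploy script automates.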
That's the number one Kubernetes backup, DR, and application mobility tool. All the scripts are in the same place, the GitHub repository; shortly I can log into the GitHub page to show you. But just to summarize, my automation provides three different functions. If you just need an EKS cluster, that's the first column, eks-deploy. The middle one: I already have an EKS cluster, but I don't have the container backups yet. That's very important — especially when you're running a production environment, you definitely need a specialized backup tool for your containers. So k10-deploy will enable the container backups. It does all of the jobs here: install the tool, and deploy a sample database. The backup tool does not require Cassandra; Cassandra is just a sample database deployed to show you how you can granularly recover database applications. Then it creates an Amazon S3 bucket, creates a backup policy, and kicks off on-demand backup jobs. The last column is: I don't have anything, I just want to spin up an environment to try all the different use cases. That's ./deploy, which is basically the automation of number one and number two together. Once you finish all of your testing, you can use three matching destroy scripts to tear down and clean up the environment: one each for eks-deploy and k10-deploy, plus ./destroy, which cleans up everything. Any questions? Yeah? Can you go through the script? Yeah, I can show you the scripts now. Right now we're still building the environment — it hasn't passed 20 minutes yet. So let me open a different tab and go to my GitHub page. Does it take all the snapshots too? Volume snapshots? Yeah, it does. Actually, let me talk about that. Typically your containerized applications are running, say, on Amazon EKS. Let's say I've got a Cassandra database running here.
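The teardown side is symmetric. A minimal sketch of what a destroy script might do — names are illustrative, and the real scripts may differ:

```shell
#!/usr/bin/env bash
# destroy - tear down the whole demo environment.
set -euo pipefail

# Remove the backup tool and its namespace first.
helm uninstall k10 --namespace kasten-io
kubectl delete namespace kasten-io

# Then delete the whole EKS cluster (control plane + node group).
eksctl delete cluster --name demo-eks --region ap-southeast-1
```

Deleting the cluster through eksctl removes the CloudFormation stacks it created, which is what keeps the demo from accumulating AWS charges between sessions.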
You typically run each application in a separate namespace, and you have all your different Kubernetes objects: your configurations, plus your persistent volume claims. So when we take the snapshot, we capture your application components, your Kubernetes configurations, and a snapshot of your persistent volume claims. That makes sure that when we do the snapshot, we capture everything. When you're going to recover, we give you the options: recover the whole namespace, the whole application — that can be a restore in place — or you might just want to clone it to a different namespace. Or, as I mentioned earlier, somebody accidentally deleted some of the configurations and it caused problems. We give you the visibility: you just pick those objects and do the recovery. Yeah? Okay. So I created the automation for AWS EKS — that's the one. If I click this link here, you can see all of the scripts listed. And if I jump directly to the README here... Oh, it's not showing. Sorry, let me mirror the display. Yeah, this should be okay. Let's see. Sorry for that. Let me go back here. So this is my GitHub page. Okay. What about now? Oh, seriously — my laptop is actually pretty old. Okay. Is that good now? Okay. So I created the automation here: eks-k10. That's the one for Amazon EKS — how to automate the EKS cluster creation and protection. If I go directly to the README section here, this is it. I use CloudShell, which makes my life easier, but you can use your own terminal. As long as you have the AWS command line tool installed, plus eksctl, it will be easy. You clone the repository. What we did earlier is run the deploy command, which creates the EKS cluster first, followed by enabling the container backups. Any other questions here? I also have some other YouTube videos to walk you through.
If you'd just like to create it from the UI, I've got a video showing how to create the EKS cluster from the UI, and how to enable container backups in three minutes. Okay. And if you are interested in what's inside the deploy script, it's basically pretty simple: you create the EKS cluster, and then you enable the container backups. Yeah, go ahead. So suppose I want to spin up this EKS cluster in a specific CIDR range, in a specific VPC — where can I make those kinds of customizations? Yeah, that's a very good question. That's part of eks-deploy. When you spin up the EKS cluster — you can see it in eks-deploy — what I run is eksctl create cluster. That's the command that creates the EKS cluster, and you can customize it from there. Okay. Yeah. And then you can also... And the security groups and so on, we can customize those? I'm pretty sure you can customize them; those features are provided by eksctl. Okay. Thank you. Not a problem. Any other question? Otherwise, let's see if it's still running. Let's wait a few more seconds. I have a question. Yep. Which is the fastest cloud provider to spin up a Kubernetes cluster? Have you tried Google? Actually, my automation covers a lot more than AWS — I cover all six major public clouds. So which one is the fastest? Which one is the fastest? To be totally honest with you, Google is definitely the fastest. I'm serious, yeah. But the thing is, when customers look into a Kubernetes cluster, it's about more than just the Kubernetes cluster. It depends on what you are looking for. If you think about the overall solution, nobody can compete with AWS in terms of the breadth and the depth of products and services. But if you just want to spin something up and try it out? I personally use Google quite a lot too, seriously, because I only need about six minutes to spin up a GKE cluster.
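To the CIDR/VPC question above: eksctl accepts a config file, so the networking can be pinned instead of passed as flags. A hedged example — the CIDR, names, and sizes are made up for illustration:

```shell
# Write an eksctl config file to pin the VPC CIDR range (values illustrative),
# then create the cluster from it.
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-eks
  region: ap-southeast-1
vpc:
  cidr: 10.42.0.0/16        # custom VPC CIDR range
managedNodeGroups:
  - name: demo-nodes
    instanceType: t3.medium
    desiredCapacity: 2
EOF

eksctl create cluster -f cluster.yaml
```

The same config file can also declare existing subnets and security groups, which is how you would place the cluster into a pre-existing VPC rather than letting eksctl create one.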
Six minutes for Google versus twenty minutes for AWS? Yeah, it is. But you can think about it the other way: there are probably some advanced features behind that, right? No, seriously — things like IAM and all the other AWS resources being set up. Okay, let's see if it's still running... Yeah, it's almost there. Actually, the EKS cluster has been created; now we are installing the backup tool. Oh, it's not showing — sorry, let me mirror the display. What about now? Yeah, it's good. So basically, if I scroll back, you can see the EKS cluster was created in exactly 15 minutes, and then my backup enablement follows. Enabling the container backups takes only about two to three minutes. Yeah, so it's almost done. But do you guys want to try it? If anyone wants to try, you can use your own AWS account, but it will charge you, obviously. If you don't want to be charged on your own AWS account, I've got something for you. Actually, let me show you here: you can join the Kubernetes Singapore meetup group. I'm the organizer, and I've organized a meetup for tomorrow at an AWS office as well — but not this office. Unfortunately, we could not book this office, so we booked 1 George Street. It's a full version, a full hands-on version, and AWS will provide the access, so you won't be charged by AWS for your testing. So feel free to join us. I've actually got a very easy domain name, KAS-UG, for the user group — if you type this link, you can join the group and the meetup. It's not a big room, we can only fit about 60 people, so we will also have a remote Teams meeting available. Anyone who wants to join us, feel free to join us. Some of the reference links: I also created a lot of YouTube videos talking about Kubernetes — that's one of my favorites — and I also have the container backup on EKS lightboard video. That's the one I created with my AWS friends; we worked together on that video.
The automation code I already shared earlier. And some documentation, if you're interested to know more about Kasten — this is our official documentation page. Last page: if you want to know more, I created a lot of different videos, and some workshops — a two-hour workshop with a lab guide showing you step by step how to create the EKS cluster and how to recover an individual namespace, individual objects, et cetera. So by now, the environment should be ready. Let's see — yeah, actually, it is pretty good: this time, seventeen and a half minutes, and the whole environment is ready. Oh, actually, let me... Can you guys see? You can't see — sorry for that. Now you should be able to see. At the end of the scripts, you can see a total time of seventeen and a half minutes, and enabling the backups only needed two and a half minutes. At the end of the scripts you can also see the URL to access the web console. Once I log into the web console — I'm not going to show you how to recover a container yet, just the UI. So I copy and paste the URL here and press Enter, and you will be asked to provide a token to access the web console. Coming back to CloudShell, this is my token, so I paste my token here and click Sign In. The first time you log in, you will be asked to provide your email address and your company name, and then click to accept the terms. So did you guys see? In less than 20 minutes, I got the EKS cluster running, I got the Cassandra database deployed, and I got my backup jobs already running — and I can immediately show you how to recover the containers. Actually, my backup jobs have already finished. So, to the question someone asked earlier: what do we back up here? We back up the application components, the Kubernetes configuration, and also your workload data — in this case, that's Cassandra, but it could be any other application.
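Fetching the dashboard URL and token, as demonstrated above, is only a couple of commands once the cluster is up. A hedged sketch — the service account name `k10-k10` follows the common Helm-release naming convention, but an individual install may differ:

```shell
# Expose the K10 dashboard locally via port-forward.
kubectl --namespace kasten-io port-forward service/gateway 8080:8000 &
echo "Dashboard: http://127.0.0.1:8080/k10/#/"

# Generate a short-lived login token for the dashboard
# (requires Kubernetes >= 1.24 for `kubectl create token`).
kubectl --namespace kasten-io create token k10-k10 --duration=24h
```

The demo script instead prints an externally reachable URL, which implies it exposes the gateway through a load balancer rather than a local port-forward; the token step is the same either way.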
So right now we've finished the snapshot, but the snapshot is still sitting in the same cluster, the same data center. What if somebody deletes the whole namespace, or maybe the whole data center is gone? You lose the ability to recover. That's why we always recommend sending another copy to external Amazon S3. That is a lot safer, cheaper, and more reliable — you can keep it longer at little cost. So the second job actually copies the snapshot to Amazon S3. There are actually a lot of other cool features too. Once the data is sitting in Amazon S3 — let me give you an example. A customer initially runs their EKS cluster in the Singapore region. Now the Jakarta region is available, and they want to move to Jakarta. I can easily import the data from Amazon S3 and then recover the containers in the Jakarta region. That supports the portability, or application mobility, function. So while we've been talking here, the backup has already completed, and you can immediately see how the recovery works. I'm going to show you how the recovery works. The main thing here is — most people might ask, since I'm running on EKS, behind the scenes can't I just do EBS backups? Yes, you can do EBS backups. But can you see what's inside the EBS volume? You can't. So here is the difference. If I click the restore point here, we can select any one of the restore points: one from the primary storage, and the second one already moved to Amazon S3. You can choose either copy for the recovery. If I want to recover anything, I just select the restore point. And here is the major difference: we do see what's inside. You can see I've got a PVC, ConfigMaps, secrets, et cetera. So if somebody deleted something here, you can just deselect all the rest — I just want to recover the ConfigMaps or the secrets.
That's very handy. Yeah, okay. You've also got options. If you do a restore in place, just click Restore and we restore everything back to the original namespace. But if you want to restore to a new namespace, you just give it a name and click Restore, and we restore everything to the new namespace. This is also very important: say my production system is running okay, but it's very slow, and you want to troubleshoot. You don't want to poke at your production system, so we allow you to restore from the backup into a different namespace and do the troubleshooting and investigation from there. I think that's pretty much all from my side. Are there any other questions? I have one question. Yeah, go ahead. It's a slightly different question. I believe your repository is publicly available? It is. So suppose that in a real-life scenario, we are going to a production deployment. We'd be telling the client that we are downloading this repository and creating their environment from it. How can we convince them that these are trustable resources — like, in terms of security, that these scripts have been vetted by some tool or something? If you want to make it secure, definitely: you can use IAM roles, and you can have Active Directory integration or any LDAP integration as well, so it is configurable. I mean, generally in real-life scenarios we are not allowed to download something public. The images? Yeah — typically, in an isolated environment, especially in the financial industry, that's what we call an air-gapped installation. So when you go onboard with those agencies... Yes, yeah. It is pretty easy: basically, you just docker pull all the images, push them to your private registry, and then we do the installation from your private registry. The images? Yeah, the images. And for CIS-based standards, you can apply that image hygiene as well.
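The air-gapped pattern described above is generic: mirror the images, then point the install at your registry. A rough sketch — the registry address is a placeholder, and Kasten also publishes its own offline-install tooling, which this does not reproduce:

```shell
# Mirror a public image into a private registry (names illustrative).
docker pull gcr.io/kasten-images/controllermanager:latest
docker tag  gcr.io/kasten-images/controllermanager:latest \
            registry.internal.example.com/k10/controllermanager:latest
docker push registry.internal.example.com/k10/controllermanager:latest

# Install with all images redirected to the private registry.
helm install k10 kasten/k10 --namespace kasten-io --create-namespace \
  --set global.airgapped.repository=registry.internal.example.com/k10
```

In a true air gap the Helm chart itself would also be fetched once (`helm pull`) and installed from the local archive rather than the public chart repository.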
Yeah, I mean, that was my main question: if we download something from your repository, how can we convince the client that this infrastructure comes from a trustable resource? Yeah. Technically, you don't have to use my automation to do the installation. The simplest way to enable the container backup is just one command: one helm install will deploy it to your EKS cluster. Or you can deploy from the AWS Marketplace — one click and you can deploy. Yeah, that would be a lot easier. That will address your concern: you don't want to use someone else's repository, right? I don't know — do we have time for any other questions while we're setting up the environment? Yes, one last question, make it quick. Sorry, what's the question? That is actually a very common question that I get asked very often. In the traditional way, in the earlier days, when there was no tool like Kasten or any other advanced tool with application-level granularity, the only way to back up was via etcd. But the problem with etcd backups is that you are not able to do the backup every five minutes, every ten minutes — you would be in trouble if you backed up etcd every five minutes. With granular backups, you can do it per application, every five minutes, every ten minutes, and you can spread all the backups across the whole 24 hours. You don't have to do it like the traditional etcd backup, where you might back up once a day. Technically, you could do it every four hours or even every two hours, but when you restore, you restore data from two hours ago — a lot of things have already been overwritten, so you can't get the latest restore points. That's why application-centric backup is more important and more effective.
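The per-application, frequent-schedule approach argued for above is expressed in K10 as a policy object scoped to one namespace. A hedged sketch — the schema follows K10's Policy custom resource, but field names may vary by version, and the namespace label is illustrative:

```shell
# A per-application K10 backup policy: backs up only the `cassandra`
# namespace every hour and keeps 24 hourly restore points.
cat <<'EOF' | kubectl apply -f -
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: cassandra-backup
  namespace: kasten-io
spec:
  frequency: '@hourly'
  retention:
    hourly: 24
  actions:
    - action: backup
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: cassandra
EOF
```

Because each application gets its own policy, a busy database can be backed up far more often than a mostly static service — exactly the granularity an etcd-level backup cannot offer.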
But having said that, Kasten does allow you to back up etcd. We just don't recommend that customers do it that way; we don't think it's a great way to protect the applications. Okay. But if I have some internal configuration, like some role bindings, or other cluster-level resources — are those covered? The cluster-scoped resources are also backed up by us. Yeah, they're also taken care of by the Kasten backup. When you do the backup, you can also select the cluster-scoped resources. Easy: you want to back them up, and we'll back them up. Okay. Thanks.