Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm your host today. My name is Whitney Lee. I'm a developer advocate at Tanzu by Broadcom. Every week here on Cloud Native Live, we bring new presenters to showcase how to work with cloud native technologies. We'll build things, we'll break things, and we'll answer your questions. Today we're here with Peng Jiang. He's going to talk to us about how to enable developer self-service with Seal, Walrus, and OpenTofu. I'm so excited. This is an official live stream of the CNCF, and as such, it's subject to the CNCF code of conduct. So please don't add anything to the chat that would be in violation of the code of conduct. Basically, just be respectful to the other chatters, to the presenters, and to me, please. And thank you. I have no doubt that you will. Friends who are joining us live, please do say hello in the chat. I love how this is a global community and we're having a global experience right now. So please say hello and tell us where you're from. Also, those in the chat, if you have questions as the presentation is going, please do ask them in the moment. We'll take a break from the presentation, have a conversation, and answer your question as we go. That's going to be super fun. And with that, I'm going to hand it over to Peng Jiang for today's presentation. Hi, Peng.

Okay. Thank you, Whitney, for the introduction. Good morning, and maybe good evening, to everyone. This is Peng from seal.io. I'm very excited to have this opportunity on the Cloud Native Live stream to introduce how to enable developer self-service with Seal, Walrus, and OpenTofu. Okay, so let's start. First, a quick introduction about Seal, the company, and myself. Seal is a startup founded in 2022, with core team members from Rancher Labs, the cloud native company that developed the container management platform Rancher. Okay.
Our team members are experienced in open source, cloud computing, and DevOps. At Seal, we want to develop open source platform engineering projects. About myself: I'm the co-founder of Seal, and I'm an engineer with an infrastructure background. I have been working in the IT industry for maybe 19 years. Before Seal, I worked at SUSE and Rancher Labs, Citrix, and Microsoft. My Twitter/X ID is pengjiang80. Probably because I just read and never post, it was suspended several months ago, so I'm still struggling to get it back. Okay.

So, about OpenTofu. I think most of you already know OpenTofu, but I will use one slide to quickly introduce it. OpenTofu is a community-driven open source fork of Terraform, created after HashiCorp changed the Terraform license last year. OpenTofu is now a Linux Foundation project, and the first GA version, 1.6.0, was released this month, on January 10th. There are just some minor differences between OpenTofu and Terraform currently. OpenTofu has a new module testing feature, it has an updated S3 backend, and it also has its own provider and module registry now, plus some other enhancements. But frankly speaking, at this point, because OpenTofu is just an open source fork of Terraform, it also has the same limitations as Terraform. The first one is environment management. If you're a Terraform user, you'll know there is always a debate about whether you should use workspaces to manage different environments or not. In general, workspaces are not a good idea, so you probably need to use multiple folders or something like that to manage different environments. The second one is composable modules and workspaces. It's also a question whether you should use a monolithic module or multiple modules to handle complicated scenarios.
For example, if you want to create a Kubernetes cluster with OpenTofu, you need to create a VPC, you need to create subnets, and so on. So you probably create a root module plus some other submodules to complete this task. But that becomes a very complicated and big module. It causes a big blast radius, it's hard to maintain, it causes a long runtime, and it makes drift detection and drift handling very difficult. The third one is multi-module orchestration. You know that in Terraform or OpenTofu there is no stack or application concept, which means that if you want to manage multiple modules together, you need to find your own solution; there is no official solution in OpenTofu. So that is from the operator perspective.

From the developer perspective, there are also some limitations. The first one is that, as an IaC (infrastructure as code) tool, there are too many operations-related things in OpenTofu and its modules. HCL, the HashiCorp Configuration Language, is good; it's quite simple for a developer. But HCL by itself doesn't do the work; we need different providers and different modules to manage infrastructure, and each of them contains a lot of infrastructure-related details which may not be relevant to our developers. These infrastructure-related things require additional knowledge and add cognitive load for our developers. That's not a good thing. The second one is multi-cloud portability. Developers love containers because they provide portability: you can build one container image and run it everywhere. But OpenTofu modules are normally specific to a certain cloud or certain infrastructure.
So if you use OpenTofu modules directly in your application definition, most likely you will need to update the definition for each environment independently if they use different infrastructure. The third one is infrastructure and resource management without an application view. This is the same as the previous slide: as an IaC tool, OpenTofu itself just focuses on infrastructure management, and there is no application view. But what developers really care about is the application. They just want the infrastructure dependencies to be ready as quickly as possible and to use them in the application system; they don't want to manage them. That's why we want to create a solution to enable developers to do self-service and manage infrastructure more efficiently.

From our point of view, there are three key points to achieve this. The first one is the application view, which means you manage the infrastructure resources within an application instead of managing them independently, outside your application system. The second one is reduced complexity. Some developers have very good infrastructure knowledge, but many more don't know infrastructure very well, or don't have time to learn it. So we shouldn't expose infrastructure concepts directly to developers if we want to achieve developer self-service. And actually the complexity is not just about infrastructure; it also includes the complexity of Kubernetes. The third one is cross-platform portability. Containers are great because they provide portability: once you have your container image, you can run your application almost anywhere. But in the real world, the entire cloud native application system is more than just Kubernetes, right? You need a database, you need a message queue, you need a load balancer, and other things.
Most of them run outside of your Kubernetes cluster, and normally they are different on different platforms. So it would be great if we could use a single definition to describe our application system and run it anywhere. This is what we want to achieve through Walrus. Walrus is a fully open source application platform based on IaC tools such as OpenTofu and Terraform, and we will expand to support more IaC tools in the future. We want to enable developer self-service through Walrus.

Here is an architecture diagram for Walrus. Walrus is based on Kubernetes, and it has some core services. The first one is the resource manager. The resource manager is responsible for synchronizing your template registry, managing your resources, and composing them into an application. The resource manager is also responsible for calling the deployer to do the real infrastructure provisioning task. For each run, it creates a Kubernetes pod, and each pod is actually a deployer. Currently we support OpenTofu and Terraform as the two deployers, and we will support more IaC tools and deployers in the future. The second one is the operator. Since OpenTofu is just an IaC tool, its main responsibility is resource provisioning. If you want to do other management-related tasks, such as starting or stopping a resource, checking the logs, or going into the container shell, you need some other tool to achieve that. That's why we have an operator component. We also have a built-in workflow engine, which is based on Argo Workflows. And all the data in Walrus is currently stored not in etcd but in a PostgreSQL database. So this is the high-level architecture of Walrus, and I will demo it later.

Okay. This slide shows the core concepts and the core scenario of Walrus. On the left side is a walrus file. The walrus file is for the developer; it's similar to a Docker Compose file.
In this slide you can see a sample file, which is a WordPress application. It contains two resources. The first one is the WordPress database, and you only need to define the type as MySQL. The second one is the WordPress service; its type is a container service. For this resource you can define some attributes, such as the container image and the port you want to expose. It also contains some environment variables which refer to the outputs of your WordPress database. For example, when you create a database, that database will have a host address, a password, and a username as outputs, and you can use these outputs as the inputs of your WordPress service.

Okay. With this single walrus file, we can deploy to different platforms. For example, if you are a developer and you have a laptop with the Docker engine installed, then you can use this walrus file to create a Walrus deployment based entirely on Docker. If there is a test environment or a formal development environment in your company which is based on Kubernetes, then you can use this walrus file to define and deploy your application as Kubernetes workloads and Helm charts. If there is a production environment, for example based on AWS, then you can use this walrus file to deploy your application into your EKS cluster and provision the database as an RDS database. And if you also have, say, a disaster recovery environment in Google Cloud, you can even run the container service on Google Cloud Run, which is something like a serverless or FaaS service, and you can use Google Cloud SQL for the database. How we achieve this is based on the resource definition, which is on the right side. A resource definition is actually a group of matching rules plus some template configurations and parameters.
It's configured by the operator to define the rules that determine which template and what parameters should be used to provision your infrastructure resource, based on certain criteria or different filtering rules. Next, I will do a demo to show how we can achieve this through Walrus and OpenTofu.

Okay, before you go on: I think maybe you're about to demo what my question is, but does the developer just take that file and apply it to different environments, and depending on which environment you apply it to, it creates it? Is that the idea? It's a single file, but it can be applied to lots of different places for the same result. Okay, cool. Thank you.

Okay, let's start. The demo will be based on this slide. First we will start from the local environment. You can download the Walrus client from our GitHub release page. Currently we haven't published it to Homebrew or other platforms yet, so you can only download it from the release page. Okay. Will you make it bigger, please? Okay. Thank you. Yes, awesome. Thank you. Currently we support Linux and macOS, and Windows support will come soon. Okay. On my machine, I already have the Walrus client installed, and I also have Docker Desktop installed. So now I can use the walrus local install command. Maybe I need to clear the screen first. Okay. The walrus local install command will install a Walrus Docker extension into your Docker Desktop. It will take maybe one or two minutes. During that time we can say hello to some folks in chat. We have a hello from Nigeria, hello from Romania, from Dubai, from Germany. So cool. Yeah. Are we back up? Okay. Cool. Good evening to everyone. Okay. Now it's installed. If we switch back to Docker Desktop, we will see a Walrus extension here. It's a little bit small. Okay. Wonderful. Thank you. Okay. We can go back to the terminal. Now, on my laptop I already have an application walrus file defined.
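For reference, the walrus file from the slide and the demo looks roughly like the sketch below. The field names and the output-reference syntax are approximations reconstructed from the description in the talk, not the exact Walrus schema, so check the official docs for the precise format.

```yaml
# Illustrative walrus file for the WordPress demo (schema approximate).
resources:
  - name: db                       # the WordPress database
    type: mysql                    # the developer only declares the abstract type
  - name: wordpress                # the WordPress container service
    type: containerservice
    attributes:
      image: wordpress             # container image to run
      ports:
        - 80                       # port to expose
      env:
        # outputs of the db resource wired in as inputs of this service
        # (reference syntax is illustrative)
        WORDPRESS_DB_HOST: ${resource.db.address}
        WORDPRESS_DB_USER: ${resource.db.username}
        WORDPRESS_DB_PASSWORD: ${resource.db.password}
```

The key point is that nothing in this file names Docker, Kubernetes, or AWS; the environment's resource definitions decide how each abstract type is actually provisioned.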
Just as the slide showed, it's a WordPress application, and it contains a WordPress database and a WordPress service. Now I will try to deploy this application to my local desktop, so I'm going to use walrus apply.

We have a question: if you're doing it locally, do you need Docker Desktop? Yeah, currently, for the local environment it depends on Docker Desktop, because it installs a Docker Desktop extension. The extension is actually also a Walrus server; I will show you later. So it's a local Walrus server, and it will call your local Docker engine to create the Docker containers. Okay. Excellent.

Okay. I will use --wait, so it will block. Now I have two resources created: the first one is the DB, and the second one is the WordPress. Since I used the --wait parameter, it will return once the resources are deployed. Okay: preparing, deploying.

We have another question. Kushal says CNCF is really interesting and they want to contribute to this project. Whether they mean OpenTofu or Walrus and Seal, I don't know, but what is a good way for folks to contribute? Yeah, you can go to our GitHub project, and we have documentation about how to contribute, including the API documentation and the architecture design document. Any contribution, feedback, or issue is really appreciated. Okay, awesome. Thank you.

Okay. While we're still waiting, I'll say hello to someone in Springfield, Missouri. And we also have Geneva, Switzerland. Super cool. Oh, Switzerland. Welcome. Okay. So the database is ready. Now I can use walrus resource list to see the resources deployed. Okay, I have two resources deployed, and the output also contains the endpoint, which is returned by Walrus. I can use the open command to access this address. Okay, I have my WordPress ready. Okay. So how does it work? Actually, if I use the Docker CLI and run docker ps, yes.
I will see four containers running on my local desktop. Normally you would expect only two containers, but there are four. That's because we used a special OpenTofu module that creates a pause container, similar to what Kubernetes does. That way we can ensure it's compatible with a Kubernetes deployment later: in case you want to create multiple containers in your Kubernetes deployment, you can also use that pattern in your local Docker deployment. Okay, that's the reason. So if you want to see the local Walrus deployment, you can just access the local address.

We have a question: is Walrus compatible with Rancher Desktop? Currently not. We may support it in the future. Yeah. Cool. Thank you.

Okay. If you access localhost on port 7443, you will see the locally deployed Walrus server. It contains the default project and the local environment. Can you make it bigger, please? Thank you. Okay. And how does it work? It's based on the built-in resource definitions. If you go to the resource definitions, you will see there is a built-in MySQL definition. It's a bit small; I'm sorry, I can't see it very well. Will you make it bigger? Okay. Thank you. Awesome. So the name is Docker and the type is MySQL. If the environment matches this matching rule, then Walrus will use the Docker MySQL OpenTofu template to deploy the container for you.

Okay. Now let's go to the second scenario. We have this same walrus file, and we can also deploy it to a Kubernetes cluster. Technically, you can use your local Walrus to manage Kubernetes or even your public cloud, but in a formal scenario, in a real case, you need an independent Walrus server to do this. So let me switch to my demo environment. Can you see it? And where is this running? Yeah, this server is in AWS. Okay. This is a server with the Docker engine installed. So, as I mentioned, Walrus is based on Kubernetes.
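The built-in matching rule just shown can be pictured roughly like this. Again, the field names are approximations of the Walrus resource-definition schema, reconstructed from what is visible in the demo:

```yaml
# Sketch of the built-in MySQL resource definition for the local
# Docker environment (schema approximate).
name: docker-mysql
type: mysql                    # the abstract type developers request
matchingRules:
  - name: docker
    selector:
      environmentName: local   # matches the local Docker-based environment
    template:
      name: docker-mysql       # OpenTofu template that runs MySQL as a container
```

When a developer's walrus file asks for `type: mysql` in the local environment, this rule is what routes the request to the Docker MySQL OpenTofu template.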
You can install it in a Kubernetes cluster, or you can use a single-server installation. You can follow the steps in our official documentation.

We have a question from the chat: what operations can I perform on applications running on Kubernetes? Yeah, I will demo it later. You can start or stop the resource, which actually creates and deletes the Kubernetes deployment, you can connect to the container shell, and you can see the log files of your containers.

Yeah, back to my server. I can use the single-server install mode, too. Okay. I can use this docker run command to start a Walrus container, which is a single-node installation. We can do this because in the Walrus server image we have embedded a K3s server. In a single-node installation, it will use the local K3s as the runtime engine. Okay. Oh, sorry, it's the wrong server. This is this one, and that one is the K3s server. Okay. So we're on AWS on a K3s server? Yeah. Currently I have two servers: this one is the Walrus server, where I will deploy Walrus, and that one is a K3s server, which I will connect to from Walrus later to deploy the Kubernetes application. Okay. Now I have the container running, and I will use docker logs. This is the log file, and it may take two or three minutes to be ready. Then we can access the Walrus UI.

Okay, that's good, because we have questions rolling in. Ken has a question; later he redacted it, but I personally am curious, because I honestly don't know a lot about Terraform: where is the state stored in Terraform and OpenTofu? Ah, it's a good question. If you're using the Terraform or OpenTofu CLI client, the state file is just stored in your local folder. Okay. But in Walrus, currently we store the state file in the PostgreSQL database we use. Okay. You talked about that when you showed the architecture diagram. Yeah.
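The single-node install run in the demo looks roughly like the command below. The image name follows the project's `sealio/walrus` naming, but the exact flags and port mappings are assumptions from the description in the talk; use the official install docs for the authoritative command.

```shell
# Single-node Walrus server on a plain Docker host (command sketch).
sudo docker run -d \
  --name walrus \
  --restart=always \
  --privileged \               # required for the embedded K3s server
  -p 80:80 -p 443:443 \        # UI/API ports (mapping is an assumption)
  sealio/walrus

# Watch the logs until the server reports ready (takes a few minutes):
sudo docker logs -f walrus
```

Because the image embeds K3s, this one container gives you both the Walrus server and the Kubernetes runtime it needs underneath.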
We have another question: it'd be great to know how Walrus is different from Backstage, and how could I develop custom features and plugins on Walrus? Yeah, it's a good question. Backstage is actually an internal developer portal. It has many, many plugins, so you can integrate, say, your code repo, your CI/CD system, your Kubernetes cluster, or other platforms into Backstage, and your developers can use Backstage as a single portal to do almost everything. Walrus, on the other hand, is an application platform. We may develop a Backstage plugin in the future, so you could use Walrus through Backstage to manage your applications and your environments. Excellent. And if you're using Walrus, do you still need OpenTofu? Yeah, I will show it later. OpenTofu is a deployer; it's a runner for Walrus. We can switch from Terraform to OpenTofu, and we plan to support more IaC tools in the future. Okay, awesome. And those are the questions. Thanks, everyone.

Okay, I think the server is ready, so let me try to access it. I already have a domain set up for this demo's Walrus server. Okay. Since it's the first installation, I need to get the bootstrap password through this command. Okay. Will you move the window so we can see your cursor from behind your name? Okay. Yeah. Well, you can have the server window open, but can you see the stream view? Basically, your name tag is blocking it. Yeah. Yeah. I know. Okay. Cool. Let me change the window here. Yeah. Thank you. Is that better? Yes. Okay. That's better. Now I have the bootstrap admin password, and I can go back for the first login. I need to configure a new password. Okay. Now the live server is ready. Excellent. Okay. The first thing I should do, since today we are talking about OpenTofu:
I will change the deployer image from Terraform to OpenTofu. Okay. Can we make it bigger? Yeah. Thank you. Awesome. Yes. Okay. By default it's the Terraform deployer, which is an old version because of the HashiCorp license limitation. So let me change it to OpenTofu. Okay. And since I will need to use the Walrus client in the demo later, I need to generate an API key and configure my Walrus client. Okay. I go to my client and use walrus login to configure my server address and my token. Since it's a self-signed certificate, I will set it as insecure, then the default project and the default local environment. Since the CLI commands now work, it means the client has been configured successfully and I can use it.

Now what I need to do is create a connector, from the Operations tab. The Operations tab is actually for the operators: operators can configure connectors here to connect to your infrastructure, and they can also configure the catalogs and templates here. The most important part is the resource definitions. Let me create the connector first. The connector here is actually the Terraform provider or the OpenTofu provider. For example, I create a K3s connector. Yeah. And I need to choose the applicable environment type, which determines in which type of environment this connector can be used. Currently we have three built-in types defined: development, staging, and production. Let me choose the development type. And I already have a K3s server ready here; this is my K3s server. I will configure the kubeconfig file. Now I have a Kubernetes cluster ready, so I can create an environment and bind this connector to it. Let me create it: I create a development environment here, and for the connector, I choose the K3s connector. Okay. Now I have a new development environment.
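The client configuration steps just described amount to something like the session below. The flag names are assumptions based on what was said in the demo, not verified walrus CLI syntax:

```shell
# Point the walrus CLI at the demo server (flag names illustrative).
walrus login https://<your-walrus-server> \
  --token <api-key> \          # API key generated in the web UI
  --insecure \                 # demo server uses a self-signed certificate
  --project default \
  --environment local

# If a basic command now succeeds, the client is configured:
walrus resource list
```

After this, the same CLI is used by the developer for self-service applies, while connectors and resource definitions stay on the operator side in the web UI.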
Then I can go back to my terminal and try to deploy the application to this Kubernetes cluster. Okay. And this is the exact same file you used to deploy locally? Yeah, it's exactly the same file; you can just use cat to show it. It's exactly the same file. Okay. Then I'm going to use walrus apply, and I will use the -e parameter to specify the environment, to deploy it to the development environment. And I still use the --wait flag. Okay.

Excellent. We have a couple of questions; would now be a good time for that? Yeah. Cool. How does Walrus compare to Terramate? It's similar in that it abstracts application stacks. Yeah, personally I'm not that familiar with Terramate; I know of it, but I haven't spent a lot of time on its concepts. I think the common part is that both of us have some kind of stack or application concept. But I don't think Terramate has an abstraction like the resource definition. In Walrus, as a developer, you don't need to understand the technical details of the underlying infrastructure. For example, just now I wanted to create a database: I just need to create a resource and set the type to MySQL. I don't need to know whether I should use a module for AWS, a module for GCP, or a module for Azure. As far as I know, Terramate is something like an IaC management tool, so it's mostly for operators, not for developers. But with Walrus, we want to reduce the complexity and make an abstraction, not just of the application system or the stack; we also want to reduce or remove the complexity of the infrastructure and Kubernetes. I think that's the difference. But I will take more time to research Terramate to confirm whether this is correct.

Excellent. And, speaking of personas, which persona is meant to set up the dev environment, like set up the Walrus server and do the connector and all those steps you've done previously to apply the YAML?
Is that meant for an operator, or is the developer supposed to do this? Yeah, so actually, because I'm using an admin account, I'm doing the things on both sides. The connector and template work is actually the task performed by the operator, and applying the walrus file to deploy the application is actually the self-service for the developer. Because I'm using a single account and I'm only one person, I'm simulating two roles in this scenario. Okay. Excellent.

And then, skipping ahead a little bit to stay on topic with Terramate: is Terramate a hosted solution, do you know? I don't know for sure. I think, because Terramate also has some open source tools, they have a hosted solution as well. Okay.

And then, back to state: are there any thoughts on how the state would remain reliable in the case of pod disruptions? Yes. In my demo I just used a single-node installation, so it uses an embedded PostgreSQL server, but in a real production deployment you can use an external database server, so the state is not affected if the Walrus pod becomes unavailable. Okay. Excellent.

And there's one more question: does this support Terragrunt, and can this be used in a GitOps mode? You did mention GitOps in your architecture diagram, I noticed. Yeah, that's a good question. For the first one: currently we don't support Terragrunt, but we will evaluate it later, maybe in the future, because Terragrunt is an enhancement for Terraform, right? It's also an IaC definition tool, so we may support it in the future. And for GitOps: currently, in this version, we don't support it, but we will support GitOps based on the walrus file in the next release, Walrus 0.6, which we will release in March, before KubeCon Europe. Okay. Awesome. Thanks again, everyone, for the great questions.

Okay, let's continue. Now I have the development environment, and I have my application deployed as Kubernetes workloads and Helm charts.
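The developer-side workflow across environments boils down to varying one flag on the same file. The command shape below follows the demo; treat the exact flag names as approximations of the walrus CLI:

```shell
# Same walrus file, different target environments (flags illustrative).
walrus apply -f walrus-file.yaml -e dev --wait          # K3s dev cluster
walrus apply -f walrus-file.yaml -e staging --wait      # AWS staging (RDS)
walrus apply -f walrus-file.yaml -e production --wait   # AWS production
```

Which concrete infrastructure each apply provisions is decided entirely by the connectors and resource definitions bound to the target environment, not by anything in the file.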
As you can see, the WordPress database is actually a Helm chart; you can see the subcomponents as a Helm release, and it contains Kubernetes pods. And the WordPress resource is actually a Kubernetes deployment, so it contains a service, a deployment, and the pods. Through the Walrus UI, you can just see the logs of your container pods. Will you make it bigger, please? Okay. Thank you. Awesome. Yes. Thank you. You can also access the container terminal through our UI. We also have a feature called the dependency graph. It shows the relationships; yeah, maybe I need to reorder it. It shows you the resources in this environment and their subcomponents. I can go to full screen and show subcomponents; then it shows you the subresources. They're called subresources in Terraform or OpenTofu, but in Walrus we call the top level the resource, so we changed the name to components. Okay. So you will be able to see the subcomponents of your application, and you will be able to see the relationships between them: whether it's composition, dependency, or realization. Okay, let's go back.

Now the next one: what if I want to deploy my application to a real staging or production environment? I don't want to use a containerized database; I want to use RDS. Let's demo this.

We have a couple of questions while you're in between demos, perhaps. Where are the deployers running? Are they running in pods, and are they configurable? Yeah, they run in Kubernetes pods, as mentioned just now. We can do some basic configuration currently, but we don't expose many configuration parameters yet. So yeah, it's a good idea; maybe you can describe your requirements or idea in a GitHub issue, and we will check whether we can support it in a later release. Okay. Awesome. And one more: can Walrus also deploy the actual Kubernetes clusters themselves through OpenTofu?
Yeah, yeah, definitely. Because OpenTofu is an IaC tool, you can use an OpenTofu module to deploy a Kubernetes cluster. We also have a plan that, in the future, you can use Walrus to deploy a Kubernetes cluster, for example an EKS cluster, a GKE cluster, or an AKS cluster, and we will automatically configure a Kubernetes connector to that cluster so you can use it right away.

Okay, let's continue. To use the public cloud, I need to configure a public cloud connector. I will configure AWS Tokyo. Okay: staging, and I need to input the access key and secret key, and for the region I will set Tokyo. Okay, now I have two connectors, and I need more connectors for the different environments. To save time, I will use the Walrus client to create them. Okay. It's almost the same set of parameters as in the web UI: you define the category, the type, the applicable environment type, and so on. I create a K3s staging connector for the staging environment, and I create another, K3s production, which will be used for the production environment. Okay, then I create another two AWS connectors. Okay, it's done. Let's go back to the web UI. Now I have five connectors: two additional Kubernetes clusters for staging and production, and two AWS connectors for staging and production. What I need to do next is create another two environments, a staging environment and a production environment. I will use the staging connectors for K3s and AWS Tokyo. The name is... sorry, I think I typed the wrong name, but it doesn't matter. And then I create the production environment, and now I have the environments. What I need to do next, as an operator, is define the resource definitions to let the developers use this type of resource.
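The connector creation done through the client above could be sketched as below. The subcommand and flag names are illustrative guesses at the walrus CLI shape, mirroring the fields in the web UI:

```shell
# Kubernetes connector for the staging environment (syntax illustrative).
walrus connector create k3s-staging \
  --category Kubernetes --type Kubernetes \
  --applicable-environment-type staging \
  --kubeconfig ./k3s-staging.kubeconfig

# AWS connector for the staging environment, Tokyo region.
walrus connector create aws-staging \
  --category CloudProvider --type AWS \
  --applicable-environment-type staging \
  --access-key "$AWS_ACCESS_KEY_ID" \
  --secret-key "$AWS_SECRET_ACCESS_KEY" \
  --region ap-northeast-1
```

The same two commands are then repeated with a `production` applicable environment type for the production pair.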
What I need to do is create another resource definition called AWS MySQL, with the type MySQL, and I can define different matching rules. The first one is a staging matching rule, and I can define different selectors to do the filtering. The first selector is the environment type: I would like to apply this rule to staging environments. The second one is an environment label: I will use a label to confirm that the environment has an AWS connector attached, so that AWS modules can be used to deploy an RDS database. Okay, let me do it. For each connector with a cloud provider configured, Walrus automatically applies some system labels to that environment. For example, if you have an AWS connector, Walrus automatically adds the walrus.seal.io/provider-aws label to that environment and sets it to true. So I can use this label as a filter to confirm that this environment can use the AWS template. So I choose the AWS template, and since it's a staging environment, I want to change some default settings: I want to use a smaller database instance and maybe a smaller database size. And I need to configure the VPC ID, which is mandatory for an RDS database. Okay. Now I want to add another matching rule for the production environment, maybe named prod, and it's almost the same: I need to filter on the environment type and add an environment label filter to ensure that the environment has an AWS connector configured. And still use the AWS template.

We have a couple of questions; is now a good time? Are there other connectors besides GCP and AWS? Yeah, actually, you can add any OpenTofu or Terraform provider as a custom connector. I will show it later. In my demo, I also include a GCP demo; although it's not formally supported yet, I can use it. Okay. And then, when we have time, I have another question, but there's no hurry. Okay, let me complete the definition first. Sounds good.
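Put together, the AWS MySQL resource definition being built here looks roughly like the sketch below. Field names and the template name are approximations from the walkthrough, not the exact Walrus schema; the label key follows the walrus.seal.io/provider-aws system label mentioned above:

```yaml
# Sketch of the operator-defined AWS MySQL resource definition.
name: aws-mysql
type: mysql                                 # abstract type developers request
matchingRules:
  - name: staging
    selector:
      environmentType: staging
      environmentLabels:
        walrus.seal.io/provider-aws: "true" # env has an AWS connector
    template:
      name: aws-rds-mysql                   # OpenTofu template provisioning RDS
    attributes:
      instance_class: db.t2.micro           # smaller instance for staging
      allocated_storage: 20                 # smaller disk for staging
      vpc_id: <vpc-id>                      # mandatory for RDS; placeholder
  - name: production
    selector:
      environmentType: production
      environmentLabels:
        walrus.seal.io/provider-aws: "true"
    template:
      name: aws-rds-mysql
    attributes:
      instance_class: db.t3.medium          # default class kept for production
      vpc_id: <vpc-id>
```

With this in place, the same `type: mysql` request from the walrus file resolves to a Docker container locally, a Helm chart on K3s, and an RDS instance in the AWS-backed environments.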
Okay, for this one, I will keep the resource class as the default, which is t3.medium. And maybe I want to change the database size to a bigger one. Okay, now I have resource definitions for the staging and production environments on AWS. So let's try to use the same Walrus file to create it. I have a staging environment, okay, and I have a production environment. Let's go back to the web UI to see. Okay, so Walrus will also handle resource dependencies for you, which means that if resource A depends on resource B, Walrus will try to create resource B first, and resource A will wait in the queue until resource B is ready, and then it will be created. So let's look at the first one, the staging environment. We can see the log. Oh, sorry, it's too big. Okay, too many log lines. Okay, it should be ready. Okay, first we can see that it's using OpenTofu, the latest version, and it's trying to create an AWS database. We can also verify it through the AWS CLI, with aws rds describe-db-instances. So I have a staging database creating, and the class is what I configured, t2.micro, and the storage is 20. And I have another database creating. Let me see. It's not started yet. Okay, let me check the production environment. Let me check the connector. Oh, production. I think I may have configured the matching rule incorrectly, so it breaks something. You know, things always break in demos. We promised up top we would break some things, so I'm glad you kept me true to my word. Okay, let me delete this first and try to recreate it. We have quite a few questions building up when you're ready. Yeah, live demos are fun. Let me check the resource definition first. Production... provider-aws is true... production... AWS RDS MySQL... and it's using this matching rule to try to create it. Provider AWS, AWS RDS, basic, t3.medium. It should be fine. Let me try to deploy it again. So let's go to the questions. Okay. Excellent. One thing I was wondering myself.
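The "same Walrus file" used for both environments is the developer-facing half of the story, and a sketch of it helps show why this is self-service. Again, the exact field names are assumptions; the point is what's absent, not what's present.

```yaml
# Hypothetical developer-facing Walrus resource file. The same file is
# applied to the staging and the production environment; the operator's
# matching rules decide how "mysql" is actually provisioned in each.
# Field names are illustrative assumptions.
kind: Resource
metadata:
  name: my-db
spec:
  type: mysql            # abstract type only; no AWS, RDS, VPC, or sizing details
  attributes:
    database: demo
    username: app
```

Notice there is no instance class, storage size, or VPC ID here: in staging this resolved to a db.t2.micro with 20 GB, and in production it should resolve to a db.t3.medium with larger storage, all without the developer touching the file.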
What existing software is Walrus most similar to? I'm not sure. Have you ever heard of a quite new project called Radius, which is developed by the Microsoft Azure team? It was open sourced maybe in October last year. Okay. So I think they're different, but the concept is quite similar. Radius is also an application platform, it is also based on resources, and it has a concept called recipes. So, resource recipes. A recipe is something similar to a resource definition, which means that developers don't need to care about the technical and implementation details of an infrastructure resource; they just use it. And operators can define recipes that specify how to provision the resource. So it's quite similar. The first difference is that, you know, Radius is based on Bicep and Terraform, because Bicep is the IaC language from Azure. And the second difference is that Radius has some built-in resource types, so you cannot create your own type currently. But in Walrus, you can define your own type, any custom type you want. Okay. Yeah, we learned about Radius, I think, in October, when they open sourced it and contributed it to the CNCF, and we found the concept is quite similar. Yeah. Otherwise, Walrus may be similar to KubeVela, although the application model is different. KubeVela is based on components, and components are actually the basis of an application. In KubeVela, there is no abstraction of the infrastructure; they just split the configuration between components and traits. So in KubeVela, if you want to configure some parameters that are related to operators, you need to define them in traits instead of defining them in the component itself.
So, for the most similar, I think it should be Radius. Okay. Thanks. And then: how can we better hide the complexity with Walrus? As an infrastructure dev, I only want to expose what's necessary for my app devs to do their work. I think most of what you've shown today is for an infrastructure dev or for an operator. The app dev piece is just applying that same YAML file and changing your environment variable, like which environment you want to deploy to, correct? Yeah. So, yeah, what Walrus does is try to expose only the required complexity or required parameters to your developers. So, yeah, maybe I can show more information. Okay. It's creating the AWS database. So, we can go back to the basic configuration for the operators. Here is the catalog. A catalog is actually a group of OpenTofu or Terraform templates or modules. We have a built-in catalog, which contains some templates developed by Seal. But you can also add your own catalog, or you can import a community catalog. For example, you can just import the AWS Terraform modules. Let me Google it: terraform-aws-modules. This is the GitHub organization for the Terraform AWS modules. So, you can just import it, okay, as an AWS catalog. Walrus will synchronize all these modules to the local Walrus server, and then you can use them in your Walrus server. Then I will show you the special things Walrus can do. Okay. Let me add another catalog, which is a Walrus demo catalog. So, it may take some time. Great. We have some questions. Well, we have Ken's statement that I feel like needs addressing: Ken feels like they've heard Walrus isn't for application developers, it's for platform engineers. But the whole title of your talk even makes me think that you're trying to show that it's for application developers. Yeah. Yeah.
Actually, it's for platform engineers to build your internal developer platform, which will then serve your developers. Okay. Beautiful. That's a great way to put it. Is Walrus free to use, or does it need a license? Yeah, it's free. It's fully open source under the Apache license, so you can use it for free. Excellent. And are there any RBAC options available for Walrus? Yeah. Currently, we have both RBAC and ABAC, but it's still in development. We have three built-in roles: a general user, a manager, and an administrator, and they have different permissions across the platform. So, you can click to see the users. And we're running out of time today, so maybe you can pick one more thing to feature. Maybe you want to show the catalog that got imported? Yeah. Okay. Or just go to the catalog. Okay. So, now this catalog has already synchronized some modules. I think we have more modules. Okay. And this is an AWS module, a Terraform or OpenTofu module. And for this template, we have a UI schema. The UI schema lets users define how the module is exposed or displayed to your developers or to your users, for example. So, you can get more information regarding the UI schema. You can use OpenAPI schema parameters to define how the UI will be displayed, and we also extend it with some additional features, such as hidden, immutable, or show-if. So, you can control how the template will be displayed to your end users. You can just modify this UI schema, preview it, and then use it. You don't need to change any code or anything like that, just modify the UI schema. The UI schema can also be used in a resource definition. As I showed just now, currently I have an AWS resource definition for the RDS database. And the default UI schema is like this: it contains the basic configuration and the advanced configuration, right?
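A trimmed-down UI schema of the kind described here might look roughly like the fragment below. It follows the OpenAPI schema style and the extension features named in the talk (hidden, immutable, show-if), but the exact extension syntax is an assumption for illustration.

```yaml
# Hypothetical OpenAPI-style UI schema fragment. The x-walrus-ui
# extension keys mirror the features named in the talk (hidden,
# immutable, showIf); their exact syntax is an assumption.
components:
  schemas:
    variables:
      type: object
      properties:
        engine_version:
          type: string
          default: "8.0"
          x-walrus-ui:
            immutable: true      # shown to the developer, but read-only
        username:
          type: string
        database:
          type: string
        password:
          type: string
          format: password
        instance_class:
          type: string
          default: db.t3.medium
          x-walrus-ui:
            hidden: true         # technical detail hidden from developers
```

The effect is that the rendered form only exposes the parameters a developer actually needs, while operator-controlled settings stay fixed or invisible.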
So, suppose I just want to hide the technical details from the end user. For example, when I try to create a database from my MySQL resource definition, currently I see this user interface, and I think it's too complicated: I want to remove some parameters and hide some technical details. Then I can use the UI schema. Currently, I have another, already completed, modified UI schema. I can just use this UI schema to replace the one in my definition, remove the old one, and then preview. Then it's quite simple: it only has four parameters. The database engine version, as immutable, is a read-only parameter, configured as 8.0. So users only need to input the username, the database name, and the password to create the database. It's quite simple. So, you can customize the UI, and you can control how you want to show your template or your resource definition to your end users through the UI schema. Fabulous. So, we're up over the hour now, so it's time to say goodbye. For people who want to try this out and get their hands dirty, what do you recommend that they do? There we go, the last slide. So, we're a sponsor of KubeCon Europe. If you're joining, you can visit our booth, number B321. Nice. I'll stop by and say hello. And the GitHub link I'll put back on the screen so you can also star it. If you're not going to KubeCon, that's probably a good way to learn more about the project. Yeah. Thank you so much, everyone, for joining us today. I appreciate you. It's been an especially good chat. Thank you, Pan, for teaching us about enabling developer self-service with Seal, Walrus, and OpenTofu. Thank you. Thank you, everyone, in the chat, and thank you to those of you who watch the recordings. Here at Cloud Native Live, we bring you the latest Cloud Native code on Tuesdays and Wednesdays, so that means we're doing this again tomorrow. Tomorrow, we're going to talk about Linkerd.
So, join us then, if you like, at the same time. Same bat time, same bat channel. So, thanks again. See you soon. Goodbye.