Hi everyone, I'm Jia Hongshu from China Merchants Bank. Let me briefly introduce myself. I have 13 years of software research and development experience across the technology, automotive, and financial industries. Recently, I have focused on cloud-native and open-source knowledge and put it into practice. Currently, I'm responsible for the cloud-native adoption plan at China Merchants Bank. I'm glad to have the chance to share our enterprise practice experience. So let's start. Today's topic is the production practice of a large-scale financial application platform at China Merchants Bank. And this is my co-presenter, Jianbo Sun, a staff engineer at Alibaba Cloud, who is also my friend. Both of us are KubeVela maintainers. Today's agenda includes four parts: background, practice in CMB (CMB is short for China Merchants Bank), an introduction to KubeVela's essential features, and a demo of those features. I will take the first two parts; Jianbo will take parts three and four. Okay, let me give a brief introduction to China Merchants Bank. China Merchants Bank is a commercial bank, ranked 14th among the world's banks in 2021. We have two major mobile apps in the App Store: Mobile Bank and CMB Life. Both apps have a large customer base. As of the end of June, Mobile Bank had 178 million customers, and CMB Life had 132 million. So China Merchants Bank has developed a reputation globally for being a technology-focused bank. We are on the way to cloud-native, and we face real challenges in the application domain. In recent years, we have established large-scale private clouds in our organization. From the high-level private cloud architecture diagram, we can see there are hypervisors, virtual machines, Kubernetes, serverless, Hadoop, some dedicated runtimes, and a huge amount of infrastructure.
And we provide diverse app workloads and middleware services in each corresponding domain or runtime. With so much variability across applications, capabilities, runtimes, and infrastructure, it's really complex and challenging for us. That means we cannot be agile or highly efficient. So both application engineers and platform engineers have issues to be resolved. On the application engineer side, they want to decouple applications from the runtime and infrastructure; they also want applications to support canary delivery for business stability and security, and application observability for maintenance. And there are corresponding needs on the platform engineer side. How can we solve these with the cloud-native ecosystem? First, we think the most important thing is to decouple applications from the runtime and infrastructure. So we follow the Open Application Model (OAM) and its implementation, KubeVela, to design our application model. We divide and conquer, abstracting and encapsulating our application workload use cases. In the component definition domain, we define a component definition for each workload, each one mapping to a unique application use case. For example: frontend for static sites, web-service for HTTP API servers, vm-service for virtual machine services, task for scheduled jobs, api-gateway, mysql, flink, and more workloads like these; the usage is obvious from the name. Secondly, we define many trait definitions as part of the OAM model. They are application-oriented concepts, easy to understand and use. Application engineers will be happy, of course. Platform engineers provide each definition via CUE templates with KubeVela. So we can achieve separation of concerns with OAM and KubeVela: separation of concerns to decouple applications from the runtime. Since we can define our application model and then describe and declare it, we can extend it and define many useful features in the same way.
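To make the component idea concrete, here is a minimal sketch of an OAM Application using component types like the ones just described. The type names follow KubeVela's built-in conventions (webservice, task); CMB's internal definitions and the image names are illustrative, not the bank's actual configuration.

```yaml
# A minimal OAM Application sketch: each component picks a workload type
# defined by platform engineers, and application engineers only fill in
# application-oriented properties.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo-app
spec:
  components:
    - name: api-server
      type: webservice        # HTTP API server workload
      properties:
        image: example/api:v1 # hypothetical image
        port: 8080
    - name: nightly-report
      type: task              # one-off/scheduled job workload
      properties:
        image: example/report:v1
```

The application engineer never touches Deployments or Services directly; the platform team's CUE templates render those behind each `type`.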
Now we can define canary delivery as a trait. On the left is an example. There are four components, one of type frontend and one of type web-service among them, with a topology of trait dependencies. In one sprint, the frontend and web-service related components should be released end-to-end at the same time; they should be integrated and released end-to-end at once. Both the frontend and web-service components go from version one to version two, so step two is the most important step for traffic control. Canary validation works with dynamic traffic steps: the user can configure traffic from zero percent up to 100% gradually toward the canary instances, shown in this diagram, for validation. In this step, there are three very important points. First, we need to choose the first-hop component at the entry to tag the traffic for control. Moreover, it is necessary to route the tagged traffic only to the canary version-two instances. And specifically, only the frontend and web-service components can be affected; other components in this topology must not be affected. Now we can validate both canary version-two instances with tagged traffic end-to-end. In step three, we have two choices. If the canary version is qualified, the canary becomes the version-two baseline, and the version-one instances are removed. If the canary version is not qualified, 100% of the traffic goes back to version one, the canary version-two instances are removed, and the baseline has no version change. On the right, there are two snapshot illustrations. We implement this feature with the KubeVela workflow engine and a canary-route trait definition, plus Envoy for dynamic traffic control. Here is the YAML file that describes and declares the workflow and the canary-route trait. In the same way, we can also integrate app observability and SLO traits. As the diagram shows, there is a demo app, the same demo app; it will generate the logs, traces, and metrics of the app.
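The canary flow just described could be declared roughly as follows. This is a hedged sketch: the trait name `canary-route`, its fields, and the step layout are illustrative stand-ins for CMB's internal definitions, though `apply-component` and `suspend` are standard KubeVela workflow step types.

```yaml
# Sketch of canary delivery as a trait plus a workflow: tag traffic at the
# entry component, shift a percentage to v2, pause for validation, then
# promote or roll back.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo-canary
spec:
  components:
    - name: frontend
      type: webservice
      properties:
        image: example/frontend:v2   # hypothetical v2 image
      traits:
        - type: canary-route         # hypothetical trait: tags and routes traffic
          properties:
            canaryWeight: 10         # start by sending 10% of tagged traffic to v2
  workflow:
    steps:
      - name: deploy-canary
        type: apply-component        # bring up canary v2 instances
        properties:
          component: frontend
      - name: validate
        type: suspend                # pause: operators validate v2 end-to-end
      - name: promote
        type: apply-component        # if qualified, v2 becomes the baseline
        properties:
          component: frontend
```

If validation fails at the `suspend` step, the workflow is terminated instead of resumed, traffic returns to version one, and the canary instances are removed.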
This observability data will be encoded as OpenTelemetry (OTel) data and sent to a time-series database, a trace database, and column storage for analysis and dashboards. If we want to control which instances are connected and the connection routes, we can define a metrics trait and apply it to the corresponding components; the components will then be observed with OpenTelemetry. If we want to support SRE practices for components, we can also define an SLO trait plugin to integrate the OpenSLO spec. We can configure service level objectives, SLI metrics, and alert policies. There are metrics and SLO trait samples in the file snapshot. I have just chosen some typical practices as examples to show that KubeVela can serve as the foundation of an application platform. We used KubeVela to build a cloud-native application platform to achieve this. Look at the diagram: the platform is an add-on-based architecture that includes three main parts: a workload layer, a trait layer, and the OAM engine. The workload layer includes all component definitions and their controllers. The trait layer includes all trait definitions and their controllers, and Vela serves as the OAM engine. So we can see all types are packaged as add-ons. Especially in the trait layer, we support app-meta, app-config, app-release, app-route, app-observe, app-dependency, app-gallery, and app-elastic, and we can support more traits. With an application-uniform control plane, we can achieve separation of concerns to reduce complexity, with a transparent runtime. So my part is finished. Next, Jianbo will introduce KubeVela's important features and present the demo. Okay, thanks to Hongshu; he has introduced the large-scale practices at China Merchants Bank. Now I will introduce some KubeVela essentials, and I will give you a quick demo at the end. My name is Jianbo. I come from Alibaba Cloud. I'm a maintainer of KubeVela. The biggest highlight of KubeVela is its extensibility, and the extensibility follows the OAM model.
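Attaching those observability traits to a component might look like the sketch below. The trait names (`metrics`, `slo`) and their fields mirror the talk's description rather than a published KubeVela API, so treat every field here as an assumption.

```yaml
# Illustrative sketch: a component decorated with a metrics trait
# (scraped via OpenTelemetry/Prometheus) and an OpenSLO-style SLO trait.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo-observed
spec:
  components:
    - name: api-server
      type: webservice
      properties:
        image: example/api:v1
        port: 8080
      traits:
        - type: metrics              # hypothetical trait: exposes/scrapes metrics
          properties:
            path: /metrics
            port: 8080
        - type: slo                  # hypothetical trait: OpenSLO-style objective
          properties:
            sliMetric: http_request_success_ratio  # the SLI to measure
            objective: 99.9                        # target, in percent
            window: 30d                            # rolling evaluation window
```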
The end user just needs to write the component type; it references a component definition in a registry, and the registry follows the OAM model. The definition has a consistent API interface and is platform-agnostic; it can be implemented by different projects. For example, if we want to deploy a Helm chart, we can use either Argo CD or Flux CD to implement it; they are the capability providers. Users can glue definitions and projects together into an artifact. In the KubeVela world, this artifact is called an add-on. We have built lots of out-of-the-box capabilities: a stable KubeVela control plane, which is the core part of KubeVela, and lots of add-ons, including Flux CD, Argo CD, and VelaUX, the UI console of KubeVela. We also have Crossplane and Terraform integrations, Open Cluster Management for multi-cluster deployment, OpenYurt for IoT edge, and more than 50 add-ons in total. We also have a lightweight installation called VelaD; you don't need to rely on any existing Kubernetes cluster for the control plane. Now, I will give you a quick demo of how it works. You can find the scripts of all the demos at the link. Here is a virtual machine; in my demo it has Ubuntu installed. You can get VelaD here, and we will install the KubeVela control plane and bind the public IP for the demo. Now it's installing. You can also install KubeVela on any existing Kubernetes cluster; you just need to install it with Helm. Here is a folder with my demo. Okay, it has started now. VelaD actually leverages K3s as the Kubernetes cluster, so we also have a Kubernetes config here; we can read it. You can also copy it to your local machine for convenience and export it as the kubeconfig. We can also install some add-ons. For example, we want to see the UI console of KubeVela, VelaUX. VelaD provides air-gapped installation, so it installs very fast. We use the command here; all the images are already available locally. Okay, now we can reach it. This is the dashboard, and the initial password is here.
Let's use it: change the password and set the email. Okay, now we have the KubeVela dashboard. It supports two languages. Here is one application already installed: the VelaUX add-on is itself a KubeVela application. We can see the components and the underlying resources, and we also have the service endpoint. As you can see, all KubeVela ecosystem capabilities are built as add-ons; we already have more than 50 add-ons here. The most interesting ones are the observability add-ons. Let's enable them. This is the Prometheus add-on. Okay, I have enabled it; we can check the status here, and it will bring up a Prometheus instance. Let's enable other add-ons such as node-exporter, and let's also enable kube-state-metrics. And we also enable the Grafana definition; it provides CRD-style operations on the Grafana API. This is a very important capability: we can manage Grafana dashboards as code in KubeVela. This add-on brings up a Grafana server. Let's change it to a NodePort and enable it. Okay, now we have several add-ons enabled, and it's already running. Let's check the Grafana dashboard here; the password is "kubevela". Here are four dashboards that have been added automatically. One is the KubeVela system metrics; it provides a very detailed view of the KubeVela system state, so we can check the system health status. It also shows which components and traits we have, and other workflow steps. Another important dashboard I want to show you is the KubeVela application panel. It gives application details: you can see which components are in an application and what the backend resources are, and you can check the details. We also have some Kubernetes API server metrics. KubeVela has lots of component types, and when you install add-ons, they also provide component types. I will show you the webservice component type on the web. As you can see, the webservice component has several parameters. You can deploy it here or use the command line.
You can use `vela show webservice` to learn what these arguments are, and also see some examples of how to use it. Here are the parameters and details. Now I'll show you how to build your own definitions to extend KubeVela's capabilities. Let's start creating a definition. Here is a Java definition: we just want to deploy a WAR using Tomcat. It's a very basic definition; we just render two objects, one Kubernetes Deployment and one Kubernetes Service, and we have defined some arguments. Let's deploy it with `vela def apply`; the definition is java-war. Just now, we didn't have this component type. If we refresh the website, we can now see the java-war component type here, and we can deploy some WARs with it. For example, we use a simple WAR here. Let's create and deploy it. You can also deploy this application YAML. Now it's creating, and you can see the logs, the instance status, and the topology. Okay, it's running now; we can wait for it. And it has the suffix "simple". Okay, we have succeeded in showing you how to define a definition. Next, let's build an add-on. I chose a ClickHouse add-on. I just searched for a ClickHouse operator, so I will use the clickhouse-operator. The first time I saw the clickhouse-operator, I just found the installation step, which is a YAML here; it all installs from one YAML. So if we want to build an add-on, the easiest way is to copy the YAML bundle here and use the ref-objects capability of KubeVela. We also add a trait to scrape the Prometheus metrics, so that we can also provide a dashboard. The dashboard I found is just an image, so we use a webservice to deploy the image and provide a service account to give it the capability to access the cluster resources. I also found that it provides some Grafana dashboards.
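The java-war definition from the demo could be sketched like this: a ComponentDefinition whose CUE template renders a Deployment plus a Service. The definition name, Tomcat image, and parameters are illustrative guesses at what the demo used, not the exact demo file.

```yaml
# Sketch of a ComponentDefinition: the CUE template turns a few
# application-oriented parameters into a Deployment and a Service.
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: java-war
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    cue:
      template: |
        parameter: {
          image: string        // app image containing the WAR, e.g. built on Tomcat
          port:  *8080 | int   // service port, defaults to 8080
        }
        output: {
          apiVersion: "apps/v1"
          kind:       "Deployment"
          spec: {
            selector: matchLabels: app: context.name
            template: {
              metadata: labels: app: context.name
              spec: containers: [{
                name:  context.name
                image: parameter.image
                ports: [{containerPort: parameter.port}]
              }]
            }
          }
        }
        outputs: service: {
          apiVersion: "v1"
          kind:       "Service"
          metadata: name: context.name
          spec: {
            selector: app: context.name
            ports: [{port: parameter.port, targetPort: parameter.port}]
          }
        }
```

After applying it with `vela def apply`, the new `java-war` type shows up alongside the built-in component types, exactly as in the demo.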
When a user deploys the clickhouse-operator directly, it's very hard to integrate the Grafana dashboard and Prometheus. But KubeVela has powerful capabilities to do this: we just need to copy the dashboard JSON here, and then we can install this add-on. We can install the add-on with `vela addon enable` on this folder; this folder is the ClickHouse add-on. Let's enable it with service type NodePort so we can check ClickHouse. Okay, it's installing. We also add a component type called clickhouse, so we can deploy apps with ClickHouse capability; it actually uses the ClickHouseInstallation CRD. KubeVela also provides the capability to define which resources sit behind the ClickHouse CRD. We have already installed it. The dashboard is here; we can see it, and it's provided by the clickhouse-operator. We are also able to create a clickhouse component with several parameters. Let's create one ClickHouse instance: choose clickhouse, bind it to the default environment, and install it with NodePort. Okay, create it and deploy it. KubeVela has the power to check the underlying StatefulSet and Pod state. How can KubeVela do that? In the add-on, we can also define the resource topology: we declare that a ClickHouseInstallation has child resources such as StatefulSets and Services. Okay, we have finished deploying the ClickHouse application, and we can check it here. Oh, the ClickHouse app has the suffix "play". Okay, now it's deployed, and we have a Grafana dashboard for ClickHouse. Let's go back to the Grafana dashboard. The magic power is here: we have automatically installed the clickhouse-operator dashboard. The dashboard is provided by the clickhouse-operator community; we just copied it and automatically glued it in here. Thanks everyone. That's the end of our demo, because time is limited.
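The resource-topology rule mentioned above could be sketched as follows. KubeVela reads topology rules from labeled ConfigMaps in the control plane; the field names below follow that convention as I understand it, so treat the exact schema as an approximation.

```yaml
# Sketch of a topology rule for the ClickHouse add-on: it tells KubeVela
# that a ClickHouseInstallation's children include StatefulSets and
# Services, so their status shows up in the resource topology view.
apiVersion: v1
kind: ConfigMap
metadata:
  name: clickhouse-topology
  namespace: vela-system
  labels:
    rules.oam.dev/resources: "true"   # marks this ConfigMap as topology rules
data:
  rules: |
    - parentResourceType:
        group: clickhouse.altinity.com
        kind: ClickHouseInstallation
      childrenResourceType:
        - apiVersion: apps/v1
          kind: StatefulSet
        - apiVersion: v1
          kind: Service
```

With this rule in place, checking the clickhouse application's topology drills down from the CRD instance to the StatefulSets and Pods underneath it, which is what the demo showed.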
There are still many powerful KubeVela features we didn't show you, such as the programmable workflow and multi-cluster application delivery and management. Finally, I want to share the recent KubeVela roadmap with you. We want our end users to have more choices in how they use KubeVela, such as Kubernetes custom resources, the API, the command line, or the UI console, along with GitOps and more powerful observability. We will enhance Vela Core to make it more extensible, easy to use, and flexible. We will also enhance security and stability. We will enhance our add-on ecosystem with higher-quality add-ons and easy-to-use add-on toolkits to make building your own add-ons easier. And we will build some interesting add-ons to give the KubeVela ecosystem more powerful features and scenarios. We hope all of you will join the KubeVela community to experience its powerful features. Thanks everyone. Hope to see you at the KubeVela community call.