Hello, everyone. I'm Jianbo Sun, an engineer from Alibaba. Today my colleague Da Yin and I will share this topic with you. I will introduce the first part, and the rest will be presented by Da Yin. Okay, let's start.

KubeVela is a modern software platform that makes delivering and operating applications across today's hybrid, multi-cloud environments easier, faster, and more reliable. It is an application-centric control plane, known in the community as an implementation of the Open Application Model (OAM). It is a powerful glue engine: on the left side of the picture, it can connect with your traditional CI systems or with modern GitOps. It then provides an infrastructure-agnostic layer to render, orchestrate, deploy, and manage your software, and finally deploys to the right side of the picture: Kubernetes clusters, multi-cloud services, or IoT and edge scenarios.

KubeVela is a control plane that encodes best practices for your platform engineering. At the top, it exposes many ways to manage it: an API, a CLI, and a very user-friendly UI console, and you can also connect it with GitOps. It provides full observability for the whole platform. Most importantly, it has a large catalog of add-ons that connect it to the whole community.

So here comes the question: why did we build KubeVela? Process improvements over the past two to three decades have significantly increased the agility of software application and product teams, offering them flexible services for infrastructure like compute, network, and storage, as well as developer services like builds, tests, delivery, and observability. But this autonomy and these process improvements have also had the effect of gradually shifting more and more responsibility for supporting services onto product teams, forcing them to spend more and more time and cognitive load on infrastructure concerns and reducing the time they have to produce value relevant to their organization.
So here is a typical application delivery pipeline. Developers have to deal with workloads of different kinds; they have to learn observability and security details to keep things stable and avoid risks; and they need to handle rollouts and traffic splitting and learn all of this infrastructure complexity. All of these things are hard to learn well. For example, a developer learns the Ingress v1beta1 API, which works well as a gateway on Kubernetes 1.21, but it suddenly fails when the cluster is upgraded to 1.22 because the API was deprecated. Developers shouldn't have to care about all these things, but in cases like this they do.

So we developed the Open Application Model in 2019. It provides a consistent, infrastructure-agnostic model for application delivery: developers just need to focus on components and traits, and after they compose a whole application, they can deploy it to different kinds of runtime platforms.

Here comes the first challenge: how can the model adapt to so many complex application delivery scenarios? We first leverage the Kubernetes API and the CRD ecosystem, which provide very detailed capabilities to connect to the real world. But we can't write a CRD operator for everything, so we leverage the CUE configuration language to program the glue logic in a very lightweight way. In the example, we use a CUE template as a component definition. It defines a user-level component and exposes a single field called image. The user then just needs to fill in the image in the application, and it will finally render the real resource, a Kubernetes Deployment. In this way, the platform builder can encode best practices and hide a lot of complexity from the end user. The platform builder can also create traits for the end user that modify fields; in this example, we expose the replicas field, and with it we can modify the component.
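To make this concrete, here is a rough sketch of what such CUE definitions can look like, in the format used by the `vela def` tooling. The definition names and the exact fields are illustrative, not the exact ones from the slides:

```cue
// --- component definition: exposes only an "image" parameter ---
"my-webservice": {
	type: "component"
	attributes: workload: definition: {
		apiVersion: "apps/v1"
		kind:       "Deployment"
	}
}
template: {
	output: {
		apiVersion: "apps/v1"
		kind:       "Deployment"
		spec: {
			selector: matchLabels: "app.oam.dev/component": context.name
			template: {
				metadata: labels: "app.oam.dev/component": context.name
				spec: containers: [{
					name:  context.name
					image: parameter.image // the single field the end user fills in
				}]
			}
		}
	}
	parameter: image: string
}

// --- trait definition (a separate file): patches the replica count ---
"my-scaler": {
	type: "trait"
	attributes: appliesToWorkloads: ["deployments.apps"]
}
template: {
	patch: spec: replicas: parameter.replicas
	parameter: replicas: *1 | int // defaults to 1
}
```

The platform builder registers definitions like these with `vela def apply`; the end user only ever sees the `image` and `replicas` parameters.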
The underlying resource is still a Deployment, so the KubeVela application stays infrastructure-agnostic: it won't be affected even if the underlying API version changes.

The next big challenge is how to balance extensibility and user experience. When scenarios become more and more complex, we end up with lots of definitions, and since these definitions are all extensible, the end user won't know which fields they contain. So we developed several ways to help users discover how to use these definitions. The first one is the API schema. With it, the platform builder can build their own UI console, integrate KubeVela with their own platform, and surface which fields a component or trait contains. The second one is docs. We built the `vela show` command, and with it you can check all the fields of a definition to learn its features: which fields it contains, how they can be used, and what their types are. This documentation is all generated automatically. The third one is the UI console. The KubeVela ecosystem has a project called VelaUX. It provides a UI console, and every component and trait definition can be converted into a form in it, so the end user gets a click-based UI and just fills in the forms. The fourth one, which we have developed recently, is SDKs for developers. Users can now use the Golang SDK: with the `vela def gen-api` command, the SDK is generated from the definitions, and we are also developing JavaScript and Python SDKs. We have also provided observability for all these definitions, so you can check the topology and relationships of any definition. For example, if you have a CRD operator installed, you can define the relationship between the custom resource and the underlying Kubernetes-native resources.
After that, VelaUX will show the topology graph automatically and display the health status.

Once we have solved these, there is usually a third challenge: when there are so many extensions in the community, how can we discover them? So we developed a catalog for all these extensions, and we call each entry an add-on. An add-on is a combination of OAM definitions and their backing capability providers. For example, if we want a Helm component, we need to define the Helm component definition, and behind it, it may be powered by the FluxCD operator. So the KubeVela add-on bundles the Helm component definition together with the FluxCD operator as one package, and users can discover the add-on in the community registry, download it, and install it very easily. The ClickHouse operator add-on works the same way. The community has already built more than 50 add-ons, and we also have quality assurance for them: some are official and some are experimental.

As a result, the workflow of KubeVela can be divided into two parts. The first part is prepared by the platform engineers: they can enforce best practices and provide deployment confidence. Usually they install add-ons from the KubeVela community, provide different component types and traits, and register them in the KubeVela control plane. Then the platform users, also called end users, can enjoy a PaaS-like experience. They just choose a target environment and compose their application by picking capabilities from the different kinds of definitions prepared by the platform team. For example, they can choose a web service component and a database component and just fill in the images, replicas, and the few other fields they really need to know about. Finally, this is composed into a KubeVela application and deployed to the control plane, and KubeVela handles the rest.
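As a sketch, an end-user application composed this way might look like the following YAML. The `webservice` and `scaler` types are standard built-ins; the application name, image, and cluster name are hypothetical:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-app                 # hypothetical application name
spec:
  components:
    - name: frontend
      type: webservice         # component type prepared by the platform team
      properties:
        image: nginx:1.21      # one of the few fields the end user fills in
      traits:
        - type: scaler         # trait prepared by the platform team
          properties:
            replicas: 3
  policies:
    - name: target-env         # the chosen target environment
      type: topology
      properties:
        clusters: ["cluster-prod"]
```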
It will render the user's application along with all these definitions and translate them into Kubernetes API objects, which of course can include different kinds of custom resources. We have also developed the Terraform Controller to connect to cloud resources.

Above, we introduced the abstraction of the Open Application Model. But the next challenge is that the modern application delivery process is very complex. It involves not only containers but also integration with cloud resources and SaaS APIs. For example, in a typical workflow, we may run CI in a CI service, take the image ID, package it into artifacts, and then do a canary deployment. After that, we may want to query an observability service to check a quality gate and do some conditional checks, and finally promote the workloads to multiple clouds or multiple clusters. Once all of that succeeds, we may trigger a notification to Slack or email. This can be a very complex orchestration integrating lots of third-party services. Can we orchestrate and manage all of it in a unified way? That is a very big challenge, and we have managed it.

The first thing we managed: we can naturally define multi-cluster deployment in the model. We treat multi-cluster as a first-class citizen in KubeVela. We extended the Open Application Model with a policies field, where we can define the topology of the application: which clusters to deploy it to and how to override parameters when deploying to specific clusters. We also developed a component called cluster-gateway that provides a unified way to integrate different multi-cluster technologies. In the push model, we connect Kubernetes clusters simply by using their kubeconfigs and deploy resources directly. The other way is a pull model, where we leverage the Open Cluster Management project.
Open Cluster Management installs an agent in each cluster, and the agent connects back to the cluster-gateway. At the cluster-gateway layer, we therefore have one unified way to speak to all these kinds of multi-cluster technologies.

The second thing is: how can we customize all these steps and connect to third-party services? The solution is the same: we leverage the CUE configuration language as glue code in what we call workflow step definitions. In such a definition, we write CUE code, and we have built-in operations under `vela/op`. With these operations, we can load resources, apply components, and do things like conditional waits, while other operations speak to the clusters, send emails, or call third-party services. So the workflow of a KubeVela application has rich capabilities and process control. It also gets schema checking with the help of CUE, and it shares the same mechanism as component and trait definitions: it is fully extensible, programmable, and reusable.

As a result, you can deploy component A leveraging an underlying Kubernetes CRD, maybe the Terraform Controller, to different clouds; component B can be a Helm chart; and component C can be an OpenKruise workload. It can carry traits that do auto-scaling leveraging the KEDA project and service mesh features leveraging the Istio project, plus some custom steps written in CUE and, finally, notifications. The application delivery behind KubeVela is a consistent, programmable, declarative workflow.

Okay, the rest will be presented by my colleague. Hello, everyone, my name is Da Yin. I am a maintainer of KubeVela and also a senior engineer at Alibaba Cloud. Next, I will go on to discuss how KubeVela handles day-two application management.
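To illustrate the workflow-step definitions just described, here is a minimal sketch in CUE using built-in `vela/op` operations. The step name, parameter, and readiness condition are illustrative assumptions, not a verbatim example from the talk:

```cue
import "vela/op"

// hypothetical workflow step: apply one component, then wait for readiness
"apply-and-wait": {
	type: "workflow-step"
}
template: {
	// apply the named component's resources
	apply: op.#ApplyComponent & {
		component: parameter.component
	}
	// read the resulting Deployment back from the cluster
	resource: op.#Read & {
		value: {
			apiVersion: "apps/v1"
			kind:       "Deployment"
			metadata: {
				name:      parameter.component
				namespace: context.namespace
			}
		}
	}
	// block the workflow until all replicas report ready
	wait: op.#ConditionalWait & {
		continue: resource.value.status.readyReplicas == resource.value.spec.replicas
	}
	parameter: component: string
}
```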
As we mentioned previously, KubeVela leverages the Open Application Model to define applications and handle the delivery of Kubernetes resources. As we know, one of the main design principles of Kubernetes is the declarative API, which covers both the day-one delivery of resources and their day-two management, and so does KubeVela: it follows this principle and makes the application API declarative. As shown in this slide, KubeVela provides a wide range of day-two application management capabilities, including configuration drift prevention, automated garbage collection, resource sharing, version control, customized observability, and stability and scalability features for the control plane. I will give a brief overview of all these features. There are also more advanced features, such as the security capabilities, and you can go to the official KubeVela documentation site to see more.

Okay. So the first thing I would like to introduce, and one of the most important features that makes KubeVela more than just a delivery tool, is that it continuously watches the applied resources and makes sure they do not diverge from the declared state. This prevention of configuration drift gives KubeVela users strong control over the underlying resources dispatched by the application. If anyone changes a resource managed by an application, KubeVela will bring it back. For example, if a Deployment is applied by a KubeVela application and someone outside changes the Deployment's image field, the application will observe that change, recognize that it is not the desired state of the Deployment, and revert the image field. In this way, KubeVela helps users enforce that all edits are made through the application entry point.
Another thing to mention: although configuration drift prevention is the default behavior for KubeVela applications, there are also customizable rules that let users manually allow configuration drift for some resources to a certain extent. For example, in an HPA scenario, you need to let an outside controller dynamically adjust the replicas field of a Deployment. In that case, KubeVela users can declare that the replicas field of the Deployment is mutable, so that KubeVela keeps all other fields updated to the desired state but lets other controllers or users edit the specified fields.

Like other delivery tools, KubeVela also handles garbage collection automatically during application upgrades. Applied resources that are unused in the new version of an application are recycled once the application has been successfully upgraded. For example, the first version of a KubeVela application uses Deployment X, but when we upgrade the application to version two, it switches to Deployment Y and Deployment X is not used anymore. Once version two of the application has successfully reached the new state, the old resource is recycled automatically. It is also possible for KubeVela users to customize the garbage collection strategy, for example to let some resources be recycled only when the application is deleted, not when it is upgraded. We have seen scenarios where KubeVela users deliver storage with an application, such as a PersistentVolumeClaim, and that storage is intended to be kept even if the application is deleted. In that case, users can specify which garbage collection strategy should be used for PersistentVolumeClaim-type resources.

Another feature of the KubeVela application is that it supports resource sharing across different applications.
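The two customizations just mentioned, mutable fields and a per-resource garbage-collection strategy, can be declared as application policies. A sketch, with field names as documented in recent KubeVela releases (treat the exact selectors as assumptions):

```yaml
policies:
  # let an external controller such as an HPA own spec.replicas
  - name: allow-replica-drift
    type: apply-once
    properties:
      enable: true
      rules:
        - selector:
            resourceTypes: ["Deployment"]
          strategy:
            path: ["spec.replicas"]
  # keep the PVC until the application itself is deleted, not on upgrade
  - name: keep-storage
    type: garbage-collect
    properties:
      rules:
        - selector:
            resourceTypes: ["PersistentVolumeClaim"]
          strategy: onAppDelete
```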
In some cases, multiple applications need to use the same resource; for example, they need to share a namespace or some ConfigMap. They want to say: if the resource does not exist, I will create it, but if it does exist, I will reuse it instead of creating a new one. Here we can use the resource sharing capability in KubeVela to let multiple applications deploy the same resource. It supports one writer and multiple readers: once the writer goes away, the next reader becomes the successor writer, and only the writer is able to mutate the shared resource. The last application using the resource takes charge of recycling it when it is deleted.

Another key capability of the KubeVela application is version control. KubeVela keeps a certain number of revisions of past application states, and users can make comparisons between different versions; we have tools to help users inspect the changes across versions, for example to see that the image field changed between the current revision and the last one. It is also possible for a user to declare that when a new version fails, it should roll back to the previously succeeded version, or to pick some old version to re-publish, so that the user can reuse a known-good version and quickly bring the application back to a successful state. In production usage, this makes it much easier to handle exceptions.

Observability is also a first-class citizen in the KubeVela system. KubeVela integrates community projects via add-ons and helps users build application-centric observability, for example using Prometheus to collect metrics and Grafana to build dashboards and see how metrics change or how logs are parsed.
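Going back to resource sharing for a moment: it is enabled per application with the `shared-resource` policy, roughly like this (the rule selector here is an illustrative assumption):

```yaml
policies:
  - name: shared
    type: shared-resource
    properties:
      rules:
        - selector:
            resourceTypes: ["Namespace", "ConfigMap"]
```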
Okay, so with the abstraction layer powered by CUE, as we saw with the definitions previously, it is similarly easy to encode your observability rules as KubeVela definitions and therefore provide out-of-the-box infrastructure for developers to use. For example, you can define how to collect a component's metrics, automatically transform them into aggregated metrics, and then plot them on a Grafana dashboard. You can package this as a trait definition to attach to your component, so that without much extra effort your application's components are able to generate dashboards automatically in Grafana.

Besides, with a large number of companies deploying thousands of KubeVela applications in production environments today, stability and scalability issues are increasingly being discussed in the community. System monitoring tools like dashboards and metrics provide a comprehensive view of the health of the KubeVela control plane. Just by enabling a few simple add-ons in KubeVela, you get out-of-the-box monitoring dashboards that show how your controllers work, how many applications are running in your system, how many of them are healthy, and whether your controllers are consuming too much CPU or memory. There are also a lot of auxiliary dashboards: for example, you can check the latency of your Kubernetes API server, the resource consumption of each deployment, and pod details like network I/O and disk I/O as well. These out-of-the-box dashboards and metrics help system operators see the exceptions and performance bottlenecks behind the system. Beyond that, you can build customized tuning strategies depending on what your metrics expose.
Depending on their usage, there are different tuning strategies users can apply to optimize the KubeVela control plane. As for scalability, we have used load testing to show that KubeVela is capable of both vertical and horizontal scaling. Users can not only boost system capacity by giving the controller more compute resources; it is also possible to add multiple shards to the control plane and let each controller handle a particular subset of the applications. This makes it possible to build a multi-tenant system where each controller handles particular tenants' applications. Another benefit of horizontal scaling is that it reduces the blast radius of unexpected disasters. For example, if you run multiple controllers on different nodes and one of the nodes fails, only the applications currently managed by that controller are affected, while other applications and other controllers are not. With this strategy, the KubeVela controller can isolate application runtimes from each other and limit possible disasters.

Besides, our load testing also shows that KubeVela is almost agnostic to the number of nodes and clusters below the control plane. We have tested that a single KubeVela control plane can handle over 200,000 applications across hundreds of Kubernetes clusters, and it is feasible to do that. We have lots of production users now; only a few of them have faced performance issues, and they all have customized tuning strategies, provided by KubeVela, to solve them and improve performance.

The ecosystem of KubeVela goes beyond basic application delivery. For example, the Vela CLI and VelaUX provide different ways for users to access KubeVela applications.
If you would like to use a web UI to access your KubeVela control plane and manage your Kubernetes applications, you can use VelaUX, and there are also a lot of out-of-the-box commands embedded in the Vela CLI that give you a very detailed view of the application and of the system as well. We also have Vela Workflow. It embeds the basic capability to orchestrate the delivery process; its core engine is used inside the KubeVela application, as we have seen, but it is also possible to run a standalone WorkflowRun to orchestrate a delivery process outside of an application. For example, you can use a trigger to fire events and let the WorkflowRun execute some daily jobs; it is lightweight and simple to use. To empower users to manage cloud resources and multiple clusters on the KubeVela control plane, we have the Terraform Controller and the cluster-gateway to help integrate different types of resources, and it is also possible to integrate with other community projects like OpenYurt or KubeEdge to make edge and IoT devices accessible from the KubeVela control plane. Similarly, you can use KubeVela add-ons to install popular projects like Argo CD and FluxCD, and there are already over 50 community add-ons available out of the box, so you can install them without much customization.

KubeVela by now has been applied across various areas, such as commercial banks, car manufacturers, and cloud providers. Some users use KubeVela as an underlying platform to manage multi-cluster resources, and others build customized platforms on top of the community version and provide application platforms for their internal developers.
The KubeVela community has grown a lot since it started three years ago. Over 300 contributors from various countries have participated, and we hold bi-weekly community meetings to welcome open discussion of new features and ideas. So welcome to the KubeVela community; we are looking forward to new participants and your new ideas as well. That's all for my sharing. Thanks, everyone.