Hello everyone. Today I will talk about our managed Kubernetes service for public cloud, built with OpenStack and Kubernetes, and this year's case study: we deployed the service for a public cloud in Vietnam. This is the agenda of my talk: first, why we need a managed Kubernetes service; second, the problems with other solutions we could use to build one; next, our managed Kubernetes service architecture; and finally Q&A.

So why do we need a managed Kubernetes service? As you know, Kubernetes adoption in enterprises is rising: everyone talks about containers and Kubernetes, how to deploy services on Kubernetes, and how to containerize applications, so this was one reason for us to build a managed Kubernetes service. The next reason is that our customers asked us for a container service, because no Vietnamese cloud provider offered a Kubernetes service at that time; we are the first cloud provider in Vietnam to provide one. The last reason is that a managed Kubernetes service reduces the cost of operations for DevOps and SRE engineers, and we can leverage it for our own deployments.

Looking at existing managed Kubernetes products, we have GKE from Google Cloud, AKS from Microsoft Azure, and EKS from AWS. They provide some basic features: creating and destroying Kubernetes clusters, with managed node pools that can mix flavors (instance types). They also provide cluster upgrades, and the ability to recycle a node in a cluster; recycling a node means replacing a failed or unhealthy node in the cluster with a healthy one. The more advanced features are monitoring, so we can monitor the cluster and its workload, and auto scaling: because the Kubernetes cluster runs in a cloud, worker nodes can be scaled automatically based on some metric. The second section is the problems with other solutions.
We have used OpenStack for years, so we wanted to leverage existing OpenStack projects. When we talk about a Kubernetes service on OpenStack, we always talk about OpenStack Magnum first; the next solutions are Rancher and Gardener, and the newest is oneinfra, which was released recently.

So let's talk about OpenStack Magnum. We also researched this project, but Magnum is not a fully managed Kubernetes solution: when we define a cluster, Magnum provisions both master nodes and worker nodes, so the user still has to manage the master nodes. The second reason is that Magnum was not stable when provisioning clusters; we got errors when creating new clusters, and I think the problem is that it used Fedora Atomic to create the cluster, which is not stable in some environments. Also, cluster creation with Magnum takes too long: 10 or even 15 minutes per cluster. So after researching this project, we skipped it.

Then we found Rancher, but Rancher is not a solution for providing a managed Kubernetes service either. With Rancher version 2 we can connect Rancher to Kubernetes masters, so Rancher can manage multiple clusters, but Rancher uses RKE for provisioning Kubernetes clusters, and RKE needs access to each node in the cluster because it starts processes running in containers on the nodes. So this solution is also not fully managed, and we skipped it as well. The next solution is Gardener; after we studied this project, we found it has a great architecture.
In Gardener, the control planes of customer Kubernetes clusters are managed by another Kubernetes cluster, called the Seed cluster, and a cluster created by Gardener is called a Shoot cluster. The one issue we had is that after two months we could not make the whole thing work, because at that time Gardener had only just released its first versions. I think it was not yet stable, and our OpenStack environment is quite different from its target environments, so we dropped it.

We then decided to build our own solution, learning from Gardener; our architecture is quite similar. We developed an orchestrator engine to provision the managed Kubernetes master components: the customer's Kubernetes control plane components, including kube-apiserver, kube-scheduler, the cloud controller manager, and the Kubernetes dashboard, run on a Kubernetes cluster of ours.

This is our architecture. You can see here our orchestrator engine, with an API and an orchestrator; we call it BK, and the BK orchestrator connects to the Kubernetes Seed cluster to create the control plane components. The control plane components run as StatefulSets and Deployments in a separate namespace per cluster. We also use a VPN server to connect the workers and the control plane: the kube-apiserver needs to reach the kubelet on each worker node, but in this environment the kube-apiserver does not know how to reach the kubelet, so we need a tunnel between the control plane and the worker cluster. We use OpenVPN for this, so traffic between the control plane and the kubelets goes through the VPN tunnel, while pod-to-pod traffic inside the worker cluster goes through another network and does not enter the tunnel. For each worker cluster we use an auto scaling service.
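As a rough sketch of the idea (the namespace name, labels, image version, and flags here are my own assumptions for illustration, not our actual manifests), a per-customer control plane in the Seed cluster might look like:

```yaml
# Hypothetical sketch: each customer cluster gets its own namespace in the
# Seed cluster, and the kube-apiserver runs there as a StatefulSet.
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-1a2b3c            # one namespace per customer cluster ID
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kube-apiserver
  namespace: cluster-1a2b3c
spec:
  serviceName: kube-apiserver
  replicas: 1
  selector:
    matchLabels: {app: kube-apiserver}
  template:
    metadata:
      labels: {app: kube-apiserver}
    spec:
      containers:
      - name: kube-apiserver
        image: k8s.gcr.io/kube-apiserver:v1.18.0
        command: ["kube-apiserver", "--etcd-servers=https://etcd:2379"]  # other flags elided
```

kube-scheduler and the cloud controller manager would run as similar workloads in the same namespace, so destroying a customer cluster is just deleting that namespace in the Seed cluster.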
That service is built on top of OpenStack Senlin: we define a Senlin profile for each cluster, and we create a Senlin cluster for each node pool of a Kubernetes cluster.

For access to the control plane, for example using kubectl against a control plane running inside the Seed cluster, we use the HAProxy ingress controller with its SNI feature to route requests to the right Kubernetes control plane. For example, each cluster has an ID, and we create an endpoint per cluster from that ID; inside HAProxy we route that endpoint to that cluster's control plane, so each cluster has a different ID and a different endpoint.

On the worker side, as I said, we use OpenStack Senlin to manage the worker clusters. To trigger auto scaling of a worker cluster we use an in-house solution: we monitor each cluster (each one has its cluster ID) and compute the averaged measurement per cluster, and if the user-defined threshold for increasing or decreasing the number of nodes in the worker cluster is crossed, we call OpenStack Senlin to add or remove nodes.

Next, because we cannot expose the native OpenStack APIs to end users, we have a business layer API here: end users interact with our cloud through a middleware API that includes our business logic. Because of that we cannot use the upstream OpenStack cloud controller anymore, and we had to develop our own cloud controller manager and CNI so that load balancers and persistent volumes can be used inside the Kubernetes clusters.

After we built the managed Kubernetes service, we had some lessons learned. The first: WireGuard did not work as expected. In the first version we used WireGuard, but some traffic from the control plane to the kubelets took the wrong route. The second lesson is about the recycle node feature: we replace the current node with a new one, so the server ID changes, but the provider ID recorded in Kubernetes does not change. In this case we need
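The scaling decision described above can be sketched roughly like this (the function names and thresholds are my own assumptions, not our actual code; the call into the OpenStack Senlin API is only indicated in comments):

```python
# Hypothetical sketch of the threshold check an in-house autoscaler performs.
# Real code would query a monitoring backend per cluster ID and then call the
# OpenStack Senlin API to resize the node pool's Senlin cluster.

def average(samples):
    """Averaged measurement (e.g. CPU %) across a cluster's worker nodes."""
    return sum(samples) / len(samples)

def scaling_action(samples, scale_up_at, scale_down_at):
    """Return +1 to add a node, -1 to remove one, 0 to do nothing."""
    avg = average(samples)
    if avg > scale_up_at:
        return +1      # e.g. Senlin cluster resize: number=+1
    if avg < scale_down_at:
        return -1      # e.g. Senlin cluster resize: number=-1
    return 0

print(scaling_action([90, 85, 95], scale_up_at=80, scale_down_at=20))  # 1 (scale up)
print(scaling_action([10, 15, 5], scale_up_at=80, scale_down_at=20))   # -1 (scale down)
```

The important design point is that the autoscaler never talks to Nova directly; it only asks Senlin to change the desired size of the node pool's cluster.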
to remove the node from the Kubernetes cluster and then join it again, so Kubernetes registers the node with the correct provider ID.
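The recycle issue can be illustrated with a minimal sketch (the provider ID format and function names here are my assumptions for illustration): after Nova replaces the instance, the Node object's provider ID still points at the old server, so the orchestrator detects the mismatch and triggers the remove-and-rejoin.

```python
# Hypothetical sketch of why a recycled node must be removed and rejoined:
# the Kubernetes Node's provider ID still references the old Nova server.

def provider_id(server_id):
    # OpenStack provider IDs typically embed the server UUID; the exact
    # format here is an assumption for illustration.
    return f"openstack:///{server_id}"

def needs_rejoin(node_provider_id, current_server_id):
    """True when the Node object no longer matches the actual server."""
    return node_provider_id != provider_id(current_server_id)

old = provider_id("server-old-uuid")
# After recycling, Nova replaced the instance, so the server ID changed:
print(needs_rejoin(old, "server-new-uuid"))  # True -> delete the Node, rejoin it
print(needs_rejoin(old, "server-old-uuid"))  # False -> nothing to do
```

Deleting the Node object and re-running the join makes the kubelet register again, which refreshes the provider ID to the new server.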