Hi everyone, I'm Marco Lorini and I work in the Distributed Computing and Storage Department of GARR. Today I'll show you the work I have done with my colleagues, Alex Barchiesi and Matteo Di Fazio. The work is about a declarative chain of multiple Kubernetes clusters, to automate multi-region high availability for workloads. We can start with a brief introduction of the GARR Cloud architecture. It is composed of several layers. We have a physical layer, in which we have installed the MAAS tool to configure the hardware of the cloud. On top of that we have the operating system and virtualization layers. On these we use the Juju tool to deploy OpenStack in the application layer. We use OpenStack to provide IaaS services for the GARR Cloud users. In the last year we wanted to create Kubernetes clusters on top of OpenStack, and for this we created Juju bundles, published in the Juju store, that allow creating a Kubernetes cluster in an easy and fast way. In this way we have the possibility to create multiple Kubernetes clusters in different regions of the GARR Cloud, and this represents the declarative part of the project. Kubernetes ensures high availability for applications and services inside a single cluster, but we want to ensure the same high availability beyond a single Kubernetes cluster. So we asked ourselves: can we build an infrastructure that offers its users multi-region HA? We found the answer in KubeFed and ExternalDNS. The full name of KubeFed is Kubernetes Cluster Federation; in fact, when we talk about joining Kubernetes clusters, we talk about federation. In this presentation I'll show you what KubeFed is and how it works, and afterwards I'll show you ExternalDNS and how it works together with KubeFed. KubeFed is a tool that allows coordinating the configuration of multiple Kubernetes clusters from a single set of APIs.
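As a rough illustration of the Juju-based step described above, a Kubernetes cluster can be deployed from a published bundle with a few commands. This is only a sketch: the model name is made up, and the bundle name shown is the generic charm-store one, not necessarily the GARR bundles mentioned in the talk.

```shell
# Hypothetical sketch: deploy a Kubernetes bundle from the charm store
# into a dedicated Juju model (model and bundle names are illustrative).
juju add-model k8s-region1
juju deploy charmed-kubernetes
# Watch the deployment converge:
juju status --watch 5s
```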
It provides a mechanism for managing multi-region applications, so as to ensure a disaster-recovery system. KubeFed is composed of two parts: the control plane on the server side, and kubefedctl on the client side. The control plane is the core, the manager of the federation: it is the tool that coordinates all the Kubernetes clusters in the federation. kubefedctl is a client similar to kubectl, but it is used to configure the control plane. In KubeFed we have two actors: the host cluster and the member clusters. The host cluster has the managing role inside the federation; in this cluster we install and configure the control plane, and for a single federation we have exactly one host cluster. The member clusters have the executing role: in fact, when we deploy a resource, the resource is instantiated in the member clusters. We can have many member clusters, created also in different regions of the cloud. But how can the host cluster manage the member clusters? Through an important mechanism in KubeFed: propagation. To create a federation, it is necessary that the kubeconfig files of the member clusters are injected into the control plane. In this way the control plane knows who the members of the federation are and how it can access them. To start the propagation mechanism, the user defines federated resources in the host cluster. The control plane then creates an object that describes and represents the federated resource, propagates this information to the member clusters, and creates an instance of the resource in each member cluster: for example, a deployment, a service, a namespace, and so on. A federated resource is composed of three important components: the template, the placement, and the overrides. The template is the classic definition of the Kubernetes resource.
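The step of injecting the member clusters' configuration into the control plane is done with kubefedctl. This is a minimal sketch; the context and cluster names (member1, member2, host) are illustrative and would come from your own kubeconfig.

```shell
# Register two member clusters with the KubeFed control plane
# running in the host cluster (context names are illustrative).
kubefedctl join member1 --cluster-context member1 \
    --host-cluster-context host --v=2
kubefedctl join member2 --cluster-context member2 \
    --host-cluster-context host --v=2

# Verify that both clusters have joined and are ready:
kubectl -n kube-federation-system get kubefedclusters
```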
In the placement we add a list or set of the member clusters in which we want to create the resource. The overrides, which are an optional field, represent a set of changes that we want to apply to the resource in a specific member cluster. This is the general schema of a federated resource. In the kind we define the type of the federated resource: for example, we can have a FederatedNamespace, a FederatedDeployment, a FederatedService, and so on. Then we define the name of the resource and the namespace in which we want to create the federated resource inside the member clusters. Finally we define the three components: template, placement, and overrides. This is a simple example of a FederatedDeployment that we used to test KubeFed. You can see that in the template we have the definition of a classic Kubernetes Deployment: inside it we define the number of replicas we want for this deployment, the specific Docker image we want to use, and so on. In the placement we add the list of member clusters in which we want to create the resource, for example member cluster one and member cluster two. And in the overrides we have a change to this deployment for member cluster two: in this case, we add one replica in member cluster two. In fact, when this FederatedDeployment is created, we have two replicas in member cluster one and three replicas in member cluster two. This is the architecture of KubeFed: we have a single host cluster, in which we have installed the control plane and injected the kubeconfig files of member cluster one and member cluster two.
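The FederatedDeployment described above, with two replicas in member cluster one and three in member cluster two, would look roughly like this. The resource, namespace, and cluster names are illustrative; the structure follows the KubeFed v1beta1 API.

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: test-deployment        # illustrative name
  namespace: test-namespace    # must be a federated namespace
spec:
  template:                    # classic Deployment definition
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 2              # default replica count
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.17  # the Docker image to deploy
  placement:                   # member clusters that receive the resource
    clusters:
    - name: member-cluster-1
    - name: member-cluster-2
  overrides:                   # per-cluster change: one extra replica
  - clusterName: member-cluster-2
    clusterOverrides:
    - path: "/spec/replicas"
      value: 3
```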
Then we can see the definitions and objects for the federated resources, for example a FederatedNamespace and a FederatedDeployment, and the arrows represent the propagation mechanism that creates the Kubernetes resources inside the member clusters. So now we have an infrastructure that allows us to create the resources we want, for example deployments and services, in different Kubernetes clusters created in different cloud regions. Next, we want a mechanism that unifies the services under the same domain name. So we asked ourselves: how can we automate the management of DNS records? The answer is ExternalDNS. It is a tool that makes Kubernetes resources discoverable through DNS providers: it retrieves a list of the resources present in a cluster, such as Services or Ingresses, and configures a specific DNS provider accordingly. But how can it retrieve these resources and this information across multiple clusters? Through Multi-Cluster Ingress DNS. This is a tool that implements the mechanism to retrieve, in the member clusters, the information about Services and Ingresses; by information we mean the IP addresses of the services. This mechanism retrieves these resources and this information across multiple Kubernetes clusters. The tool is used in the federation context and integrates very well with ExternalDNS. This is a simple schema of the Multi-Cluster Ingress DNS mechanism. To start the mechanism, the user defines a single object, the IngressDNSRecord: the user defines the name of the resource he wants to retrieve in the member clusters and the domain for which he wants to retrieve the IP addresses. Once this object is defined by the user, an IngressDNS controller is automatically created, which retrieves the specific Ingress and the specific domain in the member clusters of the federation.
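The single object the user defines would look roughly like this. The name and namespace are illustrative (the name must match the federated Ingress to look up, as explained later), and the host is a placeholder domain; the API group is the one used by KubeFed's multi-cluster DNS feature.

```yaml
apiVersion: multiclusterdns.kubefed.io/v1alpha1
kind: IngressDNSRecord
metadata:
  name: test-ingress           # must match the name of the target Ingress
  namespace: test-namespace
spec:
  hosts:
  - test.example.org           # domain whose IP addresses we want to collect
  recordTTL: 300               # TTL for the resulting DNS records
```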
Once this controller has retrieved the information, it fills in the IngressDNSRecord. Then, once this object is populated, another controller, the DNSEndpoint controller, starts, reads the information inside the IngressDNSRecord, and translates it into another Kubernetes object, the DNSEndpoint. This is important because ExternalDNS is configured to read the information about Services and Ingresses from DNSEndpoint objects. When this object is created, ExternalDNS retrieves the information and configures the specific DNS provider. So, this is the schema of an IngressDNSRecord. It is important that the name of this object is the same as the name of the Ingress resource we want to retrieve, and in the hosts field we define the domain name for which we want to retrieve the IP addresses of the services. This is a screenshot of the describe command for the IngressDNSRecord. In the highlighted section we can see a single domain in the hosts field, a test hostname under our global services domain, and for this domain we have two sets of IP addresses, one set per member cluster in the federation. The IP addresses in each set correspond to the IP addresses of the worker nodes inside that member cluster. Then this information is translated into the DNSEndpoint object, and this is a screenshot of its describe command. In the highlighted section we can see the same domain name, but for this domain we now have a single merged set of IP addresses. Finally, ExternalDNS retrieves this information and configures the DNS provider. In this example we used PowerDNS, and in this screenshot of its dashboard you can see that ExternalDNS created one DNS record for each IP address.
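For ExternalDNS to pick up the DNSEndpoint objects and write to PowerDNS, as described above, its container is typically configured with the CRD source and the PowerDNS provider. This is a sketch of the relevant container arguments only; the server URL, API key, owner ID, and domain are placeholders.

```yaml
# Fragment of an ExternalDNS Deployment: container args that read
# DNSEndpoint objects (CRD source) and write records to PowerDNS.
args:
- --source=crd
- --crd-source-apiversion=externaldns.k8s.io/v1alpha1
- --crd-source-kind=DNSEndpoint
- --provider=pdns
- --pdns-server=http://powerdns.example.org:8081   # placeholder URL
- --pdns-api-key=changeme                          # placeholder key
- --domain-filter=example.org                      # placeholder domain
- --txt-owner-id=kubefed-demo
```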
Okay, so this is the architecture that we created for testing KubeFed. You can see that we have a single host cluster, in which we have installed and configured the control plane, and where we have deployed and configured ExternalDNS to communicate with our PowerDNS. We have also created two member clusters, and you can see that these member clusters are created in two different regions of the GARR Cloud. This is because we want redundancy and multi-region high availability for the applications and services. For the future, we want to ensure and find a redundancy system for the host cluster, because when the whole host cluster is down, we no longer have the possibility to coordinate the federation. For the KubeFed developers this is not a problem, because they say that when your host cluster is down, the member clusters in your federation will continue to work. But we still want to try and find a mechanism to provide redundancy for the host cluster. Another future work is multi-region storage redundancy, for Kubernetes storage and persistence: for the moment we work only at the application and services layer, but we want to find a mechanism that provides the same high availability at the storage layer. These are some links to the KubeFed and ExternalDNS documentation, and to a GitHub repository where we have defined and configured some files to create and test KubeFed. That's all, thanks for your attention, and I'm available to answer your questions.