Okay, let's get started. Hello everyone and thank you very much for joining us for today's CNCF webinar, "Collaboratively Managing Apps in a Multi-Cluster World." I'm Jerry Fallon and I'll be moderating today's webinar. We would like to welcome our presenter today, Fernando Rapol, Solution Engineer at Giant Swarm. Just a few housekeeping items before we get started. During the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct, and please be respectful of your fellow participants and presenters. Please also note that the recording and slides will be available later today on the CNCF webinar page at cncf.io/webinars. And with that, I'll hand it over to our presenter for today's webinar.

Hi Jerry, thank you for the introduction, and welcome everyone. I'm here to talk to you about how we manage applications declaratively in a multi-cluster world. As you said, I've been working as a Solution Engineer at Giant Swarm for more than two years, so I've been helping customers to manage their own Kubernetes platforms. I say "platform" because it's not just running a Kubernetes cluster anymore; it's also managing the components, the applications and tooling that the users, the developers and the operators need in order to run their workloads successfully. And by managing I mean managing the entire lifecycle: monitoring them, upgrading them, all the types of operations that these applications need across all the clusters. So in the end the goal is to give the IT departments the platform they desire, trying to remove all the burden from their shoulders and let them focus on the real business value.
So the idea of the talk is to tell you the story of why and how we have built what we call the application platform in our open source project. I would like to start by explaining the reasons why we took that decision, and also comment on the two paradigms that exist out there for managing applications and infrastructure. After that I will define briefly how a cloud native application is built and how we run them on all the target clusters that we need to manage. Later I will explain how we have built the platform itself, the architecture and its components. Finally I will show you how it works in action, and I will close by sharing with you the future plans of the project.

So let's start. Nowadays, running a Kubernetes cluster has become a commodity. Why? Well, when we started this journey three years ago, provisioning and managing a Kubernetes cluster was a big deal. At that time there was some tooling to help with that task, but the fast-paced environment, together with the immaturity of the projects and the complexity of the platform, meant that companies needed entire teams to operate and manage them. Today the world is different: we have tooling, we have companies that are creating distributions and platforms that provide us an easier and safer way to run the platform itself. We also have Kubernetes as a service; all the big players, AWS, Azure, DigitalOcean and many more, offer you a Kubernetes cluster with the click of a button. So the trend that we see in the enterprise world is to worry less about the platform and focus more on the core values of the companies: build applications faster, provide value to the end user as quickly as possible. As a consequence of the previous point, we have seen a shift in the scene. At the beginning most of the companies were running big clusters.
So you had one or two big clusters and you were probably relying on the multi-tenancy and isolation features that Kubernetes and containers provide. But over time we have realized that this carries a set of different problems. The isolation is not perfect: at the end of the day your containers have to share the kernel of the machines with other tenants, and they can be affected in many different ways. Also, in the cluster, in the platform, you are sharing components between your workloads, components like the ingress controller, the API server, or the schedulers; potentially a change in the configuration of one of these components can affect all the applications that are running on the platform. So as a result, together with the ability to create and manage a high number of clusters more easily, we have seen that companies have decided to segregate their workloads and shift the paradigm, having a bigger number of clusters and dividing the applications by different criteria. Now, this incurs some other problems, and that would be material for another talk, but that is what we have seen. At the same time, as this pattern changed, the maturity of the core functionality of Kubernetes, and of all the components that have been built on top of Kubernetes, reached a level of stability that gives us confidence to run our applications or build our tooling on top of Kubernetes. So yeah, we have enough experience of how to configure the workloads, the applications, how to assign resources to them, health checks, how to scale, and security. And in order to run your workloads on this platform, you need to create a bundle, right? A bundle with the configuration and infrastructure that your application needs, which defines how it will run in the cluster; what we also know as all the YAML files that are around this Kubernetes environment.
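As a very minimal sketch of what such a bundle of YAML files can look like, here is a Deployment plus a Service; the names, image and ports are illustrative, not from the talk:

```yaml
# Illustrative "bundle": the kind of YAML files that define
# how an application runs in a Kubernetes cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          ports:
            - containerPort: 8080
---
# A Service exposing the Deployment inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

In practice the bundle also grows config maps, secrets, RBAC rules, network policies and ingress definitions, which is exactly the configuration surface discussed next.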
And in order to achieve this, there are a bunch of tools that have been created, or adapted, to work in this new landscape. We have Helm or Kustomize, which were created when this cloud native wave started, but we also see tools like Terraform and Ansible that have been adapted to be able to manage applications in that type of environment. And finally, we have also seen that we need to move this application package to the environment where it's going to run, and we need to configure the application in a different way depending on where it's running. So there has been an explosion of different tools that are helping us with this part, with the integration and the delivery of the application. And yeah, as I said before, there were some projects from before that have been adapted, and there have been a bunch of new tools created with this new paradigm, and each of them fulfills more or less the same goal, but in different ways. The next thing I wanted to comment on is the two different approaches you can take when you need to manage your applications and infrastructure: the imperative and the declarative. To explain it, I'm going to take a quote from our friends at Weaveworks, which is that declarative means that the configuration is guaranteed by a set of facts instead of by a set of instructions. So for example, the declarative way would be "there are six MariaDB servers in this environment", rather than "start six MariaDB servers and tell me whether they work or not". So, the imperative approach is a set of instructions, steps that you need to perform to reach the desired state, and it has some pros and cons. Normally it is not idempotent, so if you run the same script multiple times, you can end up in a state that is not the desired one.
Also, the state of the cluster or of the platform is usually saved in a secondary place; for example, if you use Terraform, you have a .tfstate file where you save the state of the cluster, and you have to keep track of this file. The imperative approach usually helps when it comes to defining and instrumenting the dependencies of your application, but the way that it works, the steps that you define when you create the workflow, have their own limitations, so you cannot always do whatever you want to do. In this panorama we see classic tools like Jenkins or Ansible, and there are even programming languages that also follow this pattern. But I have to say that some of them have been adapted to the declarative paradigm, so some of them can work in both ways. In the declarative approach, what we want to do is give our system the desired state: you provide the configuration of the resources, and the system will take action to reach that state. Normally there is logic embedded in the system to be able to fulfill this desired state. Generally it has the benefit that it is idempotent: it doesn't matter how many times you apply the desired state, the configuration will cause the same result. There will not be differences, because in the end you are defining the desired state of your application. The state is also preserved in the system, so you don't need to maintain it elsewhere. And the bad thing, or the difficult thing, that we have seen is that defining the dependencies between applications is not so easy. Expressing relations between different applications, like a database and an application, all this stuff is not as easy to achieve with a declarative tool. Yeah, and in this scenario we have Kubernetes as an example of a declarative product, but CloudFormation, Pulumi, or languages like Haskell or SQL are also declarative in how they manage their resources.
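To make the Weaveworks quote concrete, a declarative version of "there are six MariaDB servers in this environment" could look roughly like this in Kubernetes terms (a sketch; the names and image tag are illustrative):

```yaml
# Declarative: we state the fact "six MariaDB servers exist";
# the system takes whatever steps are needed to make it true.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
spec:
  replicas: 6          # the desired state, not an instruction
  serviceName: mariadb
  selector:
    matchLabels:
      app: mariadb
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
        - name: mariadb
          image: mariadb:10.5
```

Applying this manifest a second time causes no change, which is exactly the idempotency that an imperative "start six servers" script lacks.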
So, in order to show you why we built our platform, let's go quickly through the steps that we need to follow to create and run an application in the Kubernetes style. If you are familiar with this graphic, there is a set of steps that are well known in the DevOps paradigm: a developer codes the application in a given language, hopefully this application is small and loosely coupled, and then they containerize it; they build the application, they define the configuration, and then the CI workflow starts, which is how you integrate and build this container, this package, and how you test it. From that point you may use a tool that helps you with the delivery of the application, so depending on a set of conditions you will deploy the application in different environments, with different configurations. Finally, your application will be monitored; you have to instrument it beforehand, but the landscape, or the platform itself, will help you to monitor that the application is fulfilling the certain criteria that you have defined. And this is the same for each change that you introduce in your application, so the cycle is circular and you go through all these steps again and again. For most of these steps we rely on existing tools, and there are great tools for that, but the two last steps are where our app platform comes in to help, as I will show you later.
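The integrate-and-build part of this cycle is usually captured in a pipeline definition. As a rough sketch, in a GitHub-Actions-style syntax (the registry, image name and make targets are made up for illustration):

```yaml
# Hypothetical CI workflow: test, build and publish the container
# image on every change, feeding the delivery step of the cycle.
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run unit tests
        run: make test
      - name: Build container image
        run: docker build -t registry.example.com/my-app:${{ github.sha }} .
      - name: Push container image
        run: docker push registry.example.com/my-app:${{ github.sha }}
```

From here a delivery tool picks up the published image and applies the environment-specific configuration, which is the part the talk focuses on.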
So first in line is the container layer. This is the first abstraction layer, and we all know the properties it gives us: the overhead is low, it's great in terms of portability, it has fair isolation, a great community, and good consistency. There are a lot of projects, with different flavors, that help you to create and manage containers, so I will assume you know this part if you're in this talk. Next is the application configuration. Usually you have to define the configuration of your application relying on a set of different concepts, which in Kubernetes will be the config maps, the secrets, or environment variables. At the same time you have to define the infrastructure part: the size; which kind of security policies you want to have for this application; what the restrictions in networking are, so which services can talk to my service and which cannot; in case you need access to the API of the system, which access these applications will have, what we call role-based access control; the autoscaling settings; how the traffic is going to end up in the service, which is the configuration of the ingress traffic; and so on. Then all this configuration, all these resources, should be packaged in a bundle, and for that we rely on existing tools like Helm, which is the one that we chose, and which helps us, for example, with the templating: these configuration files, these resources, will differ depending on the environment or the conditions where the application is going to be deployed, so having these templates and applying the configuration at deploy time is what makes our application portable. And then we need an existing tool to distribute this bundle, which is another chunk of this management,
and also this part of configurability, being able to configure the application when it is installed, deployed or upgraded. And that is why we decided this is the kind of tool we need; Helm in this case is the de facto standard of the community, and in particular covers the features we need, so that part is covered with Helm. Then, having said all that, what is left in the lifecycle, or said the other way around, what are the goals of the application platform that we want to build? The goal is to manage multiple deployments over several targets. So in the end we will have something like a meta control plane cluster, a meta cluster, where we define which targets and which applications we need to run, and there will be some logic that works to achieve this. We also need different levels of configuration, depending on the environment and on some criteria, and these configurations will be merged and pushed to the deployment part. We rely on the declarative approach, so we wanted to run our application resources the same way we run our pods or deployments. Finally, we wanted to provide monitoring out of the box for our applications, making it easy for the operators to keep track of the workloads. So yeah, for those who have realized it, these are kind of the tenets of Kubernetes: Kubernetes offers this maintainability of the state, and it carries the declarative approach at its core. The idea is precisely that if you want to create a deployment, you give the API the deployment with all its parameters and Kubernetes will try to fulfill that complex task. We wanted to work the same way: if you want to deploy an application, which is an abstraction layer over the deployment, we want to do the same, so we apply an application and the system will roll it out. Then we have the observability part, also in Kubernetes,
where Kubernetes offers this status field in all the resources, which exposes the status of the resource at any moment. We also wanted to rely on the vision that Kubernetes has for automating the different actions, what they call controllers or operators: how they embed the logic to go from the current state to the desired state. And we needed some type of versioning, so our platform can evolve confidently and our users can use different APIs over time. And the platform where we run all of this helps us to develop these applications. So that's the reason why, in the end, we decided to build our app platform on top of Kubernetes itself. But to see how it really works, you need to know a bit about how the operator pattern functions. The operator pattern is a way to extend the API of Kubernetes and to enrich its functionality, giving more features to the user. We have mainly two concepts here. One is the custom resource, which is a new schema that you can define in Kubernetes: you provide the schema, you submit it to the Kubernetes API, and from that very moment you can start creating resources of that type, and you will benefit from some functionality that Kubernetes gives you, like validation or authorization, all the parts that have already been built into the API. Then you have an operator, which is like a container running in your platform that embeds some logic about this custom resource that you have created. It knows how to react to different events, because in the end Kubernetes is an event-driven environment, so for each event that happens in Kubernetes related to a specific custom resource, you can hook your code into these events and react, to try to reach the desired state that the user wants to apply. So what are the concepts of the app platform? Well, first we have created a
resource that is called the app catalog. The catalog is a collection of app package definitions, which in the end are Helm packages, charts, and it lets us define the first level of configuration: you can define some values in your catalog, and these will be the first values applied to your application. You can also define, for example, the registry that will host all the applications in the catalog by default, and some metadata like the title, description and logo, which is useful to give more information to the users. Then we have the application custom resource, which is really the entity that holds the information related to the deployment of this application: the intention to install the application on a given target with a given configuration. The schema will contain the target cluster, where the application is going to be installed. It will also contain some configuration, the one that we call cluster scope, which contains values like the provider, the size of the cluster, things like that, so based on these parameters you can define how your application is going to behave. Then you can also pass what we call the user values, where the final user provides the latest changes in the configuration; depending on the cluster, you may want to define some parameters that are specific to your type of deployment. Then we have some metadata related to the application: the name that you have given to this instance of the application, in which namespace it is going to be installed, which version of the application will be installed, and so on. And then we have the app operator, and
the app operator is the operator that contains the logic to manage the App CR, because the app catalog for now is just an informational resource: it only holds information, and there is no operator behind it that does anything in the cluster. But the application CR is managed by the operator that we created, and it helps us with the validation and also with managing all the configuration. As we said, there are three levels of configuration in the cluster; the operator checks that everything is valid and merges all these values. It also manages the creation of the chart CR, which is the next CR that I'm going to explain; this is the entity that in the end is pushed to the target cluster, where it will deploy the application, and it also exposes some status information about the deployment of the application. Then we have the chart CR, and as I said, this is the resource that actually holds all the information that is needed to install the application in a given cluster. It has all the configuration merged into one single config map, and it also contains all the related information like the namespace and the version that is needed. Then we have the chart operator, which is the operator that holds all the logic related to the deployment itself. In our case we are using Helm under the hood, so the chart operator kind of abstracts away all the logic of the deployment, and the intention is that if we want to change the deployment tool at some given point, we don't need to change anything except the logic behind this operator; we could offer different types of deployment tools in the future. This operator also reacts to changes in the configuration, applies those changes to the application, and exposes the
status of the application in real time. So to put this into a picture to clarify, this is how it would look. We have a control plane cluster, a meta cluster, where we run our app operator, and we have the API, the Kubernetes API, which is what our user operates against. Our user wants to install different applications in different clusters. For that, first they need to create the app catalog, which is where we define where these app definitions live. After it has been applied, they need to apply the application CR, which contains all the details of the installation, and then the app operator will start to work in order to create all the chart CRs that are needed in each target. The chart operator, which is the operator that lives in each tenant cluster, as we call all these clusters, takes care that this installation actually happens in the cluster. The design of splitting these into two different operations, the validation and configuration part in the app operator and the installation part in the chart operator, is on purpose, because when you are scaling and you have a lot of clusters, you cannot do everything with a single operator in the control plane cluster; it would be overwhelming and you would probably slow down the Kubernetes API. So we decided to split it this way, to scale. Now I'm going to show you in the demo how this works. For that I created a catalog, which is really where you store the applications. As I said before, we are using Helm; Helm lets you package your application in a compressed file, and it also helps you to define an index.yaml that you can then place on whatever HTTP server is accessible, and it will be a chart repository. With this chart repository you can then start creating the catalog. Here in the sample I created a stupid app, the same kind of hello world.
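For reference, the index.yaml that `helm repo index` generates for such a chart repository has roughly this shape (the names, dates, digest and URL are illustrative placeholders):

```yaml
# Rough shape of a Helm chart repository index.yaml:
# a map of chart names to the list of published versions,
# each entry pointing at a downloadable chart tarball.
apiVersion: v1
entries:
  my-app:
    - apiVersion: v1
      name: my-app
      version: 1.0.0
      appVersion: "1.0.0"
      description: A hello-world demo chart
      created: "2020-01-01T00:00:00Z"
      digest: 0000000000000000000000000000000000000000000000000000000000000000
      urls:
        - https://example.github.io/my-catalog/my-app-1.0.0.tgz
generated: "2020-01-01T00:00:00Z"
```

Any static HTTP server that can serve this file plus the tarballs it references works as a chart repository, which is what the catalog points at.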
Then I created the index and uploaded it to GitHub. So now I'm going to go through the examples of the different CRs, and then I will run this in our cluster and see how it works. In the app catalog, as I said, I have this custom resource, the new resource I applied to the Kubernetes API on my control plane cluster, on my meta cluster, and there is where I defined the default catalog config values. These values are the ones that I mentioned at the beginning that will be applied to all the apps that are stored in this catalog. Then there is all the metadata that I mentioned before, like the logo URL, the title and the description. And finally we have the storage, which is really where these charts, these packages, are living: in this case the GitHub project I created. Here I defined the type as helm, because as you saw before we are using Helm right now, but in theory we could abstract it and use another tool if at some point we need to move to something else or add more features that Helm does not support. Then we have the app custom resource, the application custom resource. Here we also apply this configuration, this new CR, to our meta control plane cluster, and there we define a couple of things. First, the target cluster: as you see here, you can define a context inside a kubeconfig that determines where all these changes, the real installation of the application, are going to be applied. Here we hold this information, which is sensitive information because it has the token, all the access keys of the cluster, so we have saved this in a secret, and then we are pointing, in this kubeConfig section, at which context it should use. You can even have a single kubeconfig in a secret with different contexts and use the same one for all your applications. But you can also use the in-cluster mode, which will install the application within
the same cluster where we are running this, so in the meta cluster itself. Then we have all the metadata, like the catalog that this application definition refers to (so this catalog should exist), the name of the application, and in which namespace it is going to be installed. We have the configuration part: first we have the config, which is the cluster configuration. As you see in the names, there is an ID and then cluster values; this is because inside this config map, and inside this secret, is where we are holding information that is related to the cluster itself, like the size of the cluster, or which cloud provider it is running on, all the stuff that can help the applications to, I don't know, deploy different types of resources depending on which provider or which size of cluster you are running. You see also there is a secret alongside the config map; this is because if there is any sensitive information, instead of using a config map you can use a secret, and everything will be merged in the end. Then there is the latest configuration, which is the user configuration, and it's the most important one. The catalog configuration, this cluster configuration and the user config: everything is going to be merged in the end into one single config map and one single secret, and applied at installation time. The user configuration is the one that the user defines depending on the very specific criteria that this application needs in order to run in a specific cluster. And finally we have the version that the application should run. Then we have the chart resource, which is, remember, the resource that is created by the app operator in each cluster where we really want to deploy the application. Here, all these configuration files that you have created as config maps and passed to the app CR and the app catalog are merged automatically by our
operator into this config map and secret, and they will be used by the chart operator to deploy or upgrade the application in the tenant cluster. We have also the tarball, which is the reference to the package that has to be installed, with the version included. And as you see here, I also include the status of this application: you see the version that is running at this very moment, when it was last deployed, and what the current status is, whether it's failed, deployed, or whatever, depending on the installation. So let me show you the demo. For the demo I recorded everything; I didn't want to trust a live demo, so I tried to challenge my video skills and recorded this. The first thing I want to show you is that I'm in the control plane cluster, and as I told you before, to use the app platform you first need to create the CRDs. In Kubernetes you can use kubectl to get a specific CRD that has been created in the cluster and see the schema, and this is what I'm going to do first. We are going to get the apps, the application CR schema, and you will see that it contains all the properties that we have mentioned, that I showed you before. Here we see that there is this name, app, as the name of the custom resource; we see that it is namespaced; and we see here an array of different versions, and for each version we see the schema that contains the properties with their types defined, whether they are required or not, and all these kinds of things that help Kubernetes validate the resources in the cluster automatically at admission time. Now we are going to check that the operator is really running in this control plane cluster. As you see here, we are running three different versions of the operator, because we currently have three different versions of the operator, so that you can run different
application versions, sorry, and it will depend on the version whether one or the other acts. Now we're going to see the catalog that we are going to apply. It's the same as I showed you before: the reference to the repository, the GitHub page where my package lives, and also the first level of configuration, which is the config map. And I show you now that this is applied: we have applied the catalog and it's running in the cluster. Now we're going to see this configuration, the configuration that I give at the catalog level. This configuration will be applied to all the app packages that are defined in this catalog, and in the end it looks like a values file, so it is going to be injected like a values.yaml. Here I defined just a single property, which is m, and the value is hello, and we are applying this config map too. So right now we have the app catalog and the config map that is related to the catalog. With this information you can now build a kubectl plugin, as we did, or use a UI, to get the index.yaml of the catalog and show you the different catalogs that are running in your cluster, and also which applications and which versions run in this cluster. This is our interface, but it could also be displayed in any interface that reads these CRs. And here we have the app values, the config values that we are providing as users. Here I'm giving the cluster ID of where we are going to apply this application, and I am also passing the same property m with a different value, hola, to the application. We check that this config map is applied, and now we are going to see the app CR, the application CR. As you see, it has the name of the instance, the namespace where it's going to be applied, the kube
config with the configuration of the access to the cluster where it is going to be installed, and you see there is a user config and a config. The config is the automatic config that we generate in our control plane cluster with the cluster values: whether the cluster is big, the provider, everything. And the user config is the one that we have just created, with this m value equal to hola. The version and everything is there. So we are going to apply now this application. We check that it's really applied and it's running at this moment. If we wait a bit, the app operator is going to watch for the events and it will detect that there is a new application there. What I want to demonstrate here is that, yeah, after some time, wait a moment, yeah, after some time the app operator has done its job: it has merged the configuration, it has created the chart CR, and it has pushed this chart CR to the API of the tenant cluster. So it is there, and it can also show you what the status of the application is in the environment. Here the application is already deployed; it was deployed at that time, and the version that is running is this one. Now, to check this is true, we are going to the tenant cluster: we are switching context to the tenant side, which is a totally different cluster. First I see that our chart operator is running, and as I told you before, we use Helm under the hood, so it's Tiller; it's not Helm 3 yet, we are not there yet, so it's using Helm 2, and at this moment you see the operator and Tiller are running in the cluster. Now we are going to check the logs of our operator, and we see that it is doing the job as expected: it has already watched and reacted to the new chart CR that the app operator has pushed to this tenant cluster. And if you see here, sorry, there's a
There's a bunch of output here, but you can see that it is looking for this new Chart CR, which is called my-app, and at some point it deploys it: it runs the installation and sets the status to deployed. We see that everything has been deployed to the cluster, the pods, the deployment, all the application resources have been created. Let's make sure the pods are running, let's make sure the ingress is also there, and now we can call this host and check what it serves. In theory we have applied different configurations, right? At the catalog level, at the cluster level, and at the user level we have used the same variable, m. In the catalog we defined halo, in the cluster values I defined hey, and in the user config I defined hola. So if the app-operator has done its work and merged the three levels of configuration correctly, the last configuration we applied, the user configuration, takes precedence, and we should see hola. Yeah, there it is. Now, to check this is working, we go back to the control plane quickly and edit the user config: we remove hola. The configuration has changed, so the operator is going to act again: it regenerates the merge of all the ConfigMaps into a single ConfigMap, where m should now be the app catalog value, which is halo, hopefully. We check that the values have changed: these are the values applied to the chart, and we see that m has indeed been changed to halo. We also see that the Chart CR has been recreated, it says deployed, and its revision history has changed. So hopefully it is deployed.
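The three levels of configuration in the demo merge like layered Helm values files, with later layers overriding earlier ones, so the effective values work out as sketched below (illustrative only, using the demo's single property m):

```yaml
# 1. Catalog-level values (applied to every app in the catalog)
m: halo
---
# 2. Cluster-level values (generated automatically on the control plane)
m: hey
---
# 3. User-level values (from the App CR's userConfig ConfigMap)
m: hola
---
# Effective values pushed with the Chart CR: user > cluster > catalog
m: hola      # delete the userConfig entry and m falls back through the layers
```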
And we can see in the log that the operator sets the status to deployed after some time. The pods are running, the ingress is there, and we can now see halo. So those are the different types of configuration, and this is how we structure the deployment pipeline of this app platform. Sorry for going pretty quickly, but I have to finish with the future plans, so: what we are working on right now. We are improving the user experience. For example, as I said before, the AppCatalog custom resource is just an informational resource, there is nothing else in there, and we want to build an operator that takes all the entries, all the app definitions in the catalog, and creates a new custom resource for each of them. Why? Well, because we have realized that when you have a big catalog, like the whole Helm stable catalog, there are a lot of different applications in there, and having a UI or a command line try to manage that entire index YAML with all the definitions is crazy; it doesn't scale. So we want to rely on the same patterns we have defined, declarative and driven by Kubernetes, to create a new CR that will hold the information for each entry of the app catalog. We also want to improve the validation and the defaulting. Right now, if you apply values that are not valid, say there is a typo in the YAML, or a value that is not in the schema, it tells you it's not fine, but there are cases that are still not caught, and we need to fix that. Defaulting is really important here too: to enhance the user experience, we want to decrease the number of required fields that you need to specify in the App CR in order to deploy something. We can figure out a lot of things from the environment, so you should only need the name of the application, the app catalog, and the version.
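The defaulting goal described here would let a minimal App CR look something like the following. To be clear, this is the aspiration being described, not the current required field set, and the names are made up:

```yaml
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
  name: my-app
  namespace: abc12
spec:
  name: my-app          # just the application name...
  catalog: my-catalog   # ...the catalog it comes from...
  version: 1.0.0        # ...and the version
  # kubeConfig, config, userConfig, target namespace, etc. would be
  # populated by the operator / admission controllers from the environment
```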
The rest of the fields, if you don't want to apply any specific configuration, are things we can populate in our operator or in our admission controllers. Then we want to improve the kubectl plugin that we have right now; it does not have all the features we would like, so that is something we are working on. We also want to pursue the goal of automatic updates, so customers can say, for this application I want automatic upgrades up to the next major release, and an operator then makes sure the app is always on the latest matching version. And we want to create the concept of app stacks, templates of apps, which define sets of applications: for example an observability stack that comes with Loki, with Jaeger, with all the common tools that everyone uses, some of them customizable, so instead of installing apps one by one you deploy everything just by defining the stack. Finally, there is an ongoing effort in SIG Apps, in the community, around a new concept, the Application CRD, which will be a standard way to define all these things we are talking about today. Every company will be able to rely on that specification, on that schema, and at some point we will adapt our app platform to follow it, so we can benefit from the work of different vendors, not only our own. So yeah, that will be everything. Thank you for listening, and I'm happy to take questions. Thank you very much, Fernando, for a wonderful presentation. We have about five minutes for questions, so if anybody has any questions, please feel free to drop them into the Q&A and we'll get to as many as we can by the end of the hour. Yeah, I'll just read one from Stefan: how are errors and invalid configuration handled?
So, as I said, the current API already allows you to define some validation when you define the schema; that is the first line of validation. Then we are working on an admission controller. We started with OPA, but we found it wasn't really helping with some parts of our defaulting and validation, so we have created our own admission controller, and in it we want to implement the logic to check, for example, that the version you have defined exists in the catalog. That's a simple check, but right now it's not done, and we would like it, so that if you make a typo in the specification of your resource you quickly get feedback: this version is not there, or any other kind of configuration error. Also, what we have implemented already is that when we are merging the values, the different ConfigMaps defined through the different types of CRs, we check, when you upload that values file, that the YAML is valid and doesn't contain any error or typo, and if it does, we report it back to the CR status. So those are the three levels of validation that we have right now and that we want to improve. Can this handle deployments across multiple cloud providers? Yes, it can. In the end the App CR contains this kubeconfig reference, and in the kubeconfig you can put whatever cluster you want: it can be EKS, it can be, I don't know, your minikube on your machine, whatever. Then from this one meta cluster you can deploy to whatever type of Kubernetes cluster. Our interface is the Kubernetes API, so as long as you have the chart-operator in the tenant cluster, the end cluster, and your app-operator in this kind of meta cluster, it should work. Okay, do we have anybody else? We have about one minute left. Going once, going twice.
Okay, well, if no one has any other questions, we'll call this a wrap for today. I want to thank Fernando again for a wonderful presentation, and everyone for attending. As I said before, today's recording and slides will be posted later today on the CNCF webinar page at cncf.io. Thank you again, everyone, for attending today's webinar, and we will see you next time. Have a wonderful rest of your day. Thank you very much.