Welcome, everyone, to this CNCF webinar. My name is Diego Braga, I work as a solution architect at Kiratech, and here with me I have my three colleagues Mauro, Yanis and Luca. Today we are happy to show you our new open source product, Krateo PlatformOps, and we are going to delight you with some live demos; we hope you will be excited about what we are doing with this product. So let me share my slides, and I'll invite my colleagues later on.

Krateo PlatformOps is an open source platform, born one year ago, whose aim is to describe, create and maintain services and resources. What we see every day in our job is that it is getting harder to dominate an ecosystem that keeps increasing in complexity: the perimeter of solutions is richer and richer, and our customers and colleagues are struggling to find a way to govern these ecosystems. Platform teams struggle to create all the services their users need. These services can be anything: infrastructure, Kubernetes clusters, networking rules, infrastructure for machine learning models, reports for data scientists. So platform teams are looking for a way to standardize how they offer services and resources, and we found that using the Kubernetes APIs is the way to go for that standardization. Krateo leverages the Kubernetes APIs and their extensions, which we believe, and the whole market believes, are the de facto standard for products running on cloud native infrastructure. So Krateo is based on Kubernetes and leverages the Kubernetes APIs.
A bit about what we do at Kiratech: we are a silver member of the CNCF, we are a cloud native service provider, and we also help our customers with some vertical practices that let us follow best practices in keeping their infrastructure up and running and in securing the lifecycle of their infrastructure and applications. Krateo takes its whole architecture from the CNCF landscape: our goal is to identify the best-of-breed, most up-to-date technologies there. There are guidelines that help you find the best tool to use based on the maturity of the product and of its community, and since we work on this landscape every day, we know which tools to propose to our customers in our architectures. We put all this technology expertise into this open source product, so anyone can take Krateo and scale the cloud native approach to the right dimension for their own reality.

So where can you use Krateo? A lot of customers and friends ask us: should I use Krateo only in a greenfield company? What if I am an enterprise? It's not like that: you can use it at any stage of maturity of your company's cloud native journey.
If you are a greenfield company starting from day one, you can leverage Krateo as a foundation to embrace the cloud native transition, using its opinionated architecture based on a central framework that helps you connect new services and centralize all the information you need. But if you are an enterprise, Krateo is a framework that helps you standardize the heterogeneity of your services and automations. Usually there is no single pane of glass for the services that your silos, maybe transitioning to a more cloud native approach, still have in place. If you need a standard everyone agrees on for how you expose the services that have been deployed and automated by these different areas of the company, Krateo can help you standardize cloud native adoption at the enterprise level.

So what are the use cases that Krateo can help solve? I love the Team Topologies book, so I put a lot of references to it, and platform ops is the approach Krateo wants to support. FinOps is the second use case. FinOps is no longer a buzzword; it is something we need to focus on a lot, and 2022 is really the year for FinOps, because distributed systems, microservices and cloud native architectures increase the number of components you need to be aware of, especially from a cost allocation and cost optimization point of view. Last but not least is upskilling. Automation, standardization and the Kubernetes APIs are not a complicated skill set to achieve, and if you agree on a standard way to expose your services and consume them, you can move away from the manual tasks that are not interesting in your daily work towards a more design-oriented perspective: you become an engineer of the solution and of the automated provisioning of whatever services you offer to your users.
Let's focus on the first use case: how a platform team can provide services to their stream-aligned teams. Just a very brief introduction of platform team and stream-aligned team; as I said, the terminology is taken from Team Topologies, a book that I really love. A platform team is a group of people who work to expose services in a self-service manner to their users. These can be internal or external users, but you never interact with a platform team by opening a ticket, sending an email or picking up the phone: the role of the platform team is to expose services with SLAs that need to be ensured. The platform team provides these services to their users, who are typically the stream-aligned teams. A stream-aligned team is a group of people who aim to release new functionality and new products in as safe, independent and quick a way as possible. They want no friction with other teams; they need to go faster and to consume services whenever they want. So the question is: how can a platform team help a stream-aligned team consume these services?
The first point is automation. In distributed systems you cannot have silos any more: you need to jump across different platforms and provision whatever you need to offer, so automation is required. Once you have your automation in place, you need to expose your services to your users through a self-service catalog. As I said before, these services can really be anything: infrastructure, maybe a software template that you want your developers to use when you onboard them, maybe machine learning models you want to share with your data scientist community, and so on. As you can imagine, platform teams struggle to offer all these services at the velocity that stream-aligned teams need. To make things even more complicated, services can be anywhere: on premise, born greenfield on a public cloud, or your company may be transitioning to a hybrid or even multi-cloud model, where you want an even higher level of service availability and spread your applications and platforms across clouds.

So platform teams are looking for a way to build what we call platform ops: a platform to expose these services. There are some points of attention. Automation for each service is specialized: if you need to provision the same component on different cloud providers, you have different automations to keep and maintain. Day-two operations still need to be automated: how can I avoid configuration drift if I provision my services and resources with very well written infrastructure automation, and then someone manually changes the settings, the size, the number of CPUs and so on? How can I even be aware of that? And what about legacy environments? Legacy environments means something that is
maybe sitting in a data center, on premise. The cloud native transition and methodology are great, and we know everyone is going there, but companies may still have 90% of their workloads on premise, in legacy environments that are not on Kubernetes. So how can I handle that part of my data center too? Krateo PlatformOps wants to address these open points for the platform team.

Then there is the other side: the stream-aligned team that wants to consume these services, as I said, as quickly, safely and independently as possible. The only way to help them achieve that is to expose this catalog of self-service internal services, and the catalog must be user friendly and reliable. If I am a developer, a data scientist, whatever kind of user I am, I should only need to know the minimal set of information to consume a service; I should not have to provide detailed information that belongs, as knowledge, to the platform team. Another interesting point is that once I create a resource, which, as I said, can be a microservice, a Kubernetes cluster and so on, I need all the data about the lifecycle of that service centralized. As you can see from the colleague with a headache on the right side of the slide, the onboarding experience right now is a nightmare, because we really have a lot of tools. A typical developer has dozens of tabs open in their browser: one tab for logging, another one for the continuous integration and continuous deployment pipelines; they want to see monitoring metrics and code coverage reports, and maybe they need to access infrastructure. It is becoming complex to keep all the data you need in a central place, in a clear way. The other interesting open point is the ownership of services. We do love microservices, because you can reuse them: I don't need to rewrite a microservice that has already been written by another team, but I do need to
know who developed that microservice. How many resources does my team own? How can I share my resources with other teams? Where can I access the documentation, where is the OpenAPI documentation, where is the repository, and so on? It becomes a task that is more and more complicated if you don't govern it from the beginning.

The second use case is FinOps. As I said, FinOps is no longer a buzzword. Cloud cost management is a practice that gives you visibility into how you spend money on the cloud and how your distributed teams are investing their budget. As it's written here, it's more a cultural practice than just cost allocation, because everyone needs to take ownership of their cloud usage, but you need to give them tools that help them gain insight into how they are spending their money. So cost allocation and cost optimization insights are really important, and you need the right level of standardization and abstraction to govern the heterogeneity of your perimeter, which can be really anything.

The third use case, as I said, is upskilling. You have to move your knowledge towards a more engineering approach: leave the manual, repetitive tasks to automation and focus on the design of the architecture and of the automation. Krateo implements what we call a flight simulator to help you with demos and examples for adopting this standard.

Just to wrap up: Krateo is a platform that you can install on your Kubernetes cluster; it needs to be a certified distribution. It's universal, because with Krateo you can handle anything you want, anywhere. It's deeply coupled with the self-service approach, so any service you develop and want to offer to your users can be exposed in the self-service catalog. And it's flexible, because there are no technical limits where Krateo would say: I can only handle this kind of resource. The only limit is
the imagination of the business needs.

This is the reference architecture. We do have our opinionated architecture, but that doesn't mean you cannot use something else: since Krateo runs on Kubernetes, it follows the same approach, so all the components are loosely coupled and you can swap in whatever you want. If you already have something in place, you don't need to adopt the corresponding Krateo module; you can keep using your own. The reference architecture means that Krateo wants to help you centralize the lifecycle of anything you want, from the build and release phase to the runtime phase. There are some vertical pillars, observability and DevSecOps, because they really need to be ensured across all the phases of your lifecycle, but you can of course use whatever you want; this is just a reference architecture with the tools we believe are the best right now, and in any case it's something we continuously refresh, because you can imagine how fast the ecosystem is moving.

This is the core architecture of Krateo. Krateo is built on Kubernetes and deeply relies on three great open source projects: Backstage for the developer portal, Argo CD for the CD part, and Crossplane for the interaction between Kubernetes and the outside world. Our front end is based on Backstage; Krateo is an early adopter of Backstage, so we are working together on it. Anything created via the Krateo dashboard is a manifest, a YAML manifest versioned in a Git repo; Argo CD is then automatically configured to pull changes from these repos and apply them to the Kubernetes cluster, where the Krateo runtime, built on Crossplane, handles custom resources that are a logical representation of outside resources.
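To make the GitOps flow just described concrete, here is a hedged sketch of an Argo CD Application that pulls manifests from a Git repo and applies them to the cluster; the repo URL, names and paths are illustrative assumptions, not Krateo's actual values.

```yaml
# Hypothetical Argo CD Application: name, repo URL and paths are
# illustrative assumptions, not actual Krateo values.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service              # assumed application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/my-service-manifests.git  # assumed repo
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With `selfHeal` enabled, Argo CD keeps the cluster converged on whatever is versioned in the repo, which is the role the webinar assigns to it in Krateo's architecture.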
Luca will now demo (we like demos, so fingers crossed for you, Luca) the provider we wrote for interacting with VMware. We did some demos for colleagues, and they said: okay, it's great that Crossplane gives you providers for Google, Amazon, Azure and so on, the community keeps growing the project, we love it, and more and more people are contributing providers; we are going to contribute our own providers to the community as well. But our colleagues asked: if I need to spin up a virtual machine on premise, how can I do that? That is exactly the legacy environments point, and Luca will show you how we did it. As I said, we are leveraging Backstage, a developer portal platform released by the Spotify engineering team to the CNCF, currently a sandbox project and growing. And Crossplane, which I think all of you know: it was written by the folks at Upbound and released to the CNCF community; it leverages custom resource definitions on Kubernetes, with an opinionated way to write your own providers and interact with the outside world from Kubernetes. Crossplane is a CNCF incubating project.

So I'm going to leave the stage to Luca, who will demo the interaction with the VMware infrastructure. If you want to try it on premise, or wherever you have a Kubernetes cluster that is a certified distribution, you can follow this link to our landing page, available in English and in Italian, with all the information I explained in these slides. If you want to keep in touch with us, you can write on these channels and I will be more than happy to give you all the information. We are also hiring, so please apply through these channels. Now I'll invite Luca on stage. Hi Luca, thanks for joining, I'll let you share your slides. Have fun!

Okay, thanks Diego. Hi everyone. In the next few minutes I will show you what is a
Krateo runtime provider and how it works, and we will do a live demo, fingers crossed, of the Krateo vSphere provider. Let's start from the slide Diego showed us: here is the architecture of the Krateo platform. I work on the Krateo runtime, so let's concentrate on that and see how a runtime provider works.

A runtime provider is a smart client for any external web service: smart because it is able to continuously monitor the assets exposed by those services and make sure they meet our criteria. The only condition is that those services expose some sort of CRUD API that can be accessed; any protocol is fine. The provider continuously monitors a specific resource exposed by a particular remote service, compares the status of that resource with the one we want, that is, our desired state defined in a YAML resource (here we have a picture that shows how it works), and eventually applies the necessary changes to make the resource conform to our spec by invoking the remote service's CRUD API. We define our desired state in a YAML manifest, so in a declarative way.

Let's see how to apply those concepts to our vSphere provider. The first thing, and this is a one-shot operation, is to configure the provider. To do that we need to provide the server address of our vCenter and classic credentials: a username and a password, with the password specified in a secret; as you can see, here we have a reference to the secret that contains the password. Then we can specify other parameters, such as `insecure`: in this case we disable strict TLS validation because we have a handcrafted certificate. You can also put the provider in debug mode, so you get a lot of logs; it can log all the remote SOAP calls, which is useful for troubleshooting. Once you have defined your provider config, you submit it to the Kubernetes cluster and the provider is set up. Then we need to define the desired state for our
virtual machine. We want to create a virtual machine, so we have to specify its parameters; there are a lot of them, and here we show only a few. We can specify the data center where to put the virtual machine, the resource pool, the datastore, the guest ID of the operating system type, the name of the virtual machine, the total number of virtual processors, and the memory size: in this case we have two gigabytes of memory. We can specify whether we want to start our VM powered on or off, the network to attach the virtual machine to, the disk and its size, and eventually the controller as well. There are a lot of parameters; you can check the documentation for all of them. Then we need to put a reference to the previous configuration: as you can see, here we have the provider config reference with the name of the vSphere provider config we defined before.

Now we will start the demo. We will create a virtual machine simply by defining our desired spec in a YAML file; then we will edit the virtual machine parameters, like CPU and memory, using the vCenter dashboard; and finally we will destroy the virtual machine from the vCenter dashboard and see how the provider restores it according to what we defined in our manifest. Let's hope it all works.

Here is our manifest; it's the same one we saw in the slides, but with real parameters: the name of the virtual machine will be Krateo004, four virtual processors, two gigabytes of memory, the machine powered off, attached to this network, with a disk of 10 gigabytes. Let's apply this manifest. On the vCenter there is nothing yet, as you can see; then the virtual machine is created with four CPUs, two gigabytes of memory and a hard disk of 10 gigabytes.
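Putting the two pieces together, here is a hedged sketch of what the provider config and the virtual machine manifest could look like; the kinds, API group and field names are assumptions reconstructed from the description above, not the provider's actual schema.

```yaml
# Hypothetical provider configuration; field names and the API group
# are illustrative assumptions, not the real Krateo vSphere schema.
apiVersion: vsphere.example.org/v1alpha1
kind: ProviderConfig
metadata:
  name: vsphere-provider-config
spec:
  server: https://vcenter.example.local/sdk   # assumed vCenter address
  username: administrator@vsphere.local       # assumed account
  passwordSecretRef:                          # password kept in a Secret
    namespace: crossplane-system
    name: vsphere-credentials
    key: password
  insecure: true    # skip strict TLS validation (handcrafted certificate)
  debug: true       # verbose logging of the remote SOAP calls
---
# Hypothetical VirtualMachine resource with the demo's values.
apiVersion: vsphere.example.org/v1alpha1
kind: VirtualMachine
metadata:
  name: krateo004
spec:
  forProvider:
    datacenter: dc-01            # assumed datacenter name
    resourcePool: rp-default     # assumed resource pool
    datastore: datastore-01      # assumed datastore
    guestId: ubuntu64Guest       # operating system type of the guest
    numCpus: 4                   # four virtual processors
    memoryMiB: 2048              # two gigabytes of memory
    powerState: poweredOff       # start the VM powered off
    network: vm-network          # assumed network to attach
    disk:
      sizeGiB: 10                # 10 GB disk
  providerConfigRef:
    name: vsphere-provider-config  # the ProviderConfig defined above
```

Once applied, the provider's reconcile loop treats manifests like these as the only source of truth, which is exactly what the demo exercises next.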
Let's change some values using the dashboard: say I want 12 CPUs and 16 gigabytes of memory. Apply, and as you can see in the dashboard it's changed. Now let's go back to our provider; this is the debug output, and it's running. On the next pass it monitors the current state of the virtual machine, detects the difference, and applies the changes according to our desired state, because that is the only source of truth. Here you can see the log with the differences: it has detected that the number of CPUs should be four but is now 12, and that the desired memory is two gigabytes while it is now 16. And if we go to the vCenter, as you can see, we again have four CPUs and two gigabytes of memory.

Now let's delete the virtual machine from the dashboard. Okay, it's deleted. Before the next pass there is an interval of about two minutes, if I remember correctly, so we have to wait just a little bit, and we will see the virtual machine recreated according to our spec. Okay, here it is, it's in progress... as you can see, the provider didn't find the virtual machine, so it recreated it according to our desired state: four CPUs, two gigabytes of memory, and a disk of 10 gigabytes. It worked perfectly: the only source of truth is our manifest, which is here. That's all; the live demo went well.

Great job, thanks! I have just one question for you, because I think someone watching this webinar might ask it: is the provider based on Terraform or Ansible? No, no, we wrote everything from scratch; it's a client for the vSphere SOAP API. Okay, so basically what we saw is that if, for any reason, you modify the settings of the virtual machine, then Krateo, via its runtime based on Crossplane, becomes aware of the drift and reverts the changes back to the source of truth, which is the manifest. Yes, and as I
told before, there are a lot of parameters: we can specify an ISO, for example, so our virtual machine can actually do something when it boots, and we can specify different operating systems; there are many parameters our customers can check in the documentation. Perfect, thank you so much Luca. Thank you!

I think it's time to invite Yanis. Welcome, Yanis! Yanis is going to show us another interesting aspect of Krateo. As I said in the previous slides, but let me recap: whatever you create with Krateo is built with monitoring by design, security by design, and so on. If you know the logs, metrics and topology of your services when you create your resources, then from a monitoring perspective you are helping your users centralize all the information and the relevant data they need. Yanis worked on building a Grafana dashboard that integrates the logs and traces from the architecture we chose in Krateo. So Yanis, the stage is yours; have fun, and thank you.

So, what is the challenge? The cloud native and DevOps approaches solved many single-unit complexities: most issues within a single unit have been solved. But one big problem remains: there are a lot of tools, and there is tool sprawl, as was already mentioned. What we built here is a single pane of glass for SREs, for development teams, for observability: a way to integrate different components from the CNCF ecosystem together with non-CNCF tools that are de facto standards for DevOps and observability. In this case we took the CNCF project Kuma, a service mesh developed by Kong, we put it together with the Kong ingress, and we used Grafana, Grafana Tempo and the Loki stack. I will show you how it works and how easy it is to debug microservices when something goes wrong. So: demo, demo!

For starters, we have deployed a Kubernetes
cluster with the Kuma service mesh; it took only a few minutes to set up and get running. Some microservices are already being served here, and we have set up centralized tracing from the service mesh. Now I will show you the application. It's a simple microservice application: you can check out what's going on here, and we can see some info about books. Now let's go into the logs.

Here we get the logs from the corresponding application. We can click on a log line and see that there is some activity happening. The cool thing is the integration with Tempo; Tempo is a new approach to storing traces, from Grafana Labs. We click here and we see how the whole request flows, from the ingress to the product page to the booking and detailed-info services, and so on. You can see the latencies and how long the whole process takes, and, a very cool feature, we also have the node graph: a visual way to see how it all worked, where the latencies were, where the issues were, and so on.

A pretty nice feature of the service mesh we are using for this demo, Kuma, is that it works not just inside your Kubernetes cluster: it also works on virtual machines, so you can deploy it, for example, on your data layer, your caching layer, or other resources outside the Kubernetes cluster, and have full visibility of your whole infrastructure. As you see, this is a single pane of glass for the platform team, for SRE teams, for development teams: you don't need to go through many different places, you can just click and watch the processes as they happen. You can also browse your Tempo data and your logs, and jump from the logs to the traces as well: it works in both directions.
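One plausible way to wire the centralized tracing shown here, sketched under the assumption of a Zipkin-compatible Tempo endpoint (the URL, namespace and backend name are assumptions for this sketch), is a tracing backend on the Kuma Mesh plus a TrafficTrace policy:

```yaml
# Sketch of Kuma mesh tracing towards Grafana Tempo.
# The Tempo service URL and backend name are illustrative assumptions.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  tracing:
    defaultBackend: tempo-zipkin
    backends:
      - name: tempo-zipkin
        type: zipkin
        sampling: 100.0          # trace every request (demo setting)
        conf:
          url: http://tempo.observability.svc:9411/api/v2/spans
---
# Enable tracing for all services in the mesh.
apiVersion: kuma.io/v1alpha1
kind: TrafficTrace
mesh: default
metadata:
  name: trace-all
spec:
  selectors:
    - match:
        kuma.io/service: '*'
  conf:
    backend: tempo-zipkin
```

With spans flowing into Tempo and logs into Loki, Grafana can link a log line to its trace and back, which is the two-way navigation demonstrated above.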
That's basically all from my side. Thank you so much, Yanis; I think this second live demo also went great. I really believe that having all the information within the same dashboard helps you focus immediately on the issue, whatever it is, and this is the scope of the Krateo platform: to put everything you need in the same place, even if you are using Kuma, Tempo, or whatever tools; the goal is to simplify everything into a single point. Yes, one last note from my side: under the hood this could be very complex, but it works so easily and so flawlessly that you forget how complex it is, because it works so nicely, so ridiculously smoothly. Okay, Yanis, thank you so much for being with us. Did you have fun? Yes? Okay, see you later.

Now I invite my colleague Mauro on stage. Hi Mauro! You have the responsibility for the last part of the webinar, so no pressure. The last thing we would like to showcase is the dashboard, which is based on Backstage: how you can expose services from your platform team to your stream-aligned teams, how you centralize all the relevant data in the same dashboard, how you see pipeline status, OpenAPI documentation, metrics and so on without direct access to your infrastructure, and how your pods are doing. But I don't want to steal your job, so Mauro, please take the microphone and have fun.

Okay, thank you Diego. I'll show you how to create an app: a front end with React, a back end with Node.js, and a MySQL database. The first thing we need to do to create this app is to import a template into Krateo; after a few minutes, Krateo creates the application. But what is a template? A template is a YAML file where
we can put all the fields required to develop our application. As an example I made this one: this is the YAML file, and this is the template I'll use to create our application.

Back to the demo. This is the template, and now we create the app. I take the URL of my template and go to my Krateo PlatformOps dashboard; on the Create page I go to 'Register existing component' and paste the URL of the template. When I click the Analyze button, Krateo analyzes the template and checks that it is written correctly; if all is okay, I can import it. Done. Among the available templates I choose my app template. The first step is the name of our application, for example krateo/demo/hot, and the description; Krateo uses this info to search for my app in Krateo PlatformOps. The third field is the admin username; it has a default value, because whoever wrote the template specified what the admin username must be, but I can change it. Then there is the ownership of this service: these are groups from the Kiratech Active Directory, and I make a choice and pick one as owner.

The second step is to choose the URLs, the app URL and the API URL, that is, the DNS names where we expose our application: for example app-uno.krateo.io and api-uno.krateo.io. The next step is where to put the Git repository: I have the host, the owner is me, Mauro Sala, and the repository, which in this case has the same name as my application. In the last step I have a summary where I can review all the fields I filled in; if everything is okay, I can create from my template. Note that I have 10 repositories at this moment. Okay, let's create my application: it's fetching the template, publishing, registering, creating the application on Argo CD and on SonarCloud, and setting the GitHub branch protection. All the steps completed, but what exactly did Krateo do?
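The wizard steps just walked through come from the template's parameter definitions. As a hedged illustration, a generic Backstage scaffolder template with this shape might look as follows; the names, fields and steps are assumptions for the sketch, not Krateo's actual template.

```yaml
# Hypothetical Backstage scaffolder template; names, parameters and
# steps are illustrative assumptions, not Krateo's actual template.
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: demo-app-template
  description: React front end, Node.js back end and a MySQL database
spec:
  owner: group:platform-team        # assumed owning group
  type: service
  parameters:
    - title: Application info
      required: [name, adminUsername]
      properties:
        name:
          type: string              # e.g. krateo/demo/hot
        description:
          type: string
        adminUsername:
          type: string
          default: admin            # default chosen by the template author
    - title: URLs
      properties:
        appUrl:
          type: string              # DNS name for the front end
        apiUrl:
          type: string              # DNS name for the API
  steps:
    - id: fetch
      name: Load template
      action: fetch:template        # standard scaffolder action
    - id: publish
      name: Publish repo on GitHub
      action: publish:github        # standard scaffolder action
```

The parameter defaults explain why the admin username field is pre-filled in the wizard, and the steps correspond to the progress messages shown during creation.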
In the 'load template' step, Krateo fetched the latest version of the template; in the 'publish' step it saved all the needed files to the GitHub repository; then it registered the app in Krateo PlatformOps, created the app on Argo CD, created the project on SonarCloud, and set the GitHub branch protection rules. After this, Argo CD and GitHub work in parallel: Argo CD deploys the Kubernetes objects and starts the creation of the database on AWS, while GitHub builds the Docker images. When the images are ready, Kubernetes can deploy the pods, and when the database on AWS is ready, the application is up and published. Back on my dashboard, I can show you that among my databases krateo-demo-hot is being created.

Let's go to the home page: this is the application I just created. But let me show you the Fat Squirrels back end and front end, because that one is fully completed. The Overview tab is the first tab when I open the Fat Squirrels back end. We have the About card: with one click on 'View source' we can go to the repository on GitHub; there are the TechDocs and the API, which I'll show you later; the description; the owner, which is the Active Directory group; the system, which I'll also show you later; the type, which is 'service'; the lifecycle, which is 'experimental', though I could specify production, development, testing, it's up to you; and the tags, none in this case. The second card in the Overview tab is Links: I can put helpful links here, for example an Excel file on SharePoint or something else; in this case these links go to the repository again. The back end does not have subcomponents. And here is the first integration with external services: the Code Quality card comes from SonarCloud; I have the scorecard and the summary, and if I want to see more I can click through to the SonarCloud website for more detail. The Dashboard and Alerts cards are about Grafana: I created a Grafana dashboard for Fat Squirrels which, after a second login, shows the number of pods, the CPU, the memory, and the latest logs; Alerts is empty in this case. This is the Argo CD card, and if I go to
Argo CD, and I show you that krateo-demo-hot, the application that I created before, is running; it is still being created, and I can view all the components that are in my application, but here I have a summary, an overview. The second tab is the CI/CD. I log in, okay, and this is the list of the pipelines of my repository. This one is an error, but just for the demo: I can click on the title, I have a summary, I can go to the jobs and I can go to the logs. It's all here, it's not necessary to go to GitHub, it's all in the Krateo API. So this back end exposes the fat-squirrels API but does not consume it; the fat-squirrels front end is exactly the opposite, because it does not provide the API but consumes it. Dependencies: the back end does not have dependent components, but it has a resource, the fat-squirrels DB, which is the RDS database; I'll show you that later. Docs: this is the README file that I have in my repository. Okay, we have krateo-demo-hot for example, and in docs/index.md I have my README file, and it's the same one that I have here: if I change something there, I can find the change here. The Kubernetes tab is where I can view all the Kubernetes deployments of my application, so I have the RDS instance, the mesh, the deployment, the ingress, the service; I can view the summary, but I can also view the YAML file. Prometheus, like Grafana, has the alerts card, but there are no alerts for now, and a chart where I can view the container CPU usage in seconds. The last tab is called Insights; it gives insight into my application, so I can view which languages I have used, the releases, the comprehensive reports and something more. I go back to the APIs of our fat-squirrels and I click on the fat-squirrels API. The API also has the Overview tab: we can view the providers and consumers, so the back end provides this API and the front end consumes it, and we have the definition, the OpenAPI Swagger interface, where I can view the endpoints and methods, and you can also try to execute the call to view some
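For context, the container CPU chart mentioned above is typically built from the cAdvisor counter `container_cpu_usage_seconds_total`; a PromQL query of this shape (the exact namespace and labels are an assumption) turns the raw counter into a per-pod CPU rate:

```promql
# Per-pod CPU usage in cores, averaged over 5 minutes;
# the namespace value and grouping label are illustrative.
sum by (pod) (
  rate(container_cpu_usage_seconds_total{namespace="fat-squirrels"}[5m])
)
```

Using `rate()` over a window is what converts "CPU usage seconds" (a monotonically increasing counter) into the usable per-second figure shown on the dashboard.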
sample data. I go back to the Overview and I go to the fat-squirrels system. The fat-squirrels system is a group that holds all the components of my application, so we have the front end, the back end, the APIs and the database, and we have the diagram: together with the summary, we have a graphical view of what there is in my application, so a domain, a system, an API, a database, the back end and the front end, and obviously I can click, for example on the front end, and go to the front end section. The owner is the Kiratech Azure Active Directory: we have the group Caribou, so we have the email and the other groups from the Active Directory, and the members of this group, and here you can view which services and which open APIs Caribou has. Just as an example, you can go to the user card, for instance to Lorenzo: Lorenzo is part of the group Caribou, and here is his email. Our fat-squirrels app is just a to-do app; this is the front end, I can write something and I can delete it. We have the logs: all the calls are logged, and the logs are saved in a persistent volume with Portworx, and in the Portworx dashboard you can view this persistent volume; currently I'm using about two megabytes of logs. We have Kuma, the service mesh, so we have our fat-squirrels services, the overview and the YAML file. Okay, I'm back here; this is the fat-squirrels front end. Front end and back end, but the last thing is that you don't need to create an application from zero in Krateo: you can also import an existing application. Just as an example, I go to my repositories and I have, for instance, a sample OpenAPI repository for Krateo, which contains this catalog-info.yaml; I can get this URL and import this application into our Krateo PlatformOps. I go back to Create, Register existing component, I put in the URL, I analyze it, all is okay, I can import, and now it's finished: in my dashboard, as you can see, we have the sample OpenAPI application with the About card, the Links and all the tabs. But more interesting is that for this application the
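The `catalog-info.yaml` used for the import above is a Backstage-style component descriptor; a minimal sketch, with hypothetical values standing in for the actual file, might look like this:

```yaml
# Backstage-style component descriptor; all values
# here are illustrative, not the demo's actual file.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: sample-openapi-service
  description: Sample service imported into Krateo PlatformOps
  tags: []
spec:
  type: service
  lifecycle: experimental
  owner: group:caribou      # ownership resolves to an AD group
  system: fat-squirrels     # groups the component with its siblings
  providesApis:
    - sample-openapi        # links the OpenAPI definition shown in the portal
```

Because ownership, system membership and provided APIs all live in this one file, the portal can render the About card, the diagram and the providers/consumers view directly from it.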
owner is me directly: so Mauro Sala has the ownership of five services and three open APIs, and Caribou has four services and its open APIs; for the user, five and three is the sum of my own services and the services of the groups I am in. So, I think that when I come back on stage it means your time is finished... no, I'm joking. Thank you so much, Mauro. I have some questions for you. You showed us Argo CD at a certain point, and I think it's interesting to understand that within the same Argo CD application we are describing the pods for the application, but also the database. Yes: in the Argo CD dashboard, in the fat-squirrels application, I'll show you the database; this is the YAML that describes my database on RDS. It's really interesting, because what I would like to point out is that you don't need to use different tools: you just need the templates, you just need Argo CD, and then you can describe even mixed components in the same way, and I think that this is really, really great. Okay Mauro, thank you so much. I'll take the stage for the final greetings and I'll tell you what we are doing with Krateo PlatformOps in the next steps. What you saw is one year of developing this tool that, as I said, is open source; we invite the community to collaborate with us on Krateo PlatformOps, because we believe there are a lot of things still to do. The next goals we want to achieve include a cost insight plug-in for Krateo: as you can understand, once you standardize any resource and any service within a framework that leverages the Kubernetes APIs as a standard, cost insight and cost allocation become an easier task, not easy but easier. And as Mauro showed us just now, once you describe infrastructural and application components in the same way, you can also simplify the release orchestration part: your application doesn't need to be cloud native to use release orchestration that is cloud native by itself, so you can apply the same tool
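To make the "same YAML for pods and databases" point concrete: the RDS instance can be expressed as a Kubernetes custom resource that Argo CD syncs like any other manifest. A minimal sketch using the Crossplane AWS provider (an assumption for illustration; the demo may use a different operator, and all values are made up) could be:

```yaml
# Hypothetical Crossplane managed resource for the demo database;
# the provider, region and sizing values are illustrative only.
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: fat-squirrels-db
spec:
  forProvider:
    region: eu-west-1
    dbInstanceClass: db.t3.micro
    engine: postgres
    allocatedStorage: 20
  writeConnectionSecretToRef:
    name: fat-squirrels-db-conn   # credentials land in a Secret
    namespace: fat-squirrels      # that the pods can mount
```

Because this is just another manifest in the Git repository, the database and the deployments share one delivery pipeline, one diff view and one sync status, which is exactly the point made in the conversation above.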
for release orchestration to the resources that are in your Kubernetes cluster, but these resources could also be CRs, custom resources that are logical representations of something living outside the cluster. And once you have all your data centralized, as Yanis showed us, you can do autoscaling that is not just Kubernetes autoscaling but autoscaling of whatever resource, so you are able to scale based on your service level indicators or service level objectives. So thank you so much for participating in the webinar; I hope it's been really interesting and that you had as much fun as we did. Please, if you want to keep in touch with me, you can find me on Twitter, my account is braghettos, and we will share the slides so you can keep the contacts; you can write to us through our website, it is krateo.io. I hope to see you next time; have fun, be safe, thank you so much.