To start. Okay. My name is Shaina. I'm from the Cloudify team, and what we do at Cloudify is orchestration, specifically TOSCA orchestration. I'm going to talk about how to manage multiple Kubernetes clusters with Helm, that is, how Helm lets us package Kubernetes workloads across clouds.

All of us know that we have hybrid clouds, so we can have some workloads on AWS and some workloads on-prem; it could be OpenStack, it could be Kubernetes, it could be bare metal, whatever, and we can have Kubernetes clusters everywhere. In this presentation I will focus on Kubernetes.

But first, why do we need multiple clouds and multiple clusters? One reason is different functionality: each cluster runs different things. One could run a database cluster, one could run a CRM application, and so on. What we see more and more in the market is that you take, for example, one app and you want to distribute it across multiple clusters, and there are a few reasons for that. One is redundancy: if one cluster fails, you want high availability. The second is load balancing and performance: you can provision some components here and some components there, load-balance across them, and get better performance. And of course proximity. Think about Asian users and European users; each one wants to reach its closest data center as fast as possible. So if you are an application shop, you can distribute your application everywhere, and each user will get to the nearest data center the fastest way.

If we look at Kubernetes as the infrastructure: we at Cloudify build on TOSCA, which stands for Topology and Orchestration Specification for Cloud Applications. It's an OASIS standard, and we created in TOSCA an abstraction layer that can go and provision workloads on each one of the Kubernetes clusters. So, just a few words about TOSCA. In TOSCA, in a nutshell, you have a node.
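To make the node-and-relationship model concrete, here is a minimal Python sketch of a TOSCA-style graph. The class names, attribute names, and wiring logic are invented for illustration; this is not Cloudify's or the TOSCA standard's actual API, just the idea of nodes connected by "contained in" and "connected to" relationships.

```python
from dataclasses import dataclass, field

# Toy model of TOSCA's core idea: nodes plus typed relationships.
# All names here are illustrative, not a real orchestrator API.

@dataclass
class Node:
    name: str
    node_type: str                      # e.g. "VM" or "Application"
    runtime_attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    source: Node
    target: Node
    kind: str                           # "contained_in" or "connected_to"

vm = Node("mongo_vm", "VM")
mongo = Node("mongodb", "Application", {"port": 27017})
app = Node("web_app", "Application")

graph = [
    Relationship(mongo, vm, "contained_in"),   # MongoDB runs inside the VM
    Relationship(app, mongo, "connected_to"),  # the app connects to MongoDB
]

# On a "connected_to" relationship we can run a lifecycle operation that
# wires a runtime attribute (the MongoDB port) into the application.
for rel in graph:
    if rel.kind == "connected_to":
        rel.source.runtime_attributes["db_port"] = \
            rel.target.runtime_attributes["port"]

print(app.runtime_attributes)  # {'db_port': 27017}
```

Because the whole deployment is just this kind of graph, an orchestrator can walk it, re-run operations, and propagate attribute changes at runtime.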
A node could be a VM, or it could be an application that runs inside a VM, and you have relationships. One relationship is that one component is contained in another component; another is that one component is connected to another component, and on this connection we can run lifecycle operations: take, for example, the runtime attribute for the MongoDB port and connect it to the application in real time. So think about it as a huge graph that can connect many different nodes.

Now, workloads can span different clusters. For this case we have developed something we call a composite service: this microservice can run on one cluster, that one can run on another cluster, each has its own lifecycle operations, and we have a master blueprint that knows how to combine everything together. Everything can be manipulated in real time. It's a graph, so the orchestrator can change, delete, or add properties in real time; for example, I can add a VNF to a service chain, or I can chain components from one Kubernetes cluster to another.

So if we look at the bigger picture now: Cloudify can orchestrate multiple workloads on different Kubernetes clusters, and as I mentioned, this is the TOSCA blueprint. We have a master blueprint, you see the TOSCA graph in each one of the local Kubernetes clusters, and we can use the Kubernetes APIs to provision all the resources: pods, services, replica sets, etc. To make things much easier we use Helm charts, so you can encapsulate your workloads in Helm charts and we can go and provision them to each one of the clusters. I will show an ONAP example where we have an ONAP workload, and we connect to the Tiller server of each cluster and provision workloads on that cluster. Here the TOSCA master blueprint knows how to talk to each one of the different Kubernetes clusters and provision another blueprint.
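When the talk mentions using the Kubernetes APIs to provision pods, services, and replica sets, the objects being created look roughly like the following. This is a hedged sketch: the dicts below are in the shape the Kubernetes REST API expects for a Deployment and a Service, but the names, image, and port are placeholders, and a real integration would POST them to a cluster rather than just build them.

```python
# Build Kubernetes manifests as plain dicts, the shape an API-based
# integration would send to each cluster. Names and images are placeholders.

def deployment(name, image, replicas=2):
    """A minimal apps/v1 Deployment manifest."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

def service(name, port):
    """A minimal v1 Service manifest selecting the pods above."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {"selector": {"app": name}, "ports": [{"port": port}]},
    }

dep = deployment("crm", "example/crm:1.0", replicas=3)
svc = service("crm", 8080)
```

A Helm chart essentially packages a set of templated manifests like these, which is why encapsulating workloads in charts makes multi-cluster provisioning much easier.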
That is also a separate blueprint, and we use that blueprint to provision the workloads on that cluster. How do we do this? We have a TOSCA infrastructure provider that knows how to start a Kubernetes cluster, and if we focus on Helm, we can have a Helm blueprint where each one of the nodes is an ONAP microservice on Kubernetes, and we have a Helm plugin integration that knows how to talk to the Tiller server and provision the workload on that cluster. You just need to point to that cluster and provision the workload there. That's what we did for ONAP, but it's generically available.

In this example you can see that we have multiple ONAP clusters, and we can now go very quickly to the blueprint itself. This is a blueprint that provisions a Kubernetes cluster. Basically this is the TOSCA language; I'm not going to get into it, it has different sections, but you can see that in one of the sections we have a Kubernetes master and we have the workers here, so you can define it dynamically. The blueprint that installs on top of it is the ONAP one that we use here, and in this blueprint, sorry, one second, you can see that I point to the Tiller server here. You see the Tiller server, and I provision components on one Tiller server; I can do it for many Tiller servers. And you can see the different applications here: AAI and APPC and CLAMP, etc.
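The "point to that cluster's Tiller server and install" step can be sketched like this. The endpoints and release names are made up, and the flags shown (`--name`, `--host`) are Helm v2 syntax, since Tiller only exists in Helm v2; a real plugin would execute these commands rather than just build them.

```python
# Hypothetical sketch of a per-cluster Helm integration: for each cluster,
# construct a `helm install` invocation aimed at that cluster's Tiller.
# Hostnames and chart names are invented; nothing is executed here.

clusters = {
    "edge-1": "tiller.edge-1.example.com:44134",
    "edge-2": "tiller.edge-2.example.com:44134",
}

def helm_install_cmd(release, chart, tiller_host):
    """Return the argv a plugin might run for one cluster (Helm v2 style)."""
    return ["helm", "install", "--name", release,
            "--host", tiller_host, chart]

cmds = [helm_install_cmd("onap-aai", "onap/aai", host)
        for host in clusters.values()]
```

The point is that the chart stays the same; only the Tiller endpoint changes per cluster, which is what makes the pattern generic rather than ONAP-specific.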
To fast-forward, let me show you a video of us actually provisioning this cluster. You can see that we have multiple edges, we have the Kubernetes ONAP, and this is the master ONAP cluster, and it works with multiple clusters. Here we can see the different components of the Kubernetes cluster: the Kubernetes host and the Kubernetes workers, and Grafana and Prometheus and all the components that are needed for a cluster to run. And here we can see all the ONAP workloads that are defined on top of this Kubernetes cluster, so you can see the different applications that I mentioned before: AAI, the Portal, APPC, etc.

Now, just a few words about how we integrate with Kubernetes. In Cloudify it's not just that we can provision workloads on top of Kubernetes; we actually look at Kubernetes as a sandwich. If Kubernetes wants to scale out, it calls the provider interface, so we implemented the provider interface to add additional nodes into Kubernetes. Kubernetes calls the provider interface, "give me another VM," and we give another VM to Kubernetes. This provider that we implemented can go and allocate VMs from multiple clouds, so you can allocate from Amazon, you can allocate on-prem from OpenStack, etc. That's one integration point.

Another integration point is that we implement the service broker. The service broker allows you to access external services, like a database on a VM, as if they were internal, cloud-native Kubernetes services. Kubernetes sees a catalog of services, so Kubernetes thinks it's an internal service, but the service broker proxy knows how to connect it to the external service. So you can have a mixture of workloads on different clouds. And think now about all the use cases that we mentioned here about the edges: you can have bare-metal Kubernetes on the edge, you can have a master Kubernetes on your on-prem cloud or on a public cloud, and like that you can provision workloads and manage their lifecycle operations using Cloudify on multiple clusters.

One more thing: this whole process is based on TOSCA, and we would like to make TOSCA like Lego blocks. As I mentioned before, we have the TOSCA master blueprint and we have the different blueprints, so you can create a small blueprint for each VNF, for example, or for each microservice, and connect them using this master blueprint concept. It's very flexible like this. For example, we came to an environment and we defined function-as-a-service, as I showed in a previous presentation, orchestrated like that, even though we didn't know how to do it from the beginning. So it's very modular, and this kind of pattern is built for the unexpected future scenario. We like this pattern very much. Any questions? Okay, thank you.
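As an addendum, the provider-interface integration described in the talk, where Kubernetes asks for another node and the provider allocates a VM from whichever cloud has capacity, can be sketched in Python. The cloud names, capacity numbers, and allocation policy are all invented; this only illustrates the shape of the idea, not Cloudify's actual provider implementation.

```python
# Toy sketch of a scale-out provider: when Kubernetes asks for another
# node, pick a cloud with free capacity and "allocate" a VM there.
# Clouds, capacities, and the first-fit policy are illustrative only.

CLOUD_CAPACITY = {"aws": 2, "openstack": 1, "baremetal": 0}
_counter = 0

def allocate_node():
    """First-fit allocation across clouds; raises when everything is full."""
    global _counter
    for cloud, free in CLOUD_CAPACITY.items():
        if free > 0:
            CLOUD_CAPACITY[cloud] -= 1
            _counter += 1
            return {"cloud": cloud, "vm_id": f"{cloud}-vm-{_counter}"}
    raise RuntimeError("no capacity in any cloud")

n1 = allocate_node()   # allocated from aws
n2 = allocate_node()   # allocated from aws
n3 = allocate_node()   # aws exhausted, falls back to openstack
```

A real provider would call each cloud's API instead of decrementing a dict, but the contract is the same: Kubernetes asks for a node and gets a VM back, without knowing which cloud supplied it.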