Hello everyone, I'm Shaina from the Cloudify team, and I'm going to present our journey with ONAP: how we use Cloudify and ONAP to first bring up ONAP and then orchestrate some very nice use cases. We started with one ONAP cluster, but the demand we now see from different customers is to go multi-cloud and federated with ONAP, as well as out to the edge. As you know, ONAP now extends into Akraino, and there are many interesting edge use cases. If I have time, I will also talk about orchestration models for the edge, autonomous orchestration, and so on.

So let's start with ONAP and what it is. I will briefly go over what ONAP is. Then I will explain TOSCA; we use TOSCA as an intent-based model for orchestration. Then I will explain how we can deploy ONAP in one or more clusters on top of Kubernetes, and show some nice use cases using the ONAP SDC (the service design and modeling component), the SO (the service orchestrator), and Cloudify. I also have another use case for video streaming, and then I will move to the edge and show how we can orchestrate an edge and a master deployment.

So let's start with what you see here. The little vertical box here is OOM, the component that installs ONAP. But first, what is ONAP? ONAP is the Open Network Automation Platform. It has some non-real-time components, like the design time: the SDC, the Service Design and Creation, where you create the design and the artifacts and then push them into the SO, the service orchestrator. The service orchestrator in turn calls the application controllers: the SDNC for creating the networks and the APPC for creating the application workloads. And it has something called A&AI, the Active and Available Inventory — basically all the active infrastructure components should be registered there. If you have an edge component, it should also be registered there.
It's not meant to register subscribers or other things that come from higher-level components, but it should hold all the active infrastructure components. So this is ONAP in a nutshell, and the vertical box on the right side here is OOM, the ONAP Operations Manager, which installs and manages ONAP.

Cloudify is integrated in three main places. One is in OOM, to install ONAP on top of a Kubernetes cluster; the Kubernetes cluster could run on top of OpenStack, or on bare metal if needed. Then we are integrated into the SO: from the Beijing release, the SO can call Cloudify, and Cloudify can take a TOSCA blueprint and deploy and execute it. And we are also part of DCAE, the telemetry part, so we act as the controller at the DCAE level.

Another way to look at it: on the left side, Cloudify installs ONAP on top of a Kubernetes cluster; up there it's part of DCAE; and here you can look at it as part of the controllers. So you have the SDNC, you have the APPC, and if you want to execute a complex, distributed TOSCA topology, you use Cloudify for that. There are already several telcos that use ONAP together with Cloudify.

I'm not going to get into this in detail, but what happens here in a nutshell: you create something in SDC, it gets all the TOSCA types and creates the artifacts — actually it creates a CSAR file, which in turn is pushed to the SO. The SO has multiple ways to orchestrate things. One is the BPMN engine, a more process-oriented flow: you say I want to do step A, then go from step A to step B to step C, like an automaton. It also has TOSCA using Cloudify, which is intent-based orchestration.
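To make the intent-based style concrete, here is a minimal TOSCA-like sketch of the kind of abstraction discussed in this talk. The type names are illustrative only, not from a real type library:

```yaml
node_templates:
  site_a:
    type: example.nodes.ConnectionPoint   # illustrative abstract type

  site_b:
    type: example.nodes.ConnectionPoint

  perimeter_firewall:
    # The blueprint states only the intent: "a firewall between A and B".
    # Whether a specific vendor's firewall, or even a router with ACLs,
    # satisfies it is decided by the plugin implementing this abstract type.
    type: example.nodes.Firewall
    relationships:
      - type: cloudify.relationships.connected_to
        target: site_a
      - type: cloudify.relationships.connected_to
        target: site_b
```

The blueprint captures the "what" (a firewall between two connection points); the "how" lives entirely in the plugin bound to the type.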
You say what you want to orchestrate, not how. For example, you define the higher-level abstractions and you don't care about the underlay. Say I want a connection point between A and B: I define that connection point, but I don't care whether it runs on some kind of underlay or uses one kind of routing or another — the intent is what matters. Or, for example, I want to define a firewall between two points: I don't care if it's a firewall from one vendor or another, or even a router with ACLs. In the plugins I implement the how, but TOSCA defines the what, not the how.

So if we look at the OOM architecture: basically there is a TOSCA blueprint — I will show it to you later — that is responsible for installing ONAP. The installation is based on two steps. First, you provision a Kubernetes cluster. Second — and this is an independent step, so if you already have a Kubernetes cluster you can use just the second blueprint — you install the ONAP components, which are many containers, services, and pods, on top of Kubernetes. This is one cluster: you see at the top the Kubernetes master; we talk to the Kubernetes master and we can provision workloads and all the resources on top of Kubernetes, meaning the services, the pods, the networking, everything. And it can run on OpenStack or on bare metal. At the bottom, we can deploy another ONAP cluster.

The reason for having multiple clusters is high availability and redundancy, load balancing — sometimes you want to distribute the load — and proximity. So you can have multiple ONAP clusters and balance the load between them. These are the ONAP services: you have the message bus, you have the SDC, the A&AI, and all the other containers — there are many, many containers in ONAP.
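The two-step installation just described might be sketched, in simplified TOSCA-style YAML, roughly as follows. The node and type names here are illustrative, not the actual OOM blueprint:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

inputs:
  worker_count:
    description: Number of Kubernetes worker nodes to provision
    default: 3

node_templates:
  # Step 1: provision the Kubernetes cluster itself
  kubernetes_master:
    type: example.nodes.KubernetesMaster      # illustrative type name

  kubernetes_worker:
    type: example.nodes.KubernetesWorker      # illustrative type name
    capabilities:
      scalable:
        properties:
          default_instances: { get_input: worker_count }
    relationships:
      - type: cloudify.relationships.depends_on
        target: kubernetes_master

  # Step 2 (independent): install the ONAP components as pods and services.
  # If a cluster already exists, only this part is needed.
  onap_components:
    type: example.nodes.OnapHelmRelease       # illustrative type name
    relationships:
      - type: cloudify.relationships.contained_in
        target: kubernetes_master
```

Because the two steps are independent node groups, the second can be pointed at an existing cluster instead of the one provisioned in step one.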
Before I continue further, I first want to talk about TOSCA: what TOSCA is and how we use it. As I said, TOSCA is intent-based; it defines what we do, not how we do it. In Cloudify, we take a TOSCA blueprint and push it into the Cloudify orchestrator, which parses it: it has a core component that knows how to take the blueprint, separate it into its smaller components, parse it, and create a plan. Then we have plugins that can interface with different systems: any cloud, so we can provision workloads, networks, and applications on AWS, OpenStack, VMware, and so on; any configuration management tool, like Chef, Puppet, or Ansible; any networking tool, like NSX and others. And of course — I will touch on Kubernetes and containers in more detail — it can provision workloads on top of Kubernetes, but it can do more with Kubernetes than that.

So it's model-driven automation with governance: you can say who can execute blueprints and who can view them, you can have different roles like admin and user, and everything lives in a tenant — it supports multi-tenancy, and so on. And it's a federated model of a master orchestrator and edges. When you come to the edge, things get more complicated, because sometimes the edge has no connectivity to the master orchestrator. What do you do then? The edge should work autonomously, so you need copies of your orchestration both at the master and at the edge, and when there is no connectivity — think about ships or airplanes — you still need to orchestrate the lifecycle operations of the workloads there.

And we are open source, so basically it's like building a Boeing: you have many, many vendors, each building one of the parts. But what's different here is that our parts are very dynamic. Each vendor creates a part, and that part is constant, it doesn't change — but in our world everything changes so fast, and we need to
make sure we can orchestrate at scale in very dynamic environments. This I took from the TOSCA definition, but let's make it simple. Here I have an example of what TOSCA is. TOSCA defines a graph of components — think about the graph in memory — and each component has relationships. One relationship type is contained_in: in this example you can see that I brought up a VM, which is a node, we have a JBoss container contained in that VM, and we have a CRM application contained in that JBoss container. So the VM is of type Compute, the container is of type JBoss, and the CRM is of type Application. I can work in an object-oriented way: I can write my own types in TOSCA and extend them from a base root type. Another relationship type is connected_to, and you can see that the CRM application is connected to an Oracle database running on a different VM. On the relationship I can define lifecycle operations — for example, I can take runtime attributes from Oracle, like the port Oracle listens on, or some other information, and send it at runtime over the relationship to the CRM application. Think about edges in these terms: you can create a topology of an edge and a master and the things between the two, you can model a VPN as a relationship, and so on.

I'm not going to go deeper into TOSCA, just show you the overall picture. You create the TOSCA blueprint; the blueprint is the input — the domain model — to the orchestrator; the orchestrator parses the TOSCA model and creates the different nodes and arrows and connections. There is an embedded install workflow, the default workflow, that knows how to run over each of the components, instantiate them, and provision them. You can also define your own workflows on the graph, and you can update the graph in real time: for example, you can add a node, remove a node, or change the properties of a
given node, and the orchestrator is responsible for executing the change. Then there is the plugin concept, which extends the core by interfacing with many different systems. Say tomorrow I want to interface with some kind of authentication: I can write a plugin for LDAP. There are many plugins — for the different clouds, for domain orchestrators like Kubernetes, for the different configuration management tools, and so on.

To create a service, you create one or more blueprints, and after you create a blueprint, the same blueprint can be instantiated and executed in different locations using different inputs. The trick here is that you can use the same blueprint but give it different inputs, so it creates different things based on the same core blueprint, and eventually it creates the graph I mentioned. For example, if you have multiple domains that are similar but take different inputs, each one can have different IP addresses, different management administrators, and so on. Think about edges: you can group edges, and the edges in the same group are the same but sit in different locations, so their details can differ. You can push the same blueprint to all those edges, just with different inputs.

If we look now at this TOSCA example, we have a VM here; we can create a group of the VM and all its components, like the IP address and so on. We connect the VM, as I mentioned, to a Tomcat container and then to a Mongo database. And here things become more interesting: we can create a composite service. Basically, say we have multiple blueprints; you can think of each blueprint as a microservice — microservice one and two — and each one gets its own inputs, has its own lifecycle operations, and reports back to the master
blueprint. And you can do this on the fly. You can logically separate your application: you can have each VNF in a separate blueprint, have a master blueprint, and then orchestrate those blueprints in a top-down approach. You can create a service chain and change it on the fly. For example, say you have a router and a firewall, and now you want to add a DPI device: in this example I run a deployment update and manipulate the graph in memory, so I add the additional VNF and connect it into the service chain using this deployment-update pattern. What I wanted to show here is that TOSCA is very flexible: it can manage complex topologies, and you can build a topology in a very modular way, like Lego blocks. Each component can run from its own separate blueprint, you can have a master blueprint that ties everything together, and you can change this on the fly, adding and removing components as we see here at the bottom.

Now I want to talk about how ONAP is actually installed on top of Kubernetes. Before I get into that, I want to say what we do at the Kubernetes level. We can provision workloads and resources on top of Kubernetes, but before that, you need to bring up Kubernetes itself, so there is a TOSCA blueprint that brings up Kubernetes. The challenge we encountered with ONAP is that when we installed it, the pods were scheduled everywhere across the Kubernetes workers (the minions), but they need access to a shared file system. So we created an NFS share in the blueprint and combined it all together, so every container can access the data.

Cloudify also implements the cloud provider interface in Kubernetes: if Kubernetes wants to scale, it asks Cloudify for another VM, gets it, and that VM is added as a node to Kubernetes. We also implement the service broker. What if
you need to access external services, but want to refer to them as native, in-cluster Kubernetes services? There is the service broker interface: you can implement a catalog of services, and Kubernetes will access them as ordinary Kubernetes services, even though the services could live on Amazon, or be an external database that is not part of the Kubernetes cluster, and so on.

So these are the main things we do around Kubernetes: the cloud provider interface, to add more infrastructure components — VMs, networks, and so on — to Kubernetes; deploying application workloads on top of Kubernetes; and the service broker I just mentioned. Basically we are like a sandwich around Kubernetes. The provider is implemented as a native Go application, so it can access any cloud and create resources there, and we have TOSCA blueprints that can provision workloads on multiple stacks. Say you have one Kubernetes cluster in Amazon and another Kubernetes cluster on-prem: you can use the same TOSCA blueprint to run the workloads on multiple clouds.

In this example we see how this looks. Look first at the Kubernetes side: you can see the visualization of the Kubernetes cluster, with the different networks, the security groups, and the Kubernetes nodes. We can also collect KPIs as part of the TOSCA blueprint: you can define a monitoring component in the blueprint and say, I want to collect CPU or memory — those are infrastructure metrics — but you can also collect KPIs from the application itself, for example the number of connections or anything else that interests you, and visualize it on the dashboard. This is the ONAP cluster we provisioned using Cloudify, and you can see it runs on top of OpenStack. Here I want to emphasize that you can take all the ONAP
components and provision them on top of Kubernetes, on OpenStack or bare metal — but you can also have a hybrid installation. You can define that some components run as containers (pods) in Kubernetes while others run as VMs, so you can have a mixed environment. And this is true in general — ONAP is just an example: you can have a hybrid deployment model where some components run on bare metal, some on VMs, some as pods, and I even have an example where some components run as functions, function-as-a-service, which I'll show you in the orchestration use cases with ONAP.

Just to summarize this part, before I show you ONAP itself: we have a TOSCA blueprint that defines the infrastructure. It defines Kubernetes; it starts a Kubernetes master with multiple nodes (this is configurable); and it also installs the Tiller server, which the Helm client uses — we have a Helm integration. You define in the TOSCA blueprint all the ONAP applications as TOSCA node instances, and the Helm integration provisions them on top of Kubernetes. Helm has values, global and local, and you can override those values through the Helm integration. So you can use the inputs and say whatever you want: for example, define global inputs cluster-wide, or across multiple clusters, or override those values locally or globally — and everything here is defined with TOSCA.

And of course we have the service layer, so higher-level components can interact with the REST API and call Cloudify — for example, to upload a blueprint, deploy it, and execute it, or even to trigger a workflow. Say, for example, you want scaling: you measure the KPIs and see that the scaling KPI crossed its threshold, so you send that to the OSS or another system, and that system can trigger a workflow and tell Cloudify,
for this blueprint, I need to scale out. Okay, so this is the ONAP portal after you bring up ONAP. This is the topology of ONAP with all the components: you can see the portal, the SDC, the robot console, the message bus, the APPC, the A&AI, and so on. If you do `kubectl get pods` you can see all the running pods, and the same for the ONAP services. And if we want to look at the blueprint, it's in Gerrit. You have the TOSCA blueprint here — I don't have time to go into it in detail, but it declares the TOSCA DSL version; it defines different inputs and imports (basically definitions of the TOSCA types that you import and make part of the blueprint); it has the input section for the images and the Helm version; then it creates the NFS connectivity I mentioned for all the nodes; and it defines the Kubernetes master and the Kubernetes nodes, the security groups, and so on.

Now, after I have a Kubernetes cluster running — and I can have multiple Kubernetes clusters — I can go and provision the ONAP components on top of it. Basically, in this example I point to the Tiller server, or to the Kubernetes master, and I provision ONAP on top of that Kubernetes. I can take different ONAP components and provision them on different clusters, or I can create multiple ONAP clusters and define what I want to provision where. You can see that each TOSCA node here is very similar: each is of the same ONAP-component type and defines the same things, except for the application itself — here it's A&AI, APPC, CLAMP, and so on. You can go to Gerrit — this is the link — and look at it.

Now let's look at some interesting use cases. One use case a telco did almost by itself, with an integrator, using Cloudify for two things. One was a catalog; it was for streaming video. I have different domain and network controllers
and different media environments. So they wrote an abstraction layer on top, and they used the TOSCA blueprint first for defining the catalog of services — what the user can do with it, which services the user can consume. After that they used ONAP, with Cloudify integrated as part of it: they pushed the TOSCA blueprint into the ONAP service orchestrator, and the service orchestrator called Cloudify to orchestrate the different domain controllers. At the top they used the TM Forum API layer to integrate with their OSS/BSS systems; this API layer translated requests from the OSS/BSS into Cloudify.

The next use case goes more toward the edge. We also did this with a big telco: we created an ONAP cluster with three edges, where each edge was a Kubernetes cluster. We defined the services in SDC and pushed them into the SO, and the SO called Cloudify. In this case it was a connected-car example. Think about how many times you use Waze or Google Maps, you run into a traffic jam, and it tells you to turn right — but you're already stuck in the traffic jam. We calculated a rectangle around each of the edges, we knew the density of cars there, and basically we could tell you: hey, you have to take another route.

Before continuing, I'll show you what we did and then explain the architecture. In this architecture we used function-as-a-service deployed on top of Kubernetes, and you can see here that we have different car types, like Ford and Toyota, and we used an IoT gateway for that. Let me show you what happened — I'll go quickly here. We visualized everything on a Google map. We sent information from the cars, so you can see the density information, and we also sent a prediction back to each car to tell it where to go. Anyone familiar with Boston knows you often have traffic jams here, so we created a TOSCA node to create
a traffic jam here. Each dot represents a different car type — Mazda, Toyota, Ford, and so on. Now I'll fast-forward: you see the cars getting into the traffic jam, and I send some of the cars to the destination point via another route. Basically, you see that the cars taking that route get to the destination much faster than the others.

Okay, let's continue. We used Akraino here, with Kubernetes on top of it, and Kubeless as the function-as-a-service platform. We used an IoT gateway to receive the car requests — it's a simulation of car requests, but we want to do it as a real-life experiment as well. We had the FaaS engine automatically define the TOSCA type for each car type, like Ford or Toyota. We kept all the car locations in a MongoDB database, Cloudify orchestrated everything via the TOSCA blueprint, and we visualized everything with Grafana and Prometheus.

If I go one slide ahead, you can see ONAP: ONAP orchestrated everything. We had different edges, and ONAP ran the blueprints and the workloads on all these edges. The challenge we encountered here was defining things dynamically. Say I now discover a new car type and want to add it to the model dynamically: with the capabilities I mentioned before, we can manipulate the graph in real time and add new car types to the model itself, without tearing down the deployment. And of course, when you add a new car type, you need to put the function in place, connect it correctly to the IoT gateway, receive requests, and have everything work as it was initially defined.

Okay, let me go quickly. This is the model we created in SDC: you can see the IoT gateway in SDC, the Kubernetes master, the Kubernetes IoT gateway service, and the definitions for the functions
for Mazda, Toyota, and Ford — one for each car type — and this is the Kubernetes itself with its metadata. This model was pushed to SDC as a TOSCA blueprint, it was pushed to the SO in ONAP, and the SO called Cloudify to orchestrate things.

Now, things we learned about Kubernetes with function-as-a-service: we can scale Kubernetes at three different levels. First, in Kubeless, or any other FaaS engine, there is a way to define that if there is more load for one car type — say Toyota — I can create more instances of the Toyota function to absorb it. Second, I can scale the native way in Kubernetes: you can define scaling in Kubernetes, and that's easy. And third, as I mentioned before, we implement the cloud provider interface, so if Kubernetes needs another infrastructure node to add workers, it calls the provider interface, which calls Cloudify, and Cloudify adds the node to the running cluster. All these scaling mechanisms are dynamic, and it's easy to scale at each of the levels.

Okay, so now let's talk about multiple ONAP Kubernetes clusters. You need multiple ONAP clusters mainly for load balancing and high availability, so you can create multiple clusters and define what runs where. Let's think about federated Kubernetes first. What is federated Kubernetes? In a sense, Kubernetes is federated by itself — it's node federation, running different workloads on different nodes. Kubernetes in a nutshell has an API gateway (the Kubernetes API), it has etcd for configuration, and then controllers that bring the workloads — the pods — to the desired state, as you define in the YAML files. Now go one layer above: think about the federated API, where you can say, for example, I want to run object foo on multiple clusters, and it goes to each of the clusters and executes it. In the same way, you have etcd for the federated configuration, and you have
controllers to bring things to the desired state, so you can run workloads on top of Kubernetes across multiple clusters. But sometimes the world is not only Kubernetes, and you need to tie it to things outside Kubernetes: sometimes you need to run on VMs, sometimes you need to create VPNs or other network connectivity. For that you need a glue layer on top, and here we use Cloudify to define all this federation and to define workloads on top of it.

I'll fast-forward — there are many use cases for federation, so let me go to the edge cloud, where there are many use cases too. Think about having OpenStack on each ship — there are many ships that need it, for gaming or whatever else — or OpenStack on an airplane. There are many edge use cases, IoT, and so on. As Michael Dell said, the edge will be much bigger than the cloud itself. Think about it: tomorrow the central cloud is going to be broken into smaller clouds, the edge clouds.

The reasons are two-fold. One is latency: you cannot send everything to the main cloud — say you need latency under 20 milliseconds, then you need an edge. And the second: you don't want to send an enormous number of data points — I call it the tsunami of data points — to the master cloud, because it's a lot, you're going to overwhelm the master cloud, and you cannot keep all that information. So the two main reasons are latency and the volume of data points from IoT, AI, AR, machine-to-machine, smart cities, and so on.

How are you going to manage this? I already touched on the serverless edge, but the challenges are enormous here. You need to define a complex model, and you need to do service composition across multiple masters and edges. What do you do when there is no network between the edge and the master? The edge should work autonomously. Also, the
edge has limited resources — it's resource-constrained. What do you do about security: who is allowed to talk from the edge outward, or into the edge? What do you do about security and tenancy inside the edge itself? And many other things — think about a satellite environment where bandwidth is scarce. How do you manage all this? I'll just quote someone who said that even from an operational point of view, we are going to have more edges than people — so how can we manage this?

Just to finish this — I don't have enough time — there are several models for defining and orchestrating an edge. One model is a master orchestrator that uses a control component at the edge, but the master orchestrator manages everything and just sends operations to the edge. A more federated, or distributed, way is to have a local edge orchestrator that is autonomous: it runs the lifecycle operations — configure, provision, install, manage, even healing and scaling — at the edge itself. And there is the master orchestrator: it connects to the local orchestrator, and when there is no connectivity, the edge works on its own; when connectivity returns, the edge sends the data it collected up to the master orchestrator.

I believe that not far from today we are going to have lots of edges, with connected cars, augmented reality, and IoT. The master cloud will serve as a learning point: say you learn something at one of the edges and send the data to the master orchestrator, which can forward it to the other edges, or do AI at the master cloud. But the edge will need to work autonomously and be smart enough to manage all the components connected to it. And just to finish — I don't have time to go into the federated model, managers of managers, and cross-edge workflows — I want to close with some examples. Basically, we want to have a service
composition using TOSCA, and you want it to work like Lego blocks: you can combine different components together, create a master service, create multiple master services, have a catalog, and everyone can consume each of the services. This could be used for smart-city transportation, for branch offices — we have the vCPE and SD-WAN solutions today — for smart homes and cities, for military and defense, for energy, and so on. There are lots of use cases, and the challenge is to make it simple: to define the topology and what you want to do, intent-based, without getting into the complexity. I see that I'm right on time, so let's see if someone has questions.

Question: I have several questions, but I'll try to formulate them as a single one. Are you using the standardized TOSCA, or do you have your own flavor?

Answer: Yes, we are part of OASIS TOSCA and we support the TOSCA standard. We often run ahead of TOSCA, so we need to define our own types, push them back to the TOSCA committee, and try to convince everyone that they are needed — and this is based on real use cases.

Question: And accordingly, could the blueprints be used by other vendors as well? How deeply do I lock my infrastructure into Cloudify if I'm using it? I mean, if I create a blueprint, the blueprints are probably usable only by Cloudify.

Answer: You can define the TOSCA types and use another parser to parse them.

Question: Okay, thank you.

Another question? Okay, thank you.
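A footnote on that last exchange about portability: a blueprint that sticks to the normative TOSCA Simple Profile types can, in principle, be consumed by other TOSCA parsers; lock-in comes from vendor-specific types and plugins. A minimal standard-profile sketch (the template itself is illustrative):

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  node_templates:
    web_server:
      type: tosca.nodes.Compute        # normative Simple Profile type
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
```

Nothing here is orchestrator-specific; an extended blueprint becomes tied to one orchestrator only to the extent that it imports custom types whose implementations live in that orchestrator's plugins.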