Yeah, hello everyone, I'm Shay from the Cloudify team. We are in the orchestration space, and today I'm going to talk about edge orchestration. As all of you know, the big master cloud is going to be broken up into edge clouds, for many reasons. There are many use cases, like connected cars, augmented reality, and smart cities, but there are two main reasons. The first is latency: sometimes you need to respond immediately, in less than 20 milliseconds, so you cannot send all the requests and responses to the main cloud; you have to process them at the edge. The second main reason is that you don't want to send an enormous number of data points, what I call a tsunami of data points, from the edge to the main cloud. So you are going to have thousands or hundreds of thousands of clouds. You can see here that you can have an edge cloud on a ship: there is a cruise ship where people want to gamble and play games, and the same on airplanes, and the bandwidth is scarce. Sometimes a satellite connects it, and sometimes there is no network connection at all. So there are a lot of challenges in running an edge cloud. One of the challenges is resource constraints: you need a very low footprint, and that's what I'm going to show in the demo. Together with one of the telcos, we created Kubernetes on top of OpenStack, and we used functions as a service rather than containers, because even though containers are ephemeral, they are a big unit of work and they stay up forever. So we used functions as a service: we used Kubeless to run different car types, and I will show this. Just to complete the list of challenges, there are also a lot of security issues, bandwidth cost, and scale. Sometimes the edge is not connected to the master, so it needs to work autonomously. So you need an edge orchestrator that handles all the lifecycle operations, including healing and scaling.
Let me get into the demo, because I have only 10 minutes. As I said, a container is still a large unit of execution. We moved from monolithic applications to VMs, to containers, and now we are moving to functions as a service. Functions as a service are good for event triggering; they are not good for long-running processes. If you have a daemon that needs to run forever, don't use FaaS for that. In a nutshell, there are two models, or even three, for managing an edge. One is a controller plus a master orchestrator: the master orchestrator is responsible for all the work and just sends commands to the controller, which takes actions at the edge. A more advanced model is an autonomous edge orchestrator that runs its own lifecycle events. When there is no network connection, the agents know to aggregate all the information locally, and when the connection returns they synchronize with the master. The master serves to manage all the edge clouds, and if you learn something at one of the edges, you can tell the master to propagate that knowledge to the other edges. So at Cloudify we developed a manager of managers, a way for the master manager to communicate with the edge orchestrators; for example, in the NFV case, to provision and manage VNFs at the edge, and even lighter workloads as functions as a service, which I will show you now. The demo we did is based on a master cloud that is based on ONAP. It has Kubernetes, and Cloudify is the orchestrator. We presented everything in Grafana, with Prometheus collecting the metrics, and we designed everything in SDC, the Service Design and Creation component of ONAP. Then at each one of the edges we simulated car traffic: each car sends its location.
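The autonomous model described above can be sketched roughly in code. This is a hypothetical illustration, not Cloudify's actual agent implementation: the edge keeps handling lifecycle events while disconnected, buffers what it learns, and flushes everything to the master once the connection returns.

```python
import queue

class EdgeAgent:
    """Hypothetical sketch of an autonomous edge agent: it works while
    disconnected, buffers events locally, and syncs with the master
    orchestrator when the network link returns."""

    def __init__(self):
        self._buffer = queue.Queue()
        self.connected = False

    def record_event(self, event):
        # Lifecycle events (heal, scale, telemetry) are handled locally
        # first; the master is only informed when connectivity allows.
        self._buffer.put(event)
        if self.connected:
            self.sync()

    def sync(self):
        """Drain all buffered events; in a real agent this would be an
        API call that pushes them to the master orchestrator."""
        sent = []
        while not self._buffer.empty():
            sent.append(self._buffer.get())
        return sent
```

The point of the buffering is exactly what the talk describes: nothing is lost during a network outage, and the master only needs to catch up, not to drive the edge.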
The motivation: how many times have you used Google Maps or Waze, run into a traffic jam, and only then been told to turn right, when you're already stuck? So we calculated a rectangle around each one of the edges and counted the number of cars, the density, and we knew that if there are many cars at that edge, it may be worth sending a car through a different route so it gets to the destination faster. We simulated this with an IoT gateway sending requests, and the platform, the function, created a prediction and told each car where to go. The architecture was like this: the car sends a request to the MQTT gateway, which then sends it to a FaaS engine. We modeled everything with TOSCA, and TOSCA creates a graph of nodes and relationships, so we could manipulate everything in real time. For example, if I discover a new car type, say a Mazda, I can add it to the model, to the graph. Think of the graph in memory: I can add a new car type with all its properties and attributes. We kept all the locations in a persistent MongoDB database and presented them here. Let me show you how it looks. You can see here in Kubernetes all the pods, the different car types and the different functions. And let me run the demo; it's a recorded demo. We present this on a Google map, and those of you who are familiar with Boston will recognize the traffic jams there. For each car we calculated the density and sent predictions telling the cars where to go. Let me find this... so you see, this is the prediction of where to go. Each dot here represents a different car type; we have Mazda and Toyota. And here, at the Longfellow Bridge, we created another TOSCA node that created a traffic jam, and the cars need to get to this destination over here.
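The density check described above is simple enough to sketch. This is an illustrative reconstruction, with made-up coordinates and a made-up threshold, not the demo's actual function: each edge owns a bounding rectangle, we count the cars reporting positions inside it, and suggest a reroute once density crosses the threshold.

```python
# Hypothetical sketch of the per-edge density check: count cars inside
# the edge's bounding rectangle and decide whether to reroute traffic.
# Rectangle values and threshold are illustrative assumptions.

def in_rect(lat, lon, rect):
    """rect = (min_lat, min_lon, max_lat, max_lon)."""
    min_lat, min_lon, max_lat, max_lon = rect
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

def should_reroute(car_positions, rect, max_cars=50):
    """True if the edge's rectangle is denser than the threshold."""
    density = sum(1 for lat, lon in car_positions if in_rect(lat, lon, rect))
    return density > max_cars
```

In the demo this kind of check would run as a short-lived function triggered by each batch of MQTT location updates, which is exactly the event-driven workload FaaS is good at.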
You will see that some of the cars get rerouted through a different route and reach the destination faster. I'll fast-forward because I have limited time. You see, some of the cars take a different route and go from here. Okay, so the car that was rerouted has already reached the destination while the others are still driving. And everything here was done with functions as a service on top of Kubernetes using Kubeless, on top of Akraino at the edge. Some providers actually want to build all of this with Kubernetes on bare metal; they don't even want to use OpenStack, they just want to run on bare metal because of the scarce resources. We presented everything in Grafana, with Prometheus as the agent that collected the data, so you can see the function failure rate and the function call rate. And when we scaled, there are actually three ways to scale this prototype. One way is to scale the functions: add more function instances for the same car type if there are more Toyotas, for example. The second is to add more pods; Kubernetes already takes care of this, and it's simple. And there is also a way to add more nodes: we implemented the provider interface in Kubernetes, so we can add more nodes to a running Kubernetes cluster and they will join the cluster. You can see here that when we create load, there is a high function call rate, and when we scale, it goes down. Now, I have one more minute to summarize. There are many use cases; this is only one example. You have smart cities, transportation, defense, energy, et cetera. And everything here was achieved using a TOSCA model; in TOSCA it's very easy to define a topology and to define microservices.
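The three scaling levels mentioned above can be sketched as a simple decision function. The thresholds, the calls-per-second metric, and the layering order are illustrative assumptions, not the prototype's real policy; the idea is just that you exhaust the cheapest layer (function replicas) before touching pods, and pods before nodes.

```python
# Rough sketch of the three scaling layers: function replicas first,
# then pods, then cluster nodes. All limits are invented for illustration.

def pick_scaling_action(call_rate, replicas, pods, nodes,
                        max_rate_per_replica=100,
                        max_replicas_per_pod=10,
                        max_pods_per_node=20):
    """Decide which layer to scale given the current function call rate."""
    if call_rate / max(replicas, 1) <= max_rate_per_replica:
        return "ok"                    # current capacity is enough
    if replicas < pods * max_replicas_per_pod:
        return "add_function_replica"  # cheapest: more instances per car type
    if pods < nodes * max_pods_per_node:
        return "add_pod"               # Kubernetes schedules the new pod
    return "add_node"                  # provider interface adds a cluster node
```

On a real cluster the call rate would come from the Prometheus metrics shown in Grafana, and the "add node" branch corresponds to the provider-interface work described in the talk.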
In TOSCA you have a node that is contained in another node, a node can be connected to another node, and you can define different services and different blueprints, with a master blueprint that connects everything. For example, you can run a VNF in its own blueprint, then add another blueprint on the fly and connect it to the graph. Everything works like Lego blocks: the moment you define things in a modular way, you can extend them and slice and dice them like Lego, and that's very easy. So it was pretty easy to define this whole demo using a TOSCA model, and we at Cloudify orchestrate things using a TOSCA DSL. I'm out of time, so thank you.
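The in-memory graph of nodes and relationships can be sketched like this. It is a minimal illustration of the TOSCA-style model described in the talk, not Cloudify's API; the node names and the `color` property are made up, but it shows the two relationship kinds mentioned (contained in, connected to) and splicing a new car type into a live model.

```python
# Minimal sketch of a TOSCA-style graph: nodes with properties, plus
# "contained_in" and "connected_to" relationships, and adding a new
# node (a newly discovered car type) to the running model.

class Node:
    def __init__(self, name, **properties):
        self.name = name
        self.properties = properties
        self.contained_in = None   # at most one hosting node
        self.connected_to = []     # arbitrary peer nodes

class Model:
    def __init__(self):
        self.nodes = {}

    def add_node(self, name, contained_in=None, connected_to=(), **props):
        node = Node(name, **props)
        if contained_in:
            node.contained_in = self.nodes[contained_in]
        for target in connected_to:
            node.connected_to.append(self.nodes[target])
        self.nodes[name] = node
        return node

model = Model()
model.add_node("kubernetes_cluster")
model.add_node("faas_engine", contained_in="kubernetes_cluster")
# Discover a new car type at runtime and connect it to the FaaS engine:
mazda = model.add_node("mazda", connected_to=["faas_engine"], color="red")
```

This is the Lego-block property: because each node only declares its relationships, a new blueprint fragment can be attached to the existing graph without touching the rest of the model.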