Hello everyone. I'm Giovanni Merlino, a researcher at the University of Messina, and I'm very pleased to present together with my colleague Giuseppe Tricomi and with Sébastien Dupont from CETIC, Belgium. We are presenting here quite a number of results and achievements from a European project called BEACON, which is about enabling federated cloud networking. Here is a quick outline; there's a lot of ground to cover, so let's skip straight to the content. So what is the problem? The problem is really about globally operating companies that may need to deploy tiers of their application across different time zones, and about diversifying their choice of cloud providers, for a number of reasons, such as minimizing cost. What is the approach, then? In this case we are talking about federating cloud networks. The project embraces two middlewares in particular, OpenStack and OpenNebula, and what we have is a number of developments that provide basic mechanisms for federating cloud networks, as well as advanced features on top of that, such as automated high availability, location-aware elasticity, and automated service function chaining. What are the benefits of federating virtual cloud networks? Since we are talking about virtual networks, we get flexibility, and security by means of isolation; with network federation we also avoid managing networks as single instances, and can instead manage all the networks as one entity via APIs and tools. What cloud federation types did we identify? We identified three. The first is peer federation, where cloud peers interact through their administration APIs. The second is a hybrid scheme, where one cloud interacts with other clouds via their user APIs.
And the third type is brokered federation, where an external entity, a multi-cloud orchestrator, is in charge of interacting with all the clouds. In this project we have covered the second and third options in particular. What we have enabled are loosely coupled scenarios, where on one end we can federate private clouds, such as the OpenNebula instance on the left, with public cloud instances, for instance AWS, and, as we see, with a number of OpenStack instances. But we also tackled the interop challenge, that is, the opportunity to have OpenStack and OpenNebula in particular interoperate. Here we have a high-level overview of the BEACON architecture. Let's focus on the blue boxes, where we have a federation management system that interacts with the cloud management subsystems on both sides, on both clouds depicted here. Underneath, in the network management layer, we have the federation agent, which is the component tasked with actually establishing federated networks. That happens by interacting with the federated data path and establishing a federated tunnel, as we'll see in a moment, because here we have a high-level depiction of the workflow for federating networks. The federated SDN (fedSDN) is the component that gets a list of network segments; these get passed to the federation agent in the form of a number of tables covering tenants, networks, and sites, that is, clouds. The federation agent then interacts with the remote federation agent, and they come to agree on the set of networks that need to be federated.
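As a rough sketch of the agreement step just described: each agent holds a table of (tenant, network, site) rows handed down by the fedSDN component, and the two agents intersect their tenants to decide which network segments need a federated tunnel. The table layout and field names below are illustrative assumptions, not the actual BEACON schema.

```python
# Hypothetical sketch of the fedSDN "agreement" step between two
# federation agents. Field names and data layout are illustrative.

def networks_to_federate(local_table, remote_table):
    """Return (tenant, local_net, remote_net) triples to federate."""
    remote_by_tenant = {}
    for row in remote_table:
        remote_by_tenant.setdefault(row["tenant"], []).append(row)
    agreed = []
    for row in local_table:
        for peer in remote_by_tenant.get(row["tenant"], []):
            if peer["site"] != row["site"]:  # only cross-cloud pairs
                agreed.append((row["tenant"], row["network"], peer["network"]))
    return agreed

local = [{"tenant": "acme", "network": "net-a", "site": "cloud-1"}]
remote = [{"tenant": "acme", "network": "net-b", "site": "cloud-2"},
          {"tenant": "other", "network": "net-c", "site": "cloud-2"}]

print(networks_to_federate(local, remote))
# [('acme', 'net-a', 'net-b')]
```

In the talk's workflow, each agreed pair would then be realized as a federated tunnel via OpenFlow on the southbound side.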
And then it passes that information on one end to the federation data path below, over the southbound API by means of the OpenFlow protocol, to establish the federation tunnel, and it also relays the information westbound to the corresponding SDN controller, in order to attach the local networks to the federated data path. OK, here I hand over to Giuseppe. [Giuseppe] Hello everybody. I will start by giving you a picture of the scenario in which the BEACON broker works. We have several clouds spread over a wide geographical area and one or more federation tenants, each of which provides its own customers with a fully federated user experience. The federation tenant has contracts with all, or a subset, of these clouds, and its customers deploy their applications simply by selecting the components and the areas where those components will be deployed. To make this possible for the customers of the federation tenant, we created the BEACON service manifest, a custom extension to the HOT standard that makes it possible to manage geographic placement via a new orchestration template resource named OS::Beacon::GeoReferencedDeploy, which is essentially a container for the GeoJSON element describing the area of space in which the cloud needs to be present. The second important element is the component grouping functionality, provided by the resource OS::Beacon::ServiceGroupManagement. This resource has two important property fields: geodeploy, the link to the previously described resource, and group, a list of the resources contained inside the stack. And the last element is related to elasticity management.
That resource is OS::Beacon::ScalingPolicy, composed of three main elements: the policy type, the link to the geographical area in which the resource is deployed, and the group monitoring, which is the link to the service group management resource. What the BEACON broker does is set up the federation process for network sharing: it invokes the fedSDN services in order to create the network tables and all the other state needed to share the network between the two clouds; it instantiates the resources and also activates the whole monitoring process linked to those instantiated resources; and it manages geographical placement and deployment. Let's go into the detail of the geographical deployment. From the BEACON service manifest, the broker takes the geo-referenced resource and queries a MongoDB instance, which returns a set of clouds, the endpoints of those clouds, and also the broker's credentials used to interact with them. After this, the broker extracts each HOT manifest contained inside the BEACON service template and provides it to the right cloud for deployment. Now we will start with the video described below. We have here two clouds; they are disconnected for the moment, as we will show via a simple ping test between two VMs, one deployed in each cloud. In the second cloud we have a VM named connectivity-one. Let me skip ahead a little. Now we launch the ping, and what we see is that all the packets are lost. Next, we instantiate the service manifest on the two clouds. What is instantiated with this action is simply a stack named "federation", and it contains a single resource, a VM with a fixed IP. The second stack instantiated is nothing more than a replica of the first one.
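Stepping back for a moment, the three manifest elements described above could be put together roughly as follows. This sketch is written as a Python dict mirroring a HOT-style template (real manifests would be YAML); the resource type names follow the talk, while the exact property keys and values are assumptions rather than the project's published schema.

```python
# Illustrative shape of a BEACON service manifest as a HOT-style
# template. Resource types follow the talk; property keys are assumed.
manifest = {
    "heat_template_version": "2016-04-08",
    "resources": {
        "europe_area": {
            "type": "OS::Beacon::GeoReferencedDeploy",
            "properties": {
                # GeoJSON polygon describing where clouds must be located
                "geojson": {"type": "Polygon",
                            "coordinates": [[[2.0, 41.0], [16.0, 41.0],
                                             [16.0, 52.0], [2.0, 52.0],
                                             [2.0, 41.0]]]},
            },
        },
        "web_tier_group": {
            "type": "OS::Beacon::ServiceGroupManagement",
            "properties": {
                # link to the geo resource, plus the grouped stack resources
                "geodeploy": {"get_resource": "europe_area"},
                "group": ["web_server_1", "web_server_2"],
            },
        },
        "migration_policy": {
            "type": "OS::Beacon::ScalingPolicy",
            "properties": {
                "policy_type": "time_based_migration",  # assumed policy name
                "geodeploy": {"get_resource": "europe_area"},
                "group_monitored": {"get_resource": "web_tier_group"},
            },
        },
    },
}

print(sorted(manifest["resources"]))
# ['europe_area', 'migration_policy', 'web_tier_group']
```

The broker would extract the per-cloud HOT manifests from such a template and submit each one to the cloud selected by the GeoJSON query.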
We follow this approach because we need to minimize the migration time, in order to minimize the latency perceived by the users of the customer's application. When the instantiation of the VMs is finished, the monitoring policy is started and the replica VM is powered on, while the previous VM is shut off. At this moment we have the stack instantiated on the first cloud and on the second cloud, and it takes some time; let me speed up. The VM is being generated; there is a check to verify that the VM is the same one created by the system, and now we can see the output produced by the gcloud connector inside the BEACON broker. All this information is the result of gathering data from the two clouds in order to build the tables needed by the federation agent and the fedSDN, and these tables are used to create a connection between the two clouds. The VM in the second cloud is shut down, and the policy management takes over. The policy is started: what we can see is a line displaying the ID of the cloud, the working time in that cloud, and the gap with respect to UTC of the active cloud. After this, I change the clock of my laptop, where the BEACON broker runs, and this is reflected in the working time of the active cloud; in the second line the statistic shown is 182. This means the threshold is exceeded and the migration is started: the VM is moved from the first cloud to the second cloud, which translates into a shutdown on the first cloud and an activation on the second one. Now we test the ping between the two VMs after this migration: from the VM created via the service manifest, we ping the connectivity VM in the second cloud, and the ping works. That's all from my side; let me hand over to Sébastien. [Sébastien] Hi everyone. I'm going to talk a bit about security considerations, because when you federate clouds, new security problems arise.
You can have problems of trust between the different clouds that you want to federate; for instance, there may be clouds in the federation that you don't trust. By default you should not trust any of the clouds, and trust should be specified explicitly. At the global level, the federation administrator may want to put global security policies in place; for instance, intrusion detection across the whole federation, independently of the individual clouds. There is a bunch of security tools you may want to use: firewalls and so on. Since we are doing cloud stuff here, we can use something called network function virtualization. A simple explanation: when you put a firewall inside a virtual machine, that's a virtual network function. You can scale up to get elasticity: if suddenly there is a lot of traffic going to your cloud, you just add more virtual machines to the firewall so it can accept all the traffic, and when the traffic scales down you remove the superfluous virtual machines. Something else you can do is chaining. In this example we have three virtual network functions: one is a firewall, another is an intrusion detection system, and the third is some kind of monitoring, just for logging. In red you have the default data path: every packet goes through the firewall, then through the monitoring, and is then synchronously delivered to its target inside your cloud. But you may also want to run intrusion detection on those packets, and intrusion detection is often quite heavy, so you don't want to do it synchronously; you do it on the side. Now let's look at a very concrete example of network function virtualization and service function chaining, which is anomaly detection.
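Before the concrete example, the chaining pattern just described, a synchronous firewall-plus-monitoring path with an asynchronous intrusion-detection tap, can be sketched as follows. All function names and the packet representation are illustrative, not any real VNF API.

```python
# Minimal sketch of the service chain: packets traverse the firewall
# and the monitor synchronously, while a copy is queued for the
# heavier IDS to inspect off the critical path.
from queue import Queue

ids_queue = Queue()          # packets mirrored to the IDS, inspected later

def firewall(pkt):
    """Toy firewall VNF: drop FTP control traffic (port 21)."""
    return pkt if pkt.get("port") != 21 else None

def monitor(pkt):
    """Lightweight logging VNF."""
    pkt["logged"] = True
    return pkt

def chain(pkt):
    """Synchronous service chain with an asynchronous IDS tap."""
    ids_queue.put(dict(pkt))          # async branch: copy for the IDS
    for vnf in (firewall, monitor):   # sync branch: default data path
        pkt = vnf(pkt)
        if pkt is None:
            return None               # dropped by a VNF
    return pkt                        # delivered to the target VM

delivered = chain({"src": "10.0.0.5", "port": 80})
print(delivered, ids_queue.qsize())
```

The point of the tap is that the IDS can lag behind the data path without adding latency to delivered packets, which is exactly why the talk keeps it out of the synchronous chain.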
In this example we are just looking at one of the clouds of the federation, because we will basically apply the same rule to all the clouds in the federation. In this case we have two network functions, and we want to detect anomalies. The policy here is that we don't want FTP to be used on our cloud, because it's insecure and nobody should be using it anymore. The deep packet inspection will detect that there is an FTP transfer, and from there it will, on one side, use the OpenStack security groups to quarantine the VM, and on the other side it will tell the software firewall to drop all traffic from that VM, so it is properly quarantined. That's one use case. Another one would be encryption, where there is a cloud whose contents you don't really know and don't trust at all; in this case two of the VMs in that cloud are actually compromised, but we still want to be able to talk to VM number three. What happens is that between clouds one and two, which trust each other, traffic can travel unencrypted, but as soon as you want to talk to VM number three in the untrusted cloud, all the traffic is encrypted. So that's another use case. OK, now back to OpenStack, because that's why we are here. OpenStack provides a nice little tool for doing network function virtualization and managing the chains of network functions that you can have, and it's called Tacker. Here is the architecture of Tacker: basically you have two big blocks, orchestration, which is where you chain the different network functions together, and the management of the virtual network functions themselves, where you create, delete, or update the network functions. On the software side, for intrusion detection we first looked at ntop and nDPI, which is a deep packet inspection library, and it will talk to the OpenStack security groups. In the future, we
will be looking at another tool called Snort, and on the firewall side it's probably going to be pfSense or OPNsense, which will allow us, in this case, to detect FTP traffic and quarantine the compromised virtual machine. On to a little demo. We first create what we call a virtual network function definition, which is basically a template for your DPI network function; it's created, and now we can instantiate a virtual network function, which is a DPI virtual machine. There we go. Next we check in the OpenStack dashboard that everything is actually created: there's my compromised virtual machine, which is simply a virtual machine inside my cloud, then the virtual network function definition, and finally the virtual network function instances themselves. What we're going to do now is try to access the FTP server on the compromised virtual machine and watch the DPI output to see whether the FTP traffic is detected; once it's detected, we quarantine the virtual machine. On the left you have the DPI output, the logs, and on the right somebody is trying to access the FTP server. There we go: the FTP transfer has been detected. Now we look at the virtual machine and check that it has been properly quarantined: instead of the default security group, it is now associated with a security group called Quarantine, which doesn't allow it to be accessed or to access anything outside the cloud. OK, so, coming full circle, we'll also look at another important topic related to the networking part: having the tools to really debug and troubleshoot the networks. In this case we're talking about network visualization. There's a project for that called Skydive, and there are connectors for OpenStack; what's interesting is that the BEACON project has extended this tool, as we'll see in a moment. In our federated environment we have
the analyzer and a number of agents deployed on the hosts and the networking elements. Here is the architecture of Skydive: as we said, the analyzer and the agents, plus a graph processing engine and a graph description language that give us ways to capture flows and act on them, so we can implement some sort of detection. Here we have a bird's-eye view of a BEACON environment deployed, in this case, across two clouds. We can see the Skydive web UI and this very high-level overview of the network, but of course we can zoom in and get to the single-cloud view. In this case, for instance, we have a controller and two compute nodes, and we can see the small two-element blocks, which are the VMs deployed, in this screenshot, on the computes. Here we have a single-node view, which is useful because the visualization is color-coded: the host where the agent is deployed is depicted in green; ports which are currently down are depicted in red; the tunnel endpoints, which are instantiated by OVN (OVN establishes the overlay networks by interconnecting OVS switch instances), are depicted in light gray; and so on. All the information is available, and when we click on one of the elements, in this case the element in the center, which is gray but has a black-and-red OpenStack icon on top of it, we can see in the right sidebar all the Neutron-related information and metadata. What are the project's contributions to Skydive? Real-time traffic statistics visualization, overlaid on top of the topology; calculating aggregated traffic over the federated tunnel and showing bandwidth consumption on that tunnel; visualizing network load, showing
L2 bandwidth right on the topology; highlighting, that is, color-coding, network links, based also on thresholds if you want; and determining bottlenecks in each cloud and on the cloud interconnect, on one hand. On the other hand, we also have the tools for real multi-region network topology visualization, because we have enabled the definition of multiple separate clouds and their network interfaces, so we can explore them, and we can also group each cloud network with all its components in a specific area, which is good for the usability of the tool. What are the impact and the benefits of the activities within this project? The integration of network virtualization and software-defined networking mechanisms and tools with cloud middleware. It's very important to stress, especially in this venue, that the code originating from this research is being published under open-source licenses, and some results have already been fed back upstream: we are talking, for instance, about OVN, about OpenStack, especially the Neutron subsystem, and about OpenNebula, which is the other middleware. Here is the BEACON website, www.beacon-project.eu, where you can find all the information you want, including the deliverables, which are available to the public. And here we have the consortium at a glance: the University of Messina, CETIC, IBM, OpenNebula Systems, Universidad Complutense de Madrid, FlexiOps, and Lufthansa. I also want to stress that this is a Horizon 2020 project funded by the European Commission. I'd like to ask you to please help us by answering our brief survey if possible; that is the link. And we are open for questions. Any questions? Thank you very much.