Hi, hello. My name is Aleš Černivec — please don't try to pronounce that. I'm a developer at a company called XLAB, based in Slovenia, and we are working on the Contrail project, which is basically about federation: we're building a federation of clouds. Luka will also present Koofr, a sub-project — now already a startup — that emerged from the Contrail project and also operates within XLAB. First, a little introduction. Cloud systems as we know them now — for instance Amazon, Google, Apple, Microsoft — are, we think, only a start. We're moving from these big, massive providers to more specialized ones. On one hand this is good, because new markets emerge: developers get a wider choice, and consequently a large number of consumers get a larger choice as well. Today we have big but somehow limited cloud providers. On one hand there is unused hardware, like big data centers at universities and schools. On the other hand there are businesses and data providers who want to use such resources and need to share data. This is where federation comes in. There are also other issues, for instance security: you may not want to put your data in a specific geographical location. This is most evident with data providers, which currently offer only an API to their services, running on their own infrastructure. OK, let's move to applications. What is needed when a developer creates an application and wants to deploy it on a provider? First, the developer needs to choose the right provider, and in order to do that, he needs to get to know each platform and set up deployment configurations that are specific to that provider. So with a large number of platforms, we get different user experiences and different user interfaces.
And the last thing: if the developer needs to deploy his own image, how does he do that? There may, for instance, be a marketplace with already-provided images, but if the developer needs a specific image of his own, it is a bit difficult to move that image to the provider — and, for instance, between providers. This is where federation can help. In the Contrail project we're building a federation of clouds, which means we're creating an abstraction over providers. This will enable the user to automatically select a provider and deploy to it, based on an Open Virtualization Format (OVF) document describing the application and on the terms given in the service-level agreement (SLA) document. The Contrail project provides a unified approach to do that. Contrail is a European Seventh Framework Programme (FP7) project, started in 2010, with a consortium of 11 partners. The idea is to create an integration software stack with standard interfaces for cooperation and resource sharing within a federation of clouds. So we are building an abstraction over IaaS, which gives a unified way to describe and create an application, and also provides supporting services such as aggregation of monitoring metrics at the federation level, feeding the aggregated monitoring data into the accounting service, and supporting billing services. This is an image of the supporting services in the federation. We also have our own PaaS, ConPaaS. It uses the federation just like any other consumer would use an IaaS provider: it uses the API provided by the federation and all the features of Contrail. We also have components such as XtreemFS, a global autonomous file system that provides a unified way to exchange data, the SLA component, the virtual infrastructure network, and some other services which we'll get to know a bit later.
The architecture of Contrail consists of three main layers: the federation layer, the provider layer, and the resource layer. Let's drill in a bit. On the federation layer we have a plethora of services. For instance, we have the federation portal service with authentication and authorization components. We have the API component, which provides the RESTful API to applications or to the PaaS, i.e. ConPaaS. We have the usage control system, UCON. We have the user registration and management system, consisting of a certificate authority and our own identity provider, which is set up as a bridge to external identity providers — so we can support not only Contrail users but also users from external communities which have their own identity providers; the mapping is done by this component. We also have the SLA manager, consisting of the lifecycle management system and the coordination services, which provide negotiation and also keep a template repository for the SLAs. And there are the security services, such as the authentication and authorization parts with policy enforcement points. On the provider level we have counterparts to the federation level, for instance the usage control system, and the lifecycle management and coordination parts of the SLA manager. But the main service on the provider level is VEP, the Virtual Execution Platform. This is basically the interface towards the provider: if we have several providers, like OpenStack or OpenNebula or Amazon, the communication to each provider goes through VEP. Besides that, we also have accounting and monitoring agents. The accounting service also provides an API to the gathered monitoring terms, which were aggregated from the lower, resource layer. There we have resource reservation services, appliance management, and our distributed file system.
We have security services — authentication and the policy enforcement point — and also the appliance hosting services, providing the virtual infrastructure network, and monitoring services. These monitoring services produce monitoring terms, which are then aggregated up to the provider level. This is the whole Contrail software stack: at the federation level we have multiple providers, and on each provider a different set of clusters. Each cluster and each provider which is integrated with the federation needs these services in order to fully use Contrail's features. This is a minimum deployment scenario: one physical node with all federation services deployed, a head controller, and physical machines containing the resource-layer services. So this is the federation level; on the provider level VEP is the main component; and then the resource layer with the physical machines. How does deployment of an application look? First, we need to create an image and deploy it to the provider — that is, transfer the image to the provider. This is where Koofr comes in, which we'll present a bit later. Then we need to describe the application: how many instances of that image, or of any other images, will we use within our application? This is what we describe with the Open Virtualization Format document. Afterwards, we need to negotiate the SLA terms with the provider, which is done through the SLA document. In Contrail, this negotiation is a multi-step scenario. First, the user or developer provides a list of SLA terms. This document is sent to the provider; some of those terms might not be fulfillable by that provider, so a slightly modified list of terms is returned to the user. This process can be iterative, and in the end the user can agree to the SLA terms given by the provider.
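The multi-step negotiation described above can be sketched as a simple loop. This is a hypothetical simulation of the idea only — the function names and term dictionary are invented for illustration and do not reflect Contrail's actual SLA manager API:

```python
# Hypothetical sketch of Contrail's multi-step SLA negotiation loop.
# Term names and data shapes are illustrative, not the real API.

def provider_counter_offer(requested, capacity):
    """Provider caps each requested term at what it can actually fulfil."""
    return {term: min(value, capacity.get(term, 0))
            for term, value in requested.items()}

def negotiate(requested, capacity, max_rounds=5):
    """Iterate until the provider's counter-offer matches the request,
    or the user accepts the last counter-offer."""
    offer = dict(requested)
    for _ in range(max_rounds):
        counter = provider_counter_offer(offer, capacity)
        if counter == offer:          # provider can fulfil everything
            return counter
        offer = counter               # user adopts the modified terms
    return offer                      # user agrees to the final offer

agreed = negotiate({"ram_total_mb": 8192, "cpu_cores": 8},
                   {"ram_total_mb": 4096, "cpu_cores": 16})
print(agreed)  # {'ram_total_mb': 4096, 'cpu_cores': 8}
```

Here the provider cannot give 8 GB of RAM, so its counter-offer caps that term, and after one more round the user accepts the modified terms.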
At that point the deployment document is created, and a specific SLA is established for that provider and that user. Within the OVF we refer to the virtual machines used by the application; we also describe the size of the virtual disks and which kind of network we are going to use. We can choose the specific Contrail network — the solution developed within Contrail — or, for instance, the provider's native network. As for the SLA — these steps will be shown in the demo in a few moments — the user first selects an SLA template. That SLA template references the OVF which will be used in the deployment of the application. The SLA is then negotiated with the provider, and after that the agreement is made: the user gets the specific SLA, and that SLA is then used in the deployment of the application. This is the user interface; it will be shown in the online demo a few moments from now. We have sliders where we can specify terms towards the provider, and these terms are then used to filter out the providers which fulfill them. At this point this step is manual, but in later releases it will be abstracted away from the user, so that the optimal provider is chosen for the given terms and for the amount of money the user is willing to pay for the deployment of that application. The negotiation process will be shown in this demo — just a second. This is a bit blurry; I hope you can see it. First, we log in to Contrail. Sorry that you can't see this well. On the left side of the dashboard we have an SLA negotiation button, where we can create a specific SLA, which will then be used to register the application on the provider and deploy it. So first we create an SLA. Here we can specify the terms which we want the provider to fulfill.
For instance, if we move this slider — we registered two providers beforehand — changing the slider filters out the providers that don't fulfill the term. Yes, the terms are: total RAM, CPU cores, CPU load, free RAM, CPU speed, and the load averages CPU load 1 and CPU load 5. Here we can see the amount of each term that the provider can fulfill. [Audience question: can you specify, for instance, the size of the deployed image or the geographic location?] These can be specified within the OVF, the Open Virtualization Format. At this point we don't have an editor for the OVF yet, but the OVF can be pre-created — I will show you the OVF a bit later, and there you can specify this. So the question was about the terms regarding the data, the storage. Those terms can be expressed within the Open Virtualization Format, because the SLA actually refers to the OVF when deploying the application. So you can do that now, but we don't yet support the whole plethora of possible terms within the SLA — only the ones you see right here. In later releases we are planning to support more specific terms for storage as well. OK, so first we create an SLA — let's say "openstack test". I can choose the SLA template; here we see the template, which is already registered in the federation's database, and this template can be negotiated with the provider. Here we see the "openstack test" SLA. Then we can register a new application based on that SLA: what I did was drag the SLA negotiated a bit earlier onto the newly created application. Here we see the application, and we see that we chose cloud provider 2 with this SLA. Now we can deploy the application on that provider.
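The slider-based filtering in the demo boils down to keeping only the providers whose advertised capacity meets every requested term. A minimal sketch, with made-up provider data and term names:

```python
# Minimal sketch of the slider-based provider filtering shown in the demo.
# Provider names and capacities are invented for illustration.

providers = {
    "cloud-provider-1": {"ram_total_mb": 4096, "cpu_cores": 4, "cpu_speed_mhz": 2000},
    "cloud-provider-2": {"ram_total_mb": 16384, "cpu_cores": 16, "cpu_speed_mhz": 2600},
}

def matching_providers(requested):
    """Keep providers whose capacity meets or exceeds every requested term."""
    return [name for name, caps in providers.items()
            if all(caps.get(term, 0) >= value for term, value in requested.items())]

print(matching_providers({"ram_total_mb": 8192, "cpu_cores": 8}))
# ['cloud-provider-2']
```

Moving a slider corresponds to raising one of the requested values, which shrinks the matching list — exactly what the dashboard shows when a provider drops out.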
So now we're deploying the application on OpenNebula. Before the deployment — here I issued a command listing the running applications — there was only one virtual machine. After I deployed, this new application consists of two virtual machines, as defined within the OVF. And here we see that the application started; it's now in the running state. OK, let's get back to the presentation. The security components on the federation... Yes? [Audience: Say we're going to do the backup in one hour, ten hours, or 30 days — there are different levels of SLA. How are you going to define those levels? This looks like a fixed thing; there is no operational aspect there.] Yes, this is a fixed thing. Before deploying the application, we specify the bounds within which the application will live: the lifecycle of the application will stay within the bounds of the terms negotiated with the provider. If those SLA terms are violated, exceptions are thrown at the federation level, and the federation could, for instance, suspend the application, or the user could be charged a large amount of money. So if the number of machines grows larger than what was negotiated with the provider, an exception will be thrown. Yes? [Another question.] Yes, at this point it's only a dashboard. This is Contrail version one that I'm showing. In the next few months we are deploying a new version of the SLA manager which handles this: it evaluates the terms online and throws exceptions based on the terms which are violated. The question was whether we are using the SLA@SOI project for the SLA manager part — that is also one of the European projects. They created a model for SLA monitoring and the SLA negotiation process, and we are using their framework, the framework from that project.
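The online evaluation described in the answer — compare monitored metrics against the negotiated bounds and raise a violation — can be sketched like this. The term names and the violation class are illustrative assumptions, not Contrail's actual SLA manager interface:

```python
# Hedged sketch of online SLA evaluation: compare monitored metrics against
# the negotiated bounds and signal a violation. Term names are illustrative.

class SLAViolation(Exception):
    pass

def check_sla(agreed_terms, monitored):
    """Raise SLAViolation if any monitored metric exceeds its agreed bound."""
    for term, bound in agreed_terms.items():
        if monitored.get(term, 0) > bound:
            raise SLAViolation(f"{term}: {monitored[term]} exceeds agreed {bound}")

agreed = {"vm_count": 2, "cpu_load_5": 4.0}
try:
    check_sla(agreed, {"vm_count": 3, "cpu_load_5": 1.2})
except SLAViolation as violation:
    # on the federation level this could suspend the application or trigger billing
    print(violation)
```

In the real system this check would run periodically against the aggregated monitoring terms, not once.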
And there is no standard yet, but we're trying to reuse what's already there, and maybe it will become a standard later. OK, the security. We provide single sign-on towards the federation web portal, which we saw a bit earlier. We support federated identity, an authorization framework based on the XACML language, and also a delegation process, which is at the core of the authorization framework: core services of the federation use the user's data, so the user first needs to authorize the Contrail services to use it. The user can also revoke a specific provider's access to his data, and he can control the whole workflow. Here I've depicted the security components on all levels: the usage control components, single sign-on authentication on the federation portal and the federation API, and the authorization framework with policy enforcement points across the whole stack. This is the portal authentication framework, and this is the model of usage control using XACML — we use a slightly extended XACML language for usage control. The policy is split into a static policy decision point and an online policy decision point. The context handler of the usage control component evaluates these policies first on the initial attempt, for instance when deploying the application, and then online, while the application is running; the policy enforcement points actually enforce these policies. Here are the UCON components depicted across the whole Contrail stack. As I already mentioned, delegation provides delegated credentials to services, and we also support delegation with the virtual infrastructure network. For this we use the OAuth2 protocol with two different flows: the authorization grant flow, for the initial step when the user logs in, and the client credentials grant flow, for the core services that use Contrail.
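The split between a static and an online policy decision point can be mimicked with a toy permit/deny check. This sketch only illustrates the idea of evaluating attribute-based policies at deployment time and again at runtime; it is not a real XACML engine, and the attribute names are invented:

```python
# Simplified sketch of the split policy evaluation: a static decision at
# deployment time, then an online re-evaluation while the application runs.
# Mimics the XACML permit/deny idea only; attribute names are invented.

def evaluate(policy, request):
    """Return 'Permit' only if every attribute in the policy matches."""
    return "Permit" if all(request.get(attr) == value
                           for attr, value in policy.items()) else "Deny"

static_policy = {"role": "developer", "action": "deploy"}
online_policy = {"app_state": "running", "sla_valid": True}

# Static PDP: checked once, when the application is deployed.
print(evaluate(static_policy, {"role": "developer", "action": "deploy"}))  # Permit

# Online PDP: re-checked during execution; the PEP enforces the decision.
print(evaluate(online_policy, {"app_state": "running", "sla_valid": False}))  # Deny
```

In the actual stack the context handler gathers the attributes and the policy enforcement points act on the decision, e.g. by suspending the application.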
As I said, we support delegation on the network — the virtual infrastructure network. The virtual infrastructure network supports elastic private networks between different nodes of a Contrail application. These nodes are primarily virtual machines, but they can also be instances of the global autonomous file system, and we also use it on the physical machines. The network is designed to scale well and to handle networks consisting of thousands of nodes. And now something about storage. We needed a solution to transfer images to the providers, and also a way to move storage instances — for instance, disks used in the application — securely and efficiently on the providers, including moving instances between providers. For this we came up with Koofr, which is a hybrid cloud storage interface to the providers. Luka will explain a bit more about that.

Hi, my name is Luka. I'll need to be quite short, because I only have four minutes. I'm chief architect at Koofr, a startup in our country, a spin-off of a bigger research company called XLAB, as Aleš said. Koofr connects to any number of OpenStack Swift clusters, Amazon S3 accounts, and Linux servers, PCs, and Macs. It acts as a top layer of hybrid storage, exposing all devices via a unified API and web and mobile interfaces, while providing out-of-the-box support for file sharing, messaging, and automation features. Storage as a service is the only viable solution for enterprises wanting to remain in control of confidential company files, and it is one of the fastest-growing markets in cloud computing, expected to reach $6 billion in 2015. Koofr can be used as a white-label solution for internet service providers and cloud infrastructure providers, and also as a complete hybrid cloud solution for small and medium enterprises.
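The "unified API over many backends" idea can be sketched as one interface with interchangeable storage backends. The class and method names below are invented for illustration and are not Koofr's actual API; an in-memory backend stands in for Swift, S3, or a local disk:

```python
# Hedged sketch of a unified storage interface over several backends
# (Swift container, S3 bucket, local machine). Names are invented.
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    @abstractmethod
    def list(self, path=""): ...
    @abstractmethod
    def copy_to(self, path, other, dest_path): ...

class InMemoryBackend(StorageBackend):
    """Stands in for a Swift container, an S3 bucket, or a local disk."""
    def __init__(self, name):
        self.name, self.files = name, {}
    def list(self, path=""):
        return sorted(f for f in self.files if f.startswith(path))
    def copy_to(self, path, other, dest_path):
        # same call shape regardless of which backends are involved
        other.files[dest_path] = self.files[path]

swift = InMemoryBackend("swift-cluster")
laptop = InMemoryBackend("laptop")
swift.files["images/vm.img"] = b"raw image bytes"
swift.copy_to("images/vm.img", laptop, "backup/vm.img")
print(laptop.list())  # ['backup/vm.img']
```

The point of the abstraction is that copying a cloud instance image from Swift to a laptop, or between two providers, uses the same call.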
It can also be used for transferring big files between clouds — for instance, transferring images between storage providers. So, Koofr can be licensed by internet storage providers as a storage-as-a-service interface connecting their end users with the provider's storage systems. By using Koofr, an ISP can lower outbound transfer costs, offer the fastest data transfer speeds, and increase user retention. Koofr can be offered as a packaged service next to internet, telephony, and television, and the on-premise deployment is fully brandable. OK, so the SME part: Koofr allows enterprises to combine internal IT infrastructure, external data centers, and employees' own devices — a device here meaning a computer, like a Linux server, Mac, or PC — into a single, unified, easy-to-access storage service. Users can start using Koofr with existing infrastructure and scale out to the cloud if and when needed. Clients can download and upload files faster than with existing cloud-only solutions, and download and upload links can also be branded. OK, now I'll show a demo of Koofr — maybe that will be better. This is the Koofr interface, prepared for the demo. It's not working; let me refresh. I have some problems with the internet connection. OK, there's no internet cable — that's bad. Now it's working. With Koofr I can list files or objects in existing Swift containers, and I can also list files from my workstation. These are objects in our local Swift cluster, and I can just copy some files. This is just a sample image file, but it could also be a cloud instance image. I can copy it to the server — just paste it here, and it will be uploaded. I can also create direct links: I can create a link, even protect it with a password, and send it to someone, so he can download the file directly from Swift or from some local computer or server.
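One way to implement password-protected direct links is to sign the file path with an HMAC derived from the password and verify the signature on download. The URL scheme and parameter names below are invented for illustration; Koofr's real links are generated differently:

```python
# Illustrative sketch of password-protected direct links: sign the file path
# with an HMAC derived from the password, verify on download. The URL scheme
# is invented; this is not how Koofr actually builds its links.
import hmac, hashlib

def make_link(path, password, base="https://storage.example.com/dl"):
    sig = hmac.new(password.encode(), path.encode(), hashlib.sha256).hexdigest()
    return f"{base}{path}?sig={sig}"

def verify(path, password, sig):
    expected = hmac.new(password.encode(), path.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)  # constant-time comparison

link = make_link("/swift/images/vm.img", "s3cret")
sig = link.split("sig=")[1]
print(verify("/swift/images/vm.img", "s3cret", sig))   # True
print(verify("/swift/images/vm.img", "wrong", sig))    # False
```

The same scheme works for upload links: the server only accepts an upload whose link signature verifies against the shared secret.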
I can also create an upload link, so someone can upload files directly to our Swift cluster or to local hard drives. Here it is — this is a direct link to the Swift file, and I can protect it with a password. And this is the upload link; I can send it to somebody, and he can upload files directly to my server. So if I need a client or partner to upload a cloud computing image directly to my server, this is really convenient. OK, that's it for my part. Thank you. Are there any questions? [Audience question.] Well, Koofr provides a human interface to the Swift cluster, so it can be used as a Dropbox-like interface to Swift. You can use the existing API for object storage, and you can also use the web and mobile interfaces — we also have mobile clients for iOS and Android. Any other questions? Thank you.