Hello, welcome everybody. My name is Vlastimil Holer, I work for OpenNebula Systems, and today we'll talk a little about edge clouds and how we understand edge clouds in the context of OpenNebula. I have to say I won't do a full live demo, but I've started something in the background and I'll describe it a bit later, just for reference. About the edge cloud work, it's necessary to say that our current and future work receives funding from the European Union's Horizon 2020 research programme.

This talk is divided into three main parts: in general, what is OpenNebula, if you don't know it; then how we understand edge clouds; and then what tools we provide to build edge clouds with OpenNebula.

So, just a quick poll: who knows what OpenNebula is? Good. And who uses OpenNebula? Great. To summarize, OpenNebula is a framework to build infrastructure-as-a-service clouds. We focus mainly on private clouds: virtual machines powered by KVM or vCenter, and system containers powered by LXD. We support various cloud deployment architectures, but our main focus is the on-premises private cloud. OpenNebula is usually appreciated for being light and simple, extensible, and easily upgradable when compared to other systems. It supports various popular Linux distributions (CentOS, RHEL, Ubuntu, and so on), it's fully open source under the Apache license, and it's been with us for quite some time already. This is how it looks from the graphical control interface: a list of virtual machines at the top, and a row of buttons to reboot, power off, destroy, and so on. If you want to see all the features, there's a Discover page which lists them.
If you want to try OpenNebula, there's a really cool tool written in shell called miniONE, which configures a complete OpenNebula, both the front-end part and the hypervisor part, on a single selected node. It creates a kind of evaluation, testing, or maybe development environment. It's really great and it takes just five minutes or so. If you're interested in even more, let me invite you to the OpenNebula conference, which is held every year; this year it's in Brussels, just a few months from now.

Now, let's focus on the edge cloud. What is an edge cloud for us? For us, edge clouds are micro data centers with some kind of cloud-like capabilities, deployed very near to the end users and the devices they need to interact with. The benefits are lower latencies, and the new features those lower latencies allow. Some preprocessing can happen in the edge cloud, so much less data needs to be sent to the central cloud. Or there can be security and privacy reasons. But not everything is green and good: there are limitations, which come with the limited offering of hardware and software we can use at the edge, and there are also risks, such as potential data loss, maintenance overhead, and so on.

From the perspective of OpenNebula, edge clouds are very similar, or should be very similar, to the on-premises cloud shown on the left side. The main difference is that we can expect there will be many more of these smaller clouds, and they will have a dynamic nature: they can be created dynamically and destroyed at any time we need, so they are kind of ephemeral. They are also restricted and limited. From the implementation perspective, we talk about the infrastructure edge, that is, the part of the edge which is powerful enough to run more demanding computation.
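A minimal miniONE evaluation run looks roughly like this, a sketch following the project's upstream instructions (the release URL assumes the GitHub release asset is named `minione`; run it only on a fresh, dedicated machine):

```shell
# Download the latest miniONE release from GitHub
wget -O minione 'https://github.com/OpenNebula/minione/releases/latest/download/minione'

# Run as root on a clean supported host (e.g. CentOS or Ubuntu); this
# installs the OpenNebula front-end and a KVM hypervisor on this single node
sudo bash minione
```

When it finishes, it prints out how to reach the web UI and the generated admin credentials, giving you a single-node evaluation cloud.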
Our aim is to take the technology we use for the on-premises cloud, KVM, LXD, VXLANs, and all these building blocks, and just move it to the edge cloud, of course with the help of some specialized drivers, and in a form designed to run in the edge cloud. We rely very much on the existence of bare-metal clouds. OpenNebula has never installed the physical host's operating system and these kinds of things, and it won't; we expect there is some service provider which manages the infrastructure and is able to give us, or give users, the resources which can be used to build the edge cloud. The second important part is automation. About the bare-metal cloud itself we don't care that much, that's the provider's duty, but on the host level everything is automated: the hypervisor is installed and the operating system is configured to run as part of a virtualization cluster. And it's good to say that the only thing we do is take all the great open source tools we have, distributions, hypervisors, and so on, and put them together to build an open source edge cloud.

To summarize, OpenNebula edge clouds are just limited OpenNebula virtualization clusters of the kind you might already be running on-premises. They are deployed on the infrastructure of some third party, and they are managed fully automatically. I've introduced this kind of buzzword: infrastructure-as-a-service in infrastructure-as-a-service. When you are running a virtualization cluster on-premises, everything is fine for you: you have the hosts under full control, you have storage, you have the network, and you choose your addresses, so you don't have any problems. But if you have to deploy a similar virtualization cluster on some third-party infrastructure, the hosts are probably still fine, and storage as well, but when it comes to networking we can expect there will be some limitations introduced by the provider, and regarding the IP addresses there will definitely be some restrictions.
So these two networking things are the challenging parts we have experienced. From the network perspective it's very environment-specific, because various providers introduce various features, like a dedicated VLAN for you, but also some limitations, like no multicast support. So the approach we had to take is to introduce a common virtual network model which is able to work independently of the provider. We are using an overlay network with VXLANs, but we don't rely on multicast; it uses just unicast, so it works in any environment.

The more complicated thing is IP addressing. In the case of private addressing, maybe we don't care, because we have our overlay network, we can do anything, and nobody cares except us. In the case of public IP addresses the situation is much more complicated, because we just can't take our favorite IPv4 addresses, put them into a virtual machine, and expect that everything will work. Usually the IP addresses are agreed with the provider: you ask the provider for some pool and they give you some addresses back. This workflow needs to be automated, and in the case of the edge cloud it is automated through IP address management (IPAM) drivers, which go to the provider and say: give me some IP addresses. That's just one part of the problem. The other part is that once you have some IP addresses, you usually need to notify the provider when you want to use an IP address on a selected host, so that they update the routing or similar to get the traffic to the right place. So with the IP addresses there are two problems. The conclusion from this part is simple: if you take some existing infrastructure-as-a-service framework and try to run it within a different, or maybe the same, infrastructure-as-a-service, you can't expect things to work without problems.

Now about provisioning, and how we actually build this cloud.
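The multicast-free overlay can be illustrated with plain Linux tooling. A sketch of the underlying mechanism, with made-up interface names and peer addresses: create a VXLAN device with learning disabled and statically program the remote endpoints into the forwarding database, so broadcast and unknown-unicast traffic is replicated over plain unicast:

```shell
# Create a VXLAN interface (VNI 100) bound to the physical NIC.
# 'nolearning' disables dynamic source-address learning; 4789 is the
# IANA-assigned VXLAN port. eth0 and the peer IPs are placeholders.
ip link add vxlan100 type vxlan id 100 dev eth0 dstport 4789 nolearning

# Statically add each remote endpoint: an all-zero MAC entry means
# broadcast/unknown frames are head-end replicated to that peer.
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 203.0.113.10
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 203.0.113.20

ip link set vxlan100 up
```

Since every endpoint is listed explicitly, no multicast group membership is needed, which is what makes the overlay portable across providers.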
OpenNebula comes with a set of specialized tools, drivers, and configurations which simply talk to the providers and build the whole cloud with a single command run. Mainly we target edge clouds, but it doesn't have to be only edge clouds. So once again: there is one provisioning tool which manages the whole life cycle of the edge cloud, and it's command-line only. Then we have integration drivers. It's good to say that when some third party or provider is selected, there need to be two kinds of drivers: one driver which is able to allocate hosts from the provider and release them back, and the other is the mentioned IP management integration driver. And then we have hosts with a base operating system, and we have some IP addresses. The missing part is to configure the hosts so that they can be part of an OpenNebula cluster, so the last piece is the configuration: playbooks and roles for the reference architectures.

That's what the user or cloud administrator usually gets, but they still have to do something: write a provision descriptor which specifies exactly which provider to choose, the credentials for the provider, which hardware configuration to use for the machines, what to create inside OpenNebula (datastores, virtual networks, and so on), and also how to configure the hosts inside. This is created by the infrastructure administrator, and the high-level process is as simple as shown here: basically, prepare the descriptor, pass it to the tool, wait 10 or 15 minutes, and you get an independent virtualization cluster at the edge. The management features of this tool are very simple, very limited: it can create an edge cluster and destroy the edge cluster, and the other options are more host-focused (power off, reboot, resume, and so on).

So, to summarize the current state: we have an integration,
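A provision descriptor along these lines can be sketched as follows; all the values (token, project ID, facility, plan, names) are placeholders, and the exact schema may differ between OpenNebula versions, so treat this as an illustration of the concept rather than a copy-paste template:

```shell
# Hypothetical provision descriptor for the Packet bare-metal driver
cat > provision.yaml <<'EOF'
name: edge-cluster
playbook: static_vxlan          # host configuration applied after deployment

defaults:
  provision:
    driver: packet              # host allocation integration driver
    packet_token: '<API-TOKEN>'
    packet_project: '<PROJECT-ID>'
    facility: ams1              # provider location
    plan: baremetal_0           # hardware configuration
    os: centos_7

hosts:
  - count: 2                    # number of bare-metal hosts to allocate

datastores:
  - name: edge-images           # datastores created inside OpenNebula

networks:
  - name: public
    provision:
      count: 2                  # request 2 public IPv4 addresses via IPAM
EOF

# Build the whole edge cluster from the descriptor with a single command
oneprovision create provision.yaml

# Inspect it, and later tear everything down: terminate the VMs, cancel
# the hosts, and release the IP addresses back to the provider
oneprovision list
oneprovision delete <PROVISION-ID> --cleanup
```

The descriptor captures everything the infrastructure administrator decides once: provider, credentials, hardware, and the OpenNebula objects to create on top.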
I would say a good integration with the Packet provider, and we have some partial integration with EC2. We have a tool, which I will show a little later, hopefully, which can deploy the clouds; it's more of an advanced tool. There are some missing features, like scaling the cluster out or in, and the architecture which is deployed is very simple, just a single static one.

On the future plans: as I mentioned at the beginning, we have received some funding from the European Union. The idea is to take what we have and build from it an easy-to-use edge cloud solution, which incorporates a catalog of edge providers, a marketplace for edge applications, and so on. Mainly that means we will get new integration drivers for new providers, and new features like cluster scaling or cluster update. Work in progress includes support for lightweight virtual machine monitors like Firecracker, caching datastores, and possibly cross-location networking.

Good. About the documentation: if you go to docs.opennebula.org, there is a section called Disaggregated Data Centers which describes all these things and the tooling: how to write the provision descriptor, what configurations you can use, and how to parameterize it. Also, I've mentioned the miniONE tool at the beginning, which usually deploys just a single-host KVM or LXD evaluation environment, but it can also deploy the edge, as seen on the screenshot. It just needs some parameters, like selecting the provider and giving it a token and project, and it does everything automatically.

This is not some artificial thing; we did some use-case validation demos. The most important, or most interesting, one is video gaming, where we used this tool to deploy 17 edge locations around the world, which were small KVM clusters, and ran in each location
one virtual machine running the Wolfenstein: Enemy Territory game server. Then from the office we connected to a random one in Sydney, and we could play, and it simply worked. It was as easy as running just the tool and waiting, unfortunately for 25 minutes, because some locations, like Japan, took more time than others which were much nearer. We have a nice datasheet describing all these demos, which we can give you if you are interested. Also, we have stickers; if you are an OpenNebula user or just interested, come to us and we will give you all the details you'd be interested in.

So maybe we have some time for a demo. First, I will show how the provision descriptor used to build the cloud looks. It isn't necessary to understand all the parts, just the concept of what is necessary to specify. On this page, the most important part is the playbook, which is exactly the configuration applied on the hosts. Then there are some defaults, which specify the driver and credentials; we are also choosing the bare-metal hardware type, CentOS, and so on. In the next part, we just list which hosts, and how many, we want to deploy. Then we specify the datastores which should appear in OpenNebula, and the last part is the networks. Exactly this is the part which creates the public IP network with the Packet IPAM driver: it requests two public IPv4 addresses from the provider. And there are some private networks. So basically, this is enough.

I started this provision command before the presentation, and you can see it took, oh, sorry, 12 minutes to deploy two hosts, three virtual networks, and some datastores. Maybe I can try to start some virtual machine.
I have to do a small workaround to make this work, because I have an Alpine image here locally on my laptop, and I just share it to the edge cluster I have right now deployed. I go into OpenNebula; I'll make it a little smaller. Yes, I'm finishing. The thing is that I will run this Alpine Linux on the edge cluster I have just deployed. The interesting part is the networking: I will specify a host-only network, so as not to break anything, and I will give it an alias on the public network. Maybe I can try two virtual machines, and if we wait for 20 or 30 seconds... it's already deploying. We can check which host it is exactly. Okay, we can't check, but trust me, it's a Packet-hosted machine in Amsterdam. Maybe I can show the Packet dashboard; it was updated, so you can see this is the fosdem node 7577, which is exactly what is here. The unfortunate thing is that it's still copying the image, but once that finishes we should be able to ping the virtual machine there. And that's the goal: on a third party's infrastructure, to be able to build the KVM/LXD virtualization cluster we are used to, with most of the features we are used to from on-premises, and also be integrated with the provider, with public networking working, and so on.

So let's check the first one; it's booting. I can try to log in... yes, I could log in there. I can check the ifconfig, and I can see that the private address assigned inside is the very same .3 address. And trust me that it's Alpine Linux, freshly deployed, uptime zero minutes. So that's it. The very last thing I will show is that, the same way I created this virtualization cluster in just 12 minutes, I can destroy this cluster the very same way. I just need to remove the hack I did previously, and try to delete the provision from the list.
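The host-only NIC with a public alias from the demo corresponds roughly to a VM template fragment like this; the network and image names are placeholders for the example:

```shell
# Hypothetical VM template: the VM attaches to the private overlay
# network, with a public address attached as an alias NIC on top of it
cat > alpine.tpl <<'EOF'
NAME   = "alpine-edge"
CPU    = 1
MEMORY = 128
DISK   = [ IMAGE = "alpine" ]

NIC       = [ NETWORK = "private", NAME = "NIC0" ]
NIC_ALIAS = [ NETWORK = "public",  PARENT = "NIC0" ]
EOF

# Register the template and instantiate it on the edge cluster
onetemplate create alpine.tpl
onetemplate instantiate alpine-edge
```

The alias keeps the VM reachable through the provider-routed public address while its primary interface stays on the private overlay.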
And it won't work right now, because we have virtual machines running there. But I can ask it to clean up, and it simply terminates both virtual machines, and when that's done, it cancels the hosts and releases the IP addresses back to the provider. Let's check: there are no hosts. And we can check on Packet that everything was released... they are not updating it... and here's nothing. So, as said, we have this datasheet here describing the use cases and plans; you can talk to us if you are interested. Also, we have stickers, please come to us. Thank you.