Hello, everyone. Welcome to my session. I hope you're enjoying the summit already. My name is Tytus Kurek. I'm a product manager at Canonical, and during this session I'm going to talk about putting OpenStack at the edge. Edge is a very hot topic, and there are probably as many definitions of the edge as there are people talking about it. But here at Canonical we have a very strict approach towards the edge and towards putting OpenStack at the edge, and it's called MicroStack. This is what this session is going to be about, so let's dive in and see what we can learn.

I'm going to start with a very strong statement: OpenStack is actually awesome. I love OpenStack. Its combined worldwide market size is around 8 billion dollars, which is more than the combined worldwide market size of containers. So OpenStack is bigger than the container platforms. It's also one of the three biggest open-source projects with regard to the number of contributions. Only three projects achieve this level of contributions from various companies and various developers, and those are OpenStack, the Linux kernel, and Chromium.

Another important thing to note is that its adoption continues to grow. If we have a look at the chart here, which comes from an independent research institution, Statista, the OpenStack numbers have only been growing over the years, and they are predicted to continue to grow in the following years. And finally, last but not least, it's a very mature project. It's celebrating 10 years this year, and with so many companies involved in OpenStack development, it is considered very, very mature.

So OpenStack is awesome, and you probably know that, because in most cases you are probably already using OpenStack in your data center. It's managing your distributed compute, network and storage resources.
It's acting as an extension to your public cloud offering: you probably run some of your workloads in a public cloud, you run some of them in a private cloud, and OpenStack powers this private cloud. So you're already using it in your data center. But now, what about the edge? Why not use OpenStack at your edge? Why not go outside of the data center space and use all of the benefits that OpenStack brings, all of those management capabilities, on-demand virtual machines, nice dashboards, at your edge? That is, at your local micro data centers with remote management capabilities, where you could run your lightweight virtual machines.

But wait a minute. Let's first define what the edge is, because as I said at the beginning, there are as many definitions of the edge as there are people talking about it. If we have a look at how IT infrastructure looks nowadays, we have public clouds, where most people start when they run their business. They choose to run their workloads in a public cloud because it's cheap at the beginning and it provides instant access to the infrastructure. But then, over time, you extend your infrastructure towards a private cloud, because it's more economical to run it in a private cloud in the long term.

But if you go outside of the data center space, there are these two smaller areas. There is a near edge, where we would propose something called a micro cloud as a solution for running your workloads, and there is a far edge, where at the end of the day there is a single appliance or a single device; this is basically the Internet of Things. Those two areas, micro clouds and the Internet of Things, make up edge computing, and this is what we at Canonical understand as the edge. So this leads to the following question: why not use OpenStack in your micro cloud?
When we go outside of the data center space, we have those micro data centers: there is either a single server or just a bunch of servers, maybe three for high availability. Why not use OpenStack in this kind of environment, in a micro cloud?

There are a couple of things to consider when talking about micro clouds, because micro clouds are a little bit special, and they are different compared to the private clouds running in your big data centers. First of all, micro clouds have to be cloud-like in the same way as regular private clouds running in big data centers are. They have to enable full automation of the underlying infrastructure as well as of the services running on top of the cloud. They have to provide programmable access through APIs, so that various clients could contact those APIs and various external tools could use them for integration purposes. They should provide diverse substrates, so that at the end of the day you would be able to run either virtual machine workloads or containerized workloads on the same cloud, in the same environment. And they have to be designed to fulfill the economic benefits of using a cloud environment, which is different from using a legacy IT infrastructure.

But on the other hand, micro clouds have to be localized. What does that mean? First of all, there may be thousands or tens of thousands of remote sites where those micro clouds and your workloads would run. This is especially important in the case of telecommunication service providers, where there are tens or even hundreds of thousands of sites where these kinds of micro clouds could run. Second, they have to be optimized for running services with a very small footprint, because the number of resources is limited. Another thing: there may be no human interaction when running your micro cloud. Everything has to be fully automated.
They have to be remotely manageable, because sending a technician to the site where your micro cloud runs may actually cost you more than replacing the entire micro cloud. If you're again running thousands of them, you can imagine how many manual human interactions they would require to be kept up to date. They have to be designed to run with no human help, and they have to be remotely deployable and manageable as a result. This is what makes a micro cloud different from a regular private cloud running in your big data centers.

Now, just to recap, operating micro clouds requires some very special approaches. First of all, you have to be able to use APIs everywhere, so that all of the services consuming the cloud would be able to connect with it remotely. The deployments have to be made repeatable; there's no way to build snowflakes in a micro cloud. Everything has to be fully repeatable, so that you can just power on and forget your micro cloud and be able to replicate it across your other thousands of micro cloud instances. There must be some established centralized management capabilities for your micro clouds, so that you would be able to monitor and control running micro clouds from a central place. Finally, workload placement matters, so you should be able to use tools which allow you to orchestrate the workloads running on top of your micro cloud, whether you're running them inside virtual machines or inside containers.

Let's now have a look at the ingredients of the micro cloud solution that we at Canonical propose. We start at the bare metal layer, and for this kind of bare metal cloud we propose MAAS, Metal as a Service, as a solution that enables full automation at the bare metal layer.
If you're about to provision the servers for your micro cloud, whether it's a single server, or again three for high availability, or five, maybe a slightly bigger micro cloud, MAAS fits all of those use cases, providing bare metal automation and bare metal provisioning capabilities, ultimately turning your edge site, your micro cloud, into a bare metal cloud. So you can fully automate the provisioning process of the underlying hardware.

The next layer on top of the bare metal cloud is storage, and as the storage for your micro cloud we propose Ceph, a software-defined storage solution that scales and provides replication features, encryption, and so on. It's a kind of standard for building a storage solution in a cloud, it's open source, and it fits very well at your edge, in your micro cloud.

Another tool in this micro cloud stack is LXD, which historically used to be just a Linux containers hypervisor, providing the capability of running machine containers on demand, but which has recently evolved to provide virtual machines on demand as well, being able to manage virtual machines instead. So it's a more lightweight solution than OpenStack. And for the containers, the regular Docker containers, the process containers that you know from your data center, we usually deal with them through Kubernetes. We offer MicroK8s as a lightweight Kubernetes distribution that is capable of running your lightweight containers on your micro cloud with confidence, and it installs very easily, being a snap-based Kubernetes installation.

And now there's one more thing, and here we get to the biggest point of this presentation: OpenStack.
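As an aside, the LXD and MicroK8s pieces just mentioned really are single-command snap installs. A minimal sketch, assuming an Ubuntu machine with snapd available; the image alias, channel and add-on names below are illustrative, not prescriptive:

```shell
# Install LXD and MicroK8s as snaps (requires snapd; --classic is
# needed for MicroK8s because it reaches outside the snap sandbox).
sudo snap install lxd
sudo snap install microk8s --classic

# Initialise LXD with default answers, then launch a lightweight
# Ubuntu VM -- the --vm flag asks LXD for a virtual machine rather
# than a machine container.
sudo lxd init --auto
lxc launch ubuntu:20.04 edge-vm --vm

# Wait for MicroK8s to come up and enable a couple of common add-ons.
microk8s status --wait-ready
microk8s enable dns storage
```

Dropping the --vm flag gives a machine container instead, which is how LXD was historically used.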
So, can we run OpenStack at the edge in the same way as we just discussed for MAAS, Ceph, LXD and MicroK8s? Is there a solution that allows us to run OpenStack at the edge while answering all of those special requirements of the edge that we discussed? So also being a zero-ops, single-command-install OpenStack that you are able to replicate? And yes, the answer is MicroStack. MicroStack is a snap-based OpenStack installation that we develop and maintain here at Canonical, and it allows you to do exactly these kinds of things with your OpenStack, in the same way as you can with containers and MicroK8s.

Let's now briefly review Canonical's micro cloud solution. For this micro cloud stack, consisting of MAAS for the bare metal cloud, Ceph for storage, LXD for machine containers and VMs, MicroK8s for process containers and MicroStack for virtual machines, we provide two types of services. First of all, a deployment service, where we deploy the first site for our customers so that they can learn and replicate it to their other thousands of micro cloud sites. What we include in this deployment service is a full knowledge transfer about the underlying technologies and a central management solution, so that you would be able to centrally monitor and manage all of your micro cloud sites.
It also includes the full deployment of the first site, and obviously a design phase to make sure that the micro cloud being deployed answers the customer's needs. This deployment service is available from Canonical at a fixed price of $50,000.

And since you have to be able to maintain your micro cloud post-deployment, Canonical also provides commercial support services covering all of the layers of the micro cloud stack: bare metal, LXD, OpenStack, Ubuntu images for the virtual machines running on top of OpenStack, Kubernetes, and again the base Ubuntu images running on top of Kubernetes. This commercial support from Canonical for the entire micro cloud stack costs $1,500 per node per year and is full-stack support, again including all of the layers and providing phone and ticket support, guaranteed response times and aggressive SLAs.

And that's pretty much it with regards to the presentation, but at this point I would be more than happy to answer any questions that you may have. So thank you very much.
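As a footnote to the MicroStack part of the talk: the single-command install mentioned earlier looks roughly like the sketch below. This assumes an Ubuntu machine with snapd; the channel, flags and image name are assumptions that may differ between MicroStack releases, so check the current MicroStack documentation before copying them.

```shell
# Install MicroStack from the snap store (around the time of this
# talk it was published in a beta channel) and initialise a
# single-node cloud that acts as its own control plane.
sudo snap install microstack --beta --devmode
sudo microstack init --auto --control

# Boot a small test instance on the freshly installed cloud.
microstack launch cirros --name test-instance
```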