Hello everyone, it's great to have you all here. My name is Artem Andreyev, and I'm a product manager at Mirantis, specifically for the Mirantis OpenStack line of products, which includes all the OpenStack distributions we have ever shipped or ever will ship. Today I would like to talk about a new launch that is just a few weeks away: Mirantis OpenStack on Kubernetes, our next-generation OpenStack distribution. I would like to cover a few questions you probably have. First, why is Mirantis building a new distribution? Why do we need it, and what is wrong with the ones that already exist? Second, what exactly is Mirantis OpenStack on Kubernetes, and what is going to be part of the package, both today and tomorrow? And finally, how does Mirantis OpenStack on Kubernetes actually work: what is inside, what is the architecture, how do the pieces fit together, and how exactly is Mirantis solving OpenStack management complexity with the help of Kubernetes?

Let's start. So why Mirantis OpenStack on Kubernetes? Why do we need another distribution? Mirantis has a long history of deploying and managing OpenStack, and over the years we have learned a lot. We realized that some of the requirements modern OpenStack customers have are quite hard to implement and hard to provide with our current product lines and our current technology. And these requirements are much higher than they used to be: the expectations of the market keep growing, and we need to move along with them to sustain our OpenStack business. So what we want to achieve with Mirantis OpenStack on Kubernetes is, first, reliability and robustness, because this is critical in areas like telco, which does not forgive you any downtime.
A second or a minute of downtime can mean a loss of thousands of dollars. Next, security and transparency. This is very important for customers who operate in highly regulated areas and have to comply with many standards and governance rules around data placement, transparency of the infrastructure, and, of course, modern standards for encryption and data protection. On the other hand, the distribution needs to be simple and easy to deploy and operate: as a cloud operator, you should not have to read a thousand books before actually doing something. It should be intuitive. And naturally, one of the biggest challenges we recognize with OpenStack is that our customers are sometimes quite reluctant to update, simply because they believe updates and upgrades are complex and have an impact on their workloads. That introduces a lot of interesting situations, like having to patch a very, very old code base and somehow deliver that to the field. And of course, if we want to compete with the public clouds, there needs to be some level of feature parity between what public cloud providers offer and what our OpenStack can do.

So, Mirantis OpenStack on Kubernetes: the launch is on November 4th. The initial general availability version will contain the core set of services: identity management, images, virtualization, block storage, networking, the Horizon dashboard, and orchestration. There are also some extras: a bare-metal management service, DNS as a service, Ceilometer for metering, auto-scaling of workloads, management of secrets, and load balancing through Octavia. All of this is augmented with Mirantis know-how: subsystems that give operators the ability to manage their infrastructure full stack, from the hardware at the very bottom to the software at the very top.
Of course, there is an operations support system: the well-known Mirantis StackLight for logging, monitoring, and alerting, also containerized. Then, naturally, zero-touch day-two operations capability, meaning that adding a node, removing a node, or changing configuration is a matter of one or two API calls, not a thousand manual actions. And of course, continuous and seamless updates to the OpenStack installation in the field, meaning that applying new code should be a matter of minutes rather than hours, and should not have any impact on the running workloads. We will be achieving that from the very beginning, from the very first version of Mirantis OpenStack. And by the way, it is going to ship the OpenStack Ussuri release, so quite new and quite stable by that time.

Okay, so how exactly is Mirantis doing this? How are we deploying and running OpenStack on top of Kubernetes? We achieve it through integration with our new line of products, which we call Docker Enterprise Container Cloud. This is a Kubernetes-as-a-service distribution, plus the tool set around it, that OpenStack relies on, just like any other typical Kubernetes workload, to manage and control the underlying infrastructure: the hardware, the host operating system, the underlay Kubernetes cluster itself, and the add-ons to that cluster. As the developers of OpenStack, we rely on the capabilities Kubernetes provides us with, and we deploy and manage OpenStack just like any other Kubernetes workload. On the packaging side, however, we ship not only OpenStack itself, as in our previous products, but also containerized Ceph as the major storage backend, containerized StackLight for logging, monitoring, and alerting, and two software-defined networking solutions: the classic architecture based on Open vSwitch, and containerized Tungsten Fabric.
Docker Enterprise Container Cloud solves the task of managing this whole stack, and it can actually do a bit more: for example, it allows you to give your users the ability to deploy their own small Docker Enterprise Kubernetes clusters on top of your OpenStack.

Okay, what do deployment and management look like from the operator's perspective? First, Docker Enterprise Container Cloud takes care of provisioning the infrastructure through its dedicated bare-metal management system: host OS provisioning, host OS configuration, discovery of the nodes, and so on are all handled through this mechanism. The next step is to deploy a Kubernetes cluster on top of that. Essentially, everything runs on top of Kubernetes: any component you can think of that lives in the user space of the host OS is a container. So, we install the Kubernetes cluster, we install the necessary add-ons, and we implement lifecycle management for OpenStack and its supporting components as Kubernetes operators. There is a Kubernetes operator for OpenStack, there is a Kubernetes operator for Ceph, and at some point there will be a Kubernetes operator for StackLight. With these lifecycle management modules exposing their APIs as part of the Kubernetes cluster API, an operator creates the corresponding clusters by providing a YAML definition of what needs to be there, and the management modules handle all the rest of the deployment and management. Changing things is just as easy: you use the same APIs you used to deploy. Basically, you describe what you want to get in the end, and Kubernetes handles getting to that desired state.

Naturally, there are certain pros and cons to this approach. Someone might say this approach is quite complex.
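As a rough illustration of the declarative model just described, here is a toy sketch of the reconcile loop at the heart of any Kubernetes operator. The field names and actions are hypothetical, not the actual Mirantis API; the point is only that the operator compares the desired state from the YAML definition with what is observed, and derives the actions to converge them:

```python
# Toy sketch of the operator pattern: derive the actions that move the
# observed state toward the user-declared desired state.
# All field names ("services", "openstack_version") are illustrative.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to move `observed` to `desired`."""
    actions = []
    for service, want in desired.get("services", {}).items():
        have = observed.get("services", {}).get(service, 0)
        if have < want:
            actions.append("scale-up %s %d->%d" % (service, have, want))
        elif have > want:
            actions.append("scale-down %s %d->%d" % (service, have, want))
    if desired.get("openstack_version") != observed.get("openstack_version"):
        actions.append("upgrade to %s" % desired["openstack_version"])
    return actions

desired = {"openstack_version": "ussuri",
           "services": {"keystone": 3, "nova-api": 3}}
observed = {"openstack_version": "ussuri",
            "services": {"keystone": 3, "nova-api": 2}}
print(reconcile(desired, observed))  # ['scale-up nova-api 2->3']
```

A real operator runs this comparison continuously, so any drift (a crashed pod, a manual change) is corrected the same way as an intentional edit.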
Why do I need all of that? Well, yes, but managing OpenStack the classic way, through packages and some sort of configuration management, is actually just as complex. Kubernetes does a great deal of work to help us: self-healing and auto-scaling, for example, come in the native Kubernetes way of doing things. Isolation of components and libraries is successfully addressed by using Docker images. Rolling updates are an out-of-the-box mechanism for Kubernetes workloads. Networking building blocks, like load balancing as a service and interconnectivity between containers, remove a great deal of the complexity of managing the huge number of services OpenStack has and of connecting them together and making them talk to each other. And, as I said, the reconciling mechanism, where the operator describes what they want to get and the rest is handled through internal mechanisms, is a great contribution to simplifying the deployment and management process. Scaling up the cloud is as easy as provisioning a new node, making it a Kubernetes worker, and assigning a label to it; everything else is handled automatically through the mechanisms Kubernetes provides. Of course, as an operator you will have to learn how to deal with all of that, but since the market is moving towards containerization very fast anyway, it will probably be a benefit. Yes, OpenStack is not a 100% typical Kubernetes workload, because it has certain stateful components that need to be pinned to particular worker nodes, and this extra logic needs to be implemented somewhere, namely inside the lifecycle management modules. That is true.
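To make the "rolling updates out of the box" point concrete, here is a toy sketch, not Kubernetes code, of what a rolling replacement does: pods are upgraded one at a time, so at every step all but one replica keep serving on a known version:

```python
# Toy rolling update with maxUnavailable=1: replace one replica per
# step, so the service stays available while the version changes.

def rolling_update(pods, old, new):
    """Return the pod-version list after each single-pod replacement."""
    steps = []
    current = list(pods)
    for i, version in enumerate(current):
        if version == old:
            current[i] = new          # replace exactly one pod per step
            steps.append(list(current))
    return steps

steps = rolling_update(["queens", "queens", "queens"], "queens", "rocky")
for step in steps:
    print(step)
# After the final step, every replica runs the new version.
```

Kubernetes Deployments implement essentially this policy natively (with configurable `maxUnavailable`/`maxSurge`), which is what makes "seamless updates" cheap to get for the stateless parts of OpenStack.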
Of course, you would need to take care of allocating a dedicated set of resources to run the Kubernetes control plane, so that OpenStack as a workload, should it get overloaded for some reason, does not affect the ability to operate the underlying Kubernetes cluster, which could lead to disastrous consequences. Long story short, this is what OpenStack on Kubernetes is going to look like, and we are all very excited about it.

As a sneak peek, I would like to show you a demo of one of the most powerful and exciting capabilities that OpenStack on Kubernetes is going to have: an upgrade between OpenStack releases done within a matter of hours, not days as before, and with very minimal impact on the users. Today we will upgrade a very small but still feature-rich OpenStack cluster. We will start the upgrade process and watch the main services get replaced with newer versions. After the upgrade is done, we will check whether the cluster is still operational; in the background there will be a Rally job running to collect statistics about successful and failed user operations. One more little thing before we start: this demo was created with the help of Lens. Lens is an integrated development environment created specifically for Kubernetes applications, and since we say that OpenStack, for us, is just another Kubernetes application, this IDE seems to be a great fit for the demonstration.

Okay, let's start. Here we have a nine-node cluster running the latest Docker Enterprise Kubernetes. On top of it there is an installation of Mirantis OpenStack on Kubernetes with all the major services available. Every user-space service is represented by one or more containers that run under the control of the Kubernetes orchestrator. The lifecycle management logic for all of the components, including OpenStack, is implemented in the form of Kubernetes operators.
The lifecycle management module for OpenStack is called the OpenStack operator. Let's check which OpenStack release we are currently running. Okay, the version of the Nova API is 2.60, which corresponds to OpenStack Queens. Let's also check the version of the Nova service itself by running the nova-manage version command from inside a container. The version of the Nova service also maps to the Queens release. Now let's set up a Rally job to simulate some user activity during the upgrade; the job will be booting and then shutting down batches of VMs in cycles. The simulation has now started.

Every LCM operator exposes a custom resource on the Kubernetes API, and the OpenStack operator is no exception to this rule. The OpenStackDeployment custom resource is the single point of entry into OpenStack cluster management. The target OpenStack release is specified as just another parameter in the custom resource, and changing this parameter to a different value triggers an upgrade. The upgrade has now started. First we need to pre-cache all the new container images; this usually takes a while. Twenty minutes later, we are ready to proceed with the rolling upgrade itself.

The rolling upgrade starts with the Keystone service. While the service is being renewed, let's check whether its API is still available. The Keystone API is still running on the Queens release. After a while we can see it has moved to Rocky. Now let's give the OpenStack operator a chance to finish with the Keystone service. The operator reports that the identity service has been successfully upgraded. Since we are done with the Keystone upgrade, let's take a look at Glance, and let's check how the upgrade process affects the end-user experience. We'll give it another try after a while. Everything is okay; the dashboard seems to be working. In the meantime, the image service has also been upgraded, and now it is time for networking. Let's open the networks dashboard in Horizon to make sure there is no impact on the users.
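The flow just shown, edit one field of the custom resource and let the operator handle the rest, can be sketched roughly as follows. The field name and the service ordering below are illustrative assumptions, not the exact Mirantis schema; the demo's Queens-to-Rocky edit is used as the example:

```python
# Toy model of an upgrade triggered by editing one field of a custom
# resource. The operator notices the change and upgrades services in a
# fixed order (identity first, compute last). Illustrative only.

UPGRADE_ORDER = ["keystone", "glance", "neutron", "nova"]

def plan_upgrade(cr_before, cr_after):
    """Return the per-service upgrade steps implied by a CR edit."""
    old = cr_before["spec"]["openstack_version"]
    new = cr_after["spec"]["openstack_version"]
    if old == new:
        return []  # no change to the target release, nothing to do
    return ["upgrade %s: %s -> %s" % (svc, old, new)
            for svc in UPGRADE_ORDER]

before = {"spec": {"openstack_version": "queens"}}
after = {"spec": {"openstack_version": "rocky"}}
for step in plan_upgrade(before, after):
    print(step)
```

This matches what the demo shows: identity (Keystone) goes first, then the image and networking services, with compute (Nova) near the end.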
Everything seems to be working fine. Time passes, and we are done upgrading the Neutron service. Now it is time for compute: Nova. We filter out the Kubernetes pods that comprise the Nova service and watch them get upgraded one by one. Let's check the version of the Nova API: 2.60 is still Queens. Let's give the operator a bit of time to complete the upgrade of Nova. Checking the API version again, we see it has changed to 2.65, which corresponds to the Rocky release of OpenStack. And just for completeness, since we checked this at the beginning of the demo, let's see what the nova-manage version command produces as output. The Nova service has been upgraded to version 18.3.1, which is Rocky. The operator keeps upgrading the rest of the components in the background. The upgrade normally takes around two hours, and since everything runs in parallel, there is no linear dependency on the size of the cluster. After roughly one hour, the whole cluster has been upgraded, and all of the OpenStack services are now running the Rocky release.

Now let's do a quick health check of the cluster. We'll play with it a bit to see if it is still operational. The compute, volume, images, and networking dashboards all seem to be working fine, which is definitely a good sign. Now let's try the Designate service: we'll create a new DNS zone and a record set in it to make sure that Designate is still operational. After creating a simple record set, let's try creating a new volume from an image to see if the Cinder and Glance services can work together. We have a set of tenant networks pre-configured, and we need to adjust the rules in the default security group to make sure that incoming SSH and ICMP traffic is allowed. Okay, now it's time to launch a new instance and try to log in to it. We will use the previously created volume to run the instance from. Okay, the instance is up, and now it's time to attach a floating IP
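The before/after checks in the demo rely on the fact that each OpenStack release caps the Nova API at a known maximum microversion: 2.60 for Queens and 2.65 for Rocky. A tiny sketch of that mapping, with the helper function itself being just an illustration:

```python
# Map Nova API maximum microversions to OpenStack release names, as used
# in the demo to confirm the upgrade. The 2.60/Queens and 2.65/Rocky
# values are real; the helper is only an illustration.

NOVA_MICROVERSION_TO_RELEASE = {
    "2.60": "Queens",
    "2.65": "Rocky",
}

def release_for(microversion):
    return NOVA_MICROVERSION_TO_RELEASE.get(microversion, "unknown")

print(release_for("2.60"))  # Queens (before the upgrade)
print(release_for("2.65"))  # Rocky  (after the upgrade)
```

So when the reported microversion moves from 2.60 to 2.65 mid-demo, that alone is enough to confirm the compute API has crossed from Queens to Rocky.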
so that we can access the instance from outside the cluster. All set; now it's time to SSH into the guest, using the floating IP address we just attached to the instance. And we're in. So, we have proven that the OpenStack cluster is fully operational after the upgrade. Now we just need to give the Rally job a bit of time to finish and then look at the execution statistics. The full upgrade took roughly two hours, plus another 20-30 minutes for the Rally job to finish, with 98.4% of cloud user operations completing successfully.

To conclude, we have just seen how Mirantis OpenStack on Kubernetes solves the challenge of upgrading an application as complex as OpenStack, using the advanced mechanisms the Kubernetes platform provides. The first release of Mirantis OpenStack on Kubernetes is going to become available on November 4th. It will provide enough functionality for any company to build its own private infrastructure-as-a-service platform. From the very beginning, Mirantis will deliver continuous updates to the product: a new release will be available every six weeks, with a zero-touch and zero-impact migration path between versions. The second release of the product, which will become available at the end of 2020, will support OpenStack Victoria, together with an upgrade path from OpenStack Ussuri. That is it. For more information about Mirantis OpenStack on Kubernetes, visit our website, info.mirantis.com. Thank you everyone for watching the demo, and have a nice day!