How are you doing today? Is everybody enjoying the lovely Tokyo rainy season this morning? My name is Bruce Basil Matthews. I'm the Western Regional Solutions Architect for Mirantis USA, and I bring with me today a distinguished figure from the Murano project, its PTL, Serg Melikyan.

The topic today is really: why reinvent the wheel? How you can accelerate the delivery of legacy applications by migrating them onto newer technologies, such as containerization, and a path for how you would get there. But before we go all that way, I'm going to walk through a set of specific changes that may take place in your environment. I'm going to tell you a little story, the once-upon-a-time story that I usually tell. I've been in this so long, it happened as the dinosaurs roamed the earth. Once that's done, we're going to talk about how people move through the stages of virtualization and automation and then finally reach an inflection point. At that point, there's usually a redesign and re-automation of your applications, and you get to redeploy them with far more benefit than you had originally. Then I'm going to turn the floor over to Serg, and he's going to tell you some of the details of how to do that.

So, once upon a time, back when the dinosaurs roamed the earth, this was how we did it. Applications were siloed into single racks. Everything was in twos: web services, application services, database services. Utilization, even in that high-availability configuration, was typically pretty low, but it kept the application users happy. It also made the hardware vendors ecstatic, because we not only did it once for high availability, we always ended up doing it twice for disaster recovery. In doing that, we kept the users happy, but the hardware vendors were dancing up and down while it was going on.

Typically, once we got over the idea of running only on bare metal, we started doing virtualization and then automating that process. The first step was: I'm the IT guy, and I know how you're going to operate, so I'm going to be the only one to provide it to you. The second step was automating what they had done. In both the virtualization and automation phases, there was a lot of end-user frustration, because the service levels users were receiving were lower than on bare metal, and a lot of interaction with the IT guys was required to make anything happen. The users were saying: IT, it's your responsibility to recover for me; I don't need to do anything from an application perspective. The IT guys' perspective, on the other hand, was: OK, we're getting better utilization because we're compacting onto virtualized environments, and I get to manage it in a single pane of glass, a vCenter-ish kind of thing. And of course, the costs paid to the associated vendors went up incredibly. That was where we stood at the virtualization and automation phase.

But at some point in an organization's development and evolution, you finally get the idea that there is something better, a more collaborative way to do this. And one of the fundamental things OpenStack brings is that sense of collaboration between the IT organization and the end-user community as a natural progression.
So we're all using the Horizon UI, the Python CLIs, and the RESTful APIs to do our jobs, whether from an operational standpoint or as end users. And of course, we provide a tool that lets you do the deployments in an automated fashion, so you can get that OpenStack environment up and running fast. We all use the same identity mechanisms and the same APIs. As a result, you get the very rich list of services shown here on the right. The one we're going to focus on most intently is at the top of the heap: Murano, the application service catalog, and how it's used with Heat and several other options to redesign and redeploy your architectures, even for a legacy application service.

From IT's perspective on the new back end, instead of dealing with physical, single-cabinet application service layers, IT organizations are beginning to lay out their environments to accommodate this new application service structure in a way that makes sense. You'll notice the setup here has four availability zones. Controllers are spread out across the AZs as well, so they become protected, and the databases that back Ceilometer, et cetera, are spread across them too. In addition to the availability zones, which are fault domains, you layer host aggregates on top, to isolate Windows-oriented environments from Linux-oriented environments, or one organization in your company from another. And because people need different service levels for storage, you can also isolate by tier in that same environment, and by fault domain: Cinder backed by SSD, Cinder backed by higher-capacity spinning media, et cetera, and finally a lower tier of storage just to provide ephemeral storage for VMs where necessary.

From an application service perspective, now that we've got this new infrastructure and platform in place, I can take advantage of it on the application side to ensure that high availability across separate fault domains is part and parcel of my application. The web services are spread across the host aggregates and availability zones; application services, the same; database services, the same. All are backed by different tiers of Cinder-provided storage or other storage, Swift, et cetera, to give the application the proper level of service.

Now, everybody got this far because you're all here from the OpenStack world, but there are several perspectives on the change that had to occur in your environment to reach this level. The first is that, from an application perspective, I'm now self-managing the virtual machines I operate. I have more control, but also a greater degree of responsibility. Failure scenarios are now planned into the way I lay out my application services, and I can acquire new resources myself: I can spin up VMs, add additional storage, and everything else.
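To make that fault-domain layout a little more concrete, here is a minimal sketch of the pattern using the Kilo/Liberty-era python-novaclient. The host names, credentials, metadata, and image/flavor IDs are all hypothetical placeholders, not values from the talk.

```python
# A minimal sketch: carve out a fault domain with a host aggregate tied to
# an availability zone, then land a VM in it. Credentials, host names, and
# image/flavor IDs are hypothetical.
from novaclient import client

nova = client.Client('2', 'admin', 'secret', 'admin',
                     'http://controller:5000/v2.0')

# Create an aggregate that also exposes itself as availability zone "az1".
agg = nova.aggregates.create('linux-tier1', 'az1')

# Add the compute hosts that make up this fault domain.
for host in ('compute-01', 'compute-02'):
    nova.aggregates.add_host(agg, host)

# Tag the aggregate so flavors and the scheduler can match on it
# (e.g. an SSD-backed tier).
nova.aggregates.set_metadata(agg, {'storage': 'ssd'})

# As a tenant, boot a VM pinned to that availability zone.
server = nova.servers.create('web-01', '<image-id>', '<flavor-id>',
                             availability_zone='az1')
```

Repeating the boot call across az1 through az4 is how the web, application, and database tiers end up spread across all four fault domains.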
The IT side of the house also benefits. Instead of troubleshooting individual platforms, they can plan for the growth of the environment by adding compute and storage, and they can take a longer look at the new capabilities being generated within the OpenStack and open-source communities. Now I can monitor and manage, as opposed to dragging cables through my environment and racking and stacking the way I used to when the dinosaurs roamed the earth. Because of that, I can spend more time planning, and quite frankly, planning an OpenStack environment is far more complex than planning the old IT infrastructures, because you now have to take the breadth of your application services, the workloads, into account.

OK. If you've followed me so far, we've only been dealing with virtual machines and baked images. The real heart of this talk, which is coming up next (I'll try to speed up so I give Serg more time), is this idea of containerization and microservice delivery. Now, it may seem really weird that you'd take an application like .NET and want to provide microservices to support it, but here are a few good reasons why that's a good investment of time and energy in your environment. First, you get portability from platform to platform, because Docker containers run in different environments in exactly the same way. You can prepackage them, and because they're containerized and the operating system they run on is already running, they spin up faster, so you get faster service delivery out of the new application. From an IT perspective, you can unify the capability of providing both bare metal through Ironic, when needed, and virtualization, when that applies. Virtualization of storage and network means I'm not dragging cables anymore, which is a big plus in my book, because I'm getting too old to drag cables. The multi-tenancy layer that's fundamental to the way OpenStack was developed lets you do all of this while providing monitoring, management, and an elastic kind of infrastructure that expands and contracts as you need it to. And because the deployments can be automated in a way that's integrated with the OpenStack environment, which is what we're going to focus on in a minute, it saves time and energy, with the orchestration already taken care of.

There are a couple of use cases you really want to take into account when containerizing. The first is: what's my internal security model? If that security model says I can only have certain IP ports facing the world, and only a certain amount of data visible to an individual container, then you need to host the container in a virtual machine. Why? Because a container running directly on bare metal sees the entire platform it happens to be operating on. However, if what you're going for is density, putting as many containers on the hosts as will fit, then for density and performance you want to look at using Ironic and developing a bare-metal strategy, deploying containers directly on bare metal.

Here are a few of the frameworks for taking advantage of containerization on the OpenStack framework. The one we offer out of the box is Kubernetes, but the others can be integrated.
We just don't offer them out of the box. Here are some of the pros and cons of each; we'll make sure this information is available on the website so you can review it on your own.

So what does all this change mean to the end-user community and to the IT organization? Those Dockerized containers, those microservices, can now be moved almost instantaneously from one platform to another. If the overall utilization of a given set of compute platforms becomes too high, you can move them really rapidly, and because the operating system is already running, you don't need to take the time to spin one down and spin one up; you can just move it. That applies not only to the Kubernetes cluster, as depicted here, but also to the storage associated with it, because it sits on a shared distributed storage platform such as Ceph. Now my friend Serg actually has a question to pose to you folks. Serg?

Thank you. So, we talked about applications, and your applications are running in containers. And underneath, you have some container orchestration engine: Kubernetes, Mesos, Docker Swarm, whatever. It takes care of the lifecycle and reliability of your application; it heals it, it scales it. Everything seems fine so far, right? And under that layer, you have OpenStack, which is also scalable and highly available, and you probably have some distribution that takes care of the scalability and high availability of your OpenStack. But the question is: what is in the middle? Who will take care of the scalability and reliability of your container orchestration engine? Who is going to do that job?

So I would like to introduce several projects and point out the features in these projects which are designed to solve this question, to give you tools around which you can build a solution for reliability, high availability, auto-scaling, and auto-healing for your container orchestration engine, whatever it is. First of all, Murano. It's the application catalog for OpenStack, a service in OpenStack, and the main idea is that it gives you a way to deploy any application with the push of a button, from the UI or the API, at your choice. The second project is more interesting in this case: the container service for OpenStack, Magnum, which gives you the ability to manage clusters of containers and orchestration engines. You can deploy different orchestration engines through Magnum, such as Docker Swarm, Kubernetes, and Mesos; Magnum is able to deploy them and scale them for you. And this is the answer to how to get containers into your cloud.

So Murano and Magnum both provide several features. First of all, a REST API to deploy and scale your container engine. Just a few calls and you have your Kubernetes cluster deployed on top of VMs in your OpenStack cloud; one more call and you scale your cluster up or down, depending on your needs. Magnum gives you a choice of three engines; Murano only gives you one, Kubernetes, but with Magnum you have a choice of the three most popular and most reliable container orchestration engines on the market. And Murano and Magnum also provide an API to schedule containers on these engines. In the case of Magnum, there are two ways of scheduling containers on these orchestration engines. With the OpenStack-native API, you can schedule your container to run on an already deployed orchestration engine through the Magnum API. Or you can use the API of your orchestration engine directly.
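To make "just a few calls" concrete, here is a rough sketch of the Magnum-native path, assuming the Liberty-era Magnum REST API, in which clusters were called "bays" created from "baymodels". The endpoint URL, token, and UUIDs are illustrative assumptions, and the exact patch content type may differ by release.

```python
# A rough sketch of deploying and scaling a Kubernetes "bay" through the
# Liberty-era Magnum REST API. URL, token, and baymodel UUID are hypothetical.
import requests

MAGNUM = 'http://controller:9511/v1'
HEADERS = {'X-Auth-Token': '<keystone-token>'}

# Deploy a Kubernetes cluster (a "bay") from a pre-created baymodel.
bay = requests.post(MAGNUM + '/bays', headers=HEADERS, json={
    'name': 'k8s-bay',
    'baymodel_id': '<baymodel-uuid>',
    'node_count': 2,
}).json()

# One more call scales it: a JSON-patch update of node_count on the same bay.
requests.patch(MAGNUM + '/bays/' + bay['uuid'], headers=HEADERS,
               json=[{'op': 'replace', 'path': '/node_count', 'value': 4}])
```

That is the whole OpenStack-native path; the second option is to bypass Magnum and speak to the engine itself.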
The direct path, for example, means connecting to the Kubernetes API and saying: please deploy this topology, take care of these containers, take care of my application. Murano also provides a way to contact Kubernetes directly, talk with it, and deploy containers, applications, whatever, through that API.

I would like to highlight the first point: the REST API to deploy and scale the container orchestration engine. So, a little bit about what Kubernetes in Murano is and what it can do. First of all, it's just another application. Murano has a lot of them; it's the same kind of application as WordPress, an Apache server, anything. It's just an application for Murano; there is nothing specific to Kubernetes in Murano itself. The Kubernetes application for Murano is just a really thin wrapper around Kubernetes that is able to install Kubernetes on top of OpenStack. Our current version of the Kubernetes application uses the latest stable Kubernetes version, at least as of a week ago: v1.0.6. And by its nature, like any other Murano application, it's scalable and it's extensible.

What do I mean by extensible? Murano applications are designed in such a way that they provide an object-oriented description of your object model, of your application. So, for example, if you would like to take the current implementation of the Kubernetes application and add some new capabilities, like the ability to change the replication number for your pod in Kubernetes, or the ability to migrate your Kubernetes cluster between different regions or availability zones, you just inherit the current implementation in your own implementation of the Kubernetes application. Users instantly gain the new capabilities you added, and you reuse everything that is already there. You don't maintain a new codebase; you use the current implementation as a library, just as in any programming language. You combine different applications, different libraries for Murano, which provide different capabilities, for example scaling models, and get what you want from your orchestration engine on top of OpenStack.

What about scalability? The current version of the Kubernetes application supports scaling up and down of master nodes, of minion nodes, and of the gateway nodes which take care of traffic routing. It also provides the ability to increase and decrease the replication number for a particular pod, in case the application itself, represented by a pod, needs to be more scalable. Murano exposes the orchestration engine's native API for container management: we don't wrap the Kubernetes API. We just provide the endpoints. You deploy the application, you get back an endpoint which you can reach with kubectl from your machine, and you do whatever you want.

But at the same time, we provide an API for other applications, the ability for them to be deployed on top of the Kubernetes cluster. Essentially, you can take some application which consists of a number of containers, or possibly something mixed: something installed on a VM, something in containers, something running outside the cloud, for example on some hardware rack which you've used for two years already and which works great. Your application uses these three components to provide some service to the end user. Murano provides an API to package this as just another Murano application and deploy it on top of the Kubernetes cluster. So you don't need to write a shell script or some orchestration of your own to deploy this application on top of the provided Kubernetes cluster in your OpenStack; Murano takes care of that. But at the same time, if your developers work only with containers, if everything has been migrated to containers, just use the API like any other tool. And provisioning and scaling are available from the UI and the API. If you need to scale your application, you don't need to go tune something or deploy something manually; you just click one button. Or, if you have some automation around this, just call the API and it will take care of the scaling.
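As an illustration of talking to that returned endpoint directly, here is a small sketch that bumps a pod's replication number through the Kubernetes v1 REST API of that era (the equivalent of a kubectl scale). The endpoint address and replication-controller name are hypothetical.

```python
# A small sketch: change a pod's replication number by talking directly to
# the Kubernetes API endpoint that Murano hands back after deployment.
# The gateway address and controller name "frontend" are hypothetical.
import requests

KUBE_API = 'http://<k8s-gateway>:8080/api/v1'
RC_URL = KUBE_API + '/namespaces/default/replicationcontrollers/frontend'

# Read the current replication controller, bump replicas, write it back.
rc = requests.get(RC_URL).json()
rc['spec']['replicas'] = 5
requests.put(RC_URL, json=rc)
```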
The next project I would like to introduce is Ceilometer. Ceilometer is essentially the telemetry service for OpenStack; it reliably collects metrics on the utilization of different physical and virtual resources. To make your container orchestration engine automatically scalable, you need something that takes care of that: something that measures the load on your current cluster and scales it automatically for you. In the auto-scaling use case, Ceilometer, besides gathering metric information, has one very important feature: alarms, the ability to act when your metrics deviate from some parameter, when they hit some threshold. Say you have more CPU load on your compute nodes than you planned: an alarm is raised, and you can do something with that event. More importantly, in the case of Ceilometer (and I'm fairly sure the same holds for Zabbix), that alarm can trigger some URL. And since Magnum and Murano give us the ability to scale a cluster by just calling a REST API, you can tie these two features together very easily: when Ceilometer sees that the metrics are rising, it calls the scaling API of Magnum or Murano, and your cluster is scaled.

These two features are available out of the box; you don't need to do anything special to make this happen. In the case of Magnum, we need a small wrapper that configures the collection of metrics in Ceilometer, and the alarm that calls a URL on some threshold has to be configured manually. In the case of Murano, you can automate this in your Kubernetes application, by extending it or, for example, by writing a specific monitoring implementation for the Kubernetes application, for example for Zabbix, which in my opinion is slightly more suitable for this concrete use case of monitoring and auto-scaling. Ceilometer is a great tool for measuring, but it's still moving in the direction of being able to handle these tasks. Still, it works out of the box either way, in both cases.

There is only one concern with Ceilometer and Zabbix, and that's authentication. Both APIs, Murano's and Magnum's, require OpenStack authentication to perform the action, because we need the rights to scale. In the case of Ceilometer, this is easily solved by plugins. For Murano, we don't yet have a Ceilometer plugin that provides the ability to seamlessly scale the Kubernetes cluster up and down; I'm not sure about Magnum. But it's easily solvable, and I hope we'll achieve this goal by the next milestone, Mitaka-1. And if you can help us do that, it would be great.
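To show how the two features tie together, here is a hedged sketch that creates a Ceilometer threshold alarm whose alarm action is a scaling webhook. The alarm schema follows the Liberty-era Ceilometer v2 API as I understand it; the URLs, token, and threshold numbers are illustrative assumptions, and the webhook target stands in for whatever authenticated glue calls the Magnum or Murano scale API.

```python
# A hedged sketch: a Ceilometer v2 threshold alarm whose action calls a
# scaling URL (e.g. a small proxy that performs the authenticated
# Magnum/Murano scale-up call). URLs, token, and numbers are assumptions.
import requests

CEILOMETER = 'http://controller:8777/v2'
HEADERS = {'X-Auth-Token': '<keystone-token>'}

alarm = {
    'name': 'k8s-cluster-cpu-high',
    'type': 'threshold',
    # Called when the alarm fires; this endpoint should perform the
    # authenticated scale-up against Magnum or Murano.
    'alarm_actions': ['http://autoscaler.example/scale-up'],
    'threshold_rule': {
        'meter_name': 'cpu_util',
        'statistic': 'avg',
        'comparison_operator': 'gt',
        'threshold': 70.0,          # fire above 70% average CPU
        'period': 600,              # evaluated over 10-minute windows
        'evaluation_periods': 1,
    },
}

requests.post(CEILOMETER + '/alarms', headers=HEADERS, json=alarm)
```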
Self-healing: unfortunately, here I will talk only about self-healing in Murano, because I'm mostly familiar with Murano. Murano provides the following capability, which is well suited to self-healing: workflows which you can call by API. A workflow, by the nature of Murano, is written in an imperative language, and a workflow can do anything with the object model: add a new VM, install something on a VM, execute some shell script, an Ansible or Puppet manifest, or a Chef cookbook on the VM, whatever you want. It can take some action on a VM, or on an application installed on a VM, and do it when a URL is called. Combined with metrics and the ability to call URLs from existing monitoring tools, this gives you a way to implement self-healing in case of some disaster situation with your cluster. One node goes down, a compute node fails; you don't know what happened, but you need to continue providing service to your customers, to give them the ability to run containers, to keep scheduling them, and not shut half of them down because one node failed. So monitoring detects that the load has increased and a node has failed, and it calls a URL; Murano kills the failed VM, brings up a new one, installs the same application with the same configuration, brings that node to the same state, and joins it to the cluster.

And self-healing is not only a question of bringing back physical resources. Most of the time it's a question of reconfiguring, of changing the state of the application. And it's tricky, because you need to take care of it at different levels. The simplest way is recreating the VM from scratch: if the image is easily pre-baked and the node can rejoin the cluster, that's great, but it's not always the case, and it takes more time than, for example, checking whether all the services are running and just restarting one of them, which can also be achieved.

So, three features: the ability to call some URL from monitoring; deployment and scaling by API; and the ability to call any workflow in Murano by URL. Together they give you a way to create a layer that takes care of the reliability and scaling of your orchestration engine. Wrapping it all up: using Murano, you can create applications for your orchestration engines, different frameworks, some platform services, for example; publish them in the catalog; and give your customers the ability to deploy clusters for themselves, easily, without interaction with your DevOps guys, with your IT organization, and to consume them directly once the application is deployed, tailoring them to what they need, scaling them up and down.
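Here is a sketch of what that glue layer could look like: a tiny webhook that a monitoring tool calls when a node dies, which in turn triggers a Murano workflow over REST. The action endpoint shape follows the Kilo-era Murano "actions" feature as I understand it, and the environment ID, action ID, token, and port bindings are all assumptions.

```python
# A sketch of the self-healing glue layer: monitoring POSTs here on failure,
# and the handler fires a Murano action (an API-callable workflow) that
# replaces the failed node. IDs, token, and the action name are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
import requests

MURANO = 'http://controller:8082/v1'
ENV_ID = '<environment-id>'
ACTION_ID = '<healClusterNode-action-id>'  # hypothetical action

def heal_cluster():
    # Trigger the Murano workflow that kills the failed VM, brings up a
    # replacement with the same configuration, and rejoins the cluster.
    requests.post(
        '{0}/environments/{1}/actions/{2}'.format(MURANO, ENV_ID, ACTION_ID),
        headers={'X-Auth-Token': '<keystone-token>'}, json={})

class AlarmHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Ceilometer, Zabbix, or any other monitor calls this URL.
        heal_cluster()
        self.send_response(200)
        self.end_headers()

if __name__ == '__main__':
    HTTPServer(('0.0.0.0', 8000), AlarmHandler).serve_forever()
```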
So, questions? Yeah, there's a mic there in the back of the room; if anybody wants to ask a question, please do it from there, otherwise we'll have to repeat the questions after you ask them.

Hello. Hi. Yeah, so, I'm slightly confused. I thought I was coming to a Murano sort of sales pitch, and in a way you've kind of disappointed in that respect. I'm a great buyer of the idea of having an application management tool and all the great stuff it's supposed to provide, but at the heart of it, there have got to be applications that can be delivered by that mechanism. So Mirantis delivered Mirantis OpenStack 7.0, including the Kilo version of OpenStack, which admittedly is six months old, only about a month ago. And the Murano application catalog has exactly one application for it, which is based on the previous version of Mirantis OpenStack, the one that delivers the Juno version of OpenStack: only one, the Rally application. So I'm kind of wondering: is Mirantis expecting all its customers to move to the next version within one month of it being released, or is this a glitch of some sort?

Let me answer the question regarding applications. First of all, speaking as one of the Mirantis employees: Mirantis doesn't distribute applications. Understandably. Yes. All the applications available in the community app catalog, in the GitHub organization for Murano, are applications developed by the community. Mirantis as a company doesn't support them and doesn't commit dedicated resources to that. They're developed by the Murano team, including the part of the Murano team who work for Mirantis, as part of the upstream job; nothing there is specific to Mirantis. That's the question about applications.

That's fair enough, but why is it that there are something like 30 or 40 applications available for Kilo and none for Juno? They actually are available for both versions. And I hope during this summit we will talk with the community that supports the community app catalog, and we will solve the question of versioning. Currently, the community app catalog doesn't support versioning; we can't publish new versions of applications there. That's a problem. So it is a glitch, then. Yeah. So if you want these applications, just go to GitHub, download them by the tag, and you will have Liberty, Kilo, and Juno.

Okay, thank you. And you asked a second question, about the marketing pitch. Our talk is called Don't Reinvent the Wheel; that was the point. Everything is already there: just use the existing features. That's why I didn't try to promote a new technology or explain great new ideas about how to implement this. I'm telling you, there are three features which are already there, and you don't need to do anything beyond using them to achieve your goal.

Other questions? Any other questions on the application service? Well, folks, thank you very much for taking the time to chat with us today. If you have any other questions, I'm Bruce Matthews, Western Regional Solutions Architect, and this is Serg Melikyan, PTL for the Murano project. We're happy to answer them after the session is over. Thank you very much. Thank you.