Let me start. Hello! I'm happy to see so many people here, because this is the farthest corner of this huge building, so I could have opened this presentation with "far, far away, a presentation was happening." Welcome, folks. I would like to start by introducing my ingenious co-worker, sorry, co-speaker, Madhuri Kumari from Intel. Today we are going to talk about one of the biggest questions that was asked last year: Murano or Magnum?

I will start with a quick introduction of Murano, and Madhuri will continue with an introduction of Magnum for those who don't know the project well. Then I will quickly apologize for one small confusion that I created half a year ago. Then, the question of the century: Murano or Magnum? We'll continue with a short demo by Madhuri, and finally we'll give you an answer and listen to your questions.

So, Murano. This slide travels with me from one year to the next, but I really like it. Murano is an application catalog for OpenStack. Our mission is to give application developers the ability to easily create applications for the cloud and bring them to the cloud, and to give cloud operators a way to publish and provide these applications to end users. So an end user of the cloud can go to OpenStack, click one button, and receive WordPress, a CRM system, or whatever else, in one click of a button.

So what is Murano, actually? Murano consists of four parts. The first part is a catalog; that's kind of obvious. The second, most interesting part, which I would like to talk about today, is that Murano is an application interoperability layer. We also have orchestration and configuration management, because in order to deploy an application from beginning to end, you need all four parts.

There's also a historical reason why orchestration and configuration management were built into Murano. Murano started in 2012 as "Windows Data Center as a Service," and given that there was not much Windows in OpenStack at all, we needed something to provision machines with Windows and then do configuration management on them: install software, configure the software. At that time we used Heat for orchestration, so since 2012 and still today, provisioning does resource allocation in the cloud through Heat, and back then we used an agent and PowerShell for configuration management.

Going further: the first two parts, the catalog and the application interoperability layer, are, I think, the most important parts of Murano and what makes Murano different from other products in OpenStack. The catalog is represented by Glare plus a bit of the Murano API and the dashboard. The application interoperability layer is represented by MuranoPL and the Murano engine which implements it. MuranoPL is an application definition language which we designed in the Murano team, and it's slightly different from other languages in the stack, like Heat's and TOSCA, by being imperative. Basically, it's an object-oriented imperative language that gives you the ability to define your application and connect it with other applications through an object-oriented interface. Each application exposes an interface: a set of workflows and properties which other applications can access.
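To give a flavor of the language, here is a minimal sketch of a MuranoPL class, loosely modeled on the public Apache example from the murano-apps repository; the class name, namespace, and template file name are illustrative, not something shown in the talk:

```yaml
Namespaces:
  =: com.example.apps        # hypothetical namespace
  std: io.murano
  res: io.murano.resources
  sys: io.murano.system

Name: HelloApp

Extends: std:Application     # every catalog app extends io.murano.Application

Properties:
  instance:
    Contract: $.class(res:Instance).notNull()   # the VM this app lives on

Methods:
  initialize:
    Body:
      - $._environment: $.find(std:Environment).require()

  deploy:
    Body:
      - If: not $.getAttr(deployed, false)
        Then:
          - $._environment.reporter.report($this, 'Creating VM for HelloApp')
          - $.instance.deploy()                 # resource allocation via Heat
          - $resources: new(sys:Resources)
          # hand an execution plan to the murano-agent running inside the VM
          - $template: $resources.yaml('DeployHello.template')
          - $.instance.agent.call($template, $resources)
          - $.setAttr(deployed, true)
```

The `deploy` method is the workflow other applications can invoke, and `instance` is a property they can reach through the interface described above.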
And this is how we can easily combine applications into one stack and make them truly reusable components, which you can swap at any point at design time, when you're composing a stack in the dashboard.

The Murano engine, which is responsible for the app interoperability layer and also for orchestration, has several major parts. We use Heat for resource allocation in the cloud. Heat-Translator is a separate engine we use for deploying TOSCA applications: it translates TOSCA to Heat and then feeds the result to Heat. We also support plain Heat templates, with Heat as the sole orchestrator in Murano, so you can take your Heat template and make an application out of it without writing a single line of MuranoPL: just take the template, push it to Heat, and at the same time get catalog capabilities for your end users. And you can integrate any third-party engine into Murano, like Cloudify, which we did in Mitaka.

Configuration management in Murano is available only when you use MuranoPL; obviously, in Heat it's available through Heat. In the Murano stack it's available through the Murano agent: a small piece of software which is installed in your VM, either when the VM is spawned or pre-built into the image, and which lets your application interact with that VM by sending commands to it: execute this, give me the results, do this, do that. And "do this and do that" means bash scripts, Puppet manifests, Chef cookbooks, or PowerShell in the case of configuration management on Windows.

So this is the whole stack of Murano. Once again, the first two layers are the most important part, unique to Murano, and they are what make Murano an application catalog. The two other layers are there for when you need to complete something that is missing in other engines, or when, for example, you already have some pieces and just want to run with them quickly without converting existing scripts into something else.

A little bit more about the application interoperability layer. MuranoPL is an imperative, object-oriented DSL, a domain-specific language. From one point of view, MuranoPL is quite simple: it's very close to how Python and Java look, so if your application developer has experience with any object-oriented language, it will be super easy to pick up and start developing in MuranoPL from day one. MuranoPL is sandboxed and built on top of Python. Basically, we didn't invent a whole language per se; we don't have the traditional language machinery, like a compiler or translator. We map the language onto Python primitives built into our engine. This gives us sandboxing from one point of view and simplicity from another, and extending Murano is super easy. We didn't write our own compiler; that would be silly.

In Murano, everything is an object, including every piece of your application and the application itself. So even when writing your application you can keep using object-oriented design, making each piece of your application an object which can be used by other applications in the same stack. For example, a complex application, and Kubernetes is one example, is several objects which can be put into one package, one application, and inheritors which extend that application can reuse them. So: dependencies on interfaces and very deep decoupling.
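On disk, this composition shows up in the package manifest, which maps each class in the package to the file that defines it. A hedged sketch, using class names the upstream Kubernetes package is commonly known by; the exact names and fields in the murano-apps repository may differ:

```yaml
Format: 1.0
Type: Application
FullName: io.murano.apps.docker.kubernetes.KubernetesCluster
Name: Kubernetes Cluster
Description: Kubernetes cluster as a composable Murano application
Author: 'Example Author'            # illustrative
Tags: [docker, kubernetes]
Classes:
  # several cooperating objects shipped in one package; inheritors can reuse them
  io.murano.apps.docker.kubernetes.KubernetesCluster: KubernetesCluster.yaml
  io.murano.apps.docker.kubernetes.KubernetesMasterNode: KubernetesMasterNode.yaml
  io.murano.apps.docker.kubernetes.KubernetesMinionNode: KubernetesMinionNode.yaml
```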
And at this point, I would like to leave you with Madhuri and a little bit of information about Magnum.

Thank you, Serge, for a great introduction to Murano. Now I'll try to explain what Magnum is and how it works, what the architecture of Magnum is, and what features it has. To start with, Magnum is the containers project for OpenStack. It provides a set of APIs to manage your container clusters on a multi-tenant OpenStack cloud. It allows different kinds of container orchestration engines, we call them COEs, like Kubernetes, Swarm, and Mesos, to be available as first-class resources on OpenStack.

So this is the architecture of Magnum. Before talking about how Magnum works, let's look at the resources in Magnum. We mostly have two: one is the bay model and the other is the bay. The bay model is, you could say, just like a Nova flavor: you specify which image to use, which keypair to use, the network, and all the other parameters. And then there is the bay. The bay is a group of Nova VMs on top of which your COE services are configured and run, and then you run your containers on it. We also manage other kinds of resources, like pods, replication controllers, services, and containers; these are all COE-related.

Then we have two services in Magnum: the Magnum API and the Magnum conductor. And one of the most important parts of Magnum is the Heat templates. We use Heat for the orchestration of the COEs, so we have Heat templates for each COE: for Kubernetes, Swarm, and Mesos. Each of these templates has two parts, one for the master nodes and one for the minion nodes, so we have separate templates for each kind of node, plus elements describing how we configure things like the Docker services on the Nova VMs, and various other configuration. For example, we need some kind of block storage on our nodes to run our containers, so we specify all of that in the Heat template, and we use it to deploy the cluster.

So when you say "deploy a Swarm or a Kubernetes cluster for me," the Magnum conductor talks to Heat and provides the Heat template for the specific COE, say Swarm or Mesos. Heat then talks to the other OpenStack components, Nova, Neutron, Glance, and Cinder, to deploy your cluster. Once our bay is up and running, we have a group of Nova VMs; these are the Nova instances where the Docker services are running. You can specify how many nodes you want in your cluster, say two master nodes or two minion nodes; that is configurable when you create a bay. Once the cluster is up and running, you can just run your containers on it using the native clients, or you can use the APIs available for these resources in Magnum.

So that's an overview of the resources and what we manage in Magnum. As I've already told you, we support three kinds of COE: Kubernetes, Swarm, and Mesos. And we support scaling of nodes: when you want to scale your cluster up by two or three nodes, you just do a bay update with the new node count. Magnum talks to Heat and Heat does the thing for you: it scales the nodes up or down.
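As a rough illustration of that flow, the Mitaka-era CLI looks like this; the names, image, and counts are hypothetical placeholders, and the flags follow the Magnum quickstart of that release:

```console
# A bay model is like a Nova flavor for clusters: image, keypair, network, COE.
$ magnum baymodel-create --name swarmbaymodel \
    --image-id fedora-21-atomic-5 \
    --keypair-id testkey \
    --external-network-id public \
    --dns-nameserver 8.8.8.8 \
    --flavor-id m1.small \
    --coe swarm

# A bay is the group of Nova VMs on which the COE services are configured.
$ magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 2

# Scaling: just ask for a new node count; the conductor hands the change to Heat.
$ magnum bay-update swarmbay replace node_count=3
```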
And then we manage pods, services, replication controllers, and containers; we have APIs for these resources, which you can use to create, delete, or update a pod, for example. But we mostly encourage people to use the native CLIs for this management, because the container lifecycle is huge and you cannot manage everything in Magnum. So we do have the APIs, but we encourage people to use the native CLIs.

So now, the most important slide: the features we have in Magnum. I can skip the cluster types, I guess, and note that Magnum currently supports Fedora Atomic images and CoreOS images.

Next, secure API endpoints. When we deploy a bay, we have Magnum services running on one node and Docker services running on other nodes, so this communication needs to be secured somehow. We make it secure using TLS certificates, and we manage these certificates in Magnum: we have a way to store them in Magnum itself, or you can use Barbican; that is configurable, depending on which kind of storage the user wants. And yes, the certificates are encrypted before we store them.

Then the next part is Cinder. We have two use cases for Cinder in Magnum. One is the Docker volume: the hosts where your Kubernetes or Swarm services run need some storage for the Docker services and containers, so we specify, say, 20 GB of block storage for our bay, which is created by Heat and mounted on the host to be used by the Docker services. The other is the volume driver: containers need persistent storage that can be shared between them, so for Kubernetes we have the cinder volume driver, which you specify in the bay model, and for Mesos and Swarm we have Rexray. When you deploy a bay, these services are configured on the master and minion nodes, and then containers can easily use these volumes.

The next feature is high availability. Magnum is highly available: you can specify the count of master nodes you want in your cluster. For example, you say you need two master nodes, and Magnum deploys your cluster with two master nodes plus the number of minion nodes you specified in your bay. These master nodes are fronted by a Neutron load balancer, which is used to balance the API services, for example the Kubernetes API servers. And one more use case: when we have containers running on our cluster and we want them to be accessible from an external network, this Neutron load balancer feature provides a virtual IP for the containers, which you can use to reach your containers from the outside network.
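Pulling together the feature knobs just described, a hypothetical bay-model invocation might combine them like this; the flags are from the Mitaka-era CLI, the values are illustrative, and the exact flag set should be treated as an assumption:

```console
# --docker-volume-size: Cinder-backed block storage mounted for Docker on each node
# --volume-driver:      persistent volumes for containers (cinder for Kubernetes;
#                       rexray plays this role for Swarm and Mesos)
# --tls-disabled:       skip TLS on the COE API; omit it to keep the secured
#                       endpoints that are the default
$ magnum baymodel-create --name k8sbaymodel \
    --image-id fedora-atomic \
    --keypair-id testkey \
    --external-network-id public \
    --flavor-id m1.small \
    --coe kubernetes \
    --docker-volume-size 20 \
    --volume-driver cinder \
    --tls-disabled

# High availability: ask for more than one master node; a Neutron load balancer
# fronts the COE API servers and provides the virtual IP mentioned above.
$ magnum bay-create --name k8sbay --baymodel k8sbaymodel \
    --master-count 2 --node-count 3
```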
Thank you. Now I'm going to hand over to Serge to explain the question of the century.

Yes. So, starting from the previous summit, I guess, or close to it, almost every day two to five people have been asking me this question: which one should I use, Magnum or Murano? I want one of them. Which way is better to deploy Kubernetes? Do you guys know about Magnum? And so on and so forth. So I would like to apologize. At that summit we talked a lot about how Murano deploys Kubernetes, how cool Kubernetes is, and so on. But instead of that, we should have focused on the fact that Murano deploys it, and on which capabilities Murano provides for writing such a complex application. That it really is a complex application can be seen from the fact that the OpenStack community created a dedicated project just to deploy Kubernetes; it is that complex to deploy. Instead of talking about how easily Murano lets you automate that kind of deployment, we talked about the mere fact that we can deploy it. That was the mistake which led to this talk.

So in Tokyo, the Intel, Rackspace, and Mirantis folks got together, meditated a little bit, and made a few decisions. First, integrate Magnum and Murano by creating a set of Murano applications based on Magnum for deploying Kubernetes, Mesos, and Docker Swarm. Second, make them compatible with the existing Kubernetes applications for Murano, meaning that all the applications you have probably created that use the Kubernetes application in Murano to provision a container management cluster stay compatible and can easily switch to the Magnum-based Murano application for deploying Kubernetes. Third, do this talk. And fourth, smile and be happy.

So what's the difference between the previously written Murano Kubernetes application and the new application developed using Magnum? First of all, we had only a Kubernetes application; we didn't have support for Mesos and Docker Swarm. We used our own capabilities for orchestration and configuration management, and it rested on two pillars: a pre-baked image which contained the binaries, to speed up the downloading of Kubernetes from the internet, and shell scripts to automate orchestrating all the pieces and bringing them up. The new application looks pretty much the same from the application interoperability layer: we share the same interfaces, so other applications can use either one without any issues, simply by selecting the appropriate application from the drop-down in Horizon. All the magic hidden from the user lives inside the Murano application, and we use the Magnum plugin for Murano, and Magnum itself, which does the work of deploying these COEs for you. And let's take a look at what it looks like. Madhuri?

Okay, so this is the demo of the Magnum app which we have in Murano. Let's start with how it works. We have the classes, the UI files, a logo, and a manifest; this is the base package for any application in Murano. We just create a zip out of it and then upload it as a package to Murano. So let's upload it. Now you can see that this application, the Magnum Bay app, is available as a package in Murano. You can see it in the UI: just go to the Murano tab, then the Applications tab, and there you'll find the application.
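In console form, the packaging step just shown looks roughly like this; the directory name is hypothetical, while the layout follows Murano's package conventions:

```console
# Standard Murano package layout: classes, UI definition, logo, manifest.
$ ls magnum-bay-app/
Classes/  UI/  logo.png  manifest.yaml

# Zip the package contents (the manifest must sit at the zip root), then import.
$ cd magnum-bay-app && zip -r ../magnum-bay-app.zip * && cd ..
$ murano package-import magnum-bay-app.zip
```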
So now you see we have this Magnum Bay app here. You can just hit Quick Deploy to deploy it. This is the form which comes up, and you have to fill it in. These parameters are specific to Magnum: for a bay, you have to specify a bay name and then the node counts. For example, this is the master node count, and I'm giving it one, the default. And for any bay you need a bay model, because you need to specify the image ID, the keypair, and everything else for your bay; so this is the bay model which we create for our bay. We'll just fill out this form. I'm really very sorry, it is a big form; we are working on splitting it into smaller pieces.

Now we are going to create a Swarm cluster, because that is not supported in Murano yet; by using this app, you can create a Swarm or a Mesos cluster from Murano as well. I'm just disabling TLS to make it simpler for me to use the native client. Okay, this will take two or three minutes for our bay to be up and running, and our Magnum bay will be configured with the Swarm services running on these nodes. We'll have two nodes here: one is the master node and the other is the minion node. Then you can use any CLI against it. We have many Docker applications in Murano, and those applications need to be made compatible with this app; we are working on making all of them compatible.

So now you can see here that we have this bay in the bay list and the status is "create in progress." As I told you, we use Heat to do this orchestration, so you can also see it in the Heat stack list. It should be up in, I guess, two or three minutes, and until then, Serge will explain the plugins feature in Murano.

As you remember, I will move the slides back a little bit, to this page. Here, better to use this one, sorry. You see "Magnum plugin," the Magnum plugin for Murano. Why do we need it? Let me quickly go through this while the deployment is in progress. Murano needs to talk to third-party services. Your application contains most of its business logic inside it, written in MuranoPL, but there is always connectivity to different services: to your hardware load balancer, to your database which is hosted somewhere, to your Magnum, which lives inside the OpenStack cloud but is still a service that is not part of Murano. For that you need a plugin: a plugin which exposes this capability, this API, at the MuranoPL level. Using plugins, you can extend the language with support for different third-party services, and after that use regular MuranoPL to orchestrate everything and deploy it.

So I guess our deployment is really close to finished. Yeah, let's check it. So now you see our bay is created, and we can just go back and look: the status of the bay should now be "create complete." And we get the API address for our Docker services, to run our containers against. So we'll just get the URL and check whether our Docker services are up and running. Just list the containers and see; I should have used the API address, sorry for that. And we see that Docker is configured and it is running. Now let's run a simple hello-world container to see whether everything is fine. There: we have now deployed a Swarm cluster from Murano using the Magnum plugin. So, using this plugin, you get the features of both Magnum and Murano in one plugin, the Magnum plugin. Thank you.
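For reference, the checks Madhuri just ran map to roughly these commands; the bay name and API address are placeholders, and since TLS was disabled in this demo, the Docker endpoint is used directly:

```console
# Watch the bay (and its underlying Heat stack) come up:
$ magnum bay-list      # status: CREATE_IN_PROGRESS, then CREATE_COMPLETE
$ heat stack-list

# Point the native Docker client at the bay's reported API address:
$ export DOCKER_HOST=tcp://<api_address>:2376   # see `magnum bay-show <bay-name>`
$ docker ps
$ docker run hello-world
```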
So, the answer to the question: not Murano or Magnum, but Murano and Magnum. Murano gives you a nice UI for end users to provision these applications, an API for self-service provisioning, and an application API through which your other applications can interact with these applications inside your catalog. And Magnum gives you provisioning and operations, an automation API, development experience, a choice of technologies for deploying different container orchestration engines, and constant improvement of how these configurations are deployed. That was not really the case with our own application, because we didn't build it to focus on providing the capability of deploying Kubernetes; we are developing Murano. So if you need some application in your cloud and there is no service like Magnum for it, Murano can easily automate it for you. Thank you, folks. Questions? We have two mics over there. Go ahead.

Morning, folks. Scott Fulton from The New Stack. I want to make sure I understand the relationships properly. When you're using a container-based microservices application, do all the services share a single bay? Yeah, you have a bay, the nodes where your containers are running. Okay, and each bay has one COE, one container orchestration engine, so one Kubernetes instance would run all the services on that bay? Okay, I wanted to make sure I got that right. Thank you.

So which version of OpenStack are these integrated with? Is it available in Kilo or Liberty? This demo was developed on Mitaka. Mitaka, okay. And the apps are also available right now, the use of the plugin? No, we are still working on making all the Docker applications in Murano compatible with Magnum. But what you just saw is already available and published in the Murano apps repository. So the Kubernetes app is available? Yes.

Hi, great presentation, thanks for the useful info. Murano abstracts the Heat orchestration, so you have Murano as well as Heat going on in parallel, right? So why are we integrating Magnum explicitly through a plugin, as a Murano catalog app, instead of Murano providing a choice of whether you want to deploy your app either on a VM using Heat, or in a container using Docker, or on a Swarm or Kubernetes cluster?

So we actually do that. I mean, we didn't swap one application for another; both of them are available in the catalog, and the user, when selecting which container management system a Docker application should use, can select the previous Kubernetes application, which is deployed through Heat, or the new one using Magnum. Or even the single Docker host application, which is available in the Murano catalog and which deploys a single VM with Docker on it and runs the container there. It's all there, but it's abstracted in the form of different applications. We don't hard-code everything into one piece with a choice in the UI; it's completely abstracted: separate applications with their own interfaces, which can be selected in the UI. You don't need to cram everything into one application.

But what you're saying is that Murano is not currently giving you the option to choose; would that come in future releases? It will, yes. It's actually the next step for our application: Madhuri and I are working on making all the COEs available in the Murano catalog, with an abstraction layer shared with the previous Kubernetes application. Very nice, thank you.

I have a question. Just to be clear, the COEs are single-tenant environments, correct? They run as a single OpenStack project. Now, I know that each of these COE projects has its own separate tenancy model, either existing or emerging. So how do you think about integrating the control planes for that when they finally arrive? Control planes? Making the API available to all the tenants in the cloud.
So essentially, when you deploy your COE in the cloud, it would be available the same way the OpenStack APIs are; do I understand you correctly? Multi-tenant, yes: the COE being multi-tenant in a way that is coherent with the OpenStack underneath. It's a tough problem.

Not really. I hope that at some point Kubernetes will natively support Keystone; we've talked a little about that before. And maybe you can use the same backend for Keystone and Kubernetes, like LDAP, for managing the same set of users. But you can always deploy your Kubernetes cluster into some service tenant, because the user doesn't need access to scaling Kubernetes or anything like that. Access to scaling Kubernetes, or to the Magnum API in a specific tenant, needs to be given to the ops team which manages that COE. Your users only need access to the COE API, which can be made multi-tenant by using one authentication backend. So I guess it can be configured even now, though not out of the box, and I hope it will be someday. As far as I know, and I'm not really an expert here, the Kubernetes SIG is working on deep integration with OpenStack, meaning they will have integration on this layer too.

Could you comment on how this approach is better than using a Horizon plugin for Magnum? I don't know if there is a Horizon plugin, but I assume that if there were such a plugin, what you showed could probably be done with it. Yeah. Murano is an application catalog, and it has various applications; Kubernetes is one of them. It's good for Murano to support as many applications as it can, and it's good for Murano to adopt Magnum, because both are doing the same thing for that piece; that's why we are doing this. If you had a plugin in Horizon, of course that would look the same, but this is an advantage for Murano, because Magnum has lots of features for each COE that Murano can now inherit.

So, a quick answer: if you're planning only to deploy Kubernetes, you don't need Murano. You can use a Magnum dashboard for deploying Kubernetes, or automate it with shell scripts or whatever, because you're only ever deploying Kubernetes. Murano is an application catalog; in it, Kubernetes via Magnum is only one application out of a huge variety of them. So if you want to give your end users, be it your IT department or real end users, the ability to deploy a set of applications, then Murano is the answer. Thanks.

Thank you, folks. We finished slightly early, so I will be around; if you have any questions, we will be happy to answer them. Thanks. Thank you. Thank you, everyone.