Okay. Good afternoon, everybody. I'm Sabha, a co-founder and solutions architect with Cloud Enablers, a company based in Chennai, India. I head the Cloud Lab, and I work on architectural design consulting for cloud-based solutions. I've been working with OpenStack for the last four years. The topic today is orchestration across multiple cloud platforms using Heat. This is the agenda: a high-level introduction to Heat and multi-cloud orchestration; a look at the TOSCA standard; the Heat features that support multi-cloud orchestration; a centralized template library with indexing and search; and the new features coming up in Kilo.

So, how many of you have hands-on experience with Heat? That's quite a few. Okay, I will run through the introduction quickly. Heat provides a mechanism to orchestrate OpenStack resources through a template-driven model. It supports different templating models: CFN, the AWS CloudFormation format, and HOT, the Heat Orchestration Template, which is OpenStack's native DSL. And Heat is not just about provisioning resources in OpenStack; it also provides advanced functionality like high availability for instances, autoscaling, software configuration and deployment, and nested stacks.

To look at the evolution of Heat: it started in Grizzly as an incubated project to support CloudFormation templates in OpenStack. Havana introduced the HOT template; basically, there was a need for OpenStack to have its own DSL to orchestrate the different resources available in OpenStack. Havana also introduced stacks of resources and lifecycle management of a stack. In Icehouse, the features were extended to support software configuration, autoscaling, notifications and alerts, and abandoning of stacks.
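To make the template-driven model concrete, here is a minimal HOT template of the kind Heat accepts; the resource and parameter names are just for illustration:

```yaml
heat_template_version: 2013-05-23

description: Minimal HOT template that boots a single Nova server

parameters:
  image_id:
    type: string
    description: Glance image ID to boot from
  flavor:
    type: string
    default: m1.small

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_id }
      flavor: { get_param: flavor }

outputs:
  server_ip:
    description: First IP address of the server
    value: { get_attr: [my_server, first_address] }
```

The `resources` section names a resource type (here `OS::Nova::Server`) that the Heat engine maps to a resource plugin, which is the hook we will use later for multi-cloud support.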
In Juno, it was extended to support recovery of resources and improved scalability and visibility. In Kilo, we see key features like multi-region stacks and software-configuration improvements, and Oslo versioned objects are also introduced.

Heat stack creation works like this: the end user submits a HOT template or a CFN template. It goes through the API, and the Heat engine orchestrates the different tasks in the template against the OpenStack resources. The request is authenticated through Keystone, and the whole set of resources provisioned through Heat is maintained as a stack, on which we can perform lifecycle operations: we can abandon a stack, we can update a stack, and so on, for the entire set of resources provisioned through the template.

Now we will see how Heat can be extended to support multi-cloud orchestration. Orchestration is not just about a single OpenStack installation. We may need an environment where orchestration is maintained as a standalone service, so that we can orchestrate the different OpenStack installations within the environment: a production setup of OpenStack, a QA environment. We may also need to orchestrate public clouds like Amazon or others. If we keep Heat as a standalone engine, we can orchestrate multiple clouds using HOT templates. Here we also see a new standard coming in: TOSCA. There is a new project in OpenStack, heat-translator, which translates TOSCA to HOT, and the resulting HOT template can then be applied by a Heat engine to orchestrate the different resources. So I will touch upon TOSCA. TOSCA is the Topology and Orchestration Specification for Cloud Applications. It improves the portability of application deployments across different clouds.
It is basically a language to describe service components and their relationships using a service topology. It is an OASIS standard, and at this stage TOSCA is defined in XML. This is how the topology looks: you have a service template; the service template contains a topology template; and the topology template, in turn, contains node types and relationship types. We can put all these topology templates together in a plan and execute the plan.

So what do we require for multi-cloud orchestration using Heat? We will see a quick demo that covers a centralized template library; how we enable indexing and search for the templates we maintain in our environment; how we define access control for templates; and how we orchestrate across different clouds using a Heat engine.

I will run through a quick demo. This is how Heat typically looks in Horizon. You can launch a stack here, provide your template as a file input or as direct input, and execute the stack. Then you can handle the lifecycle operations of the stack. Currently, there is no mechanism in Horizon for maintaining a template library, but Heat does support a lot of command-line operations. So what do I mean by a template library? We can define templates, and metadata for those templates, for the different use cases involved in the environment. We have different templates here, which can be used for orchestrating different clouds. Each template has metadata, and the templates are managed similarly to the way images are managed. There is a scope definition: there are global templates, project-specific templates, and private templates.
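Since TOSCA at this stage is an XML format, the structure just described looks roughly like this. This is a hand-written sketch after the OASIS TOSCA v1.0 schema, so the element details should be treated as illustrative:

```xml
<Definitions xmlns="http://docs.oasis-open.org/tosca/ns/2011/12"
             id="ExampleDefinitions" name="Example">
  <ServiceTemplate id="MyService">
    <TopologyTemplate>
      <!-- Node templates describe the service components -->
      <NodeTemplate id="WebServer" type="Server"/>
      <NodeTemplate id="Database"  type="DBMS"/>
      <!-- Relationship templates describe how components relate -->
      <RelationshipTemplate id="WebToDb" type="ConnectsTo">
        <SourceElement ref="WebServer"/>
        <TargetElement ref="Database"/>
      </RelationshipTemplate>
    </TopologyTemplate>
  </ServiceTemplate>
  <!-- Plans that put topology templates together would also live here -->
</Definitions>
```

A translator such as heat-translator walks this topology and emits the equivalent HOT resources for a chosen target cloud.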
You can define all of this so that the templates are maintained centrally in the environment. Then you can have a tagging mechanism to define your own meta tags for the templates. You can classify templates by use case, such as provisioning or deployment; all of this can be done using a metadata model, and you can index the metadata and search it through a search engine.

Now I'm going to execute a HOT template against AWS. Initially, Heat was developed to support CFN; now Heat has much more capability, and we will see how HOT templates can be extended to orchestrate Amazon. In this example, I'm executing a template; it becomes a job. This is a template for creating a VM, creating a volume, and attaching it. I'm going to execute it, selecting which OpenStack cloud I want to run against. I have some input parameters for the template, which are pre-filled as parameters, and I execute it now. We can see the status of this execution in the destination cloud: we created an instance called "core," and it is spawning, in the process of getting created. In Amazon, we created one stack: a LAMP provisioning in AWS using a HOT template, provisioned in the Amazon East region. The same thing can be extended to a brokerage platform: it is provisioning a new workload called CNX here, and the VM provisioning is in progress; we can also see the status here. So we created one stack in Amazon, which created an instance; we created one in OpenStack; and we created one in the ComputeNext platform. ComputeNext is a cloud brokerage platform that connects to close to 40 cloud providers, so it gives us the ability to talk to various providers. This Heat stack is still in progress. Now, going back; just one minute.
So we saw the centralized template library with metadata; how the template metadata can be indexed and searched; how a template can be defined with multiple scopes; and how it can be executed against different platforms. Now we will see how this is possible, that is, how we can utilize existing Heat features to extend its capability to orchestrate multiple clouds.

The first thing is the Heat architecture. The architecture itself enables us to define resource plugins, and we can extend resource plugins to support multiple cloud platforms. Heat also supports standalone deployment: we can decouple it from Keystone and orchestrate against multiple clouds by dynamically passing the credentials of each cloud. And Heat supports context, which enables us to run a stack against different regions within OpenStack.

This is the Heat architecture. Heat can be driven through the API, the CLI, or Horizon. Whenever Heat receives a request through the API, it goes onto the message queue, and the Heat engine processes the request. The engine consists of various elements. It has the intrinsic functions used in the template, which are all processed as functions. It has a parameters section, which handles all the input parameters and validates their constraints. Then there is a template parser, which parses the different tasks in the template and executes them through the resource plugins. So the Heat engine basically orchestrates the resource plugins and executes all the connections through the APIs. Heat also has a mechanism to talk to the destination VM, the VM created through the stack, via the cfn signal. And Heat maintains all its data in the database. The Heat resource plugins have two sections: the native resource plugins and the contributed resource plugins.
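The standalone, multi-cloud deployment mentioned above is driven by a few settings in heat.conf. This is a sketch based on the Kilo-era options; the hostnames are placeholders:

```ini
[paste_deploy]
# Run the Heat API with the standalone pipeline, decoupled from one
# particular OpenStack's Keystone.
flavor = standalone

[auth_password]
# Allow auth URLs and credentials to be passed in per request ...
multi_cloud = true
# ... but only against these whitelisted Keystone endpoints.
allowed_auth_uris = http://cloud-a.example.com:5000/v2.0,http://cloud-b.example.com:5000/v2.0
```

With this in place, the same engine can launch a stack against either cloud by passing that cloud's auth URL and credentials at stack-create time; any auth URL not in the whitelist is rejected.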
So what are the contributed resource plugins? Anybody can extend Heat and contribute plugins there. We see plugins for Rackspace, for Docker, and for Gnocchi, a time-series database as a service that came out of the Ceilometer work. There are also plugins for Mistral workflows, for Zaqar, the queuing service, and for Keystone. These are the different contributed plugins available, and each plugin has its resources; for Rackspace, we see server, DNS, network, and load balancer resources.

Each resource has lifecycle methods. A resource has a base class, which can be extended with the different lifecycle methods, like create, update, delete, and resume of a stack. Each of these methods has attributes and properties: properties are what we provide as input to the plugin, and attributes are what we get back from that lifecycle method. The resource plugin also has a mapping section where the plugin is mapped to the template resource type. So whatever we specify in the template, say AWS::EC2::Instance, is mapped to a particular plugin through this mapping.

The next topic is standalone Heat. With the upstream Heat, you can download it and set up the environment. After setting it up with the OpenStack tenant name and credentials, there is an option called multi_cloud in the Heat configuration file. We can set multi_cloud to true and list the allowed auth URLs of the different OpenStack endpoints. Whenever we execute a stack, Heat validates the request against these allowed auth URLs and then executes it. In this example, in standalone mode, we are executing a stack by supplying instance.yaml and passing the parameters associated with it. It goes and creates an instance, and you can see that in Horizon.

The third thing is context. What is context? Basically, it is orchestration of orchestration.
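The resource-plugin model just described can be sketched in a few lines of Python. This is a simplified stand-in, not the real `heat.engine.resource.Resource` base class, and the `MyCloud::Compute::Server` type is invented for the example; it only mimics the shape the talk describes: properties in, attributes out, lifecycle methods, and a mapping from template type to plugin class.

```python
# Simplified sketch of Heat's resource-plugin model (stand-in classes,
# not the real Heat base class).

class Resource:
    """Stand-in for the Heat resource base class."""

    def __init__(self, properties):
        self.properties = properties   # inputs supplied by the template
        self.attributes = {}           # outputs returned to the template

    # Lifecycle methods that concrete plugins override.
    def handle_create(self):
        raise NotImplementedError

    def handle_delete(self):
        raise NotImplementedError


class RemoteCloudServer(Resource):
    """Hypothetical plugin that would provision a VM in a remote cloud."""

    def handle_create(self):
        # A real plugin would call the remote cloud's API here, using
        # credentials passed in dynamically (standalone-Heat style).
        name = self.properties["name"]
        self.attributes["instance_id"] = "i-%s" % name
        self.attributes["state"] = "ACTIVE"

    def handle_delete(self):
        self.attributes["state"] = "DELETED"


def resource_mapping():
    # Maps the template's resource-type string to the plugin class,
    # just as AWS::EC2::Instance is mapped to its plugin.
    return {"MyCloud::Compute::Server": RemoteCloudServer}
```

A real plugin would also implement update, property-schema validation, and so on; the point is only that supporting a new cloud amounts to adding one such class plus its mapping entry.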
You can orchestrate Heat itself. Through the context, you can pass a template as an input parameter for a stack, and you can also pass the region as an input parameter. Given those inputs, Heat goes and creates another stack in the given region. This is used for deploying a VM across multiple regions. So I'll show you how this is done: we can do a heat stack-create with the multi-region context, specifying the YAML file and its parameters, and it creates the instances in the different regions.

Then there is the centralized template repository with indexing and search. Whenever we define a template, it has two parts: the content, which is the set of instructions, and the metadata we can define for it, like the purpose of the template and its classification. If we have a mechanism to index that through a search engine like Solr or Elasticsearch, we can define a schema for templates and index the metadata into the search engine. After execution, the template becomes a stack, and the stack produces output parameters. The output parameters can again be indexed through the indexing bridge; we can have a different schema for stacks and maintain it in Solr, so that we can map the different resources provisioned in the stack. If we are searching for an IP address, we know which template created it and which stack it is associated with; we can map all of this through the search-engine mechanism. By enabling the centralized template library and indexing, we get faceted search, full-text search, and searching of both the input and output parameters of a template.

These are some of the references we have taken, including the multi-region support material. The heat-translator open-source code is available.
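The indexing-and-search idea can be illustrated with a tiny in-memory index. A real deployment would post these documents to Solr or Elasticsearch against a defined schema; the field names here are invented for the example.

```python
# Illustrative sketch of template-library indexing: documents with
# metadata fields, plus a search that combines full-text matching on the
# description with exact facet filters.

templates = [
    {"id": "lamp-aws", "cloud": "aws", "scope": "global",
     "description": "LAMP stack provisioning on Amazon EC2"},
    {"id": "vm-volume", "cloud": "openstack", "scope": "project",
     "description": "Create a VM and attach a volume"},
]

def search(docs, text=None, **facets):
    """Return IDs of docs matching a text query and facet filters."""
    hits = []
    for doc in docs:
        # Full-text match against the description field.
        if text and text.lower() not in doc["description"].lower():
            continue
        # Faceted filtering on exact metadata values (cloud, scope, ...).
        if any(doc.get(k) != v for k, v in facets.items()):
            continue
        hits.append(doc["id"])
    return hits
```

For example, `search(templates, text="volume")` finds the VM-plus-volume template, and `search(templates, cloud="aws")` narrows by the cloud facet; indexing stack output parameters the same way is what lets you search for an IP and find the stack and template that produced it.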
And there is a blueprint called Heater, which talks about the indexing and search. Any questions? Yes?

It's one of the GitHub projects now: heat-translator, yeah.

Are you building the network between these two clouds as well? Actually, we can specify the network elements as resource types too. Neutron resource types are supported in Heat, so we can create a private network and a subnet through the Heat template.

OK, but in part of your demo, it looked like you were orchestrating an application across both AWS and an OpenStack cloud. Are you using Neutron to build connectivity between those two? No. Basically, you have to manage these clouds independently. But if you need a connection between the two, you can use the VPN capability of both OpenStack and Amazon to establish that connectivity. OK, so for right now, the connectivity would basically just be via the internet. OK.

Yes, that is possible through TOSCA. TOSCA enables us to make a template portable across different clouds. We define a standard topology as a service template, and if we specify the provider, we can generate a Heat template for a specific cloud. So you have one TOSCA template, which can be converted for the different platforms, OpenStack or Amazon.

Placing each resource on a different cloud, though, is not possible. Because authentication happens before execution, targeting each resource inside a template at a different cloud is not possible at this moment.

I have a question about the directory: in which version do you support the indexing and search? Is it just a blueprint yet to be done right now? Currently, it is just at the blueprint stage; it is not available as part of upstream.

Yeah, in Kilo, some CFN features are being deprecated.
For example, the CloudWatch-style API is being deprecated as part of Kilo, but the CFN query APIs are still supported. Heat now supports many more resource types than CFN has, so some of it is still available, but I think down the line the CFN support will be deprecated; that's my guess.

Could you elaborate on how autoscaling works with multi-region? Could you autoscale in one region based on alarms from another region? OK. Heat has a watch task as part of the engine. Ceilometer can create an alarm and trigger this watch task, and it can execute another stack that goes and provisions a VM in a specific region. So basically, it happens based on a metric: Ceilometer raises an alarm, the alarm executes a Heat template, and that template provisions a VM and adds it to the load balancer.

What about naming, the Glance image? If I understand your question correctly, how do we handle the Glance image across clouds? It is basically a parameter. We cannot have the same image ID across two different OpenStack installations, so we need to pass it as a parameter.

Heat does not monitor it, but it gets the alarm. Right. I'm not very familiar with Congress and what it does; maybe I can look at it and come back to you.

Yes, cloud bursting is possible. We have a mechanism to autoscale, and when we autoscale, we basically trigger a template. If that template provisions a VM in Amazon and adds it to your global load balancer, you are basically bursting. The connection to the Amazon API is over the internet, but if you want connectivity between the two VMs, we can establish a VPN through the Heat template as well. Yeah, another question here.
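The alarm-driven autoscaling flow in that answer is typically wired up inside the template itself. This is a sketch using Kilo-era resource types; the names, image, and thresholds are illustrative:

```yaml
resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: cirros
          flavor: m1.tiny

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: asg }
      scaling_adjustment: 1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      # When the alarm fires, Ceilometer hits the policy's signal URL,
      # which triggers the scale-up.
      alarm_actions:
        - { get_attr: [scale_up_policy, alarm_url] }
```

For the multi-region or bursting case, the scaled resource would be a nested stack targeting a different region or cloud rather than a plain `OS::Nova::Server`.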
So, in the example you showed, you provided the credentials in heat.conf for a second, remote cloud. By setting multi_cloud to true, are you able to provide credentials for each cloud provider, or only one cloud provider, in heat.conf? Currently, upstream Heat supports only OpenStack there, but it provides a mechanism to define multiple OpenStack endpoints, and we can extend that to support different endpoints like Amazon through plugins.

But each of these providers has its own credentials, and my understanding is that in heat.conf you can only specify one set of credentials. Is that right? No, we can give the endpoints of multiple clouds. You can provide endpoints, and for each endpoint a set of credentials. So if I have multiple accounts, all of that can be specified? We can specify multiple endpoints in heat.conf, but currently, when Heat establishes a connection, it only validates that the endpoint is listed in heat.conf.

Correct. And not only that: users log in through Horizon, which accesses Keystone, and the tokens come from Keystone. If your Heat engine is bypassing Keystone, then you have a problem. You have different users using different accounts, and through those accounts they want to access Amazon or some other cloud. If your heat.conf provides one set of credentials but the user wants to use another, that's a problem.

Yeah. Basically, if you are making your Heat independent of one particular OpenStack, at that point you are already decoupling yourself from the destination Keystone. The standalone Heat has its own local Keystone, which is what is used for authentication. The operations we do on the destination OpenStack are validated only through that OpenStack's own Keystone. But you can execute any template through the standalone Heat.
Yeah, so what is not possible today, if I understand you right, is that you cannot have different credentials used to access a remote cloud provider from your local Heat engine. No, we can do that. It is not available as part of upstream, but it is possible. In the demo I showed you, I orchestrated against different clouds, and I have different credentials for each of those clouds.

Maybe you can take this offline. What I'm saying is, for a single cloud provider, I cannot use multiple credentials for different users who are going through Horizon. Basically, you need to decouple it from Horizon as well. Yeah, so then it is no longer a standard OpenStack. I mean, you can do a lot of things on your own, but you no longer have the benefit of the Horizon UI or the other benefits you get from OpenStack. Yes, when you make it standalone, you are decoupling it from Horizon, because Horizon does not provide the ability to maintain template libraries as of now.

And the second question is: if you change anything in heat.conf, you have to restart the Heat engine. Is that still true? Yes. So if I add another endpoint, another credential, then I have to restart the Heat engine? We need to restart it, but that can be bypassed. Well, sure, you can say it can be bypassed, but it's not something you can just do in production. Yeah, basically, that's what I showed you: I can dynamically add a service account and dynamically execute against any cloud.

One question, just to make sure: even when you're provisioning against OpenStack, you're not using the Heat that is part of that OpenStack, but a standalone Heat instance, and you orchestrate from there. Right. And the second question: do you have plans to add more providers, like vSphere, for example? It's not exactly a cloud, but there is virtualization support. Yes, we support vCenter as well.
We have extended HOT to vCenter as well. And is this also open source? Currently, no, but we are planning to open-source some of the plugins we have developed. Are the plugins for AWS open source yet? Not yet; AWS is not yet. OK, thanks. Any other questions? OK, thank you. Thanks for your time.