Good afternoon, everybody. My name is Jim Kyle. I'm a product manager with Cisco, managing the enterprise cloud platform solutions. This includes a couple of products: Prime Service Catalog and Cisco Process Orchestrator. We also have Matt presenting, so he will come up and introduce himself.

For today's presentation I'm going to cover a few topics. I'll start with the problem statement: what we're trying to solve with Puppet and Heat. We're going to talk about how Heat can be integrated into a CMP, a cloud management platform, architecture. I'll cover how the resource plugins were built for integrating Puppet and Heat, how Puppet can be used to provision application servers and multi-tier applications, and how to extend this architecture to orchestrate components outside of OpenStack.

Let's start with the cloud drivers. If you have been here for the last three days, you probably know these terms very well: infrastructure as a service, platform as a service, application as a service, and hybrid cloud. Those are the drivers for the cloud to provide self-service to address enterprise needs. Another driver these days is DevOps and CI/CD, continuous integration and continuous delivery; enterprises are starting to adopt these newer trends for application development. However, when you look at those two driving forces, you'll see that the silos are being broken. The storage team needs to talk to the network team, if they haven't before, and to the application team. The same goes for DevOps: developers need to do operations, and operations folks are starting to do development.
So the converging point is the cloud. The cloud is where all the silos are broken and the infrastructure comes together, and, very importantly, that's why there's a need for governance and policy. If you're looking at managing the cloud in an enterprise that is adopting these new development trends, governance and policies are very important. The solution also needs to deliver a number of components, or requirements, if you will. It needs comprehensive automation across the infrastructure. People are looking for a self-service user experience, so they don't have to make a request and have a whole team behind the scenes doing things manually. Furthermore, you want accelerated application development and deployment, and also hybrid cloud; as we just mentioned, that's a direction many organizations are going.

So that's the problem, and I'm going to address it with some solutions from Cisco; we'll use that as the context for this presentation. There are a number of products we have built to address each of these areas, including UCS Director, which is an infrastructure automation solution; Prime Service Catalog, which addresses the catalog and self-service aspect; and ACI and VACS together with Prime Service Catalog and UCS Director, which deliver an accelerated application deployment experience with one click. I'm not going to spend too much time here; this is just to give context to the discussion. This suite of products goes to market and can be downloaded as the Cisco ONE Enterprise Cloud Suite. Today we're not going to cover all of the products there; we're going to focus on the orchestration integration between Prime Service Catalog and UCS Director.
So UCS Director is the one that specializes in and provides the infrastructure automation, and Prime Service Catalog provides the single-pane-of-glass user experience. We do want to start by looking at the orchestration and provisioning use cases, or requirements, that we need to address.

I like to look at it this way. For this kind of cloud solution, we need to give the organization the ability to build the design of their infrastructure in a way that it can be instantiated by the infrastructure management components. Once you create those designs, you want to be able to publish them as a service, and a very important point here is that publishing is one example of governance and policy. It's not as though everybody can create an infrastructure design and anybody can order it. Publishing is where you put a control point in your organization, with certain policies in place: only certain roles in the organization can do the design, and only certain people can publish it.

The next step is consume, and the policy also governs who can actually use the services that are published. So, as we adopt DevOps and the cloud organization, this is the use case and paradigm the solution needs to provide. Consume basically means a self-service front end that people can come to and order from, and then, at the back end, if the request meets the requirements, we can automatically deploy it, whether that is infrastructure, platform, or application. Let me just go through a scenario here.
So, when we do that, we start with build. You build the infrastructure; we call it a container. That is not a Docker container, first of all; it's how we describe the infrastructure components: your virtual machines, your gateway, your load balancer, basically the network constructs that make up the infrastructure definition. You define those constructs in UCS Director, and they're then discovered and imported into Prime Service Catalog, where we enforce the policy and governance. That's where we can visualize the container in the stack designer. And it's not just for visualization: this is where you can overlay your applications, whether it's a SQL Server or an Oracle app. Then you publish it, and it becomes orderable.

Then we go to the consume action. The end user comes into the service catalog, and with a single click, the first step goes through UCS Director to provision the infrastructure. The stack also includes the definitions of the Puppet modules, which are then overlaid on top of that infrastructure. And all of this is orchestrated through Heat.

Okay, I think that gives you some background on the problem we're trying to address with Heat and Puppet. In designing the architecture, we looked at a number of options, and one of the criteria we were looking for was being model-driven.
That means knowing what to orchestrate and not necessarily how; you can delegate the how to your infrastructure management or to other components. We also looked for it to be template-based. A template is important because, if you can define something as a template, then you can store it, import it, export it, share it, move it from environment to environment, and edit it; there are many nice aspects to being template-based. More importantly, you can version it, and you can extend it, adding capability as needed. With that, I'd like to introduce Matt, who is going to show everyone how we use Heat and Puppet in this solution.

Great, thanks, Jim. My name is Matt Brown; I'm the product owner for the applications and orchestration components of Cisco ONE Enterprise Cloud, and I want to talk in a bit more depth about our orchestration and application provisioning with Heat and Puppet.

As an overview, what we've got is an orchestration layer with a standalone OpenStack Heat and Keystone installed on it. There's an AMQP consumer on it that receives the payload from our service catalog and then calls the Heat API, and where necessary we have custom Heat resource plugins to handle types in our templates that aren't predefined.

As Jim mentioned, this was developed to solve a particular use case driven by customer need: in particular, the need to orchestrate between Prime Service Catalog and UCS Director. We also needed to provide application provisioning, and that provisioning needed to be based on a customer-driven event: a stack built by the customer out of components that either we provide out of the box or that they create themselves. It needed to support multi-tier applications; that was another requirement. And, as I said, customer-defined stacks. It also
needed to have Windows and Linux support.

So, why Puppet? Partially, again, it was driven by customer demand: they had Puppet modules, and there was an existing Puppet workflow we needed to fit into. Also, there's a wide array of Puppet modules available on the Puppet Forge for us to use to provide out-of-the-box applications and configurations. That said, the way we're doing this, by writing custom Heat resources, means we're not necessarily tied to Puppet alone. We chose Puppet because we thought it was the best way to go, but it's possible to extend this to use Ansible, Chef, or even just batch or shell scripts inside these calls. So there are options there, but, like I said, we chose Puppet.

A little bit of a component overview, starting with the service catalog, where the order is created. The customer configures their stack by starting with a template of infrastructure, layering on the applications they want, and configuring them. When they order it, a Heat template is published to AMQP, which is then consumed on our orchestrator node, and a stack-create is launched through the Heat API with that template. The Heat template is where our custom resource types and existing resource types are all put together to form the order for the stack. On the orchestrator, with our standalone Heat and Keystone, we have regular and custom resource plugins to handle all of the resources inside the Heat template, to do things like create resources on UCS Director. That's the resource template we talked about, with VMs, a gateway, and an internal network; it's all preconfigured, and for this solution it runs on VMware vCenter. And there are other resources to, for example, install the Puppet agent on the target VM and bootstrap the Puppet
agent. So that resource installs and configures the agent and forwards its certificate over to the Puppet master. There's also a resource to contact the Puppet master and say, hey, sign this certificate, and to persist a classification into a database on the Puppet master. The classification is the desired node state: altogether, what should the node be when everything is done? Once all of that is finished, a callback is made to the service catalog to say this order is complete, and whether it failed or succeeded.

These Heat templates, like I said, are created dynamically by the service catalog based on what the customer orders and are passed to the orchestrator, with resources for each step. Each resource has its own parameters, and there are top-level parameters the resources can share. We also have the ability to set dependencies between resources: for example, if we've got a SharePoint installation, and the MS SQL cluster has to be installed and running before we can move on to the application server, we can set dependencies between the different pieces.

Here's a sample Heat template. You can see the top-level parameters; this is clearly a database application: DB name, DB username, et cetera. We've got some Puppet-specific parameters, the Puppet master host and IP; that's information we'll need for a resource further down the line. And there are some infrastructure-specific parameters for UCS Director; those come in and are handled by the resource. Then we have a little stub for a resource; this is a custom resource we define, not an existing one, and it's for our service catalog's UCSD container service. Like we said earlier, container here doesn't mean a Linux container; it's UCS Director terminology.

So: our custom Heat resources and plugins.
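The slide with the sample template isn't reproduced in the transcript, but a minimal HOT sketch of the shape just described might look like the following. The type names (`Cisco::UCSD::Container`, `Cisco::Puppet::Application`) and parameter names here are hypothetical stand-ins, not the product's actual identifiers:

```yaml
heat_template_version: 2013-05-23

parameters:
  db_name:            {type: string}
  db_username:        {type: string}
  puppet_master_host: {type: string}   # Puppet-specific, shared by resources
  puppet_master_ip:   {type: string}

resources:
  # Custom type asking UCS Director to provision the preconfigured
  # infrastructure container (VMs, gateway, internal network).
  infra:
    type: Cisco::UCSD::Container
    properties:
      container_name: db-stack

  # Custom type overlaying the Puppet-managed database application;
  # depends_on orders it after the infrastructure is up.
  db_app:
    type: Cisco::Puppet::Application
    depends_on: infra
    properties:
      puppet_master: {get_param: puppet_master_ip}
      db_name:       {get_param: db_name}
      db_username:   {get_param: db_username}
```

The `depends_on` here is the same mechanism mentioned for ordering, for instance, an MS SQL cluster before the application server that needs it.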
The plugins handle the work for these custom resource types. They're generally defined with inputs and outputs, properties and outputs in Heat terms, and inside these resources, because they all extend the Heat engine Resource type, we have lifecycle operations such as handle-create, handle-delete, and handle-update; that's where the work you actually expect the plugin to do goes. As for where you put them: there's a configured plugin directory; you place them there, and when Heat starts up, it automatically registers those resources as available to Heat. Like I said, they're dropped into the Heat plugins folder and they extend Resource.

But we also have some types that are not discovered from plugins living in the plugin directory, because they're types based on service items in our service catalog. Obviously, for each one of those service items, we don't want to have to create a new plugin, so these are dynamically created types. We drop a piece of code into the plugin directory, and as Heat goes through all of the plugins it hits this one, which calls back to our service catalog to retrieve the definitions, the names and parameters, for these dynamically created types. They exist primarily to provide callbacks to the service catalog. Then a resource mapping, as I'll show next, maps these resources to a stub type and actually creates these dynamically generated types based on the service items that are defined.

Okay, so: a resource plugin example. It requires a resource mapping to determine what kind of type it is; the class is defined with its inputs, its properties, at the top, and then what the resource returns, the outputs, in the bottom section. Inside the handlers here,
you can see handle_create, which, obviously, is called when a stack-create happens. This is one of our custom resources, so inside the handle_create you can see the API call to actually create the container, the resource, on UCSD, and then return that container. That happens, and then polling begins. The polling is handled by check_create_complete, which gets handed the output of your handle_create; you define what it means to be done in there, returning true when the resource is created and false if it isn't yet. Likewise, we have handle_delete, which works in the same fashion: on a stack-delete, you make your API calls to delete the resource, return it as the output, and then polling checks whether the delete is complete; again, you define what that means and return true or false.

Moving on to our Puppet integration. We chose a master-agent configuration. Again, this was mostly driven by customer need; our customers work in master-agent configurations, and we thought that for this workflow it made the most sense. We're also using an external node classifier to deal with the agents retrieving their classifications, and we're developing, right now, MCollective driven by Heat; that's something we're working on for the next release.

On Puppet modules: we're providing dozens of out-of-the-box apps and modules that can define applications, configuration states, or what have you. We're also working on support for existing Puppet installations: we want to be able to drop into an existing Puppet installation without causing any problems, and to make that happen we're using module namespacing.
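Stepping back for a moment to the resource-plugin lifecycle just walked through, it can be sketched as below. This is a sketch, not the product's actual code: the Heat base class (`heat.engine.resource.Resource`) is replaced with a stub so the shape is visible without a Heat install, and the UCSD client methods, statuses, and the `Cisco::UCSD::Container` type name are all hypothetical.

```python
class Resource:
    """Stand-in for heat.engine.resource.Resource (assumption: the real
    plugin subclasses Heat's Resource and inherits its machinery)."""
    def __init__(self, properties):
        self.properties = properties

class UCSDContainer(Resource):
    """Custom type with the create/poll/delete lifecycle described."""

    def __init__(self, properties, ucsd_client):
        super().__init__(properties)
        self.client = ucsd_client  # hypothetical UCS Director REST client

    def handle_create(self):
        # Fire the API call to UCS Director; whatever we return here is
        # handed back to us on every poll by check_create_complete.
        return self.client.create_container(self.properties["container_name"])

    def check_create_complete(self, container_id):
        # Polled until it returns True: this is where "done" is defined.
        return self.client.get_status(container_id) == "ACTIVE"

    def handle_delete(self):
        # Same fashion on stack-delete; a matching check would poll it.
        return self.client.delete_container(self.properties["container_name"])

def resource_mapping():
    # Heat calls this at startup to register template type names; the
    # dynamically created, catalog-backed types are registered the same
    # way, one entry per service item fetched from the catalog.
    return {"Cisco::UCSD::Container": UCSDContainer}
```

The split between `handle_create` (fire the request) and `check_create_complete` (decide whether it's done) is what lets Heat poll long-running infrastructure operations without blocking the engine.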
On the namespacing: basically, all of our modules and all their dependencies are named specifically so that if we drop into an existing installation, we won't have any name clashes. Say we have a mysql class that goes in there and there's already a mysql class living there: we namespace ours. We ended up going with a token, an underscore, and then the name, and this mirrors what Puppet Enterprise does for the modules it provides; they did the same thing with a pe_ prefix. So that's how we handled that.

Puppet Enterprise versus Puppet community: we designed the custom Heat resources we made to work for both, and for the next release we're definitely going to have one workflow that applies to both. To that end, we're also working on having MCollective drive the Puppet orchestration, for example for accepting the certificate and for asking the Puppet agent to classify itself. And of course the modules need to work with both; that goes without saying. It also allows the customer to continue configuration of the node: in a Puppet Enterprise environment, for example, we're able to drop in so they can continue configuring the node without us interrupting that workflow.

Our agent install is handled by a custom resource. There's a post-initialization step run via UCS Director: a REST API that, after the VM is spun up, we can call to run arbitrary code on the target VM. This differs from something like cloud-init, which we don't have access to through UCS Director; but the nice thing about it is that we can support both Linux and Windows. Once we install and configure the Puppet agent, the Puppet master can communicate with the agent, whether it's Windows or Linux.
It doesn't matter; it communicates the same way. Also, through this bootstrap workflow, we're going to be setting up the MCollective key exchange and configuration. That comes for free with Puppet Enterprise; with the community edition it's something we have to set up ourselves.

If any of you use Puppet, you're probably familiar with the role-and-profile module workflow, and that's what we use. A class defines an application or an individual configuration; a profile instantiates that class and then continues its configuration using resources of that class; and a role calls those profiles to describe a complete state. Parameters trickle down to the class from the role through the profile. This is generally considered best practice. It's not absolutely required; you can classify a class directly if that's all you need, but most of the time that's really never the case. I haven't said much about Hiera, but our workflow doesn't preclude the use of Hiera; we're hoping not to get in its way at all.

A little more about roles: the role is where you pass in your top-level parameters; there's an example of one on the slide. It can include logic if you want, so you have the ability to do some switching there, and what it does is instantiate the profile. You can see that the parameters have trickled down from the role to the profile, and you can call multiple profiles from within the role; if you need multiple configurations on a single role, you do that multiple times in there.

As for the format of a profile: it takes in the parameters from the role.
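The role and profile slides aren't reproduced in the transcript; a minimal Puppet sketch of the pattern as described might look like this. The csco_ names are illustrative (mirroring the namespacing discussed), not the shipped module names, and the anchor resource assumes puppetlabs-stdlib is available:

```puppet
# Role: composes profiles to describe the complete state of a node.
# Top-level parameters enter here and trickle down.
class csco_role::web (
  $port = 8080,
) {
  class { 'csco_profile::web':
    port => $port,
  }
}

# Profile: instantiates the component class and can continue its
# configuration; anchors keep the contained resources ordered.
class csco_profile::web (
  $port,
) {
  anchor { 'csco_profile::web::begin': } ->
  class { 'csco_apache':
    listen_port => $port,
  } ->
  anchor { 'csco_profile::web::end': }
}
```

A role wanting a second configuration on the same node would simply declare a second profile alongside the first.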
A profile can also include some logic if you want it to, and then this is the point where you instantiate the class, bringing in the parameters that have trickled down from your role to your profile. Once you instantiate it, you can use resources of that class as well, and we're using the anchor pattern here so that everything happens in order.

So, the Heat orchestration templates for the apps. The Heat engine is fed the Heat template for the stack-create; there are resources that specify the Puppet role and parameters that get put into the external node classifier, and those are interpreted by custom resource plugins. Then, from that orchestration template, we handle the VM creation and the remaining Puppet steps, like accepting the cert, forcing the Puppet agent to classify itself, and capturing the output.

I've been talking about the external node classifier. Exactly what that is is a Puppet feature: an executable the master invokes to get a node's classification.
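As a concrete sketch of what an ENC can look like: Puppet invokes the ENC with the node's certname as its only argument and expects a YAML classification on stdout. Everything here is an assumption for illustration; the in-memory dictionary stands in for the classification database on the Puppet master that the talk describes, and the class names are made up.

```python
import json
import sys

# Hypothetical stand-in for the classification database on the Puppet
# master; keys are certnames, values are the desired node state
# (classes plus parameters) persisted there by the orchestrator.
CLASSIFICATIONS = {
    "web01.example.com": {
        "classes": {
            "csco_role::web": {"port": 8080},  # namespaced role, as described
        },
        "environment": "production",
    },
}

def classify(certname):
    """Return the ENC document for a node, or an empty classification."""
    return CLASSIFICATIONS.get(certname, {"classes": {}})

if __name__ == "__main__" and len(sys.argv) > 1:
    # JSON is a subset of YAML, so json.dumps is a valid (if minimal)
    # ENC response on stdout.
    print(json.dumps(classify(sys.argv[1]), indent=2))
```

A real deployment would back `classify` with the REST API mentioned in the talk rather than an in-process dictionary, but the input/output contract stays the same.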
An ENC is included with Puppet Enterprise but not with the Puppet community edition; there it's something you have to write yourself. So our goal is for the REST API of our ENC to have the same type signature as Puppet Enterprise's, so that we don't have to write multiple resources for it. And, of course, the communication to it is managed by a custom resource plugin that makes its calls from Heat.

For the next release, our plan is to continue building out the MCollective interaction to drive the orchestration. MCollective stands for Marionette Collective; it's a framework for parallel job execution. Ideally, in a large environment, you would use MCollective to splay commands out to big groups of nodes, but our idea here is to use it for these individual Puppet calls. It's broadcast-based, as opposed to SSH-based: you're not SSHing into the node; all the nodes are listening on a queue, and the message contains filters that let each node know whether the message applies to it or not. It's also extensible. For example, the ability to accept the cert on the Puppet master via MCollective is not something that was already part of MCollective.
It's a plugin that we found and are working with, and that plugin needs to be on both the client calling it and the agent running it. On the MCollective setup: the server-client arrangement is a default in Puppet Enterprise. The client needs to be installed on the orchestration node, and the agent configuration also needs to be set up during the bootstrap; as I said before, that comes for free in Puppet Enterprise. There's the plugin I mentioned for certificate signing from the orchestrator, and then the plan is to use MCollective to initiate and track the Puppet run from the orchestrator: you kick off the run, poll afterwards to check whether it's done, and from there grab the output of the last run. There's another plugin that already exists for Puppet that does this, and it's part of a custom resource plugin we're using.

So, with all this, putting it together: how do we actually make these applications multi-tier? The infrastructure template structure has multiple VMs, a gateway, and a private network, and it's all preconfigured as to which ports route to which VM; that's set up inside UCS Director. With that, these VMs have the ability to talk to each other, and the Heat template can contain multiple resources. As you see here, this is just a stub, but we've got a web tier and a DB tier listed. Parameters can be shared between these resources: say you've got a port that needs to be mapped from one tier to the other, you can share that information. And we have the depends_on functionality: like I said earlier, the SharePoint example, where MS SQL needs to be completed before you can move on to the app server; that's what depends_on gives you.

So, looking at this: there's been a lot of talk this week about containers,
and how could this apply to containers? Well, there is a Docker Puppet module that we've brought into our solution in the current release. It has the ability not only to install Docker but also to pull an image and create a container on a node. And since we're using the role-profile paradigm, you can have multiple Docker profiles set up in a role: if you want your VM to have multiple containers running on it, you have the ability to define that, and everything Docker offers for spinning up a container is available through the Puppet module. The module just ensures Docker is installed, so it won't reinstall Docker each time it runs; it does that once and then continues on to the container configuration. And this is containers on VMs, not on bare metal; that's the current workflow. But taking all this into account, it is possible to use other OpenStack container workflows and call them via Heat through here. Down the road, instead of spinning up VMs, this solution may use one of the many OpenStack container workflows that have been discussed this week to instantiate containers instead.

So that concludes my section. I want to bring Jim back up here to close it out, and then we can take questions. If you do have questions, please go to the mic so the good people at home can hear you.

Yeah, I think you can stay up here; we just have a couple of closing points. We're glad to have had the chance to share with everyone here how we use Puppet and Heat. Looking down the road, there's definitely tremendous opportunity to continue to grow the solution and leverage other OpenStack components and projects.
This is one example of how decoupled, if you will, the architecture is: you can swap in other components in your solution, and there are tremendous opportunities down the road. The next thing we're looking at is integrating with an OpenStack instance. With Heat, as we mentioned, the template itself is transferable: we can have the template imported directly into, or sent to, the Heat running in an OpenStack cloud and orchestrate that OpenStack environment. So if your cloud is OpenStack, it can be done that way. I also showed the stack designer; that's where you visually create and design your infrastructure, the so-called container, and that can be a container on OpenStack, if you will. So that's the future-looking part. And we talked about Puppet Enterprise support; that's definitely something we're looking at, so if a customer already has Puppet Enterprise, it makes sense to connect to it.

All right, we have a few minutes, so if there are any questions, please let us know.

Audience: Are you basically applying one container per VM at this point, or will that change going forward?

No. With the role-profile model, you can have multiple Docker profiles, and each profile can then spin up a container, from the hub or wherever, so you can have as many as your VM can handle, really.

Audience: [Partly inaudible] ...what are you using for container management?

I guess Docker; there's a Docker Puppet module that we're leveraging to do that.

Any other questions? If not, this is our contact info, so you have it, and there's a link for the Cisco ONE Enterprise Cloud Suite if you're interested; feel free to take a look. All right, that's it for today. Thank you.