[We'll touch on Neutron, Heat and TOSCA,] but we are not going to do a deep dive into any one of them. Again, just to set expectations so that people won't come out of this session disappointed: what we are going to do is cover how all of them fit together and what you can actually do by putting them together, and I think that's the thing you will take out of this session; hopefully you'll enjoy it. Just as an introduction: Samuel and I are close friends, and we knew each other before we knew OpenStack, so we're not friends because of OpenStack. I came from an orchestration and application background, and Samuel came from Radware with a lot of networking background. We had chatted many times about the two worlds and how disconnected they are, and shared our frustration about that, and at some point, when the Neutron API came into the world, we found we could do more than just be friends: we could actually work together. That led to a series of presentations; the last one was at the previous summit, and this is actually the second version of that. So let Samuel take it from here. (Let's see if the clicker works. The green one? Okay, just tell me and I'll do it.) Okay, so very quickly: we'll talk a little bit about networking in Neutron and why network orchestration and application orchestration should be integrated together, talk a little bit about Heat capabilities and TOSCA, and then deliver a recorded demo of using Heat to scale WordPress. We're actually using the standard WordPress example with slight tweaks, and we'll walk through it, and then Nati will also talk about the differentiation between Heat and TOSCA and how they can be used together.
Okay, so let's start with a quick overview, looking at a standard application. Both of us have been doing application-centric development and networking for quite a long time. When you look at your application, in general you can have a very holistic view, right? You have the networking requirements, whether that's isolated networks, the layer 3 requirements, or the layer 4-7 capabilities, which are typically the networking team's requirements and responsibilities. And then, usually where Nati was kicking in, it's basically: the one thing I care about is having an IP on the machine, connectivity, and then the application port, the infrastructure, and so on. Until recently all those things were two islands, right? The networking team was doing their stuff, the application team was doing their own stuff. But with clouds, and OpenStack specifically, everything can now be fully orchestrated, and orchestrated together. And really, when you look at this, the question is: why wouldn't you do that? Because when you look at the application, it makes sense to treat all of these as a single thing. The networking requirements, the security requirements and the application requirements usually work hand in hand, and now, with the availability of APIs to actually drive everything, it makes sense to bind them all together. So what's Neutron? Again, how many in the crowd have used Neutron or are aware of Neutron? Okay, so I'm going to do this very quickly. Neutron in essence gives tenants an API to deliver everything you would expect from a networking function: isolation of layer 2 networks, the ability to define layer 3 subnets including IP address management, router gateways and security, and now, with the introduction of advanced services, capabilities such as load balancing, VPN and firewall, okay?
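As a rough illustration (not part of the talk's slides; network names and the CIDR here are made up), the Neutron primitives just listed map to resources like these in a Heat template:

```yaml
heat_template_version: 2013-05-23
resources:
  app_net:                        # isolated layer 2 network
    type: OS::Neutron::Net
  app_subnet:                     # layer 3 subnet with built-in IP address management
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: app_net }
      cidr: 10.0.1.0/24
  app_router:                     # router with a gateway to an external network
    type: OS::Neutron::Router
    properties:
      external_gateway_info: { network: public }   # "public" is an assumed name
  app_router_iface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: app_router }
      subnet_id: { get_resource: app_subnet }
```

The advanced services (LBaaS pools, VIPs, firewalls) have their own resource types on top of these basics.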
If you take a very brief look at OpenStack, the way this usually looks is that you get your isolated networks shown in Horizon; VMs are connected, and a VM connected to multiple networks shows connectivity to those different networks; you see the router here; you don't really see the layer 4-7 functionality yet, okay? And all of this, in essence, can be bound together with the compute and with the application using Heat. Yeah, so basically Heat was born originally as an OpenStack alternative to CloudFormation; it actually started as an implementation of Amazon CloudFormation, with the same API, same syntax, even the same blueprint model, I think, at the get-go. This is actually the definition that I took from one of the Red Hat presentations. By the way, all the references to the material and the code examples will be shared on SlideShare just after this session, so you don't need to write them down. Generally speaking, if you really try to draw a line around what Heat does: it maps the API that OpenStack provides into a document model, and you can see every element of the API represented in that document model. Why is the document model important? Why not just use the API? Because with a document it's easier to share things, to do versioning, to put it in GitHub, to script it, to model it in a nice visualized UI, and also to read it, so you can read a plan and potentially even understand what's going on there. The two models, the API and the document itself, intersect, and you can see that the mapping maps to concrete elements you should be familiar with if you're dealing with the OpenStack API. But the core thing to take away is that it's really a one-to-one mapping between the API and the document model.
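To make the one-to-one mapping concrete (a sketch; the image, flavor and network names are assumptions, not from the talk), a single Nova API call to boot a server corresponds to a single resource in the document model:

```yaml
# Roughly the document-model equivalent of:
#   nova boot --image fedora-20 --flavor m1.small --nic net-name=private web-1
heat_template_version: 2013-05-23
resources:
  web_1:
    type: OS::Nova::Server        # maps 1:1 to the Nova server API
    properties:
      image: fedora-20            # assumed image name
      flavor: m1.small            # assumed flavor
      networks:
        - network: private        # assumed pre-existing Neutron network
```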
From an architecture perspective, there are three main components you would see if you run it on DevStack. Again, I borrowed this slide from a Red Hat presentation. One of them is the API service, which provides the northbound API to the user, and there are different flavors of that API to interact with it: one of them is CFN, which is the CloudFormation-compatible API, and obviously there is the HOT syntax, which I'm going to talk more about as we go through the presentation. The engine itself is decoupled from the API, so you can see a hint there, in the Heat architecture itself, that it can talk to different models of blueprints, not necessarily just one, even though HOT and CloudFormation are the ones currently implemented. And if you're familiar with OpenStack, you're probably familiar with this picture: it's actually part of the Horizon stack, and there is a tab there called Orchestration, and under that there are Stacks. This is the graphical view of a stack after it's been deployed; again, this is for those who are less familiar with it. This is what you get, even with DevStack, when you start Heat and deploy a stack. So this brings me to TOSCA. TOSCA actually started from a relatively different model. It started with a more application-centric view, which is agnostic to the infrastructure itself, and this is again a slide taken from one of the recent TOSCA presentations, representing the TOSCA status in 2014. The main difference is that with TOSCA I'm starting with the topology, with the application. I'm describing the components of the application; whether they are software or infrastructure, we don't necessarily care. Every element in the stack is a node. Every element can be described.
It doesn't have to be an infrastructure element, and therefore what you can see is that the topology also models things like relationships and dependencies, because they describe the components of my application. And as I said, every element in the application, including the lifecycle, including the processes we use to actually manage the application, can be described using that model and that language. The other thing to note here is portability. Because of that definition, TOSCA is not specific to an infrastructure, and the relationship between the application model, the blueprint, and the infrastructure is that one fits the other. So when I'm deploying an application, the infrastructure needs to match the actual application itself, and usually that matching is the actual result of the orchestration. I'm defining in the blueprint the requirements of what I want to achieve, and then the orchestration implementation needs to do the matchmaking with the right resources, and that matchmaking can be agnostic to the actual infrastructure. In a way, one byproduct of that mechanism is that I can abstract the blueprint from the underlying infrastructure itself. We'll see how that looks later on, but that's the idea here: one difference is that it's more application-centric; the other is that it's agnostic to the infrastructure itself. One thing to note, coming back to what I just said, is that we map not just resources; we also map policies, processes and alerts. There's much more to it, but for the sake of time I'm not going to dive too deep into the details.
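A minimal TOSCA-style sketch of that idea (simplified; the exact type and property names approximate the TOSCA Simple Profile drafts and vary between versions):

```yaml
# Software nodes are first-class, and relationships (hosted-on,
# connects-to) are part of the model; the compute nodes are requested
# by properties, not by a cloud-specific image or flavor ID.
node_templates:
  wordpress:
    type: tosca.nodes.WebApplication
    requirements:
      - host: web_server            # hosted-on relationship
      - database_endpoint: mysql    # connects-to relationship
  mysql:
    type: tosca.nodes.DBMS.MySQL
    requirements:
      - host: db_server
  web_server:
    type: tosca.nodes.Compute
    capabilities:
      host:
        properties:
          num_cpus: 2               # match by requirements...
          mem_size: 4096 MB         # ...the orchestrator does the matchmaking
  db_server:
    type: tosca.nodes.Compute
```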
If we really look at the state of where TOSCA sits right now: when we first presented it in Hong Kong, I think a year or two ago, most people in the room hadn't even heard about TOSCA, and most of those who knew TOSCA were the people actually working on it. Today we see that the adoption of TOSCA, especially within the OpenStack community, has grown quite substantially, and there is a reason for that. Part of the reason, in my view, is that a lot of OpenStack users have almost the same requirements, and they look at OpenStack as one of their target environments, not necessarily the only one. (And as we get closer to the application part, you'll understand why people are calling me at this hour; it's actually part of the demo that I'm supposed to be showing today.) Anyway, that's the state of things, and one thing to note is that TOSCA is becoming very popular; I think the size of this room and the people in it are the best indication of that. Since we're running examples here, we need to run the TOSCA implementation on a product, on something; we cannot just talk about it theoretically, and you can see that there are several implementations. I'm going to use Cloudify, which is where I'm coming from; I'm representing Cloudify in this room; and obviously OpenStack Heat. Most of our presentation will be around Heat, then we'll compare what we've done with Heat to TOSCA, and then we'll see how we can combine the two; that's the flow of the presentation. Another thing to note is that these concepts are merging: the infrastructure-driven model and the application-driven model are merging, and the requirements of the two models feed one another. Again, in TOSCA we have an application-centric, portable view; in OpenStack we're coming from the OpenStack resources up into the application itself.

If we look at the next slide, we can see one incarnation of that. In the latest OpenStack Heat version, in Icehouse (actually it's not the latest anymore), a new component was added called software config, which allows us for the first time to define things like depends-on, hosted-on and connects-to for software components; that's a relatively new thing in the Heat template. On the other hand, as you can see in the TOSCA definition, there is a definition for a software component like MySQL; as I mentioned, every component, not just infrastructure, can be mapped. And you can see in this graph that there are lifecycle events, which are more elaborate than the ones you would see with Heat. The important thing here is this focus on application management and the lifecycle that is mapped into the model itself. Okay, so we'll switch to a quick demo. In this demo we'll show a deployment of WordPress; the WordPress deployment will sit behind a firewall, exposed through a floating IP, and it will use the Radware integration (this is where I work, so this is the bias, okay?). We'll use the WordPress stack, and then we'll scale it out and scale it back in. From a scenario perspective, we didn't want to focus on deploying the networking itself; we wanted to demonstrate a more advanced use case, which is the elasticity one, so we modified the Heat workflow to use a pre-existing network, using the standard demo project that is usually available in an OpenStack installation. The HOT script will deploy the two WordPress VMs, the website and the database, connect them to the network, and also deploy a load balancer configuration; it will use the Radware LBaaS integration to create service VMs, highly available service VMs, running in their own project on top of Nova behind the scenes. Then we'll run a script that
creates load, and we'll see how Heat actually scales out and then scales back in when the load stops. If we look at the key elements, focusing on the ones that differ from the standard WordPress deployment, we can first see the notion of a web server group. The key parameters are the min and max size: the min is the initial deployment size, how many VMs there will be for the web tier, and the max, which in this case is three. The group's resource is a specialized extension of the Nova server called LB server; the extension is that the Nova node, in addition to being created, gets registered with the LBaaS API as a pool member. In addition there are the scale-up and scale-down policies, and the thing to note is that scale-up defines a 50% CPU high alarm as the trigger to scale up more nodes; remember that, because I'm going to ask a quick question about it. And then we have the load balancing configuration: the health check, the pools, and so on. To create a new stack you currently must use a CLI command; there's a small bug in the web UI that doesn't allow spinning up stacks that have additional stacks embedded in them. The key things to notice: you choose the image (and this is a difference from what you're going to see with TOSCA), so you specify a specific VM image for the nodes, the connectivity key, specific flavors, the network where to deploy, the subnet ID for that network, and the external network, because that's where the floating IP will be created.

Okay, so as I said, the first thing that happens is that we create the stack. As a result, a stack instance is created, and behind the scenes we see the two virtual machines being deployed by the load balancer integration; we see here two instances of an Alteon load balancer, highly available behind the scenes, which will be used for the load balancing function. (This is, by the way, running on Red Hat; it's not DevStack. We took the out-of-the-box Red Hat Icehouse distribution, deployed it, and then used the scripts.) Okay, the next thing is that we can see the stack being created; as you can see, there is a dedicated Orchestration tab where all the stacks are shown. (Okay, wait, only one person knows this one? Okay.) So next we can see that the instances were created, and the key thing is that there are stack events, which we'll monitor in a second; the stack events are where we'll actually see the scale-up and scale-down. As I said, these are the instances that were deployed, and next we're going to connect to one of those instances and run stress on it; what stress does is create CPU load on the machine. Then we'll start to monitor... sorry, I jumped to the wrong stage. Okay, so what we can see is that the result of running the stack is that we actually have the load balancers now, and after that we review the stack itself: we can see the topology and the resource dependency model, and the key thing to see there is the Events tab, which shows us the events we care about. Those are the deployed resources, and then we can see the events; this is where we're going to see the scale-up and scale-down. The last thing (I'm going to jump forward for a second) is that the load balancing configuration has been deployed: we see the pool, we see the VIP, we can see the health check, and then we're going to see the actual members. Okay, so now we're seeing the members being
connected to the load balancer, right? Okay, and back to the stack, which should be... three members, two members... sorry, just one member; in this case just the web. (Oh, that's before the scale. Yeah, that's before the scale.) Okay, what we can see now is the floating IP that got allocated and assigned to the load balancer, and we're now going to launch the WordPress application, and we can see that it's actually up and running. We will provide references to the YAML files after this presentation, as part of the slides. Okay, so this is the first stage: we've created the new stack, we've created the WordPress application, and it's behind the firewall; the script created the floating IPs, associated them with the firewall, and pushed the configuration into the firewall. So now we can see the actual application, jumping very quickly to it, up and running. This is the first time, so we register the first user, and then we can log in. So far, this is the initial application deployment.

The next phase is to see how this application scales out. To do that, in this example we log into the web node (we've associated another floating IP so we can connect to it from the outside) and run a simple stress script. I'm going to jump a little bit forward, just to see things being triggered through CPU utilization. So what we see now is that on the machine itself, top goes to 100 percent, and as a result we see Ceilometer triggering events into Heat, and out of those events we see the scaling process. Switching back to the stack, we monitor the events table: we initially get a scale-down event, but it didn't trigger anything because we already have only one node; but then there's a scale-up event, due to the fact that we are over 50 percent. As a result of the scale-up event, another web node has been added to the configuration. Looking at Nova, we can see that the second node was added, with another IP, and switching to the networking configuration, we can see that the load balancer configuration was also modified to include this node. So now we have two nodes. What we'll see after that is a third node being added. Does anyone have an idea why a third node would be added in that case? Right: because if you are at 100 percent and you add a second node, your average is still a bit over 50 percent, which is how the original script was designed; so a second trigger fires, and a third node goes up. (This is just to make it more interesting.) Then we can see that the back-end load balancer got configured and also has the three nodes. So what we've done so far is demonstrate the scale-up. The next thing: we've already logged in, and we're going to shut down the stress, and as a result, in the events, we see a scale-down policy trigger. The scale-down policy takes off one of the nodes, so we now have only two web server nodes, and obviously the networking load balancer configuration gets modified, and we only see two nodes. That concludes the elasticity demo. Yeah, so I think what we've seen so far is how we can take a template, create a full-stack application using a Heat template, and in one click basically create that stack. We didn't have to go through create-Nova, create-network, create-instances, install-software; all of those things can be completely automated using that scripting model and the orchestration itself, and you've seen that you can also apply a policy using Ceilometer and Heat.
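As a sketch, the scaling pieces just described look roughly like this in HOT (the full template is in the shared slides; `Radware::LBServer` is a placeholder name for the vendor's LB-registering server type, and the alarm numbers mirror the demo):

```yaml
resources:
  web_server_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1                  # initial / minimum number of web VMs
      max_size: 3                  # the demo scales up to three
      resource:
        type: Radware::LBServer    # placeholder: a Nova server that also
                                   # registers itself as an LBaaS pool member
  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: web_server_group }
      scaling_adjustment: 1        # add one node per trigger
      cooldown: 60
  cpu_alarm_high:                  # Ceilometer alarm: avg CPU > 50% fires the policy
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_up_policy, alarm_url] }
```

A matching `scale_down_policy` with `scaling_adjustment: -1` and a low-CPU alarm handles the scale-in seen at the end of the demo.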
One thing we'll see, in how TOSCA plugs into this, is added portability; that's the first thing: how would we define the same template without necessarily doing it in a way that will run only within OpenStack? Two things to note (I'm not showing the entire blueprint, which is not that elaborate): you can see that the idea here is that we use node types, in this case a software node that references Apache. So we have components; we're not referencing software that runs within a machine; the software is an independent component here. That's how this differs from Heat: in Heat, Apache wouldn't be a node; Apache would be an element that runs within a Nova compute node, and Heat wouldn't know about the fact that Apache is actually running within that machine. There is nothing mentioning Apache within the context of Heat, not in the monitoring, not in the elements it's actually monitoring. Here, every element, as I said, software component or infrastructure, is a node, and you can express it explicitly in the blueprint itself, and also apply lifecycle and all the properties you can define. That's one thing. The other thing you can see, in the server itself, is that we reference a type called compute, and we're matching by properties; we're not matching by an image ID. We're specifying what we need, and we expect the system to react to that. It doesn't mean we cannot implement a node that has an image ID as a property, but the idea is that you try to keep things agnostic in that respect. Another thing: let's say we have a portable way to describe things; that's one thing, but if we're still dependent on things that are very specific to OpenStack, and our rules or policy management would run only within OpenStack, then we're still very much tied to OpenStack; we
cannot take that blueprint and run it somewhere else. So what we've done within the Cloudify project is that we took open source projects that are not specific to OpenStack, like Riemann for policy management. Those are, by the way, very popular projects, and in my personal view much better than the equivalent ones I produced within the OpenStack StackForge projects, but that's a different discussion; mostly because they're used by the industry, they're proven, and the fact that they're used by the broad industry actually makes them better. Riemann is quite an interesting project in that regard, something I would recommend any of you look at. The other project is called Diamond; that's a collector, which means I can use it to collect SNMP traps and many other metrics through configuration, and we'll see how that configuration plugs into the TOSCA blueprint. And also InfluxDB, which is a monitoring database, basically a specialized time-series database. One of the things we gain from that is that there is already a UI and a whole ecosystem built around those tools, and we can leverage them rather than reinventing the wheel; that's an important thing. By taking best-of-breed solutions that are not specific to an infrastructure, we can take the entire thing, both the blueprint and the implementation, outside of that environment. So this is how the blueprint looks. This is still not in the standard itself, but something we're hoping will become part of the standard. You can see, for example, how I can configure the monitoring as part of the blueprint fairly easily: I can install the agent and basically say which property I want to monitor and how I want to monitor it, and that becomes part of the lifecycle of the component, in this case the WordPress VM itself. The second thing we can see is the policy itself that triggers
the workflow; in this case it's a scale-up process. What we can see is that I can explicitly and very elaborately define the policy within that language (again, this is a suggestion, not yet a standard), and it's very readable: I can read it and understand what's going on. I don't need to integrate too many things to make it work, or write scripts that do the actual task behind the scenes. The idea is that the model, the policies and the workflows are part of the language, part of the blueprint, part of the implementation, and not something I'm integrating through different components. Now, how do we combine the two? Is one an alternative to the other? We actually did a project in which we integrated the TOSCA implementation, in this case Cloudify, with Heat, and we used Heat where we feel it fits best, and the place it fits best is setting up the infrastructure itself. The way it's laid out is that we look at Heat as the cloud interface: instead of going to Nova, we go to Heat to create VMs and create networks. Heat is a REST API, so it's actually pretty convenient, and we also use it for discovery. If you deployed the stack with Heat and created VMs and networks, we can actually discover that: as part of our blueprint, the first thing we do is a discovery through Heat, to see if a node already exists, and if it exists we just use it and don't create a new instance of it. What that gives us is this: given that every cloud environment has its own infrastructure orchestration, we can layer the application orchestration on top of the infrastructure orchestration, leverage the two together, build a much more robust solution, and still keep the portability. So from an end user perspective, what we're getting is portable management and orchestration that works across clouds, as I mentioned, and can also interact not just
with the infrastructure layer, but also with the orchestration of the infrastructure itself. The other thing I wanted to show: we talked about WordPress, but WordPress is a simple example. The next thing I wanted to talk about is a real-life example, and the interruption we had during this demo (which I'm not going to use, because we're running out of time) was actually a result of this example, with someone actually trying it out. What we wanted, based on an example we did with one of the carriers, is to create your own Skype. Imagine what it would take to create your own video service, your own chat server, your own CRM, your own SaaS service: a huge stack with a lot of elements and a lot of configuration. The question is: can I model something like that, something potentially very complex? This is part of that service actually running right now, with someone consistently trying to show that it works; I told him to skip it for the demo, but he didn't want to let it go. Anyway, you can see that it works, so someone Skype him to tell him it works and he can rest. This is the stack, and this is a project called Clearwater. Clearwater, for those who are not familiar, is kind of a Skype equivalent in the telco world; it's called an IMS, and it basically provides multimedia services, the same things you do with a video or chat service, very similar to Google Hangouts, very similar to Skype in that sense. But I think the important thing to see is the complexity of that deployment: there are a lot of services, a lot of dependencies between those services, and this is what you want to provision. So we created a model (this is actually the Cloudify UI) and a TOSCA reference for it, and I'm going to show you live, right now, that it's actually running on OpenStack; in this case it's HP's OpenStack, so this is a live deployment that I'm running
right now on HP's cloud. (Are we on time? You have a minute? Okay.) So within one minute I'm going to give you a glimpse of the idea behind it. What I did is I created a blueprint; this is the graphical view of it, and you can see all the dependencies: these are the software components, these are the VMs. The software components, by the way, are defined with Chef, so if I look at the source code, I go to the TOSCA YAML file, and what you can see is that some of the nodes are actually referencing Chef to define the components themselves. I can also see the lifecycle for installing and configuring the monitoring; in this case it's SNMP traps and SNMP metering that I'm configuring, because that's how that software component, Clearwater specifically, comes: with SNMP monitoring and not something else. But I can describe that in the TOSCA definition and not deal with it manually either. Now, going back to the topology: once I've created the topology, I can create different instances of it. Why would I want different instances? For example, for QA I want a smaller deployment; for production I want a bigger deployment, maybe with different flavors of machines and so forth. The way I create them is that I keep the model as the golden definition, if you like, and then I provide properties and parameters for every instance of that deployment. That's the difference between what you see here as a blueprint and a deployment: the blueprint is the meta definition, and deployments are the instances of it, and you can create more than one deployment per blueprint. We can see that they're all green, meaning they're all running, which gives me another view of the state of the union, or the state of the application, as it's running. What we've seen before (and I'm not going to try it right now, because again we're running out of time) is that I can actually call that service and use a
client here. Again, you'll have to trust me that it works, that it actually does those video calls, those chats, all those elements... I hope it won't answer, but anyway, I can actually call that service; it's a service that is live and working, and I can open a chat and a video line with it. I'll wake the bear from his sleep again... okay, but you get the idea; it's working, trust me, and you can hear some of the voices here actually indicating that it works. Okay, any questions so far? No questions? All right. (Question from the audience: why not use Heat environments for the differentiation?) No, no, the difference between Heat scripts and TOSCA is not that; in TOSCA you can still map things into different blueprints and files. Actually, if you look at the left-hand side, there are a lot of files there. It's more of an object model versus a file model, in which you describe the components, and how you break them into files is less relevant to your point. I think you've seen, and that was the purpose of the presentation, that you can do this with Heat, you can do this with TOSCA, and you can combine the two. The question is really about using the right tool for the job, and in my view, Heat does a very good job of orchestrating OpenStack infrastructure. You wouldn't use Heat, for example, as an alternative to Chef or Puppet or Docker; you would combine them, as a plugin that runs within Heat, for example, right? This is a case where you drew a border and said: well, I could implement configuration management for OpenStack, but you stopped there and said, I'm actually going to integrate with some open source that already exists and does a better job, rather than reinventing the wheel. The same thing I'm saying about TOSCA: there are better tools that do the application orchestration, the software orchestration, as I would call it,
policy management, workflow management. The other thing I'm saying is that we want these to be portable. Look at what happens even with an application that runs on your iPhone or your Android: you don't want the application to be specific to iPhone or to Android, to target only one of them; you want it to run on both, and actually that's true of all applications. So the point is that the further you go up the stack, the more you want the software to be independent of the infrastructure itself, and that's a best practice. It's not that you can't do otherwise; it's just the right thing to do.

One thing I'm not sure I touched on: there is actually a project called the TOSCA translator for Heat, and that translator creates a TOSCA definition, if you like, that can run within a Heat engine. What I'm saying is that this is a good thing: the two worlds are converging, and we'll see less need for mapping between them, which means it will be much easier to combine the two, because they will be speaking the same language. Yes, without the logic, yes, right. Other questions? Yes. Can someone close the door, because I can't hear you.

Question: you want to do it through Heat, but Heat itself has software config and software deploy; apart from that, which configuration management tool would you use with TOSCA, for example when you create the two-tier topology in this case? Yes, so that's a good question: which configuration management would you want to use within the context of TOSCA? What we basically provide today as plugins is Chef, Puppet or Docker. The reason I mention Docker in the context of configuration management is that I see Docker as a portable packaging model for software, because it's not dependent on the actual infrastructure itself: I can take a package that was built with Docker and move it between OpenStack and Amazon, for example, and know that it's going to run. So in a way people look at Docker as an alternative to Chef and Puppet, even though it's not doing exactly the same thing, and they can be combined as well. From a Cloudify perspective we support both, so you can plug in whichever configuration management you want, including, by the way, bash; you can actually write scripts to do your configuration management.

Question: so your recommendation is to use TOSCA plus this configuration management tool, and then Heat just to provision the infrastructure, and not to try to use the software config? Exactly, yes, that's what I'm recommending. You don't have to follow it, but that's what I would recommend. Thanks.

Yeah, so right now it's a manual process, you have to do that mapping manually. I would envision that you could automate that process and have an auto-generator that goes through an API, especially if you know the structure of that API, and auto-generates the types that are relevant to it. Going forward, I think the best news coming out of the fact that there is a TOSCA parser within Heat is that all the definitions will be compliant with TOSCA. So I would expect that with every OpenStack release there will be TOSCA resources and definitions coming with that release, not something you have to regenerate yourself, and therefore we could just import that into our TOSCA implementation and not reparse or regenerate it. That's the direction I think it should go, so I can import those things without regenerating them, pretty much like how this is generated today; there's no reason why every release wouldn't come with the TOSCA resources that map to that release. Sure.
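As an editorial illustration of the pluggable configuration management described above, here is a minimal sketch of a Cloudify-style, TOSCA-based blueprint fragment. The node type names, script paths, and topology are illustrative assumptions, not material from the talk or a verbatim excerpt of any shipped blueprint:

```yaml
# Hypothetical sketch of a TOSCA-style (Cloudify-like) blueprint fragment.
# Type names and script paths below are illustrative assumptions.
node_templates:
  app_vm:
    type: cloudify.openstack.nodes.Server   # infrastructure node, provisioned on OpenStack

  wordpress_app:
    type: cloudify.nodes.ApplicationModule  # software component hosted on the VM
    interfaces:
      cloudify.interfaces.lifecycle:
        # Configuration management is pluggable: these lifecycle hooks
        # could just as well point at Chef recipes, Puppet manifests,
        # or a Docker image instead of plain bash scripts.
        create: scripts/install_wordpress.sh
        start: scripts/start_wordpress.sh
    relationships:
      - type: cloudify.relationships.contained_in
        target: app_vm
```

Under this model, swapping bash for Chef or Puppet means pointing the same lifecycle hooks at a different plugin rather than rewriting the topology, which is the separation the speaker is recommending.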
Any other questions? I can see... yeah.

Question: I know that for now Cloudify works very well with OpenStack using the Nova API, so what does Heat bring that made you decide to use Heat instead of the plain OpenStack APIs? Yeah, so about the word "instead": I wouldn't use "instead", I'd say "in addition". First, people do use Heat today and plan to use it tomorrow, and we wanted to be compatible with that. Second, as I said, because Heat provides the ability to do discovery on the resources it created, we found ways in which we can combine the two and not force yet another implementation of the entire thing, given that people already have templates built in Heat. In this case we wanted to make the integration of the two much simpler; we wouldn't want to throw on you the task of doing the mapping and that type of regression when we can actually combine them.

Question: so you're still maintaining compatibility with the Nova API? Yes. So the question, for those who didn't hear, is whether we continue to maintain within Cloudify the compatibility with the Nova API, because we do have a way in which we can run directly against Nova, very similarly to Heat, as well as through the Heat API. The answer is that at this point we will probably keep both, because there is a requirement for both: customers need them both, and as long as that requirement exists we will support them both. If at some point everything switches to Heat, then we can deprecate the direct path and basically use Heat as our main interface to OpenStack. Thanks.

Any other questions? So again, as I said, all the resources, and we have a reference to them: the examples, the Heat examples, the YAML examples, the demo itself, can all be downloaded, obviously with the references. Maybe one last point, Shmuel, on this?
Nope. That's a short answer, so you can read it yourselves. Thanks very much.
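As a closing editorial sketch of the TOSCA-to-Heat convergence discussed in the Q&A: the core job of a translator like the TOSCA translator for Heat is mapping TOSCA node types to Heat resource types. The mapping table and helper below are an illustrative excerpt written for this transcript, not code from the heat-translator project, although the type names on both sides are the standard ones:

```python
# Illustrative TOSCA -> Heat type mapping, the kind of lookup a translator
# performs. This table is a hypothetical excerpt, not heat-translator code;
# the TOSCA and Heat type names themselves are standard.
TOSCA_TO_HEAT = {
    "tosca.nodes.Compute": "OS::Nova::Server",
    "tosca.nodes.network.Network": "OS::Neutron::Net",
    "tosca.nodes.network.Port": "OS::Neutron::Port",
}

def translate_node(tosca_type: str) -> str:
    """Return the Heat resource type for a given TOSCA node type."""
    try:
        return TOSCA_TO_HEAT[tosca_type]
    except KeyError:
        raise ValueError(f"no Heat mapping for TOSCA type {tosca_type!r}")

print(translate_node("tosca.nodes.Compute"))  # OS::Nova::Server
```

If, as the speaker hopes, each OpenStack release shipped its own TOSCA resource definitions, a table like this could be generated from the release rather than maintained by hand.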