Okay, we're back. Sorry for the late start — we had some technical issues. So let's begin. Welcome, everyone. This is the Heat project update, the first of the Heat sessions, and just a reminder that we also have two other Heat sessions this week, so if you have an interest in Heat, don't miss them. So, this session is about the project update for Heat. I'm Rico Lin from EasyStack, the PTL for the Pike release, and with me is Zane from Red Hat, who was also the PTL for the Juno release. I'd also like to mention that we had 102 developers in total contributing to the Ocata release, so this slide is a recognition of those names. If your name is not up there but you contributed reviews or emails, thank you very much as well — we can do more recognition next time, but for now, thanks to everyone.

So, let's start the project update. The first thing is that we'd like to present our goal to everyone. The goal written on the Heat wiki page says that what we are always trying to do is create a human- and machine-accessible service for managing the entire lifecycle of your applications. Recently Zane proposed, in the OpenStack governance discussion, that we should do more to care about everyone's applications whatever the scale is — to care about the cloud overall, not just about one project. Heat doesn't survive on its own; it is one project among many. So what we care about is being automatic, scalable and stable, so that you really feel comfortable relying on Heat for your applications and the infrastructure underneath them, and making that Heat's job.
You should only have to care about your applications, and we should make it that way — that's our goal. Before we start with the news, let me briefly introduce what Heat is. Heat is the orchestration service of OpenStack. Basically, we manage your OpenStack resources — you've probably heard about instances in Nova, volumes in Cinder — and Heat tries to orchestrate those services and manage their lifecycle. We can even manage the applications on top of those resources. As for the architecture: you write a YAML file telling Heat, say, "I want 100 Nova servers, with this kind of Neutron networking, and I even want something extra like Kubernetes deployed onto those bare metal or virtual machines" — and Heat can do it. That's our purpose. The user interface calls our REST API, the REST API service calls the Heat back end, and the back end actually calls the individual service clients. We'll go into more detail in the onboarding session, so this is just a brief overview. Now that we know the architecture, let's talk about what we're trying to achieve in the code and how we're achieving it.
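As a minimal sketch of the kind of YAML file Rico describes — the flavor, image and network names here are placeholders, not values from the talk:

```yaml
heat_template_version: 2016-10-14   # the Newton template version

parameters:
  image:
    type: string
    description: Name or ID of a Glance image (placeholder)

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.small               # placeholder flavor name
      networks:
        - network: private           # placeholder Neutron network name

outputs:
  server_ip:
    description: First IP address of the server
    value: { get_attr: [server, first_address] }
```

You hand this template to the Heat API (for example with `openstack stack create -t template.yaml mystack`), and the engine calls the Nova and Neutron clients to create the actual resources.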
So, where are we? In Newton we implemented a batch of new resource types and completed 22 blueprints, and in Ocata we implemented 11 new resource types and completed 10 blueprints — which is already a very impressive number, because Ocata was a relatively short release. We have also already landed, in pike-1 (the milestone we released in April), three more resource types along with completed blueprints. So as an update, I would say we are in good shape, and we have a lot of development underway, which we will introduce later.

To talk about those new resource types — which you might find interesting because, let's be honest, you're probably not using Ocata right now; there's a good chance you're on Liberty or Mitaka and facing an upgrade of your environment — these are resources you can consider managing with Heat, which is already possible. From Newton we have Monasca, Senlin and Neutron QoS; in Ocata there are Nova, Keystone, Sahara, Designate and Zaqar resources. We also deprecated the Designate v1 resources — that's just a version replacement — and, as always, there's the Glance image. We deprecated OS::Glance::Image because it creates some burden for the Heat engine, but we haven't yet found a replacement solution for handling Glance images. So even though it's deprecated, you're still able to use it — don't worry too much.
In Pike, we also have a Magnum cluster resource and some Neutron resources under development, and there may be Freezer or even possibly Ironic resources — some of those are currently under development, so there's a good chance. So that's an update on the resource types you might find interesting. You can use `openstack orchestration resource type list` to figure out which resource types you're already able to use. Those are the resource types you feed into the YAML template, which actually generates exactly the resources you need. You might be surprised when you look at your resource type list — you might find some new ideas for how to manage your cloud. Putting things inside a Heat template might be a good idea; that's what we're trying to push here, to get everybody to try using Heat. So try it.

For each release, it's a tradition that we have a new template version, which contains the new functions. We do this to avoid breaking projects or users who are already relying on an existing Heat template. Right now we have the newton, ocata and pike release template versions, and we also have the other names — the date-based ones like "2017-" dash some month, which I forget exactly, and I don't think you want to remember either. So just remember heat_template_version pike, or ocata — that's much easier. As for functions released, we have added a lot of new function logic.
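For reference, a sketch of the two equivalent spellings of the version string — the date each alias maps to is from memory, so verify against the template guide:

```yaml
# From Pike onwards you can write either form; roughly:
#   newton -> 2016-10-14, ocata -> 2017-02-24, pike -> 2017-09-01
heat_template_version: pike
```

The version you declare controls which intrinsic functions are available in the template, which is how new functions land without breaking older templates.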
You can see the list here. A lot of this function logic gives you a good opportunity to go back to your templates — you might find some useful ones to apply, to make your templates work more smartly. Some of the new functions in the Pike release are still in progress, but for Ocata and Newton they're already stable. At least we think they're stable — it's up to you, the users, to report otherwise — but our testing shows they're all very solid. So feel free to try them; they will definitely surprise you in a good way.

Also, we now support Tempest. You can use Tempest to run a lot of tests which actually go into your environment and exercise it, and all those tests run day by day in our upstream gate jobs. So in case some users are doing their own development, we now support running the entire test suite with Tempest. There are some steps you need to apply before you actually use the Tempest command, but they're very easy steps.
You can check out the GitHub link for the integration tests to see the README.

Another update is about convergence. Convergence is actually not newly developed in the Ocata cycle, but we'd like to keep mentioning it, because we strongly recommend that users, developers and operators enable it — for us, convergence is the smart, new-generation Heat stack. What does convergence do? For example, say I have a Kubernetes cluster of a certain size, and I have a template that becomes a stack. Inside the template you have the Kubernetes minion nodes and the master nodes. Each green node in the diagram is a resource — this one is a Nova server, that one is a Glance image — and they depend on each other; the image is there to demonstrate the resource tree.

On the bottom left is what we like to call the legacy stack, which is probably what you're using if you're on an old version like Juno. With the legacy path, when you create the stack, a single Heat back-end engine walks your entire resource tree and does the job on its own — and even when you have a very capable HA environment, that's still the case. One engine does it all by itself, unless you use nested stacks, which are split across engines — and even then, in this example, that's still only two engines. On the right side is what I would call the convergence mode, which we will talk about in more detail in the onboarding session. In short, it's now possible to spread your entire stack across your whole Heat cluster: each engine accepts individual resources and helps you
deploy the entire thing in parallel. It will definitely improve performance in your environment if you have more than two controller nodes, so I really hope users will try it. And if you're using — I think it's from Mitaka... no, Newton, sorry — if you're on the Newton release or later and you didn't set any config saying "no, I don't want convergence" (the `convergence_engine` option), it is the default for your service. So on a new release, unless you explicitly opted out, convergence is on, and it will help you — we promise.

Okay, so we have a convergence update — I'm referring to the three items on the right. The three items are: first, a significant drop in memory usage. (The diagram on the right side is actually not for convergence; it's a diagram showing the improvement in the legacy stack.) The memory issue is something we keep working on — not just for very large-scale OpenStack environments; even when you have just a single Heat engine node, we want the whole combination to work for you without creating too much burden.

Second, we fixed the update-cancel errors. Your stack might be operated on at the same time by two users both trying to update it — your entire set of resources — and now there should not be any problems with that.

Third, we have resource property "reality" checking. Usually when you update your resources, it feels like Heat just blindly updates. Now there's a feature, almost complete, where you can update the stack and Heat will actually go and check the real, live properties of each resource, see whether they differ from the template you're updating with, and only then send the update request,
comparing against your live resource properties — that way you can manage your stack more smartly.

We actually have other versions of this slide deck, but the diagram on the right is from a test in the TripleO project, where the Heat being used is the legacy stack — non-convergence — and you can see the memory drop. Zane, do you want to take this? Sure, yeah.

Towards the left-hand side you can see that during Newton development, the complexity of the TripleO templates was increasing, and the memory usage in Heat was increasing with it. Then there's the work we did around the end of the Newton cycle — you can see it knocked that right back — and since then it's been pretty flat: the increase in complexity in TripleO is not resulting in the same increase in memory usage in Heat. There's one little bump towards the end, and the gray shaded period is not entirely representative, because TripleO changed their config to use fewer engines, so it uses less memory. The first jump in the middle of the graph is an artifact of that; the second one is due to a change in TripleO. But memory usage is kind of under control now, it would appear — and this is the legacy case. As an update on the convergence memory usage: we don't have a graph like this yet, because we're not running a convergence job in TripleO, though we're probably going to start soon. Towards the end of last year, convergence was using roughly double the amount of memory in TripleO; it's now 15 to 25 percent higher than legacy. So we've knocked the convergence memory right back, and there's still quite a lot of work being proposed in Pike that hasn't landed yet.
So we're hoping to bring it down even further. Thanks.

Okay, so we've had a lot of discussions about the next steps for convergence, and for that we very much need user and operator feedback to guide our developers on what to do. We also have a large-scale orchestration session, and we'd like to invite everyone to join if you have any ideas about large scale, because convergence is about automatically managing the resources in your environment. What we call the next generation is still under development, so if you can put in your effort or any of your ideas, that would be ideal — we'd really appreciate it. And speaking of automation, I'll hand over to Zane, who will talk more about use cases and a lot of the new, cool things we've been doing.

Okay. So this slide is something you'll already be kind of familiar with if you saw my talk in Barcelona. We did a demo there that was just a kind of hacked-up script, because the Heat resources weren't ready yet, but one thing I worked on during Ocata was to get Heat resources for all that stuff set up. So this is an auto-healing stack. Basically, you have a Heat template which creates a Nova server; it also creates an Aodh alarm, a Zaqar queue, and a Zaqar subscription which triggers a Mistral workflow, which is also created in the template. What this basically does is: you do a heat stack create, which sets all that up. Aodh is listening for events from Nova on the oslo.messaging bus — you can configure which events you listen for, but this one is listening for stop, error and delete. That triggers an alarm, and Aodh can deliver its alarms to a Zaqar queue. Zaqar subscriptions can trigger a Mistral workflow, and the Mistral workflow calls back to Heat and says: hey, we've got an event on this server — it's gone, it's dead.
It's dead So use the resource mark unhealthy command And then just do a heat stack update with the The minus minus existing option, which is basically keep the same template keep saying parameters Don't change anything but but do another stack update and Because the resource has been marked unhealthy When that stack update goes through it's going to say oh this this resource is unhealthy It needs to be replaced and so it will create a replacement server Update all the other resources that are necessary in the stack because they depend on that server ID So it up ads the the workflow config and that kind of thing And you're ready to go again So if you have another failure it will it will handle it so all the resources for that are Available in our carter the URL at bottom there. You can find the template for that On on the open stack heat templates for repo And this is kind of How we would like to To do more things in the future, so we're giving Every application is a special snowflake, right? It's every application has got its own way of wanting to heal itself after a failure or whatever So we don't want to implement a bunch of stuff that's in heat that's hard-coded in python That doesn't work for your application. We want to give you Plugable tools to say Okay Here's an event you you deal with this like you can configure as a user not as a as an operator How you want to handle the event so you can write your own mr. Workflow to do whatever it is you need You know if you don't want it to auto heal, but you want it to just send you an email and stop and wait So that you can check on whether you really want to replace that server you can totally do that in Mr. 
And that will probably tie in, also, to something some of you might remember: we were talking about one of the evolutions of the convergence architecture, and we said maybe in the future we'll continuously monitor all of the resources in your stack, and any time one of them changes, we'll immediately go replace it. I don't think that's probably the right model, because not everyone wants to replace a resource as soon as Heat decides it's gone. But what you can do is set up a Mistral workflow which does that. You can put it on a timer — there's a Mistral cron trigger resource, which just runs a workflow on a timer — and it can go back to Heat and say: okay, go check everything and see if it's all there. So that's something you can set up yourself. And that's probably the way we're going to move increasingly in the future: we'll use other OpenStack projects to allow you to do whatever you want in a flexible way. That's kind of our overarching philosophy, I think, going forward. (I just said "going forward" — I can't believe it.)

And here's the graph again — skipping past that, since we already talked about it.

So, there is one known limitation with Heat; it's actually a limitation of Keystone, basically. As most of you probably know, Heat makes heavy use of Keystone trusts to allow us to impersonate the user. When Heat needs to change something later, it needs a token, and you provided one only with the API request — so if it doesn't have a token, we use a trust to come back later. (Yes — for a specific resource, that's right.)
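The timer-based check Zane describes could be sketched like this. OS::Mistral::CronTrigger is the resource type he mentions, but the property layout and the five-minute pattern here are assumptions to verify against the resource reference:

```yaml
resources:
  check_workflow:
    type: OS::Mistral::Workflow
    properties:
      type: direct
      # tasks here would ask Heat to re-check the stack's
      # resources against reality (sketch only)

  periodic_check:
    type: OS::Mistral::CronTrigger
    properties:
      pattern: "*/5 * * * *"    # cron syntax: every five minutes
      workflow:
        workflow: { get_resource: check_workflow }
```

The point of doing it this way is that the polling interval and the response to a failure are entirely user-defined, rather than hard-coded in Heat.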
Thanks, Steve. So that works great, and Keystone federation also works great, but they don't work together. The reason is that with Keystone federation, Keystone doesn't get a list of roles from the other Keystone. We're trying to work that out with the Keystone folks, but for now you are limited: if you're using federation, you can't use any of the resources that require trusts, and if you're using trusts, you can't use federation. That's one unfortunate limitation which will hopefully be resolved in the future.

So, the roadmap for Pike. We've got a lot of good stuff happening, actually. Python 3.5 support — that is more or less done, I believe, right, Rico? Yeah. That's something all projects are working on this cycle. The Neutron segment resource has landed, so you can use routed networks if that is your thing; that is definitely coming in Pike. Neutron VLAN trunk ports — so if you're using 802.1Q VLANs on Nova servers: this is implemented in Neutron, but there's no Heat resource for it yet; that is in review at the moment and should have no trouble landing. Custom resources managed by Mistral workflows — this is an interesting one. Basically there'll be a — I can't remember the name, I think it's "external resource" — where you can define what happens to the resource on create, update and delete with a Mistral workflow. So you'll have a create workflow, an update workflow, a delete workflow. You don't have to define all of them — they're all optional — but you can define several workflows. So if you have some external thing that's kind of complex and needs to be done in a certain order, or uses APIs that are external to OpenStack that Heat can't access, and you're not an operator so you can't put it in a custom plugin — then you can do this, again entirely from user space, by calling these Mistral workflows. That is in review at the moment.
There's a little bit more work to do on it, but it is coming along.

A couple of new intrinsic functions. First, make_url. One of the things I've noticed is that just about every template has an output where you concatenate together IP addresses and paths and so on to make a URL, and it's kind of a pain: if your server has an IPv6 address, you have to make sure you put brackets around it, but if it doesn't, you have to make sure that you don't — and all that stuff. The make_url intrinsic function basically handles all of that for you, so it's a lot tidier and you avoid the massive string concatenation. Second, list_concat_unique: that basically takes two lists, gets all the unique items, and puts them into a single list.

Also coming up: some internal architecture work, which probably only really affects people who are writing custom template plugins — and I suspect there aren't too many of those in this room. But if you are, the good news is that from now on it will be very clear which parts of the plugin interface you are allowed to override or call from your template plugin, and which parts are internal, hidden and subject to change at any time. The downside is that if you are using some of that internal-only stuff and didn't realize it, your template plugin will break. But again, that's only for operators who deploy custom template plugins. At some point in the future we would like to make the same kinds of improvements to the resource plugins, but that will be a much longer process, and we'll give people plenty of time to make sure their existing resource plugins don't break.

Stable attribute values. This is one that we probably should have done a long time ago.
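A sketch of the two functions in use, with the pike template version they ship in (the server properties are placeholders):

```yaml
heat_template_version: pike

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros      # placeholder
      flavor: m1.tiny    # placeholder

outputs:
  api_endpoint:
    # make_url adds the brackets around IPv6 hosts automatically
    value:
      make_url:
        scheme: http
        host: { get_attr: [server, first_address] }
        port: 8080
        path: /v1/status
  merged_list:
    # concatenates the lists with duplicates removed
    value:
      list_concat_unique:
        - [a, b]
        - [b, c]
```

Compare the make_url output with the old way — a str_replace or list_join over the address, port and path — and it's clear why this is tidier.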
So right now, if you have a resource and, say, an output in your template which grabs an attribute from that resource — like the IP address or something — when you do a stack show and try to get the outputs, Heat will go and make calls to Nova to get the latest IP address and return that to you. Which is, one, silly, because the IP address shouldn't change after creating the server, and two, really slow if you have a thousand servers. Projects like Magnum — yes, and Sahara — create a large number of Nova servers, and when they go and ask for all the IP addresses of all their servers, it takes them several minutes to get them all. So what we're going to do, from Pike onwards, is: when we create the resource, we will get all the attributes that are referenced in outputs or in other resources, and we'll store the values in the database — and we'll never need to go get them from Nova again. That means that when you show outputs, it should be really quick from now on. The downside is that if you were relying on getting live data that way — if you were changing stuff behind Heat's back and expecting that to show up through the Heat outputs — well, first of all, don't do that, and second of all, you will break. For that reason, we're only doing this for convergence. If you have the convergence engine turned on, you will get the new behavior; if you're still on the legacy path, you will get the legacy behavior. And just to remind you: convergence is the default starting with Newton.
So if you haven't disabled it since Newton, new stacks will now be created with convergence. That's the other thing: convergence status goes with the stack, not necessarily with the configuration option — but once the configuration option is on, new stacks get created with convergence. And we have a migration tool that you can use to migrate legacy stacks over; the heat-manage command has that tool.

Memory and performance improvements for convergence: we talked about this a little bit before with the graph — which, again, was not a graph of convergence — but we're down from double the memory usage to 15 to 25 percent more. There was a lot of database work we've done: convergence was pretty heavy on the database, and we've dialed that back a lot. Part of it is just that the architecture does use the database a lot, but it should be considerably less overhead on top of what we were already doing than it was in Newton and Ocata. Large stacks had some problems — especially with software deployments, when you've got a large number of software deployments all going on to one server at the same time, there was a lot of contention in the database, and some of the retry logic we were doing in there was not working very well. That should be improved in Pike, so you should see fewer problems with database contention on large stacks.

And the last one, which Rico kind of touched on before: a lot of the code is now in place so that — typically, when you update a Heat stack, Heat compares the new template you're giving it against what it thinks the current template is. So we're going to add an option, a command-line option, so the user can decide: when I do an update, do I want to compare against the previous stack, or do I actually want to compare with reality? In the latter case, Heat will go out and ask the other services: hey, what's the status of this resource?
And it'll compare against that, instead of comparing against what it thinks it did last time. Again, it's optional for the user. It's not super well tested yet, but it's going to be available in Pike, so you can start trying it out. There will probably be a few bugs to iron out, but it's something we'll hopefully keep moving towards in the future — and once we have it, it's going to open up some of the other things we wanted to do with convergence, the phase-two kind of stuff, where we're not necessarily continuously updating things, but we can improve the architecture significantly, to be even more efficient than it is.

So, obviously we appreciate any help. There are a number of ways to contribute, and we'd be happy to see anyone in this room who wants to contribute. Come hang out on IRC; review specs and blueprints, raise blueprints, submit patches, do reviews. Yeah — we really need reviewers, and we need users, we need operators. We need anyone who wants to help, with any crazy ideas. That will be fine — we won't bite; we're used to crazy ideas. So give us your ideas — we'll use all the crazy ideas we can get.

So, last slide. All right — questions? Who's got questions? Yes — could you use the microphone so others can hear you? This session is being recorded.
So please talk into the microphone, so that people watching at home can hear.

Question: As an operator, you say that this requires services like Zaqar and Mistral, and as an operator I use the director. So my question is: when do you think this will be available in that platform, for using auto-healing?

Excellent question. First of all, it's not required to use Mistral and Zaqar — but if you want to make that available to your users, then yes, obviously. I don't have an answer on the timeframe for that; it is something we're looking at. As far as director goes, we don't have an announcement. Both of those services are being used in the director undercloud, I believe — the stuff is there to install them — but it's not a supported part, and I can't say when that will happen.

I think our time is up, so we'll close this session and take all the other questions after the session. Thanks, everybody, for joining, and I hope to see you throughout the week. Thank you.