OK, I guess we'll get started. The eagle-eyed amongst you will have noticed that I'm actually the wrong Steve. Steve Dake, who until very recently was the PTL of the Heat project, was assigned to do this session. For those of you who don't follow the dev lists, Steve unfortunately had to step down from the PTL role quite recently, so I'm standing in on his behalf and I'm going to present this session. Steve's stepping down is obviously a loss to the team. [inaudible] I'm just going to go over a summary of the history of Heat, an overview of what Heat is and what it does. I'm not going to do a massively technical presentation of that stuff, because I know a lot of you are already aware, but I just wanted to make sure that everyone in the room is aware of our goals and of where we sit in the OpenStack ecosystem. I'm going to cover the main features which we've implemented for Grizzly and give you an idea of the roadmap for Havana as we understand it at the moment.

I'm really excited having been in attendance this week. We've had some really great design sessions, and some really great positive feedback from a number of other companies. It's sounding like we're going to get contributors from other companies on the team, and we've had a relatively small team so far. All but one of the core devs at the moment are from Red Hat, and it's going to be enormously beneficial to have a more diverse set of contributors, and hopefully we're going to have more manpower to do more cool stuff on this project. I think we've achieved a lot in the last six months, in fact in the last year, but I'm really, really excited now about what we're going to do in the Havana cycle.

So here's just a brief overview of the history of Heat. We only started a year ago, and I think we've managed to achieve really quite a lot in that year. We've gone from having nothing to having some software which actually works and is useful to people, and I think to reach the point where people can actually make use of this software, and it can be considered for inclusion as part of the coordinated release of OpenStack, is something which we can be justifiably proud of. We've got 25 people who have contributed to Heat over the past year. There are nine core members, and I think there are probably about five or six of us who have been contributing on a regular basis recently. There are about 1,900 commits making up the history in our repository, and roughly 34,000 lines of code; that includes tests, but it's not including the openstack-common code and it's not including comments. So that's a bit of a wishy-washy number, but it gives you a rough idea of the amount of stuff we've churned out.

So we have this concept of resources, which is what you define in your stack template. A resource is maybe an instance, or maybe some other logical abstraction, which allows you to build up your picture of your infrastructure. And we've implemented 37 different resource types. So the takeaway from that is we're not just about orchestrating launching instances; we can orchestrate all of the core services. We integrate with all of the core services. Not Ceilometer yet, but I'll come to that in a minute.
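To make the resource concept concrete, a minimal stack template in the CloudFormation-compatible syntax (in its YAML form) looks roughly like the following sketch; the image name, flavor, and property values here are illustrative, not taken from the talk:

```yaml
# Minimal illustrative stack template (CloudFormation-compatible
# syntax, YAML form). Image and flavor names are placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Description: One instance with a volume attached
Parameters:
  KeyName:
    Type: String
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: fedora-18-x86_64      # hypothetical image name
      InstanceType: m1.small
      KeyName: {Ref: KeyName}
  MyVolume:
    Type: AWS::EC2::Volume
    Properties:
      Size: '1'
      AvailabilityZone: nova
  MyAttachment:
    Type: AWS::EC2::VolumeAttachment
    Properties:
      InstanceId: {Ref: MyInstance}  # Ref creates a dependency edge,
      VolumeId: {Ref: MyVolume}      # so Heat orders the API calls
      Device: /dev/vdb
```

Each entry under Resources maps to one of those resource types, and the Ref links between them are how the engine knows what depends on what.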
So Heat provides a way of consuming OpenStack services in a way that's simple. You don't need to know anything about all of the underlying APIs. You don't need to learn all the details of all of the underlying services, and it makes OpenStack core services consumable to a wider audience. It allows you to do deployment of complicated infrastructure and applications in a simple and repeatable way. And it aggregates the APIs underneath into a single template: a versionable text file.

This is just an overview of where we sit in the OpenStack project ecosystem. You can consider us as being something which is layered above the core services: Nova, Glance, Swift, Quantum, Cinder. We talk to those APIs, and we provide a template interface and a REST interface above that. Horizon can talk to Heat, and there were some sessions with some really great work, so hopefully quite soon we're going to have the capability to interface with Heat through Horizon, because that's a gap which we have at the moment. And at the moment we have a CloudFormation-compatible API and the native REST API.

A really hot topic this week has been the template language we use. The first cut for Heat was, hey, let's implement the CloudFormation syntax. And that's worked pretty well for us, because it's allowed us to have some very tangible near-term goals. We've had documentation for the way Amazon do things, which we've sometimes been able to look at and think, hey, this is the way we're gonna do it. But there seems to be a real need in the community, and a real desire, to do something which is gonna provide a superset of that functionality. And so I'm really excited, and I think that we've made some great progress on that this week. So there's gonna be some good work in the next few months in that area, I predict.

So, as I've already mentioned, you have an abstract configuration of all the core services in a single template which can be treated just like source code, and you can deploy your infrastructure via this template. Those templates can also be nested, which is a really powerful feature, so you can build up building blocks of your infrastructure, layer them, and compose your templates such that you don't need to have one enormous template; you can treat things in a modular way. And again, it allows you to start treating your cloud deployments a lot more like code: you can version-control these templates which you're producing.

Another couple of really cool features of Heat: we support basic HA functionality, and we also support autoscaling. Again, we've had some great discussions around this area, and I think there's potential for a lot more innovation in both of these features. We're gonna be moving to integrating with Ceilometer very soon, where we'll be able to make use of much more advanced alarm features. And I think there's gonna be potential for HA and autoscaling to add real value beyond just orchestrating the initial deployment. We also support deploying metadata updates to the various resources after the initial deployment. This is most useful if you've got instance resources that need to periodically pull some reconfiguration data down, and we've got instance agents which allow you to do that. I know the TripleO guys are making heavy use of that. Is Clint here? No? Okay, so Clint Byrum, a recent new core member working on the HP TripleO team, is looking at implementing rolling updates based on the instance metadata.
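As a hedged illustration of that metadata mechanism: roughly, a template can attach metadata to an instance resource, and a polling agent inside the instance (such as cfn-hup from the heat-cfntools package) watches Heat's metadata server and reacts when a stack update changes the metadata. The file contents and names below are a sketch, not taken from the talk:

```yaml
# Sketch: configuration delivered via resource metadata. After a
# stack update changes this metadata, an agent polling from inside
# the instance (e.g. cfn-hup) can notice and reapply configuration.
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          files:
            /etc/app.conf:
              content: "workers=4\n"   # change this via a stack update
    Properties:
      ImageId: fedora-18-x86_64        # hypothetical image name
      InstanceType: m1.small
```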
So, summary: there's a whole load of cool stuff which we do at the moment, there's an enormous pile of blueprints for Havana, and I think we're gonna be adding a whole load of new and exciting features. I'm certainly excited about what we're gonna be doing in the next few months.

I'm not going to go into too much detail on this slide, but I just wanted to give you an overview, if you're not already aware, of what our internal architecture looks like at the moment. This is a pretty basic view of what we do, but it gives you an idea that it probably looks quite similar to many of the other OpenStack projects. If you're familiar with, say, the way Nova looks internally: we have a number of APIs which can scale horizontally, and we talk via RPC to an engine which is basically doing the orchestration scheduling and the core logic of what Heat is actually doing. Very soon that will be scalable as well, and so we're gonna have this scalable service which is in many ways very, very similar in topology to existing projects. We reuse a lot of stuff through Oslo, and from day one we've tried to do things in the OpenStack way and make sure that we can encourage contributors from other projects, because we're hopefully gonna have a fairly familiar code base for them.

So this is another way of looking at what we're doing. Heat takes in a stack template, we process that, generate a dependency graph, and then we make a series of REST calls to all of the underlying services. You can see there's a missing line there to Ceilometer; that's a piece of work which we're planning to do during Havana. Angus is also core on the Ceilometer team, and he's been doing some great work driving the features that we need in Ceilometer, which is gonna allow us to take a big step forward from our current basic metric and alarm evaluation logic.

So this is to give you an idea of the resource types which I mentioned a minute ago. We implement all of these at the moment. You'll notice that there are two sections. We've got the AWS-compatible section, and this is kind of what we started with. We wanted to say, hey, we can launch this CloudFormation template on OpenStack, which in itself is a pretty cool feature in my opinion. But then more recently we've started adding native resource types, which in most cases are a fairly thin wrapper above the underlying OpenStack services, and this is gonna provide an even more direct way of consuming the underlying APIs, without having to know all the details of the underlying API, and with all of the advantages of having a single template which I mentioned earlier on. So I anticipate that this OpenStack-native section is gonna grow substantially in the next few months, and hopefully that's gonna go hand in hand with some of the template rework which we've been discussing this week, such that you'll be able to deploy any topology you like without having to care about the AWS template syntax or the AWS resource naming, since that seems to be something which people want. We've paid attention to that this week, there have been some great discussions, and so I'm confident that we're going to be able to make good progress on that in the Havana cycle.

So this is just to give you a summary of what we've been doing through Grizzly. As I've already mentioned, it's really quite a small team of guys hacking on this code.
There are probably five or six of us who are working on this stuff regularly, and I think that we've managed to achieve really a lot of work in the last few months. We've fixed 144 bugs. We implemented 19 blueprints. We added a new native REST API. There's a fairly complete set of Quantum resources, support for YAML templates, and python-heatclient. Again, this is about being similar to the way other OpenStack projects work: we provide python-heatclient, which gives you CLI tools and an API such that you can talk to our REST API. We've got much-improved virtual networking and VPC resource support. This is an area which we're still working on, but we're making some big steps forward. We now support the idea of updating a stack which you've already deployed, and we're improving that. Our first approximation was that you just replaced the resource, but in many cases that's not what you want to do, and so we're still working on and improving that, such that you can do much less disruptive updates to your topologies once you've deployed them. We've implemented support for stack rollback, so you can automatically roll back a failed update. And again, I see this becoming much more advanced functionality as we go through Havana, with the rolling updates work that is going on and the possibility of things like stack snapshots. The sky's the limit with this stuff. There's a whole load of really cool features that we can look at adding, and the point is that we're doing what we've been doing since day one, which is to start with some very simple and basic functionality and then build it out. That's something which we've been concentrating on, and I think that we've made really good progress. We've got a native Swift resource type. We've got much-improved security, and that's again something we've had sessions on this week; we're going to be able to make good use of some of the work that's been going on in Keystone. For example, the trusts work is going to be extremely useful to us. And for a lot of these features, there's going to be a second phase in Havana. And so this takes me on to the Havana roadmap. Zane, who is another core dev on the Heat project, has offered to come up and walk us through this. So Zane, do you want to?

Thanks, Steve. Steve literally stepped up one week ago, I think, to take on the PTL role. So, four days ago. Four days ago. He's done a great job this week, and thanks for that. But we're big believers in the Heat team in kind of spreading the responsibility around, so that's why I volunteered to help out on this section. So we've got a bunch of very exciting features planned for Havana, and I'm just going to run through them very quickly.

Parallel resource creation. At the moment when we orchestrate your template, we are creating each resource one at a time, basically. And that's going to be very slow, especially if you've got 100 Nova instances to spin up. If they don't have dependencies on each other, then we can create those in parallel. And that work, I'm pleased to say, will be going in next week, I think. So that is imminent.

Quantum support. Steve Baker, front right here, has been working hard on VPC support. So the VPC resources from AWS will hopefully be progressing a little bit more, and those are going to be evolving in concert with features that are getting added in Quantum. So that's coming along. By the way, we already have native OpenStack Quantum resources.
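For example, a native network definition looks roughly like this; it's a sketch from memory of the Grizzly-era OS::Quantum resource types, so the property names may differ slightly from any given release:

```yaml
# Sketch of OpenStack-native resource types: a thin wrapper over the
# Quantum API, with no AWS naming involved.
Resources:
  app_net:
    Type: OS::Quantum::Net
    Properties:
      name: app-net
  app_subnet:
    Type: OS::Quantum::Subnet
    Properties:
      network_id: {Ref: app_net}   # native resources compose the same way
      ip_version: 4
      cidr: 10.0.3.0/24
```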
So if what you're doing maps directly to what Quantum provides, we can support that already.

Rolling updates. This is one that Clint is very big on, with TripleO. Basically, Amazon implemented something similar to this recently. When you're rolling out a very large deployment, and especially when you're updating a very large deployment, you want to be kind of not doing the whole thing in one hit; you want to be testing as you go, making sure that it's still working. So we're going to be looking at that and implementing a solution for it.

The new template language. That's been the big point of discussion this week. Our new friends from Rackspace have come along with a lot of work that they've been able to contribute, and a lot of good ideas. Basically, the message we're hearing loud and clear this week is that people want to orchestrate not just talking to the OpenStack services, which is what we've been concentrating on so far; they want a solution where their whole application can be defined in the template. And that's certainly what we're going to be working towards. Adrian and his team from Rackspace are going to be helping out with that work, so we're very excited about having them on board.

They're also going to be helping out with the autoscaling API, so that's cool as well. Right now, I guess there are two things to be done with autoscaling. Number one, as Steve mentioned, we currently have a kind of hacked-up internal implementation of autoscaling. In the Havana cycle, the event part, the alarm part of that, will be moving to Ceilometer. So we're going to be implementing integration with Ceilometer for the events, and Heat is only going to be handling the scaling part. The other thing that's planned is that we're going to add an API to that, so that autoscaling is available to everyone, not just people using Heat templates. So we're looking forward to working with Adrian and his team to implement that.
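To ground the autoscaling discussion, this is roughly what the current, pre-Ceilometer setup looks like in a template. It's a hedged sketch using the CloudFormation-compatible resource types; the thresholds, names, and image are chosen purely for illustration:

```yaml
# Sketch: a scaling group plus a policy, driven by an alarm evaluated
# against metrics pushed from inside the instances.
Resources:
  LaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: fedora-18-x86_64        # hypothetical image name
      InstanceType: m1.small
  WebGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      AvailabilityZones: [nova]
      LaunchConfigurationName: {Ref: LaunchConfig}
      MinSize: '1'
      MaxSize: '5'
  ScaleUp:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AdjustmentType: ChangeInCapacity
      AutoScalingGroupName: {Ref: WebGroup}
      ScalingAdjustment: '1'
      Cooldown: '60'
  CPUHigh:
    Type: AWS::CloudWatch::Alarm
    Properties:
      MetricName: CPUUtilization       # reported by an in-instance agent
      Namespace: system/linux
      Statistic: Average
      Period: '60'
      EvaluationPeriods: '1'
      Threshold: '50'
      ComparisonOperator: GreaterThanThreshold
      AlarmActions: [{Ref: ScaleUp}]   # fires the policy when breached
```

With the Ceilometer integration, the alarm evaluation in the last resource is what moves out of Heat, while the group and policy stay.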
OK, what else have I got? Stack update improvements. I think a very big part of orchestration is not just creating your stack in a replicable way, but also being able to update it. You can have your template in version control and you can see what you updated. And rather than having to go through all the APIs for every single update you do, and invent a new way of making sure everything happens in the right order and all the right things happen, Heat can handle that for you. At the moment, what we have working is some pretty basic stuff: there are a lot of resources that have to be destroyed and recreated when you update a stack. So we're going to be looking at trying to move as many of those as possible over to in-place updates, where you don't have to destroy the resource to recreate it, so there'll be less interruption.

Security. There are a couple of things there. One is that Heat, as an agent for the user, has to have a long-running relationship with some of these resources; autoscaling is a good example. Heat has to be sitting there, and when an autoscaling event happens, you need it to spin up a resource. You don't want it asking the user, hey, should I spin up a resource? Can you give me your credentials again? I lost them. The good news is there are some new features in Keystone that will allow us to do that in a very secure way, and that will be happening in the Havana cycle. The other thing is, when you have agents in your instances which are reporting data back to Heat, or indeed Ceilometer, how can we secure those and make sure that data is signed, without exposing any of your user credentials to anything in the instance? There'll be some improvements coming there as well; Steve Hardy's been working on that very hard.

Native resource types. As you saw on the previous slide, we have a few OpenStack-native resources, and we're going to be implementing more of those. We're going to be trying to cover everything in OpenStack, and we'll be surfacing features that are unique to OpenStack and aren't necessarily Amazon features which you see in the Amazon resource types. Our philosophy on this is that all the resources are pluggable right now, so it should be very much up to the operator of the OpenStack cloud to decide which resources they want to deploy. If you want to deploy a cloud that doesn't have any Amazon resources, you should be able to do that. And if for some reason you want to deploy one that doesn't have any OpenStack-native resources, you should be able to do that too. We're going to be implementing features to make that, probably, a configuration option, so it'll be really easy to just turn off all the Amazon stuff if you so desire.

Stack suspend and resume. This basically came from some folks who are actually using Heat right now, so that's exciting. What they want to do is spin up a stack and have it kind of ready to go, but not necessarily running and using resources in the cloud. So we're going to be looking at a kind of suspend/resume mechanism, where you can maybe shut down your Nova instances but spin them up again very quickly.

And finally, Load Balancing as a Service is now, I believe, available in Quantum. So we'll be implementing a native resource type for that, and quite possibly also an AWS resource type. Right now we have a solution for load balancing which consists of us spinning up a Nova instance with haproxy on it, so that will move over to using the Load Balancing as a Service API. And we're also hoping that Database as a Service is going to make it into OpenStack sometime; we have a similar solution for that, which involves spinning up another Nova instance, so hopefully we'll be able to move to that API when it becomes available. Our policy on that is kind of: if something moves into incubation or integration, we'll probably support a plug-in, but there's nothing to stop people from developing plug-ins out of tree as well. So.

Thanks, Zane. Yeah, do stay up here; we'll do a Q&A in a minute. Guys, do you want to come up here? So, the only point to make about the roadmap is that this is not necessarily an exhaustive list. I've tried to pick some of the features that I'm aware of and that I think are likely to happen during the Havana cycle. If there's stuff that's important to you, your company, or your use case for Heat, talk to us about it, raise blueprints, come and talk to us on IRC. We're very open to new ideas, and we want feedback from people who actually want to use this stuff. I think we've got some really good capabilities at the moment, and we're really very happy to engage in discussions that shape the direction we go in. So, come and talk to us. In closing, please get involved: if you're interested in trying Heat, come and talk to us.
If you're interested in writing some code or some documentation, even better: write documentation, please. Then come and talk to us. That's another thing we're planning to work on; yeah, that probably should have been on the roadmap. We're aware there's some good stuff in the wiki, but it's hard to keep it up to date, and there's not that much in our doc tree. So we're going to be very much working on that, and if you'd like to come along and help us with that, or code, or anything at all, come and talk to us. We're very open to contributors. So, yeah, please come and talk to us, and we're happy for you to get involved.

What I'd like to do now is introduce the core devs. Did Clint turn up? No? Okay, so there's also Clint Byrum, but the four of us here are Steve Baker, Zane Bitter, Steve Hardy, and Angus Salkeld. We've all been working on Heat for quite a while, and we're all excited, I would say, about the project and what we're going to be doing in the next few months. So what I'd like to do now is just open up for Q&A. There's a microphone here, so if you can get to the microphone, that's probably going to help us hear what you're saying.

Could you expand a little bit on the support for an extended language for the template, and on having external applications in the ecosystem?

So, I mean, this has probably been the hot discussion of the week. We knew that there were people asking for this, but it's probably taken us a little while to really understand the driving requirements behind it. And it wasn't just, hey, we want something that doesn't look like a CloudFormation template; there are actual abstractions that we don't currently support which are useful to people in real deployments of stack templates, or whatever terminology you want to use. So the direction I think we're going to end up going is defining a superset language, which is probably going to be based initially on some of the DSL work which Rackspace has been sharing with us. And we've been talking to the guys who are interested in TOSCA. We need to figure out a way of basically having a superset language which allows you to do a non-lossy conversion from whichever template format you like into a Heat-native format. So we're not interested in reinventing a completely new syntax; that's going to be a nightmare.

So, is it going to be TOSCA-compatible?

Heat, at the moment, is not going to have a TOSCA interpreter inside it. Not for Havana. We have talked about this; that may be a nice thing to do at some point, but I think we need to try and keep our goals achievable. We've only got a few months to work on this stuff, and it's still going to be a huge amount of work defining an internal language and modifying our parser logic such that everything still works. We're kind of tied to supporting the CloudFormation template syntax, because we've got users who rely on that stuff. So we're going to support that, and we're going to, at the same time, have a parallel language which is also supported, and that will be a native syntax. That should allow you to convert from the DSL to the Heat format, and you should be able to convert from TOSCA to that format via some fairly trivial script or translation layer, which will be outside of Heat for the time being; we can revisit that discussion at the next summit if need be. That's the way I see it developing. Has anyone got anything to add to that?
Yes, I mean, the idea for now is not that we're going to implement TOSCA, but that we're going to make sure all the primitives are there so that we can implement something equivalent to TOSCA, and you'll be able to translate. Thanks.

Hi, sounds like some exciting stuff. One of the areas that's near and dear to my heart is performance monitoring, and I wonder if you've given any thought to doing performance monitoring of Heat itself. So when Heat's off doing its thing, how many system resources is it consuming? Is it running out of resources? Do I need bigger boxes, et cetera?

Yes, so at the moment we've got a very basic, we call it a CloudWatch implementation... Of Heat itself? Of Heat itself. So I think the short answer is no. But Heat is essentially a translation tool, so at the moment all we're doing is converting from one API to another; we're not doing a lot of work ourselves. Having said that, the biggest resource consumer, I would have thought, would actually be the database. We can certainly talk about it offline, but it's something we can certainly look at. Yeah, I think this reflects a general theme, which is that we've gone from developing this to actually looking more at the problems of real deployments. Yeah, because sometimes it gets kind of interesting if you run for, say, 10 seconds and don't realize that you're consuming all of one of your CPUs. You know, maybe you need more threading in there somewhere. Okay, thanks.

Any more questions? If you can make it to the mic, go ahead.

A question about the autoscaling. If I have a legacy application which has its own metrics, for example for counting its users — I mean, not something that Ceilometer is going to be able to tell me, but something proprietary — then I would have some scripts I'd need to run, some magic I need to do with the application, to be able to actually autoscale it: change some configuration, something like that. Is that part of the picture?

Yeah, so our current model is to collect metrics from inside the instance via an instance agent that pushes to our CloudWatch-compatible API. When we move to using Ceilometer as a metric and alarm source, I imagine there's going to be quite a lot of data that you can collect at the hypervisor level, which means that you don't need to worry about stuff like instance credentials.

Can I just use some of my own metrics?

Yeah. So I would expect us to also support the ability to inject metrics, events, and alarms via an agent script, in a similar way to what we do at the moment. Angus, have you got anything to add? Yeah, so the idea is not to say that you must use metrics which are produced by default by Ceilometer, but to give a flexible mechanism for doing this stuff. And I think the new autoscaling guys have been keen on quite a pluggable system, so I think we want to be able to work with that. Yeah, we're moving in the direction of making it more configurable, not less configurable, for sure.

What's your viewpoint on other systems, other workflow engines and orchestrators, and how they might tie in to Heat for approvals or change-control records and things like that?

So, one thing that I don't think Heat wants to have to worry about is business process. You could layer something which handles that kind of stuff on top of Heat, but I think what we need to worry about is application topology, and defining that concisely and in the simplest possible way. In my view, you don't want to start mixing business process into that logic; it's something that should be very much a layer above, and outside the scope of Heat. There was a discussion yesterday with some folks, who may or may not be here, about Workflow as a Service. I'm not sure that Heat is the place for that.

I think it's more the hooks, for instance, into autoscaling: that you might be able to throw the logic off elsewhere to do the processing, and then come back in and say, yes, I do want to scale, or no, don't.

Yeah, I was discussing this with Adrian last night, and there is a possibility that for autoscaling, at least, there could be some hooks in there. But I guess the challenge is going to be defining an interface that enables that kind of stuff to sit on top, and if you've got requirements in that area, let us know. I don't think we would want to bake that kind of logic into the core orchestration engine.

Yeah, I think this is the whole thing about getting involved. Don't just say, oh, it's not there, and then move on. Just come to us and say: I have a real use case, this is my problem, Heat doesn't quite yet provide what I want. And either contribute a patch, or add a blueprint, and we'll get to it. That's the way to interact, right?

So yeah, speaking of patches: previously we've been a core team doing most of the development, with a few external contributions. I'm going to assume at some point we're going to have this avalanche of external contributions that we're going to have to review, and it'll be good, when that happens, to have some more people ready to promote to core. Just like any other project in OpenStack, the way to do that is to have a history of good-quality patch contributions into Heat, and also to just do reviews of what's there. Because in theory we're in this transition to being a project like most of the other projects, where there are a lot of external contributions and a core team who have to spend a good chunk of their time just doing reviews rather than core development. So yeah, we have that transition to go through.

I was just going to add a little more to that. There was a lot of neat discussion around this Workflow as a Service, or putting in declarative tasks in some way that's safe. I think that addresses several of the questions people have raised, but you also have the mechanism that exists already, where you can just drop a webhook in as part of the model for what's happening. It's a simple model, but very powerful; you can achieve a lot of these things already.
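One concrete example of that webhook-style mechanism, as it already exists, is the wait-condition pair from the CloudFormation-compatible resource set. This is a hedged sketch: the AppServer reference stands in for a hypothetical resource defined elsewhere in the template.

```yaml
# Sketch: the wait-condition pattern. The handle's URL can be handed
# to anything outside Heat, which then signals back success/failure.
Resources:
  Handle:
    Type: AWS::CloudFormation::WaitConditionHandle
  Ready:
    Type: AWS::CloudFormation::WaitCondition
    DependsOn: AppServer        # assumes an AppServer resource exists
    Properties:
      Handle: {Ref: Handle}     # Ref yields a presigned callback URL
      Timeout: '600'            # seconds to wait for the signal
      Count: '1'
```

Stack creation pauses at the wait condition until something POSTs a signal to the handle's URL, so external logic can be interposed at that point in the orchestration.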
Yes, I mean, yesterday there were some discussions in terms of lifecycle operations, and ways we could provide interfaces such that you can just register for a callback when a certain event happens in the stack lifecycle. So that's going to enable these kinds of things to be layered on top of Heat quite easily. It's a question of understanding the use cases and making sure we provide sufficiently configurable interfaces.

So, the question was auto-discovery of instances: something like existing guests, those kinds of things. At the moment, the instances which we know about are only the ones which have been defined via a stack template. We don't support having existing instances pulled into a Heat stack; you need to define them through the template.
So you're talking about something where you have all your stuff running already and you try to turn that into a template. There are probably tools that can snapshot your configuration; I mean, it's possible in theory, not that we do this, to grab a list of all the resources that you're using, but it's very difficult to work out what the relationships between those should be. Yeah, thank you.

Hi, I'm Clint, by the way, good morning. In TripleO we actually have a use case for something similar, which is: since we're bootstrapping OpenStack itself, there is a moment where Heat hasn't expressed its own existence. So we are looking at sort of maybe having an API for saying, oh, there actually is an instance over there, this is its ID, and it's in a stack, because we would need that. So there might be some play there.

So, we're nearly out of time. Are there any more questions before we finish up? Okay, well, thanks for listening.