So I'm Marie from the OpenStack Foundation. I'm joined by PTLs Steve Baker, who's going to review Orchestration, codename Heat, and David Lyle, who's going to review Dashboard, codename Horizon. Today our speakers will each have about a half hour to walk through their latest project updates for you, followed by about five minutes of your questions, give or take. And if you have any additional questions, we'll see if we have time at the end as well. So let's get going. And I'm going to turn it over to Steve. I'm going to, hold on a second, I'm going to turn this into presentation mode for you, Steve, and then you can get going. Okay, so I'm going to do the project update for the OpenStack Orchestration program, which includes the Heat project. I'm Steve Baker. I'm the Orchestration PTL for the Icehouse cycle. So if we just go to the first slide, there's an overview of the Havana release code activity. We had 42 blueprints implemented, almost 300 bugs fixed, over 800 code commits from 64 people, and almost 4,000 code reviews. So this gives a pretty good indication of how active this project has been in the Havana cycle, even though it's quite a young project. Now if we just go to the next slide, there's a vendor breakdown of contributions from the Grizzly cycle. As you can see, it's pretty obvious here that Heat started out as a project sponsored by Red Hat. The intention was always that it wouldn't be this way forever. So if we go to the next slide, we have the same chart (this is sourced from Stackalytics, by the way). This is the Havana chart. And as we can see, the contribution from Red Hat has gone down to 61%. We've got some major contributions from the likes of IBM, Rackspace and HP, and also quite a long tail of contributions from many other companies. And while the contribution from Red Hat in relative terms has gone down significantly, in absolute terms it's actually gone up.
So this is an indication that the project is growing, and it also has quite a large diversity of contributors from a number of vendors. So I think this indicates that, from an open source culture point of view, the project is in a really healthy position. So now I'm going to move on to cover some of the features that were in the Havana release. First up, we've got concurrent resource operations. Previously when you launched a stack, each resource was created serially, even when there was no dependency between any given pair of resources. So some stacks would take a long time to launch. But now, whenever there's no direct dependency between any given pair of resources, the resources are created in parallel. This can be a huge performance improvement for some stacks. Next we've got the provider and environment abstractions. This gives you a way of changing the way a stack behaves without actually modifying the template. Instead, you provide a separate environment which specifies different behaviors for a given resource type. And this environment can be specified at the cloud operator level or at the individual user level. So a cloud operator might want to provide some different resource types that provide slightly different behaviors from the standard resource types. Or they might want to override the default behavior of a core resource type to integrate some particular behavior of their cloud. And similarly, when any user launches a stack, they can also provide an optional environment which can give those same overrides to different resource types. Now when you override a resource type or create a new resource type, you're actually authoring it as a stack template itself. So for example you could override the compute resource type as a stack, give the resource type some different defaults, and add some behaviors. So it's quite a powerful abstraction, but only for solving certain problems.
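As an aside, the resource-type override idea described above can be sketched in a few lines. This is a toy Python sketch, not Heat's actual internals: the file path and the resolver function are illustrative, but the `resource_registry` key is how real Heat environments express the mapping.

```python
# An environment as it would appear after parsing an environment YAML
# file (the provider template path here is hypothetical).
environment = {
    "resource_registry": {
        # Map the standard compute type to a provider template that
        # wraps it with site-specific defaults and behaviors.
        "OS::Nova::Server": "file:///opt/templates/custom_server.yaml",
    }
}

def resolve_resource_type(type_name, env):
    """Return the provider template registered for a type, or the
    type name itself when no override is registered."""
    registry = env.get("resource_registry", {})
    return registry.get(type_name, type_name)

overridden = resolve_resource_type("OS::Nova::Server", environment)
# An unregistered type falls through unchanged.
untouched = resolve_resource_type("OS::Cinder::Volume", environment)
```

The same lookup happens whether the environment came from the cloud operator's global configuration or from the user at stack launch, which is what makes the abstraction composable.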
Next we have full Ceilometer integration for metrics, monitoring and alarms. Previously, metrics and alarms were implemented as a lightweight compatibility layer for the CloudWatch API. That still exists, but now we have a full integration with Ceilometer. So any stacks which need to trigger actions on alarms, such as autoscaling, can now be driven from metrics that Ceilometer collects. It's possible to push custom metrics from instances to Ceilometer, and it's possible to configure a Heat stack to perform actions based on Ceilometer alarms. We have some new stack actions: suspend and resume. So now an entire stack can be suspended, and for the resources which support it, whenever a stack is suspended those resources will go into a suspended state, whatever that means for that particular resource type. So for example a Nova server will go into a suspended state, and when the stack is resumed the server will be resumed as well. Finally we have a standalone mode. It's actually possible to deploy Heat outside of a cloud but still be able to launch stacks on any single arbitrary cloud. This has a couple of obvious use cases. One is just development, for Heat developers: you don't have to have your own full cloud locally just to develop Heat. But another obvious case is if you want to have your own local Heat deployment which deploys to an OpenStack that doesn't have Heat support yet, or just any external OpenStack; you can do that with standalone mode. So if we go to the next slide, we've got the usual collection of new resources in each new release. We've got a Cinder volume attachment. There are some new resources for Neutron, including load balancing, firewall and VPN as a service. There's a new native Nova server resource, which is great because now it's possible to write templates without having to specify AWS EC2 instance resources whenever you want Nova compute.
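The cascading suspend behavior described above can be sketched as a toy state machine. This is illustrative only; the class names and state strings loosely mirror Heat's resource status conventions but are not its real implementation.

```python
# Toy sketch of stack suspend: every resource that supports
# suspension moves to a suspended state; others are left as they are.
class Resource:
    def __init__(self, name, supports_suspend=True):
        self.name = name
        self.supports_suspend = supports_suspend
        self.state = "CREATE_COMPLETE"

    def suspend(self):
        # A resource defines what "suspend" means for itself;
        # unsupporting resources simply ignore the action.
        if self.supports_suspend:
            self.state = "SUSPEND_COMPLETE"

def suspend_stack(resources):
    """Suspend every resource in the stack and report final states."""
    for resource in resources:
        resource.suspend()
    return {r.name: r.state for r in resources}

states = suspend_stack([
    Resource("server"),                          # e.g. a Nova server
    Resource("net", supports_suspend=False),     # e.g. a network
])
```

Resume would walk the same dependency graph in the opposite direction, returning each suspended resource to its running state.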
The Nova server resource also exposes every option that the Nova create API call supports, so there's full transparency of all Nova features. And finally we've got a collection of Rackspace cloud-specific resources that have been put in the contrib directory of the Heat repository. It's expected that over time these implementations will get smaller and smaller as the Rackspace cloud converges on a more standard OpenStack architecture. So if we go to the next slide, we've got some preview features that landed in Havana but will continue to mature throughout the Icehouse cycle. First up, we've got initial support for the HOT template language. This is a new native template format which is related to, but is starting to diverge from, the CloudFormation template format that we started off with. It's based on YAML. We have the flexibility to clean things up, make a template format that's more author friendly, and it lets us add any new features that we want without being held back by the CloudFormation format. We never set any expectations that it would be complete in Havana, but we've made some progress, and we hope that by the Icehouse release the HOT template format will be ready for authoring production templates. We also have initial integration with Keystone trusts. Currently a Heat stack needs to make API calls which are not triggered by the user throughout the life cycle of the stack, and obviously each API call needs a valid token to call it with. Currently the only way of achieving this is by actually storing the user credentials with the launched stack and using those credentials to create a token whenever we need one. This is not ideal, but with Keystone trusts, instead of storing the credentials we can store a trust ID, and whenever we need a token to do an authenticated operation we can turn that trust into a token and use that for the operation.
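To make the HOT discussion concrete, here is a minimal HOT-style template shown as the dictionary Heat would hold after YAML parsing, along with a toy resolver for the `get_param` intrinsic. The top-level keys follow the HOT format; the resolver function itself is an illustrative sketch, not Heat's engine code.

```python
# A minimal HOT-style template after YAML parsing.
template = {
    "heat_template_version": "2013-05-23",
    "parameters": {
        "flavor": {"type": "string", "default": "m1.small"},
    },
    "resources": {
        "server": {
            "type": "OS::Nova::Server",
            "properties": {"flavor": {"get_param": "flavor"}},
        },
    },
}

def resolve_get_param(value, user_params, declared_params):
    """Resolve a {'get_param': name} intrinsic against the params the
    user supplied, falling back to the parameter's declared default."""
    if isinstance(value, dict) and "get_param" in value:
        name = value["get_param"]
        if name in user_params:
            return user_params[name]
        return declared_params[name]["default"]
    return value  # plain values pass through untouched

prop = template["resources"]["server"]["properties"]["flavor"]
default_flavor = resolve_get_param(prop, {}, template["parameters"])
chosen_flavor = resolve_get_param(prop, {"flavor": "m1.large"}, template["parameters"])
```

The YAML basis is what gives template authors the friendlier syntax mentioned above; the intrinsic functions are where HOT is free to diverge from CloudFormation.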
So this is part of a range of changes we'll be making which require new Keystone features, and that work will continue in the Icehouse cycle. Now I'm going to start talking about some planned features for the Icehouse release. Some of these have already landed, but as with any open source development cycle, we will attempt to land these features and it depends entirely on how the development and review cycle goes. So if there's a particular feature that's important to you, then we would strongly encourage you to get involved in the development process. As usual, in every release we have a collection of new resources. What's been in development and landed so far is a Trove resource for database as a service, a Savanna resource for data processing, and some new Neutron resources such as a network gateway. So next slide: the autoscaling API. Currently our autoscaling implementation just implements the CloudFormation autoscaling within Heat itself. What we plan to do is create a brand new API which can be used with or without Heat stacks. So if you're not currently deploying your applications as Heat stacks but you still want to incorporate autoscaling into your architecture, it will be possible to do that. You'll still have to deploy Heat, but you have no obligation to directly launch Heat stacks. Now, the autoscaling API will be flexible enough to scale more than just single compute resources. You can scale compute plus associated resources around it, and there does need to be some way of representing those resources, how they relate to each other, and how they expose their parameters to be integrated with your application. The obvious way to represent that little collection of resources that get scaled is with a Heat template snippet. So when you interact with the autoscaling API, you will be specifying Heat templates in some way.
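The "scale a template snippet" idea can be sketched roughly like this. Everything here is a toy illustration of the concept (the API described above was still being designed at the time); the function and the naming scheme for stamped-out copies are assumptions, not the real autoscaling implementation.

```python
# Toy sketch: a scaling group stamps out one copy of a template
# snippet's resources per group member.
def scale_snippet(snippet_resources, size):
    """Return `size` copies of the snippet's resources, each with a
    unique per-member name."""
    stamped = {}
    for index in range(size):
        for name, definition in snippet_resources.items():
            stamped["%s_%d" % (name, index)] = dict(definition)
    return stamped

# A snippet of "compute plus associated resources" to be scaled together.
snippet = {
    "server": {"type": "OS::Nova::Server"},
    "volume": {"type": "OS::Cinder::Volume"},
}

group = scale_snippet(snippet, 3)
```

An alarm from Ceilometer would then drive calls that re-stamp the group at a larger or smaller size, which is the link back to the metrics integration covered earlier.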
Now, once this autoscaling API exists, there will be some Heat resources created that consume that API, just like any other Heat resource that consumes any other API. And from that point it will be possible to author templates that pretty much make no reference whatsoever to CloudFormation. The autoscaling resources have been probably the most important resources that have yet to have an OpenStack native implementation, so this is quite a large milestone for us, to be able to author completely native templates that include autoscaling. Going on to the next slide, there's HOT software configuration. This is also quite a large change. These blueprints have a number of goals. They include being able to integrate any configuration management tool that's commonly used to configure software on running compute resources. And by configuration tool I mean tools like Puppet and Chef and SaltStack. Adding a new configuration tool just requires a small hook to be authored, and that hook needs to be delivered to a running instance somehow, ideally in a built image, a golden image, rather than deploying it at boot time. But you could deploy it at boot time as well. Another aim of HOT software configuration is to provide a composability mechanism, so that these configuration scripts can remain in their own configuration files or be invoked via URL. So this will quite radically change the way that templates are authored and the way that you write templates which do complex software configuration. Next slide: management API. There will be a new API that is only used by cloud operators. It will give them some operations that an operator would need to manage a Heat deployment, so that all running stacks can be queried, viewed and potentially manipulated. Once the API exists we'll incrementally add to it over time based on whatever operators need. So next slide: Heat multi-engine scale-out. Our API has always scaled out, because it's just an API process.
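The "small hook per tool" idea can be sketched as a dispatch table. This is a hedged illustration of the concept only: the dispatch mechanism, function names, and config keys are assumptions, though the tool names (Puppet, plain scripts) come from the talk.

```python
# Each configuration tool contributes one small hook that knows how
# to apply a config payload on the instance.
def run_puppet(config):
    """Hook for Puppet: apply the delivered manifest."""
    return "puppet apply %s" % config["manifest"]

def run_script(config):
    """Hook for plain shell scripts."""
    return "bash %s" % config["path"]

# The hook registry an image would carry (baked in or delivered at boot).
HOOKS = {
    "puppet": run_puppet,
    "script": run_script,
}

def deploy(config):
    """Route a software config to the hook registered for its tool."""
    return HOOKS[config["group"]](config)

command = deploy({"group": "puppet", "manifest": "/var/lib/heat/site.pp"})
```

The point of the design is that supporting Chef, SaltStack, or anything else means writing one more small hook, not changing Heat itself.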
It's stateless, so we just use normal load balancing techniques to do that. But we couldn't do that with the Heat engine, because the Heat engine does maintain some state during long-running operations such as stack create, stack update or stack delete. So until now we've been limited to having a single Heat engine process for a given Heat deployment. The current implementation requires a locking-based solution. That may change in the future, but it's lock based for now. The default lock is just based on a database table, but there is a plugin system so that an operator could choose to deploy a different distributed lock such as ZooKeeper. But the Heat engine does scale out now. So next slide: stack convergence and failed update recovery. Now, there are cases where the real-world resources for a stack can get out of sync with what Heat believes the state should be. This can happen due to transient failures, or due to manual intervention on resources that Heat wasn't aware of. And when this happens it would be nice to be able to bring those real-world resources back into line with what Heat believes they should be. And this is what convergence is all about. Convergence will introspect the real-world resources, compare them to what Heat thinks they should be, and then come up with an action plan to run the appropriate API operations to bring that state back into line. And in a way related to this is the failed update recovery. Currently, when you do a stack update and that update fails, the stack is stuck in a failed update state and you really have no choice but to create a new stack. But updates could fail for many reasons. There could be a template authoring error, or there could be a transient cloud error. So really you need to be able to run update as many times as you want, until you get the stack into a state that you're happy with. So that will be possible now. So next slide: stack abandon and adopt.
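The per-stack locking that makes multi-engine scale-out safe can be sketched like this. The real default backend is a database table; this sketch models it with an in-process dict for brevity, and the class and method names are illustrative, not Heat's actual lock plugin interface.

```python
import threading

class StackLock:
    """Toy per-stack lock: engines race to claim a stack before
    operating on it. A real backend would be a DB row or ZooKeeper."""
    _locks = {}
    _guard = threading.Lock()

    @classmethod
    def try_acquire(cls, stack_id, engine_id):
        with cls._guard:
            if cls._locks.get(stack_id) is None:
                cls._locks[stack_id] = engine_id
                return True
            return False  # another engine holds this stack

    @classmethod
    def release(cls, stack_id, engine_id):
        with cls._guard:
            # Only the holder may release.
            if cls._locks.get(stack_id) == engine_id:
                del cls._locks[stack_id]

won = StackLock.try_acquire("stack-1", "engine-a")
lost = StackLock.try_acquire("stack-1", "engine-b")
```

Because acquisition is the only coordination point, swapping the backend (database table, ZooKeeper) only changes where `_locks` lives, which is exactly what the plugin system allows.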
These are two related features that also have uses by themselves. Stack abandon is like stack delete, but it does not delete the underlying real-world cloud resources; it leaves them in a created state. So one use case where you would want to use abandon by itself is if you wanted to use Heat to deploy a collection of resources, but you don't want to use Heat to manage the resource lifecycle as a whole. You could create that stack and then abandon it immediately, and those resources still exist on their own. Now, when you abandon a stack, it also gives you a packet of information that describes the resources in that stack. And you can actually take that packet of information and pass it to the adopt call. Adopt will take that packet and create a stack, but it won't create any resources; it will just attach to the resources that are specified in that packet of data. So as you can see, abandon and adopt can be used together. You could abandon a stack on one Heat instance, such as a local standalone one, and then adopt it on another Heat deployment, such as the Heat that comes with the cloud that uses those resources. Actually, adopt can also be used by itself. If you manually author that packet of information, you could in some circumstances adopt the resources that you specify. So resources that were created manually could then be adopted by a Heat stack, and from that point on Heat will manage the lifecycle of those resources. So next slide. It will be possible to launch all stacks without needing admin privileges. Currently, the resources for some stacks also create users, to allow authenticated API operations to occur within the scope of that resource. And to create a user, currently you need to launch the stack as a user with admin privileges. This has been a real problem for us for a long time now, and we now have a plan for solving this issue.
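The abandon/adopt round trip can be sketched with an illustrative "packet of information." The field names and resource IDs below are hypothetical; the real abandon data carries considerably more detail, but the essential idea is the same: adopt records existing resource IDs instead of creating anything.

```python
import json

# Illustrative shape of what an abandon call might hand back.
abandon_data = {
    "id": "stack-uuid",
    "name": "my_stack",
    "resources": {
        "server": {"resource_id": "nova-uuid-1234",
                   "type": "OS::Nova::Server"},
    },
}

def adopt(packet):
    """Build stack state from an abandon packet without creating any
    real resources: just attach to the IDs the packet names."""
    return {name: res["resource_id"]
            for name, res in packet["resources"].items()}

# The packet is plain data, so it survives serialization between two
# Heat deployments (abandon on one, adopt on the other).
stack_state = adopt(json.loads(json.dumps(abandon_data)))
```

The same `adopt` path is what lets manually created resources be brought under Heat's lifecycle management, as mentioned above.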
So it will now be possible to launch any Heat stack as a conventional user. Then finally, the V2 API. We have a plan for a new V2 API. It gives us a chance to clean up a few things, such as having the tenant ID in the URL, and not requiring both the stack ID and the stack name to be in the path for stack operations. It also gives us a chance to have better handling of request scoping and policy for authorizing stack operations. So that's all I have at the moment. I'd be more than happy to answer any questions now. I see a question here. The question is: we have noticed there is limited support for CentOS in Heat. Is there a plan to add more support in Icehouse? So I guess we need to find out what we mean by support for CentOS. Is that for deploying Heat on CentOS, or for using CentOS as a guest? To answer your question: with the announcement that Red Hat will be participating in the CentOS community, we would expect that support for CentOS will be very good in the near future, both as a guest and for installing Heat on CentOS via the RDO distribution. And as long as CentOS has cloud-init in it, it should integrate well with the new software configuration that we're planning for this cycle. Any additional questions for Steve? The line is open if you want to ask, or you can put them in chat. Okay, so just in the last couple of days there's been some interest in using Windows as a Heat guest. There are a couple of projects out there that are pertinent. There's cloudbase-init, which is a port of cloud-init that also works on Windows. There's the Murano project, which among other things is an application catalogue for deploying complex Windows applications that uses Heat under the hood. So there is some activity there right now. I expect the way that HOT software configuration would integrate with Windows is that the Windows image would need cloudbase-init.
And it would need some kind of software configuration hook, which lets you specify configuration operations in PowerShell or whatever other Windows configuration options there are. So I'm not personally working on that, but there is interest and activity in that area at the moment. Great. And if anyone else has additional questions, you can put them in the chat box, or I think Meeting Burner sends an email afterwards; you can send them back to me as well and I'll get them to Steve. Great. Well, thank you, Steve. I appreciate it. Thank you. I'm going to turn it over. Yes, good job. I'm going to turn it over to David. David, let me get your presentation up. Okay. So David, when you're ready. Okay. All set. Hello. My name is David Lyle. I'm the Horizon program PTL, and this is the OpenStack Dashboard project update. Slide please. So we completed the Havana release. We made a lot of progress. We had 41 completed blueprints, with a total of 406 commits and 628 bugs fixed. Next slide please. So that comes from a community of 104 contributors, which I'm really proud of. It's up from 58 in Grizzly, so we've got quite a bit of community growth, and those commits come from 34 different companies as well as some unaffiliated contributors. And it's not just one company dominating the commits any longer: Red Hat, HP, NEC, Mirantis, UnitedStack, Metacloud, and Nebula are all in there as well. So it's a very diverse contribution base, which again, to me, is a great indication of the health of the community. Also a lot of reviews by a lot of various reviewers, and that certainly is helping our project along. Next slide. So what have we accomplished in Havana? We got our first pass at Heat integration. Heat graduated from incubation in the cycle before Havana, so we added Heat integration in Havana, and that included stack creation. You can actually import the stack definition from a URL.
You can go ahead and launch that stack, view the topology as it builds as well as when it's up, and then do some resource inspection once that stack is up and created. Ceilometer also came out of incubation in the cycle before Havana, so in the Havana cycle we added preliminary support for Ceilometer. The initial support is just in the admin panel, and it just does cross-cloud querying for certain pieces of utilization data. The plan in the future is to get a more full-featured integration with Ceilometer, but this was the first pass at it. We also added Keystone V3 API support. This was a fairly large effort to add domains, groups, and role management for those to our identity management model, as well as integrating that with existing projects and users. So now you can have a multi-domain setup, log into an individual domain, create additional domains, assign users to groups, assign roles to those groups, et cetera. So it's a fairly full-featured Keystone V3 API implementation in Horizon. We improved Neutron feature support. The key things there were the VPN as a service and firewall as a service support. There are fairly rich panels in Horizon now to support those features in Neutron, and we also moved to better parity for security group and quota management between Neutron and Nova Network. Before, you couldn't manage security groups or quotas in Neutron from Horizon, and now you can. And the other feature that was added is an interactive network topology. In the previous release we had a static network topology, which is truly beneficial; it's much better to be able to understand your network topology from a visualization rather than trying to sort it out from tables. But in the Havana release we added the ability to not only look at it but interact with it.
So you can go ahead and launch instances, create a new network, add a router, and see that getting added and see how that relationship looks. So, other highlights from the Havana release. The big one, as far as I'm concerned, is multi-region support. Before Havana, Horizon could only see one region, and for larger installations multi-region support is absolutely necessary, so now we can manage services across diverse regions. This helps Horizon become more than a token UI; it's intended to be a true UI to manage OpenStack. And then we're trying to keep up with Nova feature support. So we added a bunch of features there: editable default quotas, password management, availability zone support (so when you're launching an instance you can specify which availability zone it's going to be in, and see what availability zone your particular instance is running in), resizing instances, improved boot from volume support, and per-project flavor support. And the last item that we added was Trove integration. Although Trove was still in incubation at the end of the Havana cycle, they had the UI far enough along that we felt comfortable including it in the Havana release. We label it as experimental because Trove is not part of the official Havana release. So there's the ability to turn it on and off, but with the integration that was provided in Havana you could create a managed database as well as backups of those databases. So what are our priorities for Icehouse? We have quite a list of items we'd like to get done; these are just the top ones. The first is role-based access control. In Havana we added the start to this: we added a policy engine into Horizon and implemented it for the Keystone elements inside of Horizon. Basically the idea here, as with all the other policy engines in OpenStack, is to be able to have finer-grained access control.
We want to be able to provide and enable certain actions based on what roles you have. So the idea now is, now that we have the engine and the Keystone support, we'd like to enhance that in Icehouse. We'd like to extend it throughout Horizon. And one of the benefits of that would be not only finer-grained access control, but hopefully condensing the project dashboard and the admin dashboard, and just using role-based access control to define what the user sees. This would greatly simplify the code base, reduce some duplication, reduce some overhead, that sort of thing. This is a big item we'd like to knock out. Another thing we're looking to do is create a more extensible user interface layout. Currently we have the two tab dashboards side by side. It doesn't really give us a whole lot of room for expansion. One of the fundamental guiding principles for the Horizon project is that we want to support extensibility. You can't expect that what the upstream Horizon project delivers is going to be the final user interface for anybody pulling it down. They're going to want to add features to it. They're going to want to add new dashboards to it. We don't really have the space for that right now, unless you have maybe a three-letter name for your dashboard. So we're going to change the layout. It's going to be more of an accordion-style layout, still on the left-hand side of the screen, and we're going to move around the project selection and some context information. It should be a more versatile user interface and certainly more extensible. Once we knock those first two items out, we're going to start making some information architecture changes. Basically, that means we're going to organize the data differently. What is currently identity will be pulled out into its own dashboard.
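The role-based access checks described above can be sketched in miniature. This is a toy model of the idea, not Horizon's actual policy engine: the rule names and the rule format are illustrative (real OpenStack policy rules are richer than a simple role set).

```python
# Toy policy table: each action maps to the set of roles allowed
# to perform it.
POLICY = {
    "identity:create_user": {"admin"},
    "compute:launch_instance": {"admin", "member"},
}

def check(action, user_roles):
    """True if any of the user's roles is allowed the action.
    Unknown actions are denied by default."""
    allowed = POLICY.get(action, set())
    return bool(allowed & set(user_roles))

can_launch = check("compute:launch_instance", ["member"])
can_create_user = check("identity:create_user", ["member"])
```

With checks like these sprinkled through the UI, a single dashboard can render only the panels and buttons the current user's roles permit, which is what would let the Project and Admin dashboards be condensed.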
And again, once we have the role-based access control, a lot of what's in Project and Admin will get condensed. There's more information on that on the OpenStack UX site. But basically we want to make it more sensible. We want to include better Ceilometer integration. Right now we have limited Ceilometer support in the Admin dashboard, and it's by itself; it's its own panel inside the Admin dashboard. What we'd like to do is not only have that type of feature, but also be able to sprinkle graphs or sparklines throughout the other panels, so we use the information from Ceilometer to better inform the user. Better, richer client interaction: we've adopted AngularJS as the JavaScript framework inside Horizon. The first steps are just trying to make some of the workflows a little more friendly, and better validation between client and server side. One that's already been knocked out is configurable dashboard loading. The idea here is that right now you have to make a bunch of config file changes to load dashboards, or to change which dashboards load and what order they load in. What we'd like is for, say, Company A, who pulls down Horizon and wants to add or remove a dashboard, to be able to do that easily: they can just drop the dashboard in place with a small config file and have it read in, or not. Another thing we'd like to do is split Horizon from the OpenStack Dashboard. There's a little naming ambiguity here, but Horizon is essentially the toolkit library that the OpenStack Dashboard uses. As it's a library, and as other groups like to build off this library, we'd like to simplify things by splitting out the Horizon section, the actual toolkit side of it, and leave what's in OpenStack Dashboard as the actual application that manages OpenStack.
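The drop-in dashboard loading just described can be sketched as follows. The entry keys (`dashboard`, `priority`, `disabled`) are modeled on Horizon's "enabled" file convention, but this loader is a simplified illustration, not the real implementation.

```python
# Each drop-in config entry declares one dashboard, its ordering
# priority, and whether it is switched off.
enabled_entries = [
    {"dashboard": "project", "priority": 10, "disabled": False},
    {"dashboard": "vendor_tools", "priority": 20, "disabled": False},
    {"dashboard": "admin", "priority": 15, "disabled": True},
]

def load_dashboards(entries):
    """Return active dashboard names in priority order."""
    active = [e for e in entries if not e["disabled"]]
    return [e["dashboard"]
            for e in sorted(active, key=lambda e: e["priority"])]

dashboards = load_dashboards(enabled_entries)
```

So Company A adds its dashboard by dropping in one more entry, and removes a stock one by flipping `disabled`, with no edits to the core settings file.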
We think this will simplify build and packaging, but it will also simplify life for people working on projects in incubation, where they don't want to pull down the entire OpenStack Dashboard; they just want to extend Horizon to implement a user interface for new features. And last on here, we'd like to have better Tempest integration. Currently Horizon has, I believe, about one test in Tempest, and it's basically just a sanity check. As Horizon utilizes most parts of the stack, we feel like it would be a great place to at least test python-keystoneclient, python-cinderclient, all the clients, to validate that no changes break backwards compatibility. And since we exercise all of those fairly extensively, this is not only for verifying that Horizon keeps working; we also just feel like it's a good integration point to make sure changes that are detrimental to the stack don't happen. And that is all I have. Are there any questions? I see one over in the chat, I believe. So, Keystone LDAP. Yeah, does Horizon support Keystone LDAP integration? Yes, it does in its current state. They're currently working on better support for LDAP with federated backends. If it were ready, we would like to have it included in the Icehouse release for Horizon, but most likely this is going to slip, at least as far as Horizon goes, until the early Juno timeframe, just so that that can stabilize. Other questions for David? We won't wait too long, but feel free to ask in chat if there are any questions, or the line is open as well. Going, going. Okay, well, it looks like that's it, which means we'll end a little bit early today. Thank you for joining, everyone. Please check back for postings of this webinar and our speakers' presentations on the OpenStack Foundation's YouTube channel, as well as our blog. I'll hopefully post those Friday or Monday.
And thanks again to our speakers. Thank you, Steve, and thank you, David. And if there are no more questions, yes, appreciate your time. This concludes our webinar then. Take care. Bye-bye. Thanks so much. Thank you.