Good afternoon. My name is Carol Barrett, and I'm here today to represent the Product Work Group. They're the group that, on a regular basis, creates the community roadmap for OpenStack. Today I want to present an overview of how the roadmaps are structured, what type of information they contain, what you can learn from them, and then cover the highlights from the Newton roadmap. But of course, no roadmap presentation is ever complete without a legal disclaimer. We completed this version of the roadmap at the beginning of September, when Newton went into its first freeze, so well before the Ocata Design Summit happened, and we're providing the best information we had at that time. We update the roadmap on a quarterly basis, and we'll cover that, so any changes in what finally got included in Newton will be reflected in the next roadmap update, as well as changes that come out of the Ocata Design Summit. I want to start by talking about how we actually create the roadmap, beginning with which projects we include. We look at the information gathered in the user survey to tell us which projects are the most commonly used by operators today, and which projects they are looking to deploy in their OpenStack installations in upcoming releases. That's one piece of information to help us decide what to include. The other is to look at projects that are critical to a deployment, though not necessarily the most well-known, something like Oslo or RefStack. We'll also include information around those. And then the next one is projects that, from other data, we can see are on the rise, even though they might not have heavy adoption today. There's a lot of interest in them, and we want to be able to provide information; maybe something like Kolla, containerizing the control plane. We publish a three-release roadmap, so what you'll find in the one we published in September is Newton, Ocata, and Pike.
When we do an update, we first start by checking back with the PTLs, cores, and key developers on a project team to understand what we thought was going to happen versus what was actually achieved, and we update the release accordingly. We also do data mining of etherpads, IRC meetings, and other places where the project teams store information, to help us gain insights into the features and functions that were included and what the impact or value is to operators and end users of OpenStack clouds. We put that all together, develop the roadmap, and then go back to the PTLs, cores, and key developers to get their review and sign-off, to make sure that the way we've captured and classified the information matches how they view the roadmap and the progress for their project. Then we work with the OpenStack Foundation to publish it. You can find it at openstack.org/software/roadmap, and we'll publish it there every quarter on an ongoing basis. The roadmap is organized around themes. We've tried to make the themes self-explanatory, and I'm just going to touch on a couple of them here. When we talk about manageability, we're talking about the experience the operator has with OpenStack. When we talk about interoperability, there are four things grouped underneath it: federation, interop cloud models, inter-OpenStack service dependencies, and backwards compatibility. So there's a fair bit of information under interop. The last two are new themes that we're adding for this release, created based on feedback we got from operators and end users over the last cycle. The first one is user experience, and this is the experience for the end users of your cloud, not the operators. And the last one is security. You'll see how these themes are used in the different ways we organize the roadmap, which is around views.
We have taken a hierarchical approach to providing different insights into the projects. The first view I'm going to talk about is what we call the 10,000-foot view, which really allows you to see how the different projects are planning their progress around themes. What you're looking at here, across the top, are the different themes; we're unable to fit all of the themes on one slide, so we break it into multiple slides. Down the side, you see the different projects we're covering in this update of the roadmap. One of the things you can glean from this is what the most common area of focus and improvement was across all the different projects in the Newton release, and here we can see it was really modularity. Another thing you can glean is the focus for a given project over multiple releases. The other piece I should have covered in the overview is that we show Newton, Ocata, and Pike, so three releases for each of the themes. For Nova, you can see that scalability is a major theme, a major focus of development resource investment over those three releases. If you look at Heat, you'll see that resiliency, the ability to recover from failure, is a major area of focus. So it gives you some idea of what you should expect from the projects at a very high level over the upcoming release cycles. If you double-click on the 10,000-foot view, that takes you down to the 1,000-foot view. In the 1,000-foot view, what you're looking at are again the themes. Because of the number of projects we cover in each roadmap cycle, we have broken the 1,000-foot view into multiple slides, and here you can see which projects are covered on this one slide. And then finally, over here you have Newton, Ocata, and Pike, so you can look at it release by release as well. That's dangerous.
As we talked about before, Nova focused on scalability in the Newton release, and here you get more detail on exactly what they were working on around scalability: Cells v2 support and the placement API. As you go across, in the 10,000-foot view you would have seen that Glance had been working on modularity as a key focus, and here you get more detail on exactly which elements of modularity they invested in. If you double-click on the 1,000-foot view, you get to what we call the 100-foot view, which is really project-centric. For each project we cover in the roadmap, there is one of these slides included in the deck. What it provides is a description of the service provided by that project, the number of contributors, and the number of companies contributing to that release. For the current release it lists the blueprints, and that's a link, so if you wanted to, you could actually go and see all the blueprints, as well as the specs, which are also live links that let you see exactly what those specs were, in case it's something you wanted to collaborate on or provide more input to. And then you see the different releases: Newton, Ocata, and Pike. As you can tell, Nova was very, very busy in the Newton release cycle, and we try to provide a fairly comprehensive, detailed list of the updates and new capabilities available, in Nova in this case, for this release. In this last update cycle, I think there were about 33 or 34 projects included in the roadmap update, so the overall roadmap itself is about a 60-slide deck. It's pretty wide, and that's the reason we don't go through every aspect of the roadmap here; it's just too much information and not enough time.
But what we do want to do is share with you the highlights from Newton, what we call the Cliff Notes version. To share that, I'd like to introduce Pete Chadwick, another member of the Product Work Group; Pete is from SUSE. So you get a pointer and a slide. Sorry to make you nervous. What I want to do is go back, as soon as I figure out how to use this one; I'll use the keys on the keyboard. This was the lowest level of detail we used to provide, and there's a lot of stuff on here. We realized that not every one of these line items means something to everybody; a lot of these are code cleanup, a lot of internal refactoring that's needed to make things work better within the project. So what we wanted to do this go-around was to come up with, as Carol says, the Cliff Notes. The idea was to go through the various projects and try to highlight what we think is most important to operators, or potentially OpenStack developers, to understand how they can take advantage of key features that are brought forth. So, for example, in Nova, you have these items, which we think are pretty visible. You have the ability to set API policy defaults directly in code, so that you can control it as opposed to having to reconfigure your system every time you want to change a policy; you can now do that through a simple file, I think it's a JSON file. One of the things we hear from customers all the time is that live migration is an important feature they're looking for: the ability to take a VM from one compute node and move it to another. There are enhancements there, specifically around making sure that you go back to the scheduler. When you're moving a VM, if you go through and say, okay, I want a VM to be scheduled on a system with this kind of capability, live migration used to not go check with the scheduler.
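The policy-in-code change described above means the defaults live in Nova itself, and an operator's policy file only needs to list the rules they want to change. As a minimal sketch, here is what such an override file could look like, built with only the standard library; the rule name and the custom role are illustrative assumptions, not details taken from the talk.

```python
import json

# With policy-in-code, sensible defaults ship inside Nova itself, and the
# operator's policy file only lists the rules being overridden. The rule
# name and the "ops" role below are illustrative assumptions.
overrides = {
    "os_compute_api:os-migrate-server:migrate": "rule:admin_api or role:ops",
}

# This is the entire file an operator would maintain; every policy not
# listed here keeps its in-code default.
policy_text = json.dumps(overrides, indent=2)
print(policy_text)
```

The design benefit the speaker alludes to is exactly this: the override file shrinks to a small delta, instead of a full copy of every policy rule that drifts out of date with each release.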
So you could end up moving a VM from the kind of server you want to a server that was not suitable for whatever reason. Now live migration goes back and checks with the scheduler. That's an important feature a lot of customers were asking for. The ability to do virtual device tagging means that I can now tag a virtual device for a specific VM, so that whenever I bring that VM up it attaches to the same device, which makes things much more straightforward. Then, as Mark Collier was talking about in the keynote on Tuesday, there's get me a network: just make it really simple, when you start up, to say I need a network to connect my system; rather than having to do a lot of configuration manually, you just say do it, and Nova automatically provides that for you. In Keystone, another thing a lot of customers are looking for is rolling upgrades: the ability to upgrade OpenStack services without taking the APIs down. That's now supported as part of Keystone. And I think when Carol was going through the earlier slides showing cross-project activity, rolling upgrades is one of those things that hits all the projects, and they're all trying to get there. There are some improvements in how Keystone interoperates with LDAP, so you can start to do more in terms of a single enterprise directory and role-based authentication. Token validation caching: again, this one sounds kind of low level, and what does it mean? What it really means is that services can authenticate more rapidly. You get quicker response time, you get more performance out of the cloud, because rather than having to go all the way up to Keystone every time I need to validate a token, I can cache it. And then finally, as we talked about in the keynote, security is very important to OpenStack, and we're trying to improve areas where there are potential problems. One of these is that you can now encrypt your credentials while they're in the database.
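To make the "get me a network" idea concrete, here is a sketch of the server-create request body it enables (this shorthand arrived with Nova API microversion 2.37): instead of pre-creating ports and networks, the caller passes the literal string "auto" and a network is allocated for the project if one doesn't already exist. The image and flavor references are placeholders, not real IDs.

```python
import json

# "Get me a network" boot request sketch (Nova microversion 2.37).
# "networks": "auto" replaces the list of port/network dicts that callers
# previously had to assemble by hand; "IMAGE_UUID" and "FLAVOR_ID" are
# placeholders.
body = {
    "server": {
        "name": "demo-vm",
        "imageRef": "IMAGE_UUID",
        "flavorRef": "FLAVOR_ID",
        "networks": "auto",
    }
}

request_json = json.dumps(body)
print(request_json)
```

The same microversion also accepts "none" for servers that should boot with no networking at all, which is the other end of the simplification.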
Horizon. This is one where we had some discussions about what it really means to be able to access the plugins directly from JavaScript. It turns out this is something that greatly improves the performance of the overall system when you're trying to build OpenStack-aware applications. Then there's Horizon support for Swift-only configuration. Some customers actually only want to install OpenStack for the object storage capability, and previously, if you wanted to do that and use the OpenStack dashboard, Horizon, you still had to install Nova and everything else. Now, if you really want to just run Swift, you can do that and still get the Horizon dashboard, which provides an easy way for customers to set up and manage storage objects. Horizon is also expanding the range of services you can configure directly from the dashboard; it now supports access to the Neutron Layer 3 agents. And the last one is for administrators: Horizon has two views, the user view and the administrator view, and now, when you're in the administrator view, you can see where floating IPs have been assigned, so you get an overall sense of how your network is being utilized and consumed. From Heat: there are a couple of different modes you can run Heat in, and convergence basically says you want things to come into alignment. That's really the way most customers were configuring the system, so it has just been made the default. You can now express "if X and Y, then do this" as part of Heat, so when you're looking at event triggers, you can have multiple conditionals in there. And you can also point to external resources, so that if you have, perhaps, a storage cluster that's not directly part of the cloud, you can still trigger on it if you can get events into the system.
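The Heat conditionals mentioned above came in with the Newton-era HOT template version, which added a top-level "conditions" section. The sketch below shows the shape of such a template, written as the Python structure a YAML loader would produce; the parameter and resource names are invented for illustration, and this is a sketch of the feature rather than a verbatim Heat example.

```python
# Sketch of a Heat template with a compound condition ("if X and Y"),
# expressed as the dict yaml.safe_load would return. All names below
# (env, use_ssd, fast_volume) are hypothetical.
template = {
    "heat_template_version": "2016-10-14",
    "parameters": {
        "env": {"type": "string", "default": "dev"},
        "use_ssd": {"type": "boolean", "default": False},
    },
    "conditions": {
        # Both sub-expressions must hold for the condition to be true.
        "big_prod": {
            "and": [
                {"equals": [{"get_param": "env"}, "prod"]},
                {"equals": [{"get_param": "use_ssd"}, True]},
            ]
        },
    },
    "resources": {
        "fast_volume": {
            "type": "OS::Cinder::Volume",
            "condition": "big_prod",  # resource only created when true
            "properties": {"size": 100},
        }
    },
}
```

A resource carrying a `condition` key is simply skipped when the condition evaluates false, which is what makes one template reusable across environments.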
And I assume that YAQL probably stands for Yet Another Query Language or something like that; it's a way you can get access to information from your Heat stacks. Telemetry: this is actually three projects now. It used to be just Ceilometer; it's now Ceilometer, Aodh, and Gnocchi. Across those three, they've implemented a number of things. First of all is full support for Magnum, so that when you set up a Kubernetes bay with Magnum, you can get statistics on how your bay is actually performing, and you can then use that to feed into the Heat stacks that are part of Magnum. There are a lot of new meters; for example, you can now actually measure cache hits, which again sounds kind of low level, but the point is, if you start to see that you've got a physical server getting a lot of cache hits, that may mean you need to move some VMs onto another physical server, so you can trigger an event from that. Batch processing is a performance enhancement: rather than having to handle each message individually, you can now take all these messages and fire them off as a single batch, which improves the response time of Ceilometer in tracking the messages, bundling them up, and sending them off to Heat to be worked with. Similarly, we have composite rules for alarms, so you can say "if X and Y, then trigger an alarm" as opposed to just "if Y, then trigger an alarm." Neutron: the ability to put quality of service on your VLANs or on specific network ports enables you to provide more fine-grained control of the overall network. And VLAN-aware VMs means that when you launch VMs, you can have multiple VMs share a single port automatically, which really gives you more scalability in the overall network. There are also a lot of performance improvements when you use Open vSwitch now: Open vSwitch in the latest release has layer 3 forwarding built in, so Neutron can take advantage of that to get faster performance on the network.
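The batch-processing idea described for telemetry above is a general pattern, and a minimal stdlib sketch makes the benefit obvious: downstream services handle one call per batch instead of one call per message. This is an illustrative sketch, not Ceilometer's actual implementation.

```python
from itertools import islice

def batches(messages, size):
    """Yield lists of at most `size` items from the `messages` iterable."""
    it = iter(messages)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# Seven samples become three dispatches instead of seven.
dispatched = list(batches(range(7), 3))
print(dispatched)
```

With per-message dispatch the fixed overhead (RPC round trip, serialization, locking) is paid once per sample; with batching it is paid once per batch, which is where the response-time improvement comes from.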
And then there's improved responsiveness when you have network failures, to reconfigure the network or reassign new ports. Swift: keeping with the security theme, there's now encryption at the object level while data is on disk, so if somebody were, for some reason, to hack into the system, they wouldn't be able to decode what's in the objects. Again, you see performance improvements; I think we saw that on Carol's slides that talked about the themes across multiple areas. There's a performance improvement in erasure coding. I'm not sure how many people understand what erasure coding is, but it's essentially RAID in software: the ability to take objects and stripe them across devices, so instead of having to replicate an object two or three times to get the kind of resiliency you want, you can do it using what's called erasure coding, which makes smaller copies of things. However, it's actually a pretty expensive thing to do from a CPU perspective, so the work was about cleaning that up to make it perform better. And then Cinder, just to wrap up; we've got one more after this. In Cinder, you can now do what's called pool weighting. The idea is that as you're assigning new blocks to attach to devices, you want to be able to say, I want you to use this pool of storage media; and whether you want the fastest storage or the cheapest storage, you can adjust the weightings appropriately. We can now do active-active high availability for Cinder, which means that if you have multiple Cinder control nodes, you not only get availability, you also get improved performance.
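The erasure-coding trade-off described above can be illustrated with a toy XOR-parity scheme (a RAID-4-style k+1 code). Swift itself uses Reed-Solomon codes via liberasurecode, so this is only a sketch of why erasure coding stores less than full replication while still surviving a lost fragment, and of where the extra CPU work comes from.

```python
def xor_parity(fragments):
    """XOR equal-length byte fragments together into one parity fragment."""
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return bytes(parity)

data = b"object-data!"                             # 12 bytes of object data
frags = [data[i:i + 4] for i in range(0, 12, 4)]   # split into k=3 fragments
parity = xor_parity(frags)                         # one extra 4-byte fragment

# Lose a data fragment; rebuild it from the two survivors plus the parity.
rebuilt = xor_parity([frags[0], frags[2], parity])

# Storage used: 4 fragments x 4 bytes = 16 bytes, about 1.33x the data,
# versus 36 bytes (3x) for keeping three full replicas.
```

Every byte written or rebuilt passes through the XOR loop, which is the CPU cost the speaker mentions; real Reed-Solomon math is heavier still, hence the Newton work on making it perform better.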
And in Cinder, whenever you had a volume and took snapshots of it, if you wanted to delete the volume, you then had to go and delete each individual snapshot first, which is obviously kind of painful from an administrative point of view. Now, when you delete the volume, it automatically deletes the snapshots. And there are a lot more storage backends available for Cinder, so you've got a wider range of devices you can use behind it. So, just to wrap up: obviously this was a quick overview; if you want to really crawl through all the detailed slides and see all the specific enhancements, here's the roadmap. Additionally, we will provide some results of what comes out of this Design Summit in terms of how it's going to influence the roadmap. And more importantly, we'd love to get feedback. If you think this is the right level of detail, if this is useful, let us know, so we can make sure we're always providing the best information we can to the community about what's going on. Also, just a little plug: there will be some videos from the PTLs talking about what they're going to be working on, either in this release or in upcoming releases, so you can get more detail directly from what the technical community is focused on. That's all we had, I think. So, any questions or comments that either Carol or I can answer? Yes. It's actually finer-grained than that; it's on a quarterly basis. In September we did the Newton update. Then, when we're done with the Ocata design cycle, we'll do an update that reflects what we know to be coming in Ocata as a result. The next release will be when we get to code freeze for Ocata, and then, when we come out of the Pike design, we'll do an update there.
And as many of you likely know, there's a sort of reengineering happening in the OpenStack development process, where the Design Summits are changing and going to be replaced by the Forums, and we have Project Team Gatherings coming into existence. As that model becomes more detailed, we'll look at how we should change our roadmap update process so that we can align and have the right information available at the right time. But the 1,000-foot views are really just done right after the release. Right. And then the updates are done through the design sessions. Yeah. Any other questions? All right. Well, thanks for joining us. Thanks for the summit.