So go ahead, John.

Thank you. Thank you very much, Margie. I appreciate the opportunity. First I want to introduce myself: I'm John Dickinson, the Project Technical Lead for OpenStack Object Storage, also called Swift. I love giving these updates at every summit, and now in this webinar format, and it's something I always look forward to. So I definitely want to say thank you to Margie and the Foundation for the opportunity, since we didn't have a chance to do it in Hong Kong. I'm grateful that we can do this here this morning.

Today I'm going to cover three basic things and then leave some time for questions at the end: what's been going on with the contributor community, what some of the major features are, and then, looking forward, where we're going, what's in progress, and what we're working on currently.

To start with, looking at the community growth and who's contributing and how that's taking place: as of yesterday, we have a grand total of about 144 people who have contributed to Swift over the lifetime of the project, and 63 of them contributed during the six months of the Havana release cycle. What I really like is that over half of those, 35 contributors, were new to the project. I've got their names on the screen right now. One thing that's not shown, but that's really nice, is that these 35 contributors have 21 unique domain names in the email accounts they're contributing with. Some of that's a little fuzzy, because some people may contribute from a personal email rather than their employer's address, but it still gives a nice measure of the breadth and diversity of the contributing community, and that's something I'm really grateful for and happy about. During the Havana cycle we had nearly 400 patches with over 1,700 reviews. And to all of these people — the 63 contributors, the total of 144, and the 35 new contributors — thank you.

I'm so sorry to interrupt. I think the slides aren't moving, so maybe put them in presentation mode and we'll reshare, if you don't mind.

Uh-oh. Okay. Well, let's see it this way. I can see the title slide. Okay. You can see that, and you can still see my slides here? You can just see "State of the Project." Okay, well, let me walk through this slide again. Can you see my slides now? I can't — although it worked for everyone yesterday, so there are little glitches in this application. Maybe I need to reshare. It's stuck on the one. We had some similar issues Tuesday, even after it was working fine, so I apologize. Did that change anything there? Yes, that did. If you're able to put it in presentation mode, that's great. It's not the slide. Oh, yeah. And then I think you just have to move that box. Yeah, it's working. It's "35 new contributors." So then I think you just have to move that gray box down, if possible. Everything good? We're good. Yeah. Okay. Thank you for letting me know.

Okay. So, very much, to all of these contributors I definitely want to say thank you. Swift is a world-class storage system, running massive clusters in production all over the world, because of your efforts — the people who are contributing to the project. So to all of these contributors, a very big thank you, and very strong kudos are deserved by everyone. Now let's call out a couple of people specifically.
Looking over the last six months of the Havana cycle at who contributed the most patches and who did the most reviews within Swift — in isolation these numbers don't mean a whole lot, because you need to have good patches and good reviews — I very much want to call out both Peter and Sam, because they're currently prolific members of the community, always on IRC, in addition to contributing lots of patches and doing lots of reviews. They are actively making Swift better. So to both Peter and Sam, again, thank you very much.

Looking at overall who is contributing to Swift, there are a half dozen or so companies that really stand out. These companies are active in doing code reviews, contributing code, and actually building and deploying production clusters at scale: SwiftStack, eNovance, Rackspace, UnitedStack, Red Hat, and IBM have all been talking about and participating in the community. In Hong Kong, at the last summit, there were quite a few new things that these companies came out with and talked about. SwiftStack talked about Concur and a production global cluster that's running today — I'm going to cover that in just a little bit. SwiftStack also talked about a new hard drive platform from Seagate called the Kinetic drive platform. Rackspace talked about their numbers: they run the largest publicly known Swift clusters today, with massive traffic levels, and they're running 85 petabytes in their clusters right now. eNovance also talked about one of their large customers running a petabyte-scale cluster deployed in France, which is really exciting. And I know that Red Hat and IBM are building out major product offerings related to OpenStack, and specifically around Swift. So to all of these companies, again, thank you very much for your work and your continued contributions to Swift.

And John, I'm sorry to interrupt you again. I think there's an issue with the meeting platform at scale — it was working very well for us yesterday with three people. It's just not advancing. Do you want to send me the slides? There's a link and I can give that to everybody, and you can just speak to it, or — I don't know if you have to reshare again. And I apologize, because it was working fine yesterday. I think there's an issue when we have more people, so we'll probably try another platform after this. But thank you for your flexibility. What do you see now? "Major new features," and the major contributing companies. Yep. Yes. Well, let me just walk through it this way then, and we'll go back. That sounds good. Thanks. Sorry.

So after talking about the major participants in the community and the broad diversity there, let's move on to the exciting part: what are the actual new features and new things enabled in Swift these days? The first thing I want to talk about is global clusters. This is absolutely the biggest feature added in the last several months, and it's really great. Let me cover a little bit of how it works; I've got a little video I'm going to walk through here, so I certainly hope this works. In this example, I've got a cluster set up across both Portland and Hong Kong. A user of a web app will, with GeoDNS, be routed into the data center that's nearest to them.
So if they're coming out of New York, their request is going to go into Portland. The data is written with replication for durability, just like it always is, and then Swift will automatically replicate the data to the second data center. Once that is confirmed, the extra copies are removed from that first site. At that point you've got DR support and availability: the user is able to read from their tablet and see exactly what's there. And the great thing is that in addition to the DR — the disaster recovery — and the high availability, users are routed to their closest location as well, which dramatically improves their performance and responsiveness.

To talk a little more about how that is actually running in production today: Concur got up on stage and gave a keynote at the summit. Concur is a company that makes travel and expense reporting tools for enterprises. They have a mobile app that lets you take a picture of your receipt with your phone and then expense it. Concur was looking for a storage solution that was extremely reliable, supported their mobile device use case, and improved the service levels they were able to offer to their customers. They chose OpenStack Swift and gave a keynote at the Hong Kong summit to talk about it. They have integrated it into their existing infrastructure, which was heterogeneous, including a massive Microsoft installed base. They're currently adding more than 10 million images a month, and they now have a solution that enables them to grow and scale to billions of images over the next few years.

The really great thing global clusters gives you is the ability to have a globally dispersed cluster across a pretty wide geographic area. The advantage for service providers is that you get the fault-tolerant infrastructure that service providers expect and demand from the software they use. Enterprises, on the other hand, are already used to running multi-DC designs and having a multi-DC architecture, and Swift gives you that today — global clusters are in production today. You can use it today, which is something I'm really excited about. We've got some other improvements coming to make global clusters even better, and I'll get to some of those in a little while.

But first I want to cover some of the other new features that have been added to Swift over the last six months. One of these is conf.d-style configuration, which allows operators to make composable configuration settings. That offers easier system administration, it makes it easier to extend Swift with additional functionality, and it makes it easier to integrate with management tools for your Swift cluster. Another major improvement is that we added pooled memcache connections, which lowers the resource overhead for each request, improving both the requirements for the deployed hardware and the performance for end users. Another major ongoing piece of work is improving the replication process. We've made some improvements, and these are ongoing as well, to how the replication transport actually happens, and this allows us to build out some new efficiency improvements within the system internally.
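To make the global-cluster behavior described above a little more concrete, here is a minimal sketch of the proxy settings involved, assuming a two-region ring where region 1 is the local site and region 2 is the remote one (the labels and numbers are illustrative, not from the talk):

    # proxy-server.conf (sketch) -- prefer the local region for reads,
    # land all writes locally first, and let replication move the extra
    # copies to the remote region afterwards.
    [app:proxy-server]
    use = egg:swift#proxy
    sorting_method = affinity
    read_affinity = r1=100, r2=200
    write_affinity = r1
    write_affinity_node_count = 2 * replicas

With settings along these lines, each proxy reads from its own region when it can and initially writes every copy to nearby nodes; the replicators then move the extra copies to the second data center and clean up the local ones, which is the hand-off behavior described above.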
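And for the conf.d-style configuration mentioned above, a rough sketch of what a composable layout can look like — the file names here are just examples of how an operator might split things up:

    /etc/swift/proxy-server.conf.d/
        00_base.conf         # [DEFAULT] bind settings and [app:proxy-server]
        10_pipeline.conf     # [pipeline:main] definition
        20_middleware.conf   # [filter:...] sections for the middleware in use
        50_site.conf         # site-specific overrides, dropped in by tooling

The server reads every .conf file in the directory and merges them, so a management tool can add or override an individual setting by dropping in one small file instead of rewriting a single monolithic proxy-server.conf.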
The end result of this replication work is that we'll be able to lower the failure detection times and the automatic failure recovery times by making all of these processes more efficient. This is ongoing work, but we've definitely got some major improvements already in the code today.

Another major performance improvement we've been working on is better disk I/O performance. One thing that's generally pretty common about Swift clusters is that storage nodes are typically very dense — you've got 12 to 24, even as many as 60 or 90, hard drives in a single server. We've made significant performance improvements so that each drive's workload, not just each server's workload, is isolated. This means that a busy or failing drive won't cause performance hits to other users of the system: whether it's system background processes or end-user requests, performance won't be impacted simply because one hard drive has failed or is under load.

Speaking of hard drives, one of the things I mentioned earlier is support for Seagate's new Kinetic platform, which was demoed in Hong Kong as a proof of concept — to say, hey, this can work, this is something coming down the pike, and it's really exciting. Seagate Kinetic drives are different from the normal hard drives we're all used to. First off, they don't have a block interface with a SATA or SAS connector. Instead, they have a key-value interface where you say, "I need to store this data with this key," and you speak to the drive over an Ethernet interface. So you get a hard drive that itself has a native object model. It's object-based data, and you don't have to worry about a lot of the abstraction layers — everything from RAID controllers and caching systems to the VFS layer in Linux, to POSIX, to all of these different things that sit between the application and the actual storage. It's an exciting new platform that can improve the overall efficiency and the cost parameters of building a Swift cluster in the future. These are things that I think will be hitting the market sometime in 2014.

Looking forward, one of the things I like about the Kinetic drives and where they come in is that they start really exploring the flexibility of choice that deployers have to make efficient deployment decisions for their specific use case. Swift has always been all about using standard, off-the-shelf commodity hardware; the software itself will work around any failures and allow you to take advantage of whatever your deployment is. But the way everything is built right now, you have a single replication policy. You can change it in a live production cluster over time, but your replication policy is essentially, "we have three replicas of all of our data in the system." By the same token, you want to be able to make Swift take advantage of existing storage systems you may already have deployed. So what we've got in progress right now — which we took advantage of in building out support for the Kinetic drives, and which Red Hat is building product offerings around by integrating with Gluster — is the concept of storage policies. Storage policies are something I'm really excited about. They're basically made up of three different pieces.
Given the overall set of hardware you can build your Swift cluster from, when storing your data you'll be able to choose three things. One: what subset of the hardware is the data going to be stored on? An obvious example of where this comes into play is choosing a particular geography of hardware — you might have East Coast, EU, West Coast, and Asia regions, and you can choose which subset of the hardware the data is stored on. If you're not using global clusters, you may instead want to say, "here's high-performance hardware and here's low-performance hardware; I want to store this data on SSDs and this other data on, say, standard spinning drives."

The second thing is, after you've chosen what subset of hardware you're storing your data on, you want to choose how it's stored across that subset. You may choose to store your data with the standard three replicas for high durability and high availability, but there is some data you may want to store slightly differently. For example, if you're storing a lot of images, you may want to store thumbnails with two replicas, because you can recreate them, you don't need as high durability, and you want to save some cost on storing the data. So you may choose, "I want this to be my EU cluster with reduced redundancy, this to be my West Coast, California, full-redundancy policy, and I want four replicas for a set of data that spans the globe." When you combine the subset of hardware you're putting things on with the encoding you're actually using, you get a lot of flexibility and power to match your specific deployment to your specific application use case.

The last piece you'd be able to choose as you build out the Swift cluster is this: after you've chosen the subset of hardware and how you're going to store data across it, at some point you need to actually talk to a hard drive — so what protocol are you using to talk to that drive? Today in Swift we pretty much assume you're speaking to a local POSIX file system. There's a lot of work right now to abstract this and allow you to talk to other kinds of storage volumes. One example I mentioned earlier is the Kinetic drives, speaking a non-POSIX protocol. Another example that has been in development is being able to talk to Gluster volumes rather than standard local file system volumes.

The combination of these three things really gives deployers a massive amount of flexibility in how they can build new things. Obvious first use cases include reduced redundancy, performance tiers, and specifically controlling your geographic placement while maintaining a single unified cluster that's globally deployed. So definitely stay tuned here — there's a lot of exciting work going on, and I'm really excited about what's happening. The people really leading this effort — the major players contributing code and the refactoring needed in the code to do this — include SwiftStack, Red Hat, and Intel. And the interesting thing is that this work has a specific end goal.
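To give a rough picture of where the storage-policy work is headed, here is a sketch of how a deployer might express two policies and then use one. The syntax shown — policy sections in swift.conf, one object ring per policy, and an X-Storage-Policy header when a container is created — reflects the design being discussed, so treat the details as illustrative rather than final:

    # /etc/swift/swift.conf (sketch)
    [storage-policy:0]
    name = standard            # default 3-replica policy, uses object.ring.gz
    default = yes

    [storage-policy:1]
    name = reduced-redundancy  # e.g. a 2-replica ring built as object-1.ring.gz

A container would then be tied to a policy when it is created, for example:

    curl -X PUT -H "X-Auth-Token: $TOKEN" \
         -H "X-Storage-Policy: reduced-redundancy" \
         https://swift.example.com/v1/AUTH_test/thumbnails

Every object written into that container afterwards is placed according to that policy's ring, which is what lets thumbnails, backups, or SSD-backed data live in the same cluster namespace with different durability and hardware choices.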
That specific end goal is erasure-coded storage: we want to allow deployers to use non-replicated storage inside the Swift cluster alongside their replicated storage. Erasure codes are an efficient way to store data with very high durability by cleverly encoding it and striping it across a lot of drives. It's really good for certain use cases, specifically when you're talking about storing large, colder content like backups or VM images and things like that. Deployers can save a lot of money by storing things with erasure codes rather than full replication if their data matches the use case where it works — roughly speaking, with three replicas a petabyte of data needs three petabytes of raw disk, while a hypothetical 10+4 erasure-code scheme would need on the order of 1.4 petabytes. So this is our specific goal, and SwiftStack, Intel, and Box are very actively contributing and working toward this specific goal in Swift.

That leads me to where we're going. My vision for Swift is that everyone, every day, will use Swift whether they realize it or not. And I think we're well on our way to this goal, looking at the fact that we've got major cloud providers, with their thousands and thousands of customers, using Swift — and, transitively, their customers are using Swift. That's people like Rackspace and eNovance and HP, and IBM now that they've acquired SoftLayer. We've got major end-user applications that have announced they're using Swift, everything from people doing mobile gaming, to companies like Concur, to organizations like Wikipedia. And that's what's really exciting to me. Looking at our growing contributor base and the growing number of production deployments in real-world applications: when people are looking to deploy storage that is highly durable, has high availability, and supports massive concurrency, people are using Swift. It's powering some of the world's largest storage clusters today. So I think this vision is absolutely achievable, and I'm really excited about the future. With that, I want to say thank you for your time, and we have a few minutes for questions.

Okay, great. Thanks, John. Appreciate your flexibility. It looks like Jonathan had a couple of questions, but maybe one of them was answered. First of all, I assume everyone can get a copy of the slides at some point. I'll get them to you. Okay, right, and I'll post them. Thank you. And then there was a question about what "colder data" is, but I think maybe you answered that.

I think "colder" here is just relative — I'm not going to put specific numbers on it; it's a little different for every use case. But basically it's data that is accessed less frequently but must still be available, in the context of Swift. For example, if you're storing backups, your goal is that you never need them — but when you do need them, you need to be able to get them. I think that's a perfect example of cold data, as opposed to hot data, which might be something like the images stored on Wikipedia: those are actively and frequently accessed.

Okay, great. Let's see. Rags, you can ask a question via audio, assuming it's semi-quiet where you are, if you want to ask. Yeah, thank you. Rags from Rackspace — can you hear me? Yes. Okay, cool. Can you talk a little bit about the high availability of the proxy nodes? I know that replicas of the data take care of high availability and disaster recovery, but what about the proxy nodes? Are there some recommendations? How does it work? Are there some best practices? Yes.
I think I lost you a little bit there, but the question is basically: talk about the high availability of the proxy nodes. And I'd be happy to, because it's great, and it's something Swift has always supported. Swift's architecture, as you know, has proxy nodes talking to storage nodes in the back end. The storage nodes are where the data is stored; they're massively scalable, and the proxy nodes coordinate the communication for all of that, so the clients are actually talking to the proxy nodes. To have a highly available cluster — not only to work around hardware failures but even just to do typical operational tasks like upgrading live clusters — you need the ability for these proxy nodes to come and go as they may. This is absolutely something that's been supported since day one in Swift, and the way it works is basically twofold. One: there is no central point of communication, no central point of failure, no central knowledge-sharing point within a Swift cluster, so any request that comes to any proxy server can be handled. What that means is that when you need to scale out the amount of concurrency you need to support, you just add more proxy servers. Then, to make these highly available and to abstract them behind, say, a single VIP or domain name, the answer is to use a load balancer. You can have a highly available load balancer pool, and the Swift proxy nodes already support health checks with appropriate back-pressure ability, so an operator can very cleanly take a Swift proxy node out of the load balancer pool and traffic will just seamlessly route around it within the Swift cluster. There's nothing special about a single proxy node: if you need more throughput, more concurrency, and support for the high-availability use cases — which I contend everyone does, because we need to work around hardware failures — then add more proxy servers and put them behind the load balancer. There are several ways to build load balancers in front of Swift, whether that's a commercial solution or something open source, or using different routing protocols; there are quite a few options there. So I hope that answers your question. Yes, it does. Thanks. Great, thank you.

I think we have two more, from Hamza. Sorry, let's see: what about using flash memory with Swift — SSDs? Yeah, I think you're asking about SSDs. Okay, that's a great question. In the interest of time I'm not going to go into a lot of very specific deployment details, although I'd be happy to answer those later, but absolutely, it is possible to do that. In most use cases, if somebody is looking to build out a highly durable cluster of storage that also needs to be high performance, that's when deployers start looking at putting things onto all SSDs. There's nothing in Swift that requires that for object servers, and there's nothing that necessarily prevents it either. This is one of the things I'm excited about with the storage policy work: deployers will be able to deploy, say, a set of object servers on SSDs and a set of object servers with standard spinning disks, and they'll be able to expose those different tiers of performance to their end users. So absolutely, you can do it today.
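On the proxy-availability question above, here is a minimal sketch of the health-check hook that the load balancer typically polls, assuming the stock healthcheck middleware is in the proxy pipeline (the pipeline shown is simplified, and the disable file path is just an example):

    # proxy-server.conf (sketch)
    [pipeline:main]
    pipeline = healthcheck cache authtoken keystoneauth proxy-server

    [filter:healthcheck]
    use = egg:swift#healthcheck
    # If this file exists, GET /healthcheck returns 503 "DISABLED BY FILE",
    # so the load balancer drains traffic away from this proxy node.
    disable_path = /etc/swift/proxy-disabled

The load balancer polls GET /healthcheck on each proxy; touching the disable_path file is how an operator cleanly pulls one proxy out of the pool for maintenance, and removing the file puts it back into rotation.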
Generally, with SSDs your bottleneck is networking — networking between the cluster and the client — rather than single-stream throughput. But yes, absolutely, you can do it. Okay, thank you.

And then one last one, and then we'll move on to Russell, to be respectful of time. Jeff asks: will Swift provide interfaces for hardware-level storage replication? I'm not sure exactly what you're referring to there, Jeff, but I think you may be referring to things like storage appliances that themselves already provide reliable storage. For example, if you've got NetApps or Isilons or something like that, I think those are some interesting things to consider, especially from the perspective of migrating to Swift away from conventional storage in order to improve performance and lower costs. This is where some of the abstractions being built out to support storage policies, again, become very interesting, because you'll be able to isolate things and say, "look, here's a traditional storage appliance that I have, and I need to take advantage of it in my migration plans: I've got a lot of traditionally deployed storage and I need to migrate to something a little more commodity." In that sense, I absolutely think we'll see people within the next year doing just that — building a Swift cluster that takes advantage of a traditional storage architecture and then migrating to a more generic, commodity-style hardware setup using the functionality provided by storage policies.

With that said, I want to say thank you very much — thank you to Margie and the Foundation. If you would like to get involved, I would welcome your involvement in Swift. You can always find us in #openstack-swift on Freenode IRC, and if you're interested in how to get involved, definitely check out the OpenStack website and the Swift developer docs at swift.openstack.org. So thank you very much.

Great, thanks, John. Appreciate it. And if anyone has other questions, you can send them to me at margie at openstack.org, M-A-R-G-I-E, and I can get those over to John if we missed any. Okay, so let's move over to Russell on the compute side.

I believe I'm sharing my screen. Can someone confirm that? Yeah, I can see it. Okay. All right, thank you. And I think there's a link to the Google Doc in the chat box there, in case anyone else would like to pull that up. Okay, great. Thank you. You may have to read your own questions, because I won't see them. Let me see — I don't want to get too fancy, but let me see if I can change the view on this. If I can't, then I'm not going to. One second. There you go. It's blocking it for a second. That's about all I can do. Do I go to present? Okay, I mean, this is fine too. One second — I'll give it one more second and then we'll go to present. Okay. All right, let me move this gray box thing, which is semi-annoying. Let's see, where can this thing go? Can you see the full screen now? With the gray in the bottom left-hand corner. Okay. Okay, go ahead.

All right, well, thank you very much. Again, my name is Russell Bryant. I'm the PTL for the Compute program, which primarily produces the Nova project. So let's talk about what's been going on with Nova, and jump to the next slide here. What I want to talk about first is an overview of the Havana release, which we released just a couple of months ago, just prior to the last OpenStack Summit.
So, Nova was one of the founding projects of OpenStack, along with Swift, but it's still moving at a pretty incredible rate. Blueprints are the system we use on the Launchpad site for tracking feature work as well as other major development efforts, and in the Havana release we had 93 of those implemented — you can go look at the full list if you'd like. There were over 800 bugs fixed and over 2,000 changes made to the code. One thing I really like to point out is that 281 people contributed code that was merged in that six-month development cycle, which is pretty exciting. Also, we take code quality very seriously, from a lot of angles, across all of OpenStack, and part of that is doing code reviews. Every single change that goes in has to be signed off by at least two people, and changes usually go through multiple revisions as we find problems and so forth. So just for the Nova project there were over 18,000 code reviews in that development cycle, resulting in the 2,000 changes that went in. I've also included a URL there at the bottom for the Havana release notes, which has a fuller list of the features we added in Nova as well as in other projects — if you haven't seen that, I encourage you to check it out. But I'll cover a few of the features here today.

So if we jump to the next slide: one of the things we've been working on for multiple releases is improving upgrades. This has been one of the biggest requests from users for quite some time, especially as deployments get bigger and bigger. Another thing I can note about Havana is that as I talk to people, especially at the summit, one of the things I like to do is learn about people's deployments — how big are they, what are they doing with the software, and so forth. There are many deployments of Nova today in the thousands-of-nodes range, and there are some that are over 10,000 or even 20,000 nodes. So that's the kind of scale we've achieved at this point.

So, back to upgrades — still on the same slide; I'll let you know when we jump. With that kind of scale, upgrades are more and more important. One thing we've been working toward is the ability to upgrade your Nova deployment without having to take any downtime of your API and control plane, and we've been doing a lot of infrastructure build-up to support that. We made a lot of progress in Havana on a couple of things in that area. One of them is the unified object model. Probably the biggest thing that has been in our way of doing these live upgrades is that with every release we have changes to the database and the database schema, so you have to take everything down to migrate your database and then bring everything back up. So we have this object model where we're working to abstract the code base away from the details of the database — one, so the code base isn't so tightly coupled to the current state of the database schema, and two, so we have a nice place in the code where we can deal with different versions of the schema as it changes underneath us. We made a huge amount of progress there. And there's also this RPC version control work. What that's about is that Nova is a very highly distributed system, like every other part of OpenStack.
We use messaging between services on multiple nodes as they talk to each other, usually over AMQP, and all of those interfaces are versioned — that was something we did a couple of releases ago. What we've added is that not only are these interfaces versioned, but we now have some control over what version things speak and how they progress through an upgrade. While your deployment has a mix of different versions running, we need more control over what everything is speaking, to make sure everything in that mixed deployment understands the messages being sent. So we have more control over that now. I'll talk a little more about upgrades later when we get to the Icehouse release.

So let's jump to the next slide. Another thing we improved in Havana is cells. Cells is optional — it's a deployment choice. It's a way of deploying Nova where you break it up into high-level clusters: you have an API cell and then compute cells, and every compute cell has its own database and its own message broker. It does a couple of things. One is scalability: as you scale up, if the limits you're hitting are the database and the message broker, you can break your compute nodes into cells and bypass that. That was one of the primary goals. But it also lets you have large compute clusters that are geographically separated under the same Nova API. We're still doing a lot of development on cells support, and Havana added some features. One of those is Cinder support — before Havana, Cinder wasn't supported in a cells deployment — and live migration is now supported between compute nodes within a cell. We also have a new cell scheduler. In a traditional Nova deployment the scheduler is a pretty fundamental component: when a request comes in to start a VM somewhere, it makes the decision of where it's going to go. In a cells deployment, scheduling happens in two stages. The first stage is cell scheduling — which cell is this request going to? — and then within the cell it does the host-level scheduling. The cell scheduler now works just like the host-level one in that it's a filter-based scheduler, and you can add filters and weights to do more advanced logic there. So that's nice and handy.

Let's jump to the next slide. The last point I wanted to make about the Havana features is improved Cinder integration. Nova makes pretty heavy use of lots of other OpenStack services, and we're always looking at new ways to enhance that integration, so we did some of that here. One thing we now support is encryption of your Cinder block storage volumes. Another is improved block device mappings: if you've ever used the Nova API for setting up block device mappings, it was previously pretty difficult to use. The syntax is nicer now, and you also have a lot more control over the bus type used to attach volumes to an instance — you can specify whether something is attached as a floppy or a CD-ROM or otherwise, for example. Another thing we made nicer is the workflow needed to boot an instance that's volume-backed, commonly referred to as boot from volume. That workflow is a lot easier now.
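As a rough example of what that easier workflow looks like, assuming the --block-device syntax from the python-novaclient of that era (the instance name, flavor, image ID, and size are placeholders):

    # Create a 10 GB bootable volume from an image and boot an instance
    # from it, all in one call to Nova.
    nova boot my-instance \
        --flavor m1.small \
        --block-device source=image,id=<image-uuid>,dest=volume,size=10,bootindex=0,shutdown=preserve

Nova asks Cinder to build the volume from the Glance image and then boots from it, instead of the operator having to do those steps by hand.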
Previously, to boot from a volume, you had to jump around between Cinder, Glance, and Nova — go to Cinder, create a volume from the image, then go to Nova and try to boot from it. It wasn't terribly straightforward, but now you can do it in a single operation from Nova, so it's a lot easier to use. So those are some Havana highlights.

Now let's jump to Icehouse — go to the next slide and we'll start talking about Icehouse. Icehouse is the release we're working on right now. It started roughly at the OpenStack Summit, and we're actually quite well on our way: we're working on the second development milestone. Icehouse is a six-month development cycle made up of a few development milestones; then we switch into release-candidate mode, and then we go to the full release. We've already released the first development milestone and started working on the second one. Just to give you a quick, high-level view of the work that's already happened: like I mentioned before, blueprints are the things we use for tracking the work, and we've got 130 of those targeted. We don't know yet how many of those, or which ones for sure, will get finished, but those are all the things that people have said they intend to complete for the Icehouse release, and you can take a look at the Launchpad link there at the bottom. We're getting close to 300 bugs already fixed, we've merged almost 700 changes, and we've already had over 6,000 code reviews on those changes, and that's just the beginning of the development cycle so far.

All right, let's jump to the next page. One big thing we're aiming for in Icehouse is, as I mentioned earlier, live upgrade support. We've been working on it for a few releases, and a lot of that has been building the necessary infrastructure; in Icehouse we're attempting to support live upgrades from Havana. A lot of that is finishing up the infrastructure we've been building. I mentioned the object model we did in Havana — there's still work to do to convert the entire code base over to it. Some of it is changing the way we do database migrations: database migrations can change the schema, and they can also be data migrations, and we're trying to do fewer data migrations at upgrade time and instead do them as live or lazy data migrations while the service is running. That will allow the offline DB schema migration to run a lot faster. We also have a lot of interface versioning between the different services, and a lot of rules around that, so we're continuing to make sure we strictly adhere to the rules we've set. And we have some infrastructure work to set up testing around this, to make sure we don't break it, because it's really easy to break this along the way if we're not careful. So hopefully — right now it looks like we will be able to support this, and that's our plan unless we hit something we can't work around, but I think we'll be able to do it.

So jump to the next one. The other thing we've been doing is another effort around code quality. We do a whole lot of continuous integration testing across OpenStack, and right now most of that, at least for Nova, is done against the libvirt driver, which is the driver used for KVM virtualization.
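To give a flavor of the interface-versioning work behind these live upgrades, here is a small, simplified Python sketch of a version-capped RPC client. This is an illustration only, not Nova's actual code — in Nova the real mechanism is version caps on its internal, versioned RPC interfaces — but it shows the rule being enforced: while old and new services coexist, nothing is allowed to send a message newer than the pinned cap.

    # Toy sketch of RPC version pinning during a rolling upgrade.
    class VersionedRpcClient(object):
        # Illustrative mapping of release names to interface versions.
        VERSION_ALIASES = {'havana': '2.0', 'icehouse': '3.0'}

        def __init__(self, transport, version_cap=None):
            self.transport = transport  # anything with a send() method
            self.version_cap = self.VERSION_ALIASES.get(version_cap, version_cap)

        def _can_send(self, version):
            if self.version_cap is None:
                return True
            want = [int(x) for x in version.split('.')]
            cap = [int(x) for x in self.version_cap.split('.')]
            return want <= cap

        def cast(self, method, version, **kwargs):
            if not self._can_send(version):
                raise RuntimeError('message version %s exceeds pinned cap %s'
                                   % (version, self.version_cap))
            self.transport.send({'method': method, 'version': version,
                                 'args': kwargs})

    # During a mixed Havana/Icehouse deployment the operator pins the cap:
    #   client = VersionedRpcClient(transport, version_cap='havana')
    #   client.cast('pause_instance', version='2.0', uuid='...')   # allowed
    #   client.cast('shiny_new_call', version='3.0', uuid='...')   # rejected
    # Once every node runs the new code, the cap is lifted and the newer
    # message versions can be used.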
That concentration on one driver makes things tough: it's great for that driver, but not as great for everything else. We don't have as much confidence in changes to any of the other drivers, because we don't have continuous integration telling us they're working. So one thing we decided this last cycle is that we want to build up continuous integration across every driver, and there's been good progress on that — for all the drivers, actually. There's been a lot of progress on XenServer, the VMware team has a system up and running now reporting test results, and other teams have made plans around working on these things. So we should have much expanded test infrastructure, which of course results in a higher-quality result for everyone.

Let's jump to the next slide. The REST API that Nova implements is at version 2 right now, and we've been working on a version 3. It's got a lot of changes; some of it is removing deprecated stuff. For example, block storage volumes used to be something built into Nova, and that was split out into the Cinder project with its own API, so we need a new version of the API to remove the stuff that's no longer necessary. There's a whole lot of consistency cleanup, both in formatting of data, the way we name things, the response codes, all that kind of stuff. And there are also some things we've been putting off to a new API version because they're such fundamental changes to the way the API works. One of those is a new tasks API. For example, if you want to go boot a new VM right now, you don't have a whole lot of really good insight into that process. Sometimes it takes a little while: it's going off and might have to download an image, it has to set up networking, and various aspects like that. So what we've been doing is a lot of restructuring of our code base to better track these long-running operations in terms of tasks and sub-tasks, and we want to expose that through the API. So when you kick off a long-running task, such as starting a virtual machine or doing a live migration, you'll have an API to get better insight into the various steps needed to make that happen: you can monitor progress, and if something fails in the middle you'll have much more detailed insight into what the failure was and what had happened. So that is actively under development.

So, next slide: one thing we're looking at. Historically, Nova has split out multiple things — Nova is a pretty massive project and various pieces get split out over time — and one thing we're looking at splitting out right now is the scheduler. As OpenStack has grown we have more and more services, there are schedulers in each of these projects, and it turns out there's a lot of overlap between them — in code, but probably more importantly in the data and information about the deployment that all these schedulers want to know about. So it would be nice if we had a common scheduler, so that we can do a lot of decision making from one service that has access to all this information. We're starting that by taking the existing scheduler code in Nova and splitting it out into its own service. That's actually well underway, and hopefully it will eventually lead to being able to do lots of more advanced cross-service scheduling. All right, next slide, just a real quick note.
I mentioned that we have all these blueprints, but that doesn't mean it's too late to show up with something new. If anyone out there has something you'd like to see in Nova, you're just as welcome and encouraged as anyone else to bring your new stuff, and there's still plenty of time. With that, I can take some questions, and the last slide there has my contact information if anyone would like to get in touch with me later.

Great. And if you don't mind looking in the chat — I can't see it right now — I see a question there: what encryption methods are supported for Cinder, and is it possible to plug in an encryption method? I don't remember off the top of my head, to be quite honest with you. I have a hard time keeping detailed context about that feature in my head because of how many features we're doing in Nova, so unfortunately I don't remember the answer to that one. Okay, that's fair. This is Rags, who asked the question — but do you know if you can plug in an encryption method, or is that, again, a detail? Part of it is that the actual encryption piece is actually over in Cinder and not Nova, so I haven't looked at that side of the code as much. A lot of the Nova support is infrastructure code and glue code making sure we can use the encryption work that was done in Cinder. Part of what is in Nova is some key management code, and that is pluggable depending on what key manager you want to use. Right now the code that's there is pretty primitive, but, for example, we expect to add a driver for Barbican, which is an emerging OpenStack project for doing key management, and I suspect there may be drivers for other things too. As far as the actual encryption methods, I'm not really sure. I know it has to be pluggable to some degree, because Cinder, just like most other projects, has lots of different storage backends and different technologies, and it has to be pretty flexible to do encryption in whatever way the backend storage appliance does. Thank you, Russell.

Let's see — Alison asks: what about support for Solaris compute nodes, since Oracle is now sponsoring OpenStack? As far as I know, I have not seen any code contributions from anyone at Oracle, so right now there's no support for that, and I can't really offer any information on potential support for it. Right — yeah, the support doesn't exist, and I don't have any timeline for it, I guess.

All right, well, if anyone else thinks of any questions later, feel free to contact me. One last question, it looks like: what about Hyper-V? Hyper-V is supported. We've had a driver there for a while, and it's under reasonably active development. There's a company in Europe — I want to make sure I don't say the wrong thing — but there's a company that works on contract with Microsoft to continue to maintain Hyper-V support, so it's there and it's supposed to work. And from what I understand, they're working on some continuous integration infrastructure, so we should see test results from that soon too.

All right, well, thank you very much for your time. Great — thank you, Russell, thank you, John, and thank you everyone for your participation today. I hope you all have a nice holiday season, and if you have any other questions, please contact Russell or John, or you can send me questions as well. And that concludes our meeting. Thank you. Thanks.