Hi, this is Shamail Tahir with the Mitaka Design Series. We're about one week after the Design Summit, and with me today I've got Sean. Sean, can you please tell us a little bit about yourself?

Sure. My name is Sean McGinnis. I work for Dell Storage in the Eden Prairie, Minnesota office, right outside Minneapolis. I was part of the original team at Compellent Technologies; we were acquired by Dell a few years ago now. I've spent the last 10-plus years working with storage, integrating with different applications and building different ways to create and delete volumes, and OpenStack was one of the projects that I got involved in and really enjoyed. I wanted the opportunity to spend more time on it and make it one of my primary focuses, and I've been having a great time since.

Awesome. So you're the Cinder project team lead. Can you tell us a little bit more about what Cinder does?

Sure. The Cinder project is the block storage service within OpenStack. In a typical OpenStack deployment, you wouldn't necessarily need Cinder. You could have just your compute nodes running, with virtual machine storage consuming local storage on the compute host. Cinder provides external, persistent block storage, so that even when you delete your virtual machine, you can spin up a new one, attach the same volume, and have that data stick around. You can also use different backends for that storage: rather than being local on a compute node, it can be on your SAN array, it can be on an NFS share, things like that (see the sketch below).

At the Design Summit that just happened in Tokyo, what were some of the hot topics that your team discussed, and what were the decisions that were reached?

There was a lot of discussion about what Cinder should actually be doing. There have been different thoughts on that, whether Cinder should really embrace the cloud-centric thinking of being, I guess, more of a commodity block storage service, where it really doesn't matter what kind of storage you have on the backend. On the other hand, there's a lot of interest in taking advantage of features from more expensive SAN arrays, especially as we go into more of the enterprise environments. I think for a long time we had a strong focus on being more of a general public cloud kind of platform. But to really be a ubiquitous cloud platform, I think we need to go all the way from that public cloud to a private on-premise deployment, and that gets you into the enterprise environment. Those enterprise customers usually have an existing storage infrastructure in place, they've spent a lot of money on it, and they have systems with advanced features that you can't just get with direct-attached storage, things like data tiering and deduplication and encryption. So one of the big topics was how we can balance that: how can we address the general-purpose need for block storage as well as the needs of these enterprise customers who want to use OpenStack but don't want to lose out on some of the value they've invested in?

Another theme was just kind of the unfinished business that we have in the project. We're a fairly mature project now; Cinder's been around for several releases. There have been different initiatives that have come and gone, and at times we would move on to other, higher-priority things that came up, maybe before we were quite finished with the previous focus. So we have some things that weren't carried through as far as they should have been.
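To make the volume lifecycle Sean describes concrete, here is a minimal sketch using python-cinderclient. The credentials, auth URL, and names are placeholders, and attaching the volume to an instance is requested through the Nova API rather than Cinder itself.

```python
# A minimal sketch of the persistent-volume lifecycle described above,
# using python-cinderclient. All credentials and the auth URL are
# placeholder values for illustration.
from cinderclient import client

cinder = client.Client(
    '2',                             # Cinder v2 API version
    'demo-user',                     # username (placeholder)
    'demo-password',                 # password (placeholder)
    'demo-project',                  # project/tenant (placeholder)
    'http://controller:5000/v2.0',   # Keystone auth URL (placeholder)
)

# Create a 10 GiB volume. The scheduler places it on one of the
# configured backends (a SAN array, an NFS share, local LVM, etc.).
vol = cinder.volumes.create(size=10, name='persistent-data')
print(vol.id, vol.status)

# The volume outlives any one VM: an instance can be deleted and a new
# one attached to the same volume later (the attach itself is requested
# through Nova, e.g. `nova volume-attach <server> <volume-id>`).
```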
Things like the object inheritance model for our drivers. It's something that an end user really doesn't know about, or shouldn't have to care about, really. But it is something that limits us to a degree in how quickly we can move forward with new features and how safe we feel making changes without breaking something. So one of the themes that I see is going back, cleaning some things up, picking things up again and finishing them, and expanding on things like our functional testing and the quality of our third-party CI, to make sure that the code is really solid and in a good state, so that we're able to keep adding new functionality, keep moving the project forward, and keep meeting the needs of the users without getting too caught up in problems that we've caused for ourselves.

Given the focus on going back and revisiting things, how would you say the effort is distributed? What are the top priorities, if you will, for Mitaka?

I think one of the big things, just because it came in really at the end of Liberty, is support for replication. There was an implementation of replication in Cinder previously. The problem was, and there was a lot of great work done, so I don't want to downplay that implementation at all, but it was implemented by a single vendor at the time, and once other vendors tried to start implementing it, we found that it wasn't designed flexibly enough to work with the varying capabilities of different storage arrays. Because of that, we started what we call replication v2, the second version of our replication support. To start with, we tried to scale it back a little bit and not do too much at once, until we understand what other arrays and other storage devices are capable of doing, before we get into some of the more advanced functionality. At the very end of Liberty we approved the spec for implementing replication v2, but there was a lot of back and forth on it, and we really weren't able to do much until the end of the cycle. So we plan on implementing that in a few drivers now and getting support for a few different storage devices before we start adding additional functionality on top.

The other thing that we need to address is our API support. We had a v1 API, we implemented v2, and the v1 API has been deprecated for a few releases now. When we tried to go and actually remove that support, we weren't able to, because we found there are still a lot of clients using the v1 API. That caused a lot of discussion about how we can move forward with adding new capabilities without being locked into supporting that API forever. Luckily for us in the Cinder project, other projects have run into this, and there's been a lot of discussion around API microversioning. The idea there is that a client can negotiate with the server for a specific version of the API, allowing the server to support multiple versions, add new functionality, and change the API, while still supporting an older version for clients that don't have the latest and greatest. That's being worked on right now, and we're hoping that it addresses a lot of our concerns and a lot of the restrictions we have right now, and allows us to move forward without being locked in, never able to change what gets implemented.
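Here is a rough sketch of the negotiation pattern Sean describes, assuming the OpenStack-API-Version request-header style being discussed in the community; the endpoint, token, and version number are illustrative placeholders, not the finalized Cinder interface.

```python
# Sketch of microversion negotiation: the client names the API version
# it wants in a request header, and the server replies using that
# version's semantics (or its base version if no header is sent).
# The endpoint, token, and version values below are placeholders.
import requests

CINDER_ENDPOINT = 'http://controller:8776/v3/demo-project-id'  # placeholder
TOKEN = 'keystone-token-here'                                  # placeholder

resp = requests.get(
    CINDER_ENDPOINT + '/volumes',
    headers={
        'X-Auth-Token': TOKEN,
        # Ask for a specific microversion; older clients simply omit
        # this header and keep getting the base version's behavior.
        'OpenStack-API-Version': 'volume 3.0',
    },
)
print(resp.status_code)
# The server reports back the version it actually used.
print(resp.headers.get('OpenStack-API-Version'))
```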
The other top priority is our ability to ensure that we have good-quality, well-functioning code, by continuing to focus on third-party CI, making sure that the results we get from the various vendor CIs tell us the quality of that backend with Cinder, and by implementing things like functional testing to increase our test coverage. We want to make sure that the different areas of the code base are getting covered, and that we have some metrics to back up the quality, so we have the confidence that once a user deploys this, they're not going to run into a situation that we haven't tried or haven't tested against.

For the listeners, what we've been trying to do is use the concept of themes, such as scalability, resiliency, manageability, modularity, and interoperability, to help connect the dots between the work being done in the projects and the direction they are heading. So what would you say is the key theme, or themes, for Cinder in Mitaka?

One of the themes is interoperability. The API microversion work that's being done is toward that end of being able to work with different versions of clients and different versions of services, without having to be in lockstep and make sure that everyone's on the exact same version for things to work the way we expect them to.

We're also looking at availability. One area I haven't discussed yet is that there's a lot of discussion around high availability and active-active deployments of the cinder-volume service. This is the service that handles controlling the storage backend. There are some ways you can get that now, but it's not really supported in Cinder; there are some gotchas. We want to work through those issues and make sure that we can support Cinder in a high-availability, active-active environment, allowing users to scale it out and have the availability they need, so that one failure within their environment doesn't cause end users of their cloud services to be locked out from provisioning new storage.

We look forward to the amazing things that Cinder will be delivering in Mitaka. Thanks again for your time.

Thank you.