Thank you. So good evening, everyone. Thank you for coming to our session, and welcome to Barcelona and the OpenStack Summit. Let's roll right into action. We are here to talk to you about OpenStack Manila in production, and think of this as a movie: we'll walk you through the scenes, and my colleagues here will be my co-directors. Starting here, my friend Sean Cohen, one of the directors, is an OpenStack product manager at Red Hat. Tom Barron here is a senior software engineer and Manila core at Red Hat. And I am Anika Suri, a technical alliance manager for OpenStack and containers at NetApp. So the scenes that you're going to help us shoot today range from the share of OpenStack deployments in production to production-grade deployments of Manila shares. We're going to talk about market trends, we're going to introduce you to our main actor, Manila, and Sean is going to help us shoot some action scenes, or use cases, as we call them. Then Tom will walk us through what Manila brings in Newton and the road to Ocata, and we'll end with a few key takeaways and questions and answers. So starting with the share of OpenStack deployments in production: did you know that 71% of OpenStack deployments today are actually in production? Based on the survey that the OpenStack Foundation released last week, that's up over 20% from what we saw last year. So this speaks to the maturity and momentum that OpenStack is gaining. And Sean, what do you see in the Red Hat stats? We actually see a similar picture in our Red Hat deployments. Something like 64% of our customers are either already running OpenStack in production or on the way to production, so it pretty much validates what we're seeing from the Foundation survey. We have close to 400 customers in those surveys, and at any given time something like 40 POCs running.
So this is a very good indication of where we are in terms of OpenStack maturity in general. And one of the questions we asked, which I think is the first time it was asked in an OpenStack survey, so this is a totally new news item for you as well, is what people are actually running in production on top of OpenStack, specifically in terms of workloads. One of the interesting facts we learned is that the picture is totally hybrid: in fact, over 60% are today running traditional applications, your Oracle, your SAP, really the old-fashioned OLTP databases, on top of OpenStack alongside the cloud-native ones. That surprised us, because we were thinking that if you made up your mind to go with OpenStack, you were on a purely cloud-native path, right? Instead we're seeing a hybrid picture, and that was an interesting fact. As you can imagine, these types of workloads have different requirements when we go to production, and today's discussion is all about production. Cloud-native applications don't care about state, because they're pretty much stateless, whereas traditional workloads rely on high availability; they rely on a shared file system to fail over their cluster, for example. So does that pretty much answer your question? Yes, thank you. What about storage? What about file storage? I'm glad you asked. IDC released a survey earlier this year showing that 65% of the storage sold today is actually file-based. So that's where Manila steps in, and that's consistent with, again, the Foundation survey, because that's like the Bible. Going back to the Foundation survey, we see Manila as the fourth largest emerging project, right next to the Magnum and Ironic projects, at 36%. So this speaks to the momentum and share that Manila is gaining in the field.
And this slide shows the most widely deployed options for Manila. If you look at the breakdown from the highest level down, about 23% of the drivers deployed today are actually NetApp drivers, followed by the generic Manila driver at about 20%. So much for stats. Let's move on to the age of cloud share services. What I want to highlight here is that open source Manila came out back in the summer of 2013. We had customers asking NetApp how they could standardize their deployments with OpenStack and shared file services, and that's what led to the inception of Manila. It was introduced first; then Microsoft Azure followed suit in about 2014, again in the summer timeframe; and then AWS, Amazon Web Services, followed with their file service in about 2015. What this goes to say is that not every day do you get to out-innovate two tech giants such as Microsoft and Amazon, and it speaks volumes about what OpenStack and open source are capable of. Yeah, but by the way, how many file system protocols do those vendors support in their public cloud offerings? That's true. So Microsoft supports only CIFS, and Amazon supports only NFS. Manila actually supports all of those, as well as HDFS, CephFS, and a few others that I'll talk about later. So how many people in this room are aware of what Manila is, or have had some acquaintance with it? Okay, nice, that's a good number of hands, 60%. So we'll get you a little acquainted with our main actor, Manila. What is Manila? It's the OpenStack shared file system as a service, and what it provides is multi-tenancy and security.
So for instance, I have an example here on the right that shows different tenants running on Red Hat OpenStack Platform. If you have, say, tenant five and tenant seven, you can set up access control lists (ACLs), secure file services, or network address ranges that isolate one tenant from another. So if you have one tenant and you only want to give him or her access to the engineering files, but not, let's say, the marketing files, and vice versa, you can do that with Manila; that's what Manila helps you achieve. And as Sean already pointed out, Manila not only supports CIFS and NFS, but also CephFS, HDFS, and a number of other file protocols. And now I'll pass it over to Sean to walk us through the action scenes. Thank you, Anika. So let's jump directly into the use cases, and we have a lot. We already covered, in a sense, the first one, which is the support for traditional enterprise applications. This, believe it or not, is still very valid in OpenStack clouds, because think about databases, and not just database as a service, which is listed here as well, but real databases. How many of you have traditional databases that basically run on a shared file system? So this is very valid and easy to serve now. The beauty of Manila, for those of you who don't know it, is that you can sometimes use your existing storage and just introduce a new service on top of it. Think how many vendors today support unified storage: in the same box, you have file and block. Most of the key vendors out there with a Manila driver and a Cinder driver basically support it out of the box. And when it comes to Red Hat Ceph, the same Ceph cluster can serve object, file, and block. So in a sense, you're introducing new services, sometimes over your existing infrastructure. That's very cool.
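The tenant-isolation idea above can be sketched in a few lines. This is a toy model, not Manila's actual implementation: the `Share`, `allow`, and `can_access` names are invented for illustration, but the principle is the same one Manila applies with IP-based access rules, where a client only reaches a share if its address falls inside an allowed range.

```python
import ipaddress

class Share:
    """Toy model of a share carrying Manila-style IP access rules."""
    def __init__(self, name):
        self.name = name
        self.allowed_networks = []

    def allow(self, cidr):
        """Add an access rule, e.g. '10.0.5.0/24' for one tenant's network."""
        self.allowed_networks.append(ipaddress.ip_network(cidr))

    def can_access(self, client_ip):
        """A client is granted access only if its IP is in an allowed range."""
        ip = ipaddress.ip_address(client_ip)
        return any(ip in net for net in self.allowed_networks)

engineering = Share("engineering-files")
engineering.allow("10.0.5.0/24")            # tenant five's network only

print(engineering.can_access("10.0.5.17"))  # True  (tenant five)
print(engineering.can_access("10.0.7.17"))  # False (tenant seven is isolated)
```

In real deployments you would express the same rule with the Manila API or CLI rather than in application code; the point is simply that access is scoped per share, per network range.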
Plus, of course, you can introduce more and more disruptive technologies as you go along. So traditional applications go along with the traditional investments you may already have in your back ends, which you can actually put to use through Manila. Going down the list: the big promise of the cloud is elasticity, right? You can now scale file shares on demand. I gave the example of databases, but think about a website with all the repositories that serve it, which can now scale on demand. If you have more traffic, say it's a high season, a sale season, and you see higher traffic on your website, Manila basically allows you to serve more file storage on demand. That's very key; scale is built into its design. So we talked about databases and servers. We also have very tight integration with Trove, as well as with Sahara, which is the big data service. But big data is not just via integration with other OpenStack services; we actually have HDFS out of the box. So as an admin in OpenStack you can go ahead and provision an HDFS file system. Think about a service catalog for storage based on file shares; this goes along very nicely with what we already have with Cinder and Swift in OpenStack. Now we're basically opening a whole new door of storage services, sometimes without even introducing new storage at the back end, because your back end already supports it. That's very cool. So we have big data, and we have Trove for database as a service. With HDFS, by the way, if you use the Ironic bare metal support that was introduced in Newton, you can actually run Manila HDFS on bare metal, which is very cool. One of the biggest use cases we're seeing, obviously, is DevOps and on-demand development build environments. Test and dev, pretty much.
We're going to zoom into it a bit later, as well as into integration with existing automation frameworks. Throughout the DevOps life cycle, some of our customers have their own tools; not everybody uses Ansible. Sometimes you can use Manila out of the box instead, replacing some of these file-system-based tools. The hybrid cloud shares use case is very basic. Believe it or not, this is how Microsoft Azure Files came into the world: they needed a way to migrate, so you can just mount a private cloud share from your premises and migrate the workload. Simple as that. And the interesting one, which you cannot bypass now, is telco NFV. Manila basically introduces another service that is relevant to service providers. A lot of these NFV vendors are introducing new services, and some of them, like video streaming, can reside on Manila shares. More interesting, going back to what I said earlier about the service catalog: it's not just about Manila. I was lying; you do need your object storage, you do need your block storage, for different purposes. But look at what some of the telcos are doing. Deutsche Telekom is a very good example: they have their email services, and the whole Deutsche Telekom email server runs on OpenStack. What they do is put the email headers and bodies on Manila shares, while the files, the attachments, if you will, which can be movies or pictures, sit in object storage, which is much more suitable. So you get to use the right storage service for each part of your service. They have one offering, an email service infrastructure, but underneath it lie different storage offerings, and maybe even different back ends at the bottom. That's very cool. Just to complete that: some of the innovations Tom is going to talk about, coming in the Newton cycle and in Ocata, actually open the door to do more hybrid migrations between shares, not just on the same back end.
You can actually do migrations of shares between different vendors, which is very cool, and we see that demand already. Think about it: we talked about scale. If I'm introducing that video streaming service and I'm running out of space, and I have a bunch of gear lying around, I just connect it to my cloud and use it as infrastructure. I cannot be picky sometimes in a cloud environment. And the last one, which you cannot bypass either, is containers: I want to use my shares for containers. One of the things you'll hear from Tom later today is some of the cool integrations we're doing with two projects in the Newton timeframe. One of them is Kolla, so you can actually use containers to deploy OpenStack services, including Manila. The other lets you support Docker volumes without doing your own direct integration: if you're a storage vendor in the audience and you want to provide your own Kubernetes driver, you don't have to go the direct path. You can simply work with Manila using Fuxi, which is another new project in the OpenStack Big Tent, and leverage existing Cinder and Manila services. With that, let's zoom into some of these use cases. I've taken out the customer names to protect the innocent, but these are common themes we're seeing with our customers today. The first one, as we said, is the need to support existing traditional workloads without rewriting them. For those of you who have not yet gone through the experience of rewriting your application stack just to run it in a cloud: it's very painful, not to mention the time it takes. It's much easier to take a two-mode approach and start by running the same workloads in OpenStack. And if you were listening to the keynotes this morning, we've introduced a lot of enterprise features into the project precisely to serve this class of workloads. So I would not ignore it; it's very key.
These specific workloads need high availability. There's a lot of work being done in this cycle and the next around availability zones in Manila. Why? Because of these workloads. What about disaster recovery? Why do we need share replication at all? Think about it: what kind of application needs share replication? If you tie it to the hybrid workloads picture I showed earlier, this is it. Manila serves both ends of the game, pretty much. The next one is what I call storage as a service. We talked about the public providers; Amazon is now fully supporting EFS, their Elastic File System, in their public offering. But using OpenStack, you can have your own public offering, or your own private one. Either way, you can build your own service catalog based on different file-based solutions, introduce it, and basically charge for it. Again, it's not all about Manila in this picture: you do need chargeback capabilities, you need monitoring. This is where things like integration with telemetry come into play, and Tom will talk about that as well. DevOps is the major one. I've added Ansible to the picture because we have a lot of customers who leverage Ansible as part of their DevOps life cycle. And shares are very easy to create and share, right? Anika talked about setting access rules between different units in your organization. But think about the development life cycle: you can create versions, clones, or snapshots of shares, instantly mount them, and provide them to another team. You can do testing. At every stage in the DevOps life cycle, you can actually use Manila. And it's very easy to self-service as a developer: I don't have to go to my IT infrastructure team to lay down my DevOps infrastructure for shares. And a big thing is basically using it all the way to production.
So DevOps is not just a movement; it's actually a way to deliver workloads in production. Our customers are not following this model just because it's fancy or cool; this is how they deliver their applications in real time in their cloud offerings. And at the end of the day, it has to be automated. It's really infrastructure as code. Going back to the earlier example I gave you, introducing a new service or a new application that you provide to your customers: you need the ability to do it all automated, right? This is why you need all the different automation tools, including database automation. So it's not just database as a service for your end customers, but making sure that the whole system is integrated and automated. With that, let's talk about production-grade deployments with Manila shares. The big news from Red Hat's perspective: we've been involved in the Manila project ever since it was born in the upstream community, and it's no secret that Red Hat is the top contributor to OpenStack in general, but we're also very involved in emerging projects, as well as new projects that we'll name later. It took us more than a few releases to productize Manila, and I'm glad to announce that at this summit we just announced our OpenStack Platform 10 version, which is going to fully support Manila out of the box, with high availability and integration into OSP director, which is our deployment tool. We also automatically lay down the Horizon dashboards and everything you need to get going. Apart from officially supporting Manila in our major distribution, we have also done the lift to start certifying all the different back ends. Tom will show you how many new drivers we have in Newton. Our job is to make sure that whatever you plug into your cloud works; this is why we introduced the certification program for OpenStack in general, and we have a similar one for Cinder.
So director can now basically deploy Manila, but we have also already been able to certify NetApp out of the box. When you deploy OSP 10, the NetApp plugin is going to be already there, fully supported to work with Data ONTAP and integrated with director, so you can deploy the Manila driver out of the box. In addition to the features you can expect from any Manila driver, these are some of the key things NetApp supports with their driver in OSP 10: the ability to expand and shrink shares. If you were with me in the previous Manila talks we gave at Red Hat Summits, we were saying these were things to come on the roadmap; I'm happy to stand on this stage just two summits later and say you can use them in production already. That's how fast the life cycle works at Red Hat. There's manage and unmanage of Manila shares, basically for importing and exporting data into Manila; snapshots, and clones from those snapshots, so you're basically creating versions of shares from snapshots. The biggest one, which is still an experimental API but which we're heavily working on, is share replication for the disaster recovery use case. And the last one is share servers, basically in addition to the no-share-server mode, for more tenant isolation. With that, I'm going to hand it over to Tom, who is going to take us all the way to Ocata. By the way, this is a picture from this week; for those who don't read Spanish, it basically says the destination is Barcelona and the exit is Ocata. So please take us to the exit. All right, thanks. Thank you, Sean, and thank you for the really cool picture for the transition. I'm going to take you backstage, as it were, following the theme here, to look at some of what we've done in Newton and where we're going. There's much more detail than I can convey, and there are a lot of people in the upstream Manila community who know a lot more about it than I do.
But what I want to convey is why it's such an exciting project to work on, on the one hand, a really cool project with a lot of R&D possibilities, and at the same time some of why we think it's production ready, where we're taking it at Red Hat in terms of production, and how Manila has, I think, made some smart decisions about how to balance both of those aspects. That's important to those of you who want to deploy, as well as to us, since we need to support customers, but it's also important for developers who want to work on something cool going forward. If you look at the numbers in the last release, there are some differences in the stats versus previous releases. There was an uptick in the number of drivers, I think from 18 to 23. There's a link down at the bottom of the page, in case any of my numbers are off, and other links to the release notes, so you can check; but the gist is correct. There were 14 blueprints completed, which is down from some previous releases. But what was new is that we had a specs process, which we'll talk about more in a little bit. There were 13 specs proposed, and a lot of them were substantive; I'm not counting a couple of things you might see in the commits, like changing the index. Five of them were accepted, and not all five are actually implemented at this point. So you see a bit of a funnel in feature development relative to where Manila was in previous releases. Now, I would argue that this is actually a sign of maturity of the product. The remaining substantive features we're working on, stuff like what Rodrigo Barbieri is doing over there, which we'll touch on a little bit, are hard problems that you're not going to solve in one release. So we see a focus, in terms of the actual commits, on bug fixes and on usability. There were 60 Manila UI commits in this last release.
There were 10 in the previous release. If you think about it, developers don't care that much about the UI; we use the CLI. But now we have customers, and customers do care about the UI, so we're paying a lot more attention to that kind of thing. So you see a shift: I think over 50% of the commits were officially bug fixes, meaning a bug was filed and we fixed it. That doesn't mean the other 50% were feature development; there are all kinds of translations, refactoring, code cleanup, and trivial fixes where you notice spelling is wrong. But these are things that reflect the maturity of the project, in the sense that we care more about usability. That said, we haven't lost the ability to do cool new stuff, as I'll show. We have release notes like other projects; I've put some of the new stuff in Newton up here. It's not complete; we needed a slide that said "New in Newton." If you really want to look at what's going on, look at the release notes linked down below for the authoritative list. But if you look at this representative list of features, we see improvements in the scheduler. We see some of Shane's work on thin provisioning showing up here, very similar to good stuff that's already been proved out in Cinder, to some extent, being brought over to Manila. We see a goodness weigher and driver filter. We see improvements in access lists and the way we use them. For us at Red Hat, we have a CephFS driver that's emerging, and we needed the ability to get access keys back when you list access rules, because we use access keys for the native CephFS driver; that way you don't have to go out of band and issue a Ceph command to get that information in order to make it work. We see a lot of improvements in network infra, port binding support. Some of it was just stuff that had been omitted that we needed to do, like handling MTUs better, but also port binding support.
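The scheduler improvements just mentioned, a driver filter plus a goodness weigher, follow a two-stage pattern: first drop the pools that cannot serve the request, then rank the survivors by a configurable score. Here is a toy sketch of that flow; the pool fields and the scoring formula are invented for illustration and are not Manila's actual filter/weigher code.

```python
def capacity_filter(pools, requested_gb, max_over_subscription=1.0):
    """Stage 1: keep only pools whose (possibly over-subscribed) free
    space can hold the requested share size."""
    return [p for p in pools
            if p["free_gb"] * max_over_subscription >= requested_gb]

def goodness_weigher(pools):
    """Stage 2: rank surviving pools. This toy score prefers free space
    and gives flash-backed pools a fixed bonus."""
    def score(p):
        return p["free_gb"] + (500 if p.get("flash") else 0)
    return sorted(pools, key=score, reverse=True)

pools = [
    {"name": "pool-a", "free_gb": 100, "flash": False},
    {"name": "pool-b", "free_gb": 80,  "flash": True},
    {"name": "pool-c", "free_gb": 10,  "flash": True},   # too small
]

candidates = capacity_filter(pools, requested_gb=50)
best = goodness_weigher(candidates)[0]
print(best["name"])  # pool-b: fits the request and scores highest
```

In the real scheduler, the score is driven by a back-end-supplied `goodness_function` expression rather than hard-coded Python, which is what lets operators tune placement without patching code.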
On the networking side, that means you don't get stuck with 4096 VLANs, and you can support VXLAN on your infra. You'll also notice, when you look at this, API changes, a lot of API changes. As a young project growing up, we realized we didn't understand exactly the long-term needs for all of these APIs. We'll talk in a moment about how we're handling API changes; no big surprises there. And then there's what we call experimental features, actually experimental APIs, and how we work with those. That ties into the theme of how we can continue to do exciting R&D where nobody is 100% sure of the answer before we begin investigating. I work for Red Hat; we're putting this in production; we're going to stand behind the product and support it. That means we have to be able to take customer calls, handle issues, and so on. So Sean mentioned the certification program. We're starting the certification program with NetApp. You saw the market share figures for that driver, and it's a company that's been around for a while; they know how to do stuff pretty well, and we're confident we can make it work. My opinion, for example, and not everybody will agree with me if you poll everyone, concerns the second driver that was listed right there: the open source driver called the generic driver. It's not something I would rush out to try to support right now, or expect to be certified right away; it more or less uses Cinder with LVM as a back end, and at Red Hat we don't take LVM as a supported, certified driver either. So we look at things from a production standpoint all the time, as well as being interested in cool new features. And Manila has, like Nova and like Cinder at this point, introduced microversions into its APIs. What that means, just as for those other projects, is that we can change the API from what it was in the past, as long as we continue to support the old behavior under an older microversion number.
So that allows us the freedom to innovate, fix, and improve as we move forward without breaking existing users. From a Red Hat standpoint, that's very important, because we need to be able to support older releases. We have customers that stay on releases; they're not upgrading every six months, let's put it that way; some of them are banks and the like that don't tend to do that. Experimental APIs: the features I've listed as experimental here have APIs that are marked experimental. Manila has done this cool thing where you put a header in the REST call that says, "I'm calling an experimental API." If you don't have it in there, the call won't work. That's basically a handshake that says we're not guaranteeing backwards compatibility, or that we're going to keep the behavior around forever for that API. We want to do work in this area, we'd like people to try it out, and we want your feedback, but we may change the API or get rid of it. So if you look at the list right now, we have consistency groups and consistency group snapshots as experimental. There are blueprints on the table right now to replace those APIs with a more general share group API; that would involve a breakage, since we would not be supporting the old API going forward. Share migration and data services are experimental too. Sean alluded to futures in which we can migrate across different back ends, across different protocols, and so on. The work to lay the foundations for that has been going on for a couple of releases, and Rodrigo has been driving it. It's considered experimental basically to give us the freedom to figure out how to do it right, to get stuff out into the field, get feedback on it, and work things out. We've already changed the migration APIs along the way in ways that are not backwards compatible. That doesn't mean these features are not usable.
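The microversion and experimental-API handshake just described boils down to two request headers. A minimal sketch of how a client would build them follows; the header names match the ones Manila documents (`X-OpenStack-Manila-API-Version` and `X-OpenStack-Manila-API-Experimental`), while the token and the helper function itself are placeholders for illustration.

```python
def manila_headers(token, microversion="2.15", experimental=False):
    """Build request headers for a Manila REST call."""
    headers = {
        "X-Auth-Token": token,
        # Pin the microversion so a later server-side API change under a
        # newer version number cannot break this client.
        "X-OpenStack-Manila-API-Version": microversion,
    }
    if experimental:
        # The explicit opt-in: without this header, calls to experimental
        # APIs (e.g. share migration) are rejected by the server.
        headers["X-OpenStack-Manila-API-Experimental"] = "True"
    return headers

h = manila_headers("my-token", microversion="2.22", experimental=True)
print(h["X-OpenStack-Manila-API-Version"])      # 2.22
print(h["X-OpenStack-Manila-API-Experimental"]) # True
```

The design choice is worth noting: requiring the experimental header makes the "no compatibility promise" contract impossible to stumble into by accident, while pinned microversions let stable clients keep working across releases.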
It just means you cannot count on them not changing on you. From a back-end point of view: for share replication, I believe we initially had only one driver that did it, NetApp. NetApp has arguably the gold standard for replication of file shares; I don't want to slight the others, but it's very solid. At this point, ZFS on Linux and Huawei have both come in. And my observation is that not everybody thinks about how one would interact with replication in exactly the same way. So as you bring more players in, you may need to shift the APIs, and maybe even the behavior to some extent, in order to arrive at an open community standard that still works and still delivers. That's why it's considered experimental. This release, I think, we had a new Docker driver, plus Tegile, Hitachi, and NexentaStor, bringing us to about 23 drivers. We also had a lot of driver fixes and a lot of driver extensions, not just fixes but extended functionality. This is a partial list. Since NetApp's here, they're at the top: they have cool hybrid aggregates, which are a combination of spinning and flash disk in the same storage aggregate, now supported, so you no longer have to declare as a capability that a pool is just flash or just spinning disk. There's manage and unmanage of snapshots. On the CephFS front, and for others too, we're doing read-only shares. And we can return the access key, as I mentioned earlier. All these things are in the release notes. The picture I want to leave you with here, though, is that it's an active, vital community, in the sense that a lot of different back ends want to get involved. We have people working actively on their drivers; they don't just put them in there and never change or develop them, and we have new players coming in.
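To make the replication discussion above concrete, here is a toy state machine for a share replica's life cycle. The state names mirror the replica states Manila reports (`out_of_sync`, `in_sync`, `active`, `error`), but the transition rules and the `Replica` class are simplified inventions for this sketch, not the project's code.

```python
# Allowed transitions in this simplified model: a replica starts out of
# sync, the back end reports it in sync once replication catches up, and
# a promote operation makes it the active side (e.g. for DR failover).
VALID = {
    "out_of_sync": {"in_sync", "error"},
    "in_sync": {"out_of_sync", "active"},   # "active" via promote
    "active": {"out_of_sync"},              # demoted old primary must resync
    "error": {"out_of_sync"},
}

class Replica:
    def __init__(self, name, state="out_of_sync"):
        self.name, self.state = name, state

    def transition(self, new_state):
        if new_state not in VALID[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state

primary = Replica("site-a", state="active")
dr_copy = Replica("site-b")
dr_copy.transition("in_sync")      # back end reports replication caught up
dr_copy.transition("active")       # promote for disaster recovery
primary.transition("out_of_sync")  # old primary must resync before reuse
print(dr_copy.state)  # active
```

Part of why the API stayed experimental is exactly what a sketch like this hides: different back ends disagree on which transitions are cheap, which are destructive, and what "in sync" should promise.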
From a distributor standpoint, that's great, because it means the project is healthy and vital. It's also a challenge in terms of what you're going to support, stand behind, and work with customers on. It's a certification program, as we have for Cinder, for example, and we're leading that with NetApp. Not all the work on Manila, as I found out in this last release, is within the Manila project itself. In fact, probably much of what I did in this last release involved working with other projects, coordinating, educating, and figuring stuff out to make it work as a product for us to ship, as opposed to just a cool upstream project with all kinds of stuff going on. Are we running short on time? Just checking. The TripleO project is the upstream project from which we build our director product. TripleO went through major foundational changes in this release to do what are called composable roles, composable or custom roles. We not only got the ability to deploy NetApp via TripleO; we did it using that new infrastructure, instead of just copying the way something had been done for Cinder. Over time, that will enable us to deploy services that traditionally lived on controller nodes or on compute nodes, where you would run compute nodes or storage nodes, in a much more flexible way: you can basically scale out any number of services onto a new role that you define. Customers wanted it. We did that for NetApp, and we did it for our native CephFS driver as well, because that's our driver and we wanted to do it the same way, so that it's one-to-many: we didn't just do it for one; we can roll this out for other drivers. Ramana Raja worked with the Ganesha project during this release, because we had an emerging need from the Docker driver: they reported that we really didn't have the ability to update dynamic export lists well.
So we landed a fix; it hasn't yet been used in Manila, but we intend to use it for CephFS in any case, when we build an NFS gateway in front of CephFS using Ganesha. Marc Koderer and Daniel Mellado are in the Tempest community, and Marc is very active in Manila. The Tempest library has been building stable interfaces in the background that Manila and other projects can use, instead of calling into Tempest directly. That's a big deal for us as a downstream project, because we need to be able to ship the same Tempest that's upstream, whereas in the Manila gate right now our CI uses pinned commits, which are not what gets shipped. The overall aim is to improve CI stability anyway; it's a good thing, and there's a lot of work going on on that front outside the project. Security vulnerability management: there was a discussion within the Manila community toward the end of the Newton release about whether we should pursue a tag that's given to OpenStack projects, the vulnerability-managed tag. All the traditional projects got it for free at this point, though they might lose it someday if they don't keep up to snuff. While that discussion was going on, interestingly, downstream I got a notice saying there's a security vulnerability in Manila. It was embargoed, meaning it hadn't gone public: you get time to fix it downstream and get everything ready before anybody hears what the issue actually is. It was a cross-site scripting issue in the Manila UI, not the biggest deal in the world, but it put us through the paces of that process. We patched our version of Manila in OpenStack Platform 7, 8, 9, and 10, which corresponds to Newton, so that it could get patched upstream, because it was prior to the release date. And going through the whole process gave me the opportunity to ask: hey, the Manila community is talking about this, and there aren't that many resources to work on it.
We're not sure we want to be pursuing this starting in Ocata. What's Red Hat's position on this? That conversation went up to Mark McLoughlin, actually, and I got, at several levels, "hell, yes." This is vital to our business; we support enterprise customers and we really care about downstream distributions. This embargo process from upstream into the Red Hat internal security team works very well. You need to be doing that, and if there aren't enough resources to chase it and support it in Manila, raise your hand and say we'll do it. So we're meeting with the security team here later this week to get the ball rolling on that. We've talked about it internally within the Manila team, and their attitude was: yeah, if you want to work on it, go ahead; we don't have time. So we are going to be working on that. I've got the Barbican example on here. Barbican went through this process. There are about five things they had to do, and probably four out of the five we're doing already, but there's a security survey and audit, which is the challenging part. The good news is that Barbican went through that process in Newton, so we're going to learn from what they did and be able to leverage it as a model, because how you get started is a real question; that's a big subject. So we'll have a model to work from there. As a downstream distribution, we really need monitoring, troubleshooting, and telemetry for billing and so on, as we talked about. So we are, or will be, working with communities other than Manila on that front; Ceilometer and Monasca are the obvious examples. But a short-term thing that we've started working on, which I would love us to continue because it's very practical, and which is also shared with Cinder right now as a project, is asynchronous user messages.
Today, if a create goes wrong asynchronously, you often just see that it went wrong; you as an end user don't really know why, and you have to call your cloud administrator. This is a way to get messages back to the end user that tell them what's going on. I don't know if the name Alex means anything here, but he got it going in Cinder and in Manila. That's a short-term thing in this category that we can do. The Kolla project in Newton added support for deploying Manila, running the Manila services containerized. We need to understand how that works and how containers are going to play in TripleO going forward. I need to move along. High availability architecture: I wanted to cover this quickly. We deploy TripleO today, or OSP director, so that the API service and scheduler service run arbitrarily on any combination of nodes, because they all run active-active. That works very well with the composable roles model I talked about earlier. But we need to run manila-share, just like cinder-volume, under Pacemaker control. We want to move to a model where, over time, manila-share moves from the left side of this picture over to the right side. People care about that upstream, and we care about it very much downstream as well. I'll focus on what's beyond. Well, the good news is that whatever I say here isn't up to me; as far as what we're going to focus on, I'm telling you what I'm interested in, Tom Barron's opinions about what we ought to be doing. We have a proposal from PTL Ben Swartzlander for a new spec process in which, instead of having 13 specs that we work on, merging only five and implementing only a few, we figure out as a community earlier in the process which ones we're going to focus on in a particular release. And I want, and I believe a lot of us want, to begin the journey to more scalable active-active services, given the picture I just showed.
But the first thing, in my opinion, and I think in a lot of people's opinion in our community, is that we need a consistent model for avoiding races in the first place, even on a single node. If we get that model, then we can talk about how to scale out services. There are design sessions on this later this week. Okay, high availability and disaster recovery: you can see from Sean's talk that this is very important to Red Hat. This is the domain of replication, backup, and snapshots. We need, and this is partly my perception, a common notion of availability zones for storage; I don't think we really have that common notion yet, and that's driving a lot of the tension in our designs. Finally, the migration and data-service work that Rodrigo is championing is going to be a multi-release effort, unless I'm surprised when it all comes in, and it's going to yield a whole lot of value to us. We care about it; we would love to be able to migrate from GlusterFS over to CephFS and things like that. That's in the future, but the foundations are being laid now. I'm running long, so let me turn this over to Anika for some key takeaways. Just one note regarding containers: Kolla, and there's a link there, already has the ability to containerize Manila today as part of the deployment, so there are guides upstream you can use already. Another interesting containers-related project in this cycle is basically leveraging Docker with persistent storage using Manila shares; there's a project called Fuxi, another new one you can check out, and that's also going to be delivered during the Ocata release. Overall, I think Ocata is going to have a container theme in terms of the new innovations we're driving into Manila. With that, I want to talk about some key takeaways.
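To make the single-node race-avoidance point concrete, here's a minimal sketch of what a consistent per-resource locking model can look like. This is purely illustrative: real OpenStack services use libraries like oslo.concurrency and tooz for this, and the function names and states below are my own invention, not Manila's actual code.

```python
import threading
from collections import defaultdict

# Sketch of a consistent single-node locking model: every state transition
# on a share goes through one per-resource lock, so two workers on the same
# node cannot race each other on the same share. All names are illustrative.

_locks = defaultdict(threading.Lock)   # one lock per resource id
_lock_guard = threading.Lock()         # protects the lock table itself


def resource_lock(resource_id):
    """Return the lock for a resource, creating it lazily and safely."""
    with _lock_guard:
        return _locks[resource_id]


def transition(shares, share_id, expected, new_state):
    """Atomically move a share from `expected` to `new_state`.

    The check-then-set happens entirely under the per-resource lock,
    so a concurrent caller sees either the old state or the new one,
    never a half-applied transition.
    """
    with resource_lock(share_id):
        if shares.get(share_id) != expected:
            raise ValueError(f"{share_id} is not in state {expected!r}")
        shares[share_id] = new_state
```

The design point is that once every state change funnels through one agreed locking primitive on a node, swapping that primitive for a distributed lock manager later is what enables scaling the same service out active-active.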
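And to illustrate the asynchronous user-messages idea mentioned earlier: the sketch below shows the kind of client-side handling such messages enable, where an end user can read why an async operation failed instead of calling the cloud administrator. The record shape here (`message_level`, `resource_uuid`, `user_message`, and so on) is an assumption loosely modeled on the Cinder user-messages work, not a final Manila API.

```python
# Hedged sketch: turning asynchronous "user message" records, like the ones
# returned after a failed async operation, into readable failure summaries.
# Field names are assumptions modeled on the Cinder user-messages feature.

def failure_summaries(messages, resource_id=None):
    """Return human-readable summaries of ERROR messages, newest first.

    `messages` is a list of dicts as a messages API might return them;
    `resource_id` optionally narrows the result to one share.
    """
    selected = [
        m for m in messages
        if m.get("message_level") == "ERROR"
        and (resource_id is None or m.get("resource_uuid") == resource_id)
    ]
    # Newest failures first, assuming ISO-8601 timestamps sort lexically.
    selected.sort(key=lambda m: m.get("created_at", ""), reverse=True)
    return [
        f"{m.get('resource_type', 'resource')} {m.get('resource_uuid', '?')}: "
        f"{m.get('user_message', 'unknown failure')}"
        for m in selected
    ]
```

With something like this, a failed share create can surface "No valid host was found" directly to the end user rather than leaving them with only an `error` status.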
Just summarizing the key takeaways that we wanted to leave you with: why NetApp and Manila for OpenStack. As my esteemed colleagues here highlighted, there's been ongoing development work, NetApp has been involved, and NetApp has partnered with Red Hat every step of the way. I want you to remember that the NetApp Manila drivers let you manage and maintain control of your data across cloud environments. We have an OpenStack portfolio that includes ONTAP, which is our flagship product, but also SolidFire, StorageGRID, AltaVault, and OnCommand Insight, and all of these products have OpenStack projects that they map to. It's a portfolio through which we offer cost-effective storage with really good performance, and I'll let Sean cover the rest. I think by now you know why Red Hat. As you've seen, we're very active in the community in two ways. One is driving innovation; I just talked about containers and what we're doing in new related projects, as an example. But there's also the boring stuff. I think Tom put it very nicely when he talked about the security aspects, or high availability, and getting real disaster recovery capabilities into the project. We're doing all the boring stuff so that you as customers can actually run this, and you have someone to call if you have issues. That's what we do as a business, and I think we're doing it pretty well. When you combine the two together, as you saw, we have a very tight integration with NetApp. OSP 10 is coming up, with NetApp driver integration out of the box as well as Director integration. We basically allow customers to run a lot of the use cases we touched on earlier in production already today. This is no longer an emerging project from my perspective. This is a project that, as you know, went through the incubation process and was incubated.
At some point we moved to the Big Tent, where the responsibility for graduating services basically falls to the distributions. So here I am, on behalf of a major distribution for OpenStack, saying Manila is production grade and ready. That's the main takeaway, if you will, for today. There are some links; that slide is going to be available later, so you can download the latest on how to deploy it in production. On related sessions: we didn't have time to do the demos today, but the good news is we have the Manila update session on Thursday, where our colleagues will go into more detail about this cycle, as well as show some cool demos. I want to thank Anika and Tom again for joining me today, and thank you for surviving the last session of the day. Enjoy the rest of the summit. Thank you. Thank you. We'll take questions now. We've just been told that you have to use the mic to ask us any questions, if anyone has any. We'll be around afterwards in case you want to ask.