Hey guys, let's go ahead and get started. So thanks for coming. Today we're going to have a session about a customer story where Thomson Reuters has deployed Manila, and just share with you some of the things that they did, what they were looking to do, and how that came together, and talk a little bit about Manila. Who here is familiar with Manila? Anyone? Everyone? A lot of people? OK, so when we talk about Manila, I won't go into a whole lot of detail, but I'll give you some of the key points and key concepts about it. So, a quick agenda. Again, Justin is from Thomson Reuters. I'm Greg Lockmiller. I'm with NetApp. I'm a technical marketing engineer, and I've been with NetApp for nine years, working in various roles from the application space to the storage space. So my career has taken me from architecture to rack-and-stack and storage design, to building out NetApp gear, to working with customers to sit with them, listen, interpret what they want to do, and build out designs. So what we want to talk about today, real quick, is the challenges that Thomson Reuters encountered when they were going to the private cloud, specifically around their NAS investment, what they did with NFS and NAS, how they wanted to manage that within their private cloud, and what the objectives and requirements of that cloud were. Justin will talk to it. I don't want to steal his thunder, but they had some very specific requirements and objectives that they wanted to accomplish in bringing the NAS part into their private cloud with OpenStack. I'll talk a little bit about what Manila is and then turn it back over to Justin. He'll talk about some of the benefits and things that occurred for them and how it worked well for Thomson Reuters. So I'll go ahead and turn it over to Justin right now and he can introduce himself and tell us a little bit about who Thomson Reuters is. All right, well, thank you, Greg.
So first, how many of you have heard of Thomson Reuters? So it looks like a lot of hands. So this is just something I took from our website: Thomson Reuters, the world's leading source of intelligent information for businesses and professionals. So we serve a number of industries. We serve the finance industry, the legal industry, the tax industry, and it's about providing information to those businesses through a lot of software products that we've written. And so we've been looking at what's happening out there with OpenStack and with cloud so that we can continue to deliver those products for our businesses. As Greg already said, I'm Justin Deppman. I'm an infrastructure architect with Thomson Reuters. I've been dabbling with OpenStack for about two years, and for almost a year we've had a deployment with users on it who are building the products we deliver to our customers. So, oops, I'm sorry. No, it's okay. So Justin, tell us a little bit about the challenges that you guys were trying to solve. What were some of the challenges as you worked within the IT organization providing benefits to your business units, specifically around OpenStack, the private cloud, and NAS? Yeah, no, absolutely. Thanks, Greg. So as I already mentioned, we've got an OpenStack cloud, and even before that we'd been using a different cloud platform, and we've been enabling our business units to more quickly do development and testing to get the products out there for their end customers. But one of the challenges that we've had, and I think a lot of enterprises have, is how do you give the customers or your business units what they want in the timeframe in which they want it? So we had looked to cloud to help solve that for us by building an internal private cloud. We're able to spin up compute instances, provide the customers with the storage they need, provide them with the networking they need, but there are still some gaps that we have in that space.
So specifically what we're gonna be talking about today is Manila, and essentially NAS as a service. So we have a pretty sizable investment in using NAS storage in our environment. A lot of our business units are accustomed to being able to share files across compute instances seamlessly, and that was one of the gaps that we had in our private cloud deployment. And so with that and a number of other things, we're constantly looking at how we bring in some of these other services that we traditionally provide in our data center and make them more self-service, so that the customers are able to provision those resources as they need them. As we're looking at this, this just becomes one of the tools that the business units have in order to be able to more quickly do development and testing and really get their products out there in front of their customers, who are the ones who ultimately pay us money. So in terms of the challenges, what type of objectives did you need for this solution? What were some of the key requirements and objectives that you guys really wanted to have that would help you out? So I think one of the challenges I see as an enterprise, when I start bringing in OpenStack or any new technology, is that it brings in a lot of new things to groups that are accustomed to supporting the infrastructure we already have. And in looking at bringing in the NAS as a service, one of the things we wanted to do was reuse the sizable investment we already had in our NAS infrastructure. So we wanted to be able to self-service or provision NAS for customers, essentially, when they needed it, but we didn't want to bring in anything that was substantially different from what we were already accustomed to running.
So what we ended up doing is we ended up looking at what's out there, worked with NetApp a little bit, and figured out, okay, how can we reuse some of the knowledge that we already have using the NetApp Workflow Automator, which is essentially a provisioning tool to provision on NetApp filers, but make it such that we can really decouple the interaction between the end customer and the end result. That way we were able to leverage the identity within Keystone, we're able to set quotas, we're able to allow the users to use that well-defined interface into Manila with standard APIs, and really decouple it from the underlying infrastructure on which they're provisioning. So for the identity pieces, the permissioning, we didn't have to worry about how to deal with that. OpenStack takes care of that, but behind the scenes we were able to look at using some of the infrastructure that we already had on the floor to do the provisioning. Awesome. So for their integration, they use Manila and they also use WFA. What I'd like to do is share a little bit about what Manila is. But first, how many people here have NFS within their infrastructure outside of the cloud? Who uses NFS? In a lot of talks that we give and a lot of presentations, and with customers that I go to, they always say, well, I've got all this NFS and CIFS and I'd like to bring that into the private cloud. And one of the things that, if you notice here, IDC did a survey or study back in 2012, and I know that's ancient in terms of technology, that's probably a generation old to us, so to speak, being in 2012. But 65% of the storage delivered and sold in their survey was file-based storage. And so that was one of the key factors that drove the community to say, you know, OpenStack technical community, we need this. And they agreed, and thus started Manila. So that was kind of the beginnings of Manila and how it came to be.
So a little bit more about Manila: you know, what is it? Think of it this way: what Cinder is for block storage in OpenStack, Manila is for file shares in OpenStack. It is, in previous terminology, an incubated project as of August of 2014, but it provides exactly what Justin was describing: multi-tenant, secure file share services, where an end user can have quotas, can create their own shares, be it NFS or CIFS, and provide different access abilities to those devices, to those file systems, rather. And it allows those shares to be available to VMs or bare metal, and we've talked to a few folks that wanted to use OpenStack to provision it and provide access to it, and they'll have bare metal environments that will mount those NFS or CIFS shares up. So that's what Manila is. Again, it's NFS and CIFS, and it provides the ability for multi-tenant and secure sharing activities. So a few key concepts. I find that sometimes when we talk about Manila, we throw some new terms out there that may not be familiar, so I'll be brief here, but as you guys learn more about Manila or hear more about it or start using it, think about a share as the NFS entity or the CIFS entity, right? It can be provisioned on any storage, whatever driver is supported with Manila, and we'll talk about that later; the driver creates that particular object on the storage. So here, for us at NetApp, that's a FlexVol, and it creates a FlexVol on the storage, and then it provides that share and you create an export policy for it, for the VM or the bare metal. You can also define who has access to it, and so with NFS you can do it with a CIDR format or a specific IP, and you can deny access too. So maybe I create a share, and I don't want Justin's box to see it, so I can deny access to that, but maybe open it up to a range within a CIDR format as well. So you've got the security of who can actually mount that share on their host.
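To make that access-rule idea concrete, here's a small sketch of the behavior just described: allow rules can be a specific IP or a CIDR range, and denying one host can override a broader allow. This is illustrative only, it mimics the effect and is not Manila code.

```python
import ipaddress

def has_access(client_ip, allow_rules, deny_rules):
    """Evaluate NFS-style access rules for one client address."""
    ip = ipaddress.ip_address(client_ip)
    # A matching deny rule wins, e.g. "don't let Justin's box see it".
    if any(ip in ipaddress.ip_network(rule) for rule in deny_rules):
        return False
    # Otherwise any matching allow rule (specific IP or CIDR) grants access.
    return any(ip in ipaddress.ip_network(rule) for rule in allow_rules)

# Open the share to a /24, but deny one specific host inside that range.
print(has_access("10.0.0.5", ["10.0.0.0/24"], ["10.0.0.9/32"]))  # True
print(has_access("10.0.0.9", ["10.0.0.0/24"], ["10.0.0.9/32"]))  # False
```

The default is deny: a client that matches no allow rule gets nothing, which is the multi-tenant security property the share-level rules provide.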
And then finally, there's the share network. Initially Manila was focused on Neutron, and we're gonna talk a little bit later, in a couple of slides, about some of the changes in Kilo around Manila and what it can do from a network perspective. But it creates a share network; it's based upon Neutron for some of the earlier releases, and it still uses Neutron, and we'll talk a little bit about some of the deployment options later. But effectively what you can do is define a Neutron network, provide that as a share network within Manila, and then that layer 2/layer 3 connectivity is provided that way, and then you have, again, another layer of security for those shares within the networking space. Also within Manila, you can provide security services. And so when I talk about that, think about CIFS, LDAP, things like that. So if I'm using the CIFS protocol, I can do LDAP and provide access rules that way by user accounts, as well as Kerberos, if you wanted to use Kerberos or things like that. So there are some other types of security services available as well with Manila. And then snapshots: just like Cinder, Manila has snapshots. And what we do with the snapshots in the NetApp driver is a Manila snapshot creates a NetApp Snapshot. And so for those that are familiar with the NetApp storage, it's a snapshot on the storage. And with the other drivers, it's all comparable to what they do within their storage and their infrastructure. So you have the capability of creating snapshots, and of creating shares from snapshots too. And then finally, real quick, a backend. Very much it's a provider of the shares. Think of it as an NFS server that's providing up the capability, or storage providing capability to an NFS server. Those shares can reside on a single backend or you can have multiple backends defined there too. And then finally, the driver. Multiple drivers are out there now.
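To make the backend and driver concepts concrete, here's roughly what a multi-backend manila.conf might look like with the NetApp driver. Option names follow the Kilo-era driver documentation; the hostnames and credentials are placeholders, not values from this deployment.

```ini
[DEFAULT]
# Each enabled backend gets its own stanza below.
enabled_share_backends = netapp_backend1,netapp_backend2

[netapp_backend1]
share_backend_name = NETAPP1
share_driver = manila.share.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
# Placeholder cluster endpoint and credentials:
netapp_server_hostname = cluster1.example.com
netapp_login = admin
netapp_password = secret

[netapp_backend2]
share_backend_name = NETAPP2
share_driver = manila.share.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_server_hostname = cluster2.example.com
netapp_login = admin
netapp_password = secret
```

Each stanza becomes a backend the scheduler can place shares on, and the share_backend_name is what a share type's extra specs can match against.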
With the release of Kilo, we saw nine additional drivers as part of the stable Kilo branch. So we'll share those with you real quick too. So these are some of the drivers. Over the past couple of years, since Icehouse, you were able to pull down Manila and you'd have an EMC driver and a NetApp driver as well as a GlusterFS driver. But with the release of Kilo this past April, end of April and May, these are all the additional drivers you can get: Oracle's ZFS Storage Appliance, HDS, HDFS — I thought that was kind of interesting, so a Hadoop file system is available too — GPFS from IBM, HP for some of their NAS devices, and a couple of others that you can see here. So again, it's growing. A lot of companies and a lot of individuals are contributing as well as writing drivers for NAS devices. And I think what really brings credibility to the use of NFS within the cloud — has anybody heard what AWS has announced over the past month, past few weeks? Has anybody heard of EFS? So AWS came out and said, hey, gosh, we've got an elastic file system. So they saw the need for NFS within the AWS cloud. So again, Manila's been around for two years, and we as a community went to the technical community and said, there's a need, there's a business and technical gap there. And again, you can see that all the drivers that have been brought forward in the past six months add credibility to that. So real quick, I'll talk a little bit about a brief architecture, a logical architecture, and hand it back over to Justin to talk about their solution. So just like Cinder, for the most part, Manila is absolutely not in the data path, and you have an API server. So if you wanna integrate with it, you most certainly can integrate with the Manila APIs. You don't have to use Horizon. You don't have to use the command line. There is integration, and we're gonna talk a little bit about that, what we did for the solution for Thomson Reuters. So then you have the scheduler.
It's gonna decide, where do I wanna place it? You have multiple backends. You can have pools of storage and you can weight those, as well as you can have share types. Just like in Cinder you have volume types, in Manila you have share types; you have extra specs, and you can define some criteria and attributes about where you want the scheduler to place things. And then finally, you have the Manila shares, and so based on your drivers and how you configured your manila.conf, that's the share process. So what I'd like to do now, again, just real brief, not a whole lot of time there, but I wanna turn it back over to Justin to talk a little bit about their solution. Let him share with us about the solution and what was the best fit for Thomson Reuters, how they use Manila, how they use WFA. All right, thanks, Greg. So kind of looking at what we did for our environment, we've got some graphics up there. You can see the OpenStack deployment is represented by that little cloud on the left-hand side with the Manila service running in there. That's kind of the standard OpenStack pieces. That's the user interaction with the share service. The Manila service is then using a REST API to call into the NetApp Workflow Automator, which then actually orchestrates the share creation on our NetApp filers. I think the really cool thing when you look at this picture, though, is on the right-hand side, where we've got WFA and you've got the filers — that's the sphere of traditional IT. So that's where you would usually have a storage admin or someone else in the data center organization actually doing that provisioning work.
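The scheduler behavior described above — share types carrying extra specs, backends reporting capabilities and capacity, filter then weigh — can be sketched in a few lines. This is a toy illustration of the idea, not Manila's actual scheduler code; the backend names and capability keys are made up for the example.

```python
def pick_backend(backends, extra_specs, requested_gb):
    """Filter backends by capacity and extra specs, then weigh by free space."""
    candidates = [
        b for b in backends
        if b["free_capacity_gb"] >= requested_gb
        and all(b["capabilities"].get(k) == v for k, v in extra_specs.items())
    ]
    # Weigh: here, simply prefer the backend with the most free capacity.
    candidates.sort(key=lambda b: b["free_capacity_gb"], reverse=True)
    return candidates[0]["name"] if candidates else None

backends = [
    {"name": "netapp1", "free_capacity_gb": 500,
     "capabilities": {"thin_provisioning": True}},
    {"name": "netapp2", "free_capacity_gb": 800,
     "capabilities": {"thin_provisioning": False}},
]
# A share type requiring thin provisioning rules out netapp2 even
# though it has more free capacity.
print(pick_backend(backends, {"thin_provisioning": True}, 100))  # netapp1
```

The real scheduler has pluggable filters and weighers, but the shape of the decision is the same: extra specs narrow the field, weighing picks the winner.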
There's no mechanism to easily expose that to my end users through a standard interface, and the beauty of all this is we're able to take the OpenStack tenants and users that we already have for all of our other services, identify the user through Keystone, and through quotas within Manila, be able to permission the tenants to a certain number of shares with a certain number of gigabytes. And through that standard OpenStack interface, through the Manila service, they're able to make an API call, which then gets translated through that driver to the underlying infrastructure to actually call out to WFA, and the end result is they end up with a volume created on the filer that they're able to then do other operations on through the Manila service. So they're able to say they wanna create a snapshot, say they wanna permission it to all their VMs. I think this provides a ton of flexibility, and it makes the end users a lot more empowered to get what they need very quickly. We've seen a lot of challenges in that we've got split infrastructure between what we've done in our traditional space and what we've done in our OpenStack cloud, and when I look at what we have in our OpenStack cloud, users have gotten accustomed to being able to self-service and provision things, and as we talked about earlier, we kinda had a gap in being able to self-provision their NAS or even be able to permission their NAS. Now, through this interface, through the Manila API, the users are able to create a NAS, they're able to then permission their NAS to their VMs, they're able to remove the permission if they want. We've given them the control, where typically, if they wanted to get a cloud resource accessing one of the NASs that was managed by the data center, they might have to wait a period of time, a couple of days maybe, to get permission, whereas now they can essentially have that in minutes. So this is really, I think this is really exciting for the organization.
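The flow Justin describes — a standard API call on the OpenStack side, translated by the driver into a call out to WFA — can be pictured with a small sketch. The share body below follows the field names of Manila's share-create API; the WFA-side mapping is purely hypothetical, since real WFA workflows define their own site-specific input names.

```python
import json

def manila_create_share_body(proto, size_gb, name):
    """Request body a client sends to the Manila API to create a share."""
    return {"share": {"share_proto": proto, "size": size_gb, "name": name}}

def hypothetical_wfa_input(share):
    # Illustrative only: a deployment's WFA workflow would define its
    # own input parameters; these names are invented for the sketch.
    return {"volName": share["name"], "sizeGB": share["size"]}

body = manila_create_share_body("NFS", 10, "bu_dev_share")
print(json.dumps(hypothetical_wfa_input(body["share"])))
```

The point of the indirection is exactly what Justin says: the user only ever sees the first shape, while the second shape, and the filer it lands on, stay inside traditional IT.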
We use a lot of NAS, and we've got a lot of business units that have built applications where they expect to have a shared file system, and early on, as we started working on our initial cloud solution that we did a couple of years ago, and then more recently with OpenStack, there's always been the question of, where's the NAS? You've given me a VM, you've given me block storage, but I need this shared storage. And through this solution we're able to leverage what we've already got on the floor but put a self-service interface in front of it such that the users can do what they need. So yeah, to add on to what Justin said, for this particular solution, we have different modes of the driver. We have what we call a direct driver. So very traditionally, maybe what you would download off GitHub or get with your distribution is a direct driver, which communicates directly with your storage array, whichever one you use behind it, or, if you wanna do what they call the reference implementation, it creates a VM and makes some storage available. So that's the direct method: it talks directly to the storage. And this is an indirect method, where we use WFA as an intermediary, and primarily, to Justin's point, because they needed to reuse and not reinvent. They had to be able to continue to use their investment in their tools as well as what they had on the floor. So this felt like a nice fit, having an indirect method where they're familiar with WFA. They have a ton of investment in the development and deployment of it. Not everyone has that, right? But that's why I wanted to point out that this is one method, the indirect method, and it fit very well for Thomson Reuters and the teams that Justin works with. And then there's the direct method too, and I'll share a little bit there. So we have, again, the direct and indirect methods.
Like I said, direct provides simplicity, and with Kilo you've got some different capabilities there for deployment, be it with multi-SVM or single-SVM. And what that really means is, do I wanna have Manila handle my storage virtual machines, or do I wanna pre-configure a storage virtual machine for Manila to use? And that's all part of the direct mode. You can use the different network plug-ins: it can be Neutron, Nova Network, or even standalone too. So those are a lot of the features that came out with Kilo that can make it flexible for when you wanna deploy Manila — how do you wanna do Manila within your infrastructure? I'm not up here trying to talk about what NetApp's done; it's more about what the community has done for Manila, because with the changes the community has made in Manila for this driver, and in Kilo as a whole, it really provides flexibility with those direct drivers. Now from a NetApp perspective, obviously we have a Manila cDOT driver, a clustered Data ONTAP driver. We also have the WFA driver, the indirect driver. So if you do use WFA, the Workflow Automator, and you don't wanna give that up and you wanna keep using what you've architected for automation, then you have the capability of integrating that with OpenStack through REST API calls. So those are some of the things: again, simplicity with direct, and flexibility with WFA, and kinda not re-architecting or redefining your standards and your architecture — the ability to continue business as you're used to, but be able to insert this type of activity so you can still use the cloud within your infrastructure. So one more thing: we've talked about the requirements, the objectives, and the challenges for Thomson Reuters, and Justin's shared with us a lot of information there, but what I'd like for Justin to share with us too is, what were the benefits?
He's kinda touched on some of those benefits as we've talked, but what kinds of things really provided a benefit for them, and how did it work well for Thomson Reuters — being able to go down that path, looking at the challenges, what the solution needed, and then finally doing an indirect driver, and what that meant for their business? Okay, so: why Manila for OpenStack clouds. I think the key thing when we look at OpenStack within our organization — and I've heard this term used a lot this week — is the abstraction piece. So being able to have a service with a well-defined set of APIs that users are able to interact with, and decouple that from the underlying infrastructure, really empowers the users to start doing a lot more things. And by having all of our users able to use OpenStack, as we continue to build up our catalog of services, we're starting to have more and more things that they're able to self-service and provision themselves without having to come to the data center for their needs. You know, another thing with this, and this has been mentioned a couple of times already, is, you know, the... Oh, sorry. He changed my slide, I mean. Just the investment that we've got in our filer infrastructure. What we're already used to operating, we're able to reuse by using the drivers that Greg talked a little bit about, to really make it such that the users are able to request a share, but underneath, the driver is able to reuse the existing infrastructure that we have in WFA and the NetApp filers that we already know how to run. I've touched already on kind of the self-management, the DevOps thing, the ability for users to get what they need in order to do the work that they need to do.
And, you know, this just really is allowing our users to continue to self-service things that they're accustomed to coming to the data center for. We really had a disconnect between what we offered in cloud services and what we offered in traditional IT, where they're accustomed to getting things very quickly in our private cloud, and then there are other things that took them longer in our traditional infrastructure. By bringing some of those additional services into our private cloud, we're able to deliver a richer experience and have all those services that they're accustomed to, or more of the services they're accustomed to, in our private cloud. So can you share with us a couple more benefits? I know we've hit quite a few of these, but again, around not necessarily reinventing but reusing, and some of the self-service, and bringing the traditional IT services back into their private cloud. Yeah, no, absolutely, Greg. So, standardization: being able to have consistent workflows that are consistently called, so we're provisioning the same thing every time and we know what's being created out there in our private cloud. It also helps reduce some of the kind of confusion that we've had as we've added export permissions to our traditional NAS, where the storage team might be adding an IP address to a traditional NAS that we have for an export permission, and then somebody else coming along can't figure out what it is. So by having this interface, where we're able to have the users interact with the API and provision the NAS and then self-service provision the permissions as well, we've standardized how that happens. Again, I've already touched on this a little bit, but increasing efficiency: we've made it much quicker and much easier for the users to provision the resources they need as well as permission the resources they need.
So if they add additional compute instances to what they're running, or they destroy and recreate and get a different IP, they're able to very easily interact with the Manila service, change the export permissions for their NAS, and mount their NAS on their instances without having to wait for a human to go and change the export permissions on a NAS share that exists in our traditional space. Achieving fast, reliable, customized storage deployment: so I think I've already touched on this a little bit, but we're now creating the NAS shares in a consistent, repeatable manner by having automation actually going out and creating the volumes. And again, some of the best practices stuff — I think that comes down to consistency, such that when the shares are created for the user, they're created in a consistent manner. We've got a little bit of orchestration happening that's making some placement decisions to determine which filer to place it on, and that allows it to be done in a consistent manner. And I think the key to all of this — I mean, a lot of the technology is really cool, but I think the key to it is really being able to provide the services to the end users that are looking for this. So a lot of our work right now in our private cloud is development, and we've had a mandate for about the last year that we look at cloud first for development.
So anytime a business unit wants a new system, we're really steering them to use our private cloud so that they can provision the resources very quickly. But we did have a gap in there, as I already mentioned, from a NAS service perspective, because so many of our business units were accustomed to having that shared file system in place on their servers, which the data center had previously supported. And as we've pushed our business units into our private cloud for their work, we had a big gap in the services, and some of them were still going through our traditional IT, requesting export permissions to NAS that we already had, and with that there was kind of a lag. So by delivering this, what we're able to do is allow them to much more quickly get the resources that they need. Thanks, Justin. So is it fair to say — we spent a lot of time together working with requirements and definitions — but is it fair to say that you didn't necessarily have to change your standard storage architectures and some of the best practices that you guys felt worked for you when we went through this project? Is that still the case? You're still able to maintain your best practices, your architecture; you didn't have to redo things. It was still business as usual for the storage guys. Yeah, that's very true, and I think that's just, when you think about OpenStack as that abstraction, that orchestration layer, really what it lets you do is take these pieces of infrastructure that you may already be maintaining — beyond just Manila, in other services as well — and continue doing what you're already doing, but put that self-service interface in front of the users to allow them to request what they need and have it provisioned in a very quick timeframe. Awesome, all right. So we went through the material pretty quick, so just ask: are there any questions or anything that you all would like to ask or hear more about? We've got some time. Any questions?
Hello, hello. There we go. What kind of workloads are you putting on Manila? Is it just traditional file sharing that's being used in a self-service way, or are you looking at new kinds of workloads like Hadoop or things like that? No, so for what we're doing in our private cloud, on our NAS it's primarily to support traditional workloads. We're a fairly large company, so we're doing a lot with our private cloud and also building some products that will run on the cloud, but as we're looking at where this fits, the need right now is the developers who are building traditional applications. We really had a large gap because they're accustomed to doing things through NAS or through shared storage, and what this allowed us to do is take something they already know — they're continuing development and testing of their traditional applications — and give them the ability to much more quickly provision a NAS and do the work they need to do. But yeah, I wouldn't say there's anything innovative that we're running on the NAS at this point. Okay, so just to add to the answer, and maybe add to your question: remember that Manila is just a control plane, right? So the workload is gonna be dependent upon the back-end storage, as well as maybe how you do your Neutron networking, so just keep that in mind as you guys go down that path. Yeah, the reason I asked the question is, the way I think you presented it is what I see Manila getting used as, which is a cloudy file service that gives you a share when you want it; when you don't want it, you get rid of it; you don't care where it came from; it's all catalog-driven. Yeah, very transparent to the end user, who has self-management and the ability to self-provision. Yeah, I mean, I'd say I'll just add to that too.
Another place that we'd really seen that need is if you have a group that's doing some work and they have a couple of compute instances, and they wanna have somewhere that they can very easily write files out to for kind of a persistent storage, where they can write them out and read them in from other instances even when the actual compute instance goes away. You could use something like object storage for that, but there aren't a lot of people that know how to write to that. When you can use a traditional file system, I think it really fits a need, where people already know how to do those operations. So for Justin: you had an initial private cloud implementation and you moved to OpenStack. What motivated you to make that change? No, that's a fair question; I kind of alluded to that. So without talking too long about it, we used CloudStack — almost three years ago we started on that path — and at that time, we knew we'd probably reevaluate it, but it wasn't quite clear who the winner was gonna be, and I would say, given the number of people here, it's pretty clear who the winner is now. Great question. I have a question about snapshots. You said they were read-only copies of a share. I know you can make a new share from one, a read-write share. Correct. But is there an API for exporting a read-only snapshot directly? Exporting a read-only snapshot — so you've gotta work upon a share. So I can't necessarily work upon a snapshot within Manila. So I'd have to... Okay, that's not good. I said it was a read-only copy. So it's not even really readable where it sits. I can do it outside of Manila, so to speak. From a storage administration perspective, there are some things that could be done there. Okay, now I was wondering if I'd missed something in the API. I didn't remember if there's any way to read a snapshot directly. You can make a share from it. Afterwards we can get some clarity on that, to make sure I didn't misrepresent it, and then we'll ask in a moment.
Okay, thanks. I have a real fundamental question. So when would you use Swift as opposed to a file share? Is one faster than the other? Because Swift is already available to all the VMs. Yeah, so I think there are two answers to that. I'll take part of it, and your experience would be very valuable in answering it too. So from my experience with customers and what they do with their applications: think about applications. Think about an Oracle database or a MySQL database or things like that, that write to file systems, where they're already set up to write to file systems versus objects. Or some environments that we see where people need clustered file systems, where objects or files within that file system need to be shared. One of the things that Justin mentioned too is the ability to keep that Manila share, to maintain that data — you can do away with the VM and bring that data to yet another environment, or even take it into another OpenStack environment in the future. To me — I come from the application space and the storage space — it comes down to the fact that many applications already know how to work with file shares and file systems. You don't have to rearchitect or rebuild. You have applications that already know how to write to files and manage that. And so this provides the opportunity to enable them to be within the private cloud, or, as a service provider too, maybe that's part of an offering that you could provide. But you know, they had some experience in what they did. Yeah, no, I mean, I think Greg really touched on it. It comes down to, it's not just the technology, it's the people aspect: what do the developers know how to do? You've got an army of developers that have been accustomed to doing things a certain way for a long time. We've got a sizable investment in the application footprint that we already have, which knows how to write to a standard file system.
But I think there are definitely places for both of them.

Yeah, one more question. Are you initializing your shares through Heat templates, or when the VM comes up?

Yeah, I can't remember when the session on it was, but yes, we do have some integration with Heat. So you could build a topology: say you need six VMs, 22 shares, and eight block storage devices. There is some integration out there and available for Heat. It's not released as an integrated part of Kilo, but it is available. Okay, thanks.

Justin, you indicated that you're reusing your existing NetApp investments for the Manila service. The NetApp filers or E-Series frames that you're using for OpenStack, are they also servicing traditional workloads, or are they dedicated to OpenStack workloads?

Right now the NetApp filers we're using for our Manila service are dedicated filers. As we continue down this path, what we want to do is get to the point where we're starting to commingle the workloads. We've got a large footprint of filers already on the floor, and where you have gaps, whether it's spare capacity from a space perspective or additional headroom from a performance perspective, we want to be able to do more intelligent placement of shares and spread the workload out a little. So what we did initially was very specific, with dedicated filers, but as we continue down this path we really want to be able to spread that out.

So on the NetApp filers, does the ONTAP code need to be updated to service Manila, or would the existing code just work? I think Greg touched on this a little bit when he talked about the indirect versus direct model.
I don't know a whole lot about the specifics of the NetApp code versions interacting with the Manila direct driver, but in our implementation, because we're essentially going through WFA, the Workflow Automation tool, as the orchestration piece, I don't think there are as many dependencies on specific versions. And I think that allows us to leverage something NetApp has that already knows how to do the provisioning, to help us make those placement decisions. Greg, I don't know if you want to add anything.

Yeah, just to let you know, we do have somewhat of an interoperability matrix, covering which combinations of clustered Data ONTAP and Manila work well. For example, right now we don't have a 7-Mode driver. It sounds like you're quite familiar with 7-Mode, so that's an example of where there might be a gap for you, possibly, I don't know. But yes, the NetApp Manila driver is for clustered Data ONTAP. It's been tested with 8.1, 8.2, and 8.3, with 8.3 being recent and still going through a lot of the testing, but we know that for the most part it works. We haven't said it's all good; we still have some more testing to do. I test with 8.3, but I'm certainly not going through all the different use cases a customer would. We do know people out there using it with 8.2 and 8.1. And again, we don't have a 7-Mode driver yet.

Okay. Do you have stats? I know from working with NetApp previously, they used to show me stats like customer adoption of a particular code level. So when you say code level, do you mean the Manila code level, or ONTAP? Just the clustered Data ONTAP code level. Do you have something for Manila? I know that's available; I don't have that information on hand, though. Sure, sure, yeah. Thank you. Sure. Thank you.
All right guys, well, appreciate your time. And thanks for coming, hope everyone's enjoyed the summit. And again, thank you very much. Thanks.