Hello. It's 1:30, time to start. Thanks, everyone, for joining. It is Thursday of Summit Week, and by now all the salespeople have gone home, so we can talk about how things really work. What we're going to do today is talk about unified file and object access: being able to write data as an object and read it from the file side, and vice versa, write it as a file and consume it from the object side. My name is Bill Owen. I work for IBM in the Spectrum Scale development area, and I'm responsible for the integration of OpenStack with Spectrum Scale. I'll be starting the presentation and handing it off to Smita and Sandeep to go into more of the details of how things work. So what's the session about? We've had a number of sessions at previous summits that talk about Swift-on-File and the DiskFile it provides, how it integrates with Swift and the features it offers. What we want to talk about today is some background on that, a little introduction to Swift-on-File and its use cases, but then a look at some of the unique challenges that remain in making this work in a production environment, and our approaches to meeting those challenges. We'll go through that and then look at the current state of the art: what we have implemented today and some of the things we're working on. Okay, so what is unified file and object access? This may be a review for a lot of people, but it's the idea of being able to access my data from the object interface or from the file interface; it shouldn't matter. On the right, we've got a diagram that shows data being ingested as an object. It comes into the Swift container, and once it's there, it can be consumed either from the Swift interface or from the file interface. We're showing here, in item three on the right-hand side, that the data is read by the file side, and that can be NFS, SMB, or POSIX.
Processing is done, and then the object can be written back to the file system and made available to the object interface. If you look at the stack on the right-hand side, at the bottom level we've got a clustered file system. This can be Spectrum Scale, it can be Gluster, it can be any clustered file system that provides uniform access across all your proxy nodes. On top of that is the Swift software, in particular the Swift-on-File DiskFile, and on top of that is the organization within Swift containers; that's the unit objects are organized in, in community Swift as well as in Swift-on-File. So again, the key points are: accessing objects from either interface; allowing data to be published as files and then consumed as objects; and, for multi-protocol access, maintaining ownership. If I write a file as myself, can I also read it with the same ownership? How is that persisted across both environments? And finally, just the optimization of not having to move data, which we think enables a number of interesting use cases. So let's talk about some of those use cases. First is in-place analytics. On the left-hand side of the diagram, we've got a traditional object store environment. Data is ingested as objects, and then I want to run analytics on it. The way that works in traditional systems is that my analytics software has to extract the data from the object store using the object interface. That's a copy of the data. It moves that copy to its work area, does its processing, and then puts the results back into the object store through the object interface. So, a lot of data movement. With Swift-on-File, a very similar diagram: the data is ingested from the object interface in exactly the same way. Once it's there, though, I can access it directly, with no copying, using the POSIX or file system interface.
When I'm done, I can publish my results back into the file system, and they can be consumed from the object interface. So: ingest through object, process, and then publish through object. The next use case is something we talked about on Tuesday. This is an example of a media house with a number of subsidiaries. Employees from each subsidiary are doing video production, collecting lots of videos for different projects. When they have those collected, they store them in the object store. Subsidiary one ingests them into its set of accounts and containers; subsidiary two has another set, and it puts its videos into its own account and containers. Once the data is there, I need to do something with it: process it, edit it with my video editing software. And typically that software is written for, and expects, file access. So we want to make that data available to those file clients. In this example, we're using Manila to publish and export the shares to different VM farms. Once a share is there, I can run my video editing software, make my changes, and save the data back into the file system. At that point, I can publish it again through the object interface. One small point I wanted to make here: in the first case on the far right, with subsidiary two, I'm publishing the data back into the file system into the same container. It may be the same file name or a different file name, but it's the same container. In the subsidiary one case, I'm publishing it back into a different container. So I have flexibility in how I organize the data; I just need to lay it out in the file system that way, and the objectizer process in Swift will make it available and visible as an object. The last use case is more the individual-user style of use case.
In this situation, I've got two users, John and Rhea, and they want to be able to access their data from whatever type of device they're using. So if I've got my phone and I'm taking pictures, whether I'm at my house, at my office, or on Simon's boat, I want to take the picture and upload it to the object store. Once it's in the object store, I want to be able to access it from my workstation or my Windows device or whatever. And again, I need a consistent user ID mapping, so that I can authenticate from my mobile device with one set of credentials and access the data from any other device with the same set of credentials, without having to remember a different user ID and password. The work we're doing is helping to enable that. With that, I'm going to hand it over to Smita, and she'll go into the details. Thank you, Bill. In the next few slides, we are going to go over the design and the approaches for implementing this solution. But first, we'll look at the top three challenges for this kind of implementation. What we need to address is, first, the ability to access data from file interfaces like NFS, SMB, and POSIX that has been ingested as objects; second, the ability to access data from the object interface that has been ingested as files; and third, a common user identity across both the file and object protocols. The first requirement is about accessing objects using the file interface, meaning objects are ingested through object interfaces like the Swift REST API and then need to be accessed using file interfaces like NFS, SMB, and POSIX. Swift-on-File has been a good approach for this, and it is available on GitHub at the given location. Now let's see how Swift-on-File actually solves this problem. The biggest advantage of Swift-on-File is the way it places data on the file system.
For example, if an object named a.jpg is put using this URL through the object interface, traditional Swift stores it on the file system something like this: the object a.jpg is stored as a file named with some random number, inside a directory hierarchy whose directories are also named with numbers that mean nothing to an end user or file system user. This layout, of course, is determined by the ring; Swift services understand it and know where to find the data on the file system. But to a file system user it doesn't make sense, and hence it cannot be used by file protocols. What Swift-on-File does is change this data placement mechanism on the clustered file system. The same URL, in the Swift-on-File case, gets stored like this: an object named a.jpg is stored as a file a.jpg, inside a directory hierarchy that carries the account and container names. So from the URL of an object PUT, its file system path can easily be determined, and that path can be used by the file interfaces to access the data. In IBM Spectrum Scale, this feature is implemented as a Swift storage policy. Swift storage policies give the administrator flexibility in managing the unified file and object access feature. They make it possible to create multiple such policies for unified file and object access, depending on the backend storage you have, and they let a unified-access policy coexist with your traditional Swift policies. A policy is assigned at the container level, and different containers can be created with different policies, giving the administrator that flexibility. The second requirement we are going to talk about is writing a file and reading it as an object: files are written from file interfaces like NFS, SMB, and POSIX, and can then be accessed using object interfaces.
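The placement scheme just described can be sketched in a few lines of Python. This is a hypothetical helper (the function name and mount point are illustrative, not the actual Swift-on-File DiskFile code), showing how the account, container, and object segments of the URL become the filesystem path, nested object names included:

```python
import os

def sof_object_path(mount_point, account, container, obj):
    """Map a Swift object URL to a Swift-on-File style filesystem path.

    Traditional Swift would hash the name into a partition directory
    (roughly .../objects/<part>/<suffix>/<hash>/<timestamp>.data);
    Swift-on-File instead mirrors the account/container/object
    hierarchy directly, so file clients can find the data.
    """
    return os.path.join(mount_point, account, container, *obj.split('/'))

path = sof_object_path('/mnt/gpfs/obj_sofpolicy', 'AUTH_test',
                       'cont', 'vacation/a.jpg')
# -> /mnt/gpfs/obj_sofpolicy/AUTH_test/cont/vacation/a.jpg
```

The same mapping read in reverse is what lets a file written at that path be surfaced later as an object.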
So do we need anything beyond what Swift-on-File already provides to make this happen? Well, yes. We basically have three issues to address to make this work. A file written from a file interface does not have an entry in the container database, so it doesn't show up in the container listing. Container and account stats also do not include data written from the file interface. And the third issue is that a file written from a file interface lacks the Swift metadata. Now, if there are applications that use the container listing to get to an object, or that rely on the Swift metadata or on container and account stats, they will fail if these three issues are not addressed. We have a solution for that, in three parts. First, you have to identify the changes that are happening from the file side. The second part of the solution is to update the container database with the information about what is happening on the file side. And the third part is to construct the Swift metadata and associate it with the file's extended attributes. We'll go a level deeper and look at the approaches we used to implement this. For the first problem, identifying file-side changes, the first approach we took was based on file system scans. We first get a list of files from the file system using mechanisms such as os.walk or IBM Spectrum Scale's ILM policy scans. Once we have this file list, we compare it with the container database entries and take the difference. The files that appear only in the file list are the ones that have been added from the file interfaces, and the files missing from the file list are the ones that have been deleted from the file interfaces. This way we can identify the changes happening from the file side. But this has some issues as well.
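Before getting to those issues, the scan-and-diff comparison itself can be sketched as follows. The helper names are hypothetical; `list_container_files` stands in for the os.walk (or ILM policy scan) listing, and the container DB listing is passed in as a plain list:

```python
import os

def list_container_files(container_dir):
    """Scan the container directory, as os.walk would in the scan approach."""
    for root, _dirs, files in os.walk(container_dir):
        for name in files:
            yield os.path.relpath(os.path.join(root, name), container_dir)

def diff_changes(fs_listing, db_listing):
    """Compare the filesystem scan against the container DB listing.

    Files only on disk were created from the file side; files only in
    the container DB were deleted from the file side.
    """
    fs, db = set(fs_listing), set(db_listing)
    return fs - db, db - fs   # (created, deleted)

created, deleted = diff_changes(['a.jpg', 'b.txt'], ['a.jpg', 'c.dat'])
# created == {'b.txt'}, deleted == {'c.dat'}
```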
One of the biggest issues is that file system scans are resource intensive. The second problem with this kind of approach is that scans need to be run periodically, and when they run at some interval, there is no guarantee that a file written from the file interface will be immediately visible to the object interface; there is going to be some delay. So the second approach, which solves this issue, is an event-based mechanism, a reactive approach. Your service can subscribe to file system events like file create and file delete. There are various mechanisms and tools for subscribing to these kinds of events, for example Linux inotify or IBM Spectrum Scale's lightweight events. Now, to handle a surge of events during peak workload hours, we need to maintain a queue of these events, which can then be processed later based on the type of event. Again, there are some issues with this approach as well. The first is that, since it's a file system event, you cannot distinguish whether the event occurred as a result of a file operation or an object operation, and we are interested only in the events that happen as a result of file operations. The second issue is that events may be lost, for reasons like node failures. So, to handle the event-loss issue, we propose a hybrid solution of the event-based approach and the policy scan (file system scan) approach: we primarily use the event-based approach for file change detection, but we also run periodic policy scans at low frequency to identify changes we may have missed due to lost events. The second problem we need to tackle is updating the container database with the information we have just found. There are, again, two ways to do that.
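Before looking at those two ways, here is a sketch of the event-queue half of the hybrid detection scheme above. The names are illustrative: the event source (inotify or Spectrum Scale lightweight events, plus the low-frequency reconciliation scan) is abstracted into a queue feed, and the worker drains it and applies each change:

```python
import queue
import threading

# (op, path) notifications land here, from the event subscription and
# from the periodic reconciliation scan that catches lost events.
events = queue.Queue()

def consume(apply_change, stop):
    """Drain the event queue until told to stop and the queue is empty."""
    while not stop.is_set() or not events.empty():
        try:
            op, path = events.get(timeout=0.1)
        except queue.Empty:
            continue
        apply_change(op, path)   # e.g. container DB update for CREATE/DELETE
        events.task_done()

applied = []
stop = threading.Event()
worker = threading.Thread(target=consume,
                          args=(lambda op, p: applied.append((op, p)), stop))
worker.start()
events.put(('CREATE', 'cont/new.jpg'))
events.put(('DELETE', 'cont/old.jpg'))
events.join()        # wait until both events are processed
stop.set()
worker.join()
```

A real agent would also filter out events caused by Swift's own object operations, which, as noted above, the file system cannot distinguish for you.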
In the first implementation, we attempted direct database updates on the container database using the container broker. Swift provides this ContainerBroker, which has APIs for updating the container database, and we made use of APIs like put_object and merge_items. This had problems: since it's a local API and the file system is clustered, two object nodes could try to update the same container database at the same path. And because multiple processes, nodes, or threads could be updating the same container database at the same time, we were hitting DB lock timeout issues due to the contention. So we moved to the HTTP-connect-based approach. The HTTP-connect method makes an HTTP call to the container server: we perform a PUT on the container server for new files and, correspondingly, a DELETE for deleted files. Since it is a PUT of an object entry to the container server, it does only the part that updates the container database with the relevant information. This works very well, but we do get failures due to timeouts, and we handle those by adding the entries whose HTTP requests did not go through to a sync-pending list. In IBM Spectrum Scale, this is the approach we are currently using. So I'll hand it over to Sandeep to cover the third requirement, addressing common user identity. Yeah, thanks, Smita. So, requirement three: addressing common user identity across file and object. On this particular topic, you could have a one-day session and still debate and come up with different approaches.
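Picking up the HTTP-connect update for a moment: it essentially sends an object PUT straight to the container server, so only the container DB entry changes and no object data is written. A sketch of building such a request, with hypothetical names (Swift's `direct_client` module offers comparable helpers, and the exact header set is an assumption here):

```python
import time

def build_container_update(device, partition, account, container, obj,
                           size, etag, content_type):
    """Build the path and headers for a PUT sent directly to the
    container server to register a file created from the file side."""
    path = '/%s/%s/%s/%s/%s' % (device, partition, account, container, obj)
    headers = {
        'X-Timestamp': '%.5f' % time.time(),
        'X-Size': str(size),
        'X-Content-Type': content_type,
        'X-Etag': etag,
    }
    return path, headers

path, headers = build_container_update(
    'd1', '1024', 'AUTH_test', 'cont', 'a.jpg',
    4096, 'd41d8cd98f00b204e9800998ecf8427e', 'image/jpeg')
```

The request would then go out over `http.client.HTTPConnection` to the container node; a 2xx response means the listing is updated, and on timeout the entry is appended to the sync-pending list for retry, as described above.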
If you've worked on compatibility between SMB and NFS, or on file systems generally, you'd understand this is an area you can debate at length; there are various approaches. So I'll go through how we approached it. Let's quickly talk about the Swift ACL semantics. We all know that object semantics are quite different from file semantics, right? That has to do with eventual versus stronger consistency, and it also has to do with authorization, which is where identity management comes in. Swift ACL semantics keep it simple: Swift ACLs enable owners to set read and write access, and it is all based on the container. The point to note is that there is no association with user IDs and group IDs. That's a very important concept to understand, especially when you try to make Swift and file work together: no association with UIDs and GIDs. ACLs are controlled via HTTP headers, as a lot of us know, and there's a quick example on the slide of what a Swift ACL looks like: a read ACL granting two accounts read access, and a write ACL granting the same two accounts write access. It's all on the container, not on the object. I'm speaking about the Swift ACLs; I know the S3 ACL is a variation. So the point on the slide is the one marked in red: no association with the user ID and the group ID. Now let's look at the file ACL semantics. File ACLs typically divide into two types. One is POSIX, the rwx permissions a lot of us are familiar with, and the POSIX ACLs; then came the SMB (Windows) ACLs and the NFSv4 ACLs. The fundamental difference is that NFSv4 ACLs are richer in semantics and support inheritance, as compared to POSIX ACLs, et cetera.
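Since Swift container ACLs are carried entirely in HTTP headers, the slide's example reduces to a tiny helper. The function name is hypothetical and the account:user values are placeholders, but the header names are Swift's own; note that no UID or GID appears anywhere:

```python
def container_acl_headers(read_acl=(), write_acl=()):
    """Build the X-Container-Read / X-Container-Write headers that
    carry Swift's container-level ACLs."""
    headers = {}
    if read_acl:
        headers['X-Container-Read'] = ','.join(read_acl)
    if write_acl:
        headers['X-Container-Write'] = ','.join(write_acl)
    return headers

acls = container_acl_headers(read_acl=('test:tester', 'test2:tester2'),
                             write_acl=('test:tester', 'test2:tester2'))
```

With python-swiftclient, you would pass a dict like this as the `headers` argument of `put_container` or `post_container` to apply the ACLs.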
So you see there's a fundamental difference between the Swift ACLs and the file ACLs. File ACLs are more granular and comprehensive than object ACLs; they support inheritance; they can be set at both the file and the directory level; and they are associated with user IDs and group IDs. I'm not going to talk about S3 ACLs, another variation; for this session we'll concentrate on the Swift ACLs and file ACLs. Now, why are we talking about ACLs? Because when we talk about unified file and object, this is one area that someday we have to get right, and as I said, there can be various debates on how to do that. But fundamental to getting the ACL semantics between file and object right is getting the user ID mapping right: the user ID for object and file access. That is where the need for user ID compatibility comes in. So with unified file and object, one of the fundamental problems we want to solve is user ID compatibility. What are the two fundamental problems here? The file system stores a user ID for each file; that's how every user who creates a file is known, and the file system recognizes them through that user ID. Then you have Swift, which stores every object as a file, but under the Swift identity, the special swift user. So you cannot leverage existing file-based tools that require the correct user ID on the system. What I mean by that is there are some file systems, Spectrum Scale for example, which we are referring to here, that have fundamental capabilities like backup, quotas, and information lifecycle management, which are very important in the file-based world. And when you put your object interfaces on top, those capabilities become important for the object-based world as well.
Now, when you want to do backup per user, or ILM (information lifecycle management) per user, it is all based on the user identity. So that's one problem we have to solve, so that we can start leveraging all the rich functionality that has been developed over many years on the file system, for unified file and object. The second problem is the separate ACL storage and semantics for file and object. In the previous two slides we saw that the ACL semantics are fundamentally different. Someday we want to make those compatible, along with where the ACLs are actually stored in object versus file. So those are the two problems. Now, to achieve compatibility, the first step is to provide the ability to map file users to Swift users: your Swift users need to get the same IDs as when they come in from the file interfaces. Again, you're taking two different semantics, two different thought processes, and trying to merge them into something that works together. That's difficult, and hence we went with a two-mode approach. Mode one we call non-unified identity. It means Swift users and file users can access the same data, but you're not worried about the user identity; it's more about accessing the data. It covers the use cases Bill mentioned earlier, such as use case one, analytics, where you run your analytics jobs under an application ID and you're not much worried about user ID management, ACLs, et cetera. And then we have mode two, unified identity, where you want a single OS identity; if you've seen use case three, and partially use case two, that's where this comes into play.
So we divided it into mode one and mode two. What's mode one? In mode one, you have object access, with object users ingesting objects, and file access, with your file protocols, NFS, SMB, and POSIX, ingesting files. For object authentication you integrate with AD or LDAP, and file authentication also typically integrates with AD or LDAP. For this mode, we said, okay, let's keep it flexible and accommodating; we don't want to make it all stringent and tight. So for this mode, the authentication servers, the AD or LDAP, are not mandated to be the same. They can differ: you can have the object side pointing to one AD or LDAP and your file interfaces pointing to a different one. Data created by the object API will be available to applications via the file interfaces. How do you do that? You access it as root, or, if your application runs under a special ID and does some analytics, you just go to your POSIX or mounted file system and give it the ACLs: the user or application is given explicit ACLs. And data created via the file interface will be accessible via the object API, which needs some elevation of the Swift user's permissions; I'll go into the details on the next slide. So, a little more detail: retaining file ACLs on PUT and POST. This is a design point. We wanted to keep it consistent, but with flexibility based on what the user really wants. If an object PUT happens and there's a file ACL already associated, by default we retain those file ACLs. They're separate from the object ACLs, but we retain them so that the file side doesn't complain that something has gone wrong. And the object authentication setup is independent of the file authentication setup, as I showed on the previous slide. Data created by the object API is owned by the swift user.
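The "explicit ACLs" step for mode one boils down to a POSIX ACL grant on the container path. A sketch, assuming the standard setfacl tool and a hypothetical user john; the helper just builds the command line:

```python
def grant_file_acl(user, path, recursive=True):
    """Build the setfacl command giving a file user read access to
    object data still owned by the 'swift' user (mode 1: explicit
    ACLs instead of a shared identity backend)."""
    cmd = ['setfacl']
    if recursive:
        cmd.append('-R')
    # rX: read everywhere, execute (traverse) only on directories
    cmd += ['-m', 'u:%s:rX' % user, path]
    return cmd

cmd = grant_file_acl('john', '/mnt/gpfs/obj_sofpolicy/AUTH_test/cont')
# run with e.g. subprocess.run(cmd, check=True)
```

As noted in the talk, this grant can be done by hand or by middleware; either way the object-side ACLs and the file-side ACLs stay separate in this mode.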
So in this mode, we say: let's not change the ownership. Let Swift continue to own whatever it creates, and let the file side continue to own whatever it creates; whenever there's a conflict, we decide which one to adhere to based on the user's discretion. An application processing object data from the file interface needs to be given a file ACL to access the data; again, we touched on this point, you give explicit ACLs. In this example, we have a Keystone user, John. He does a PUT, and then we do a stat on that object on the Swift-on-File side. You see that swift continues to be the owner, and John has been given explicit ACLs. You can either give explicit ACLs yourself, or we can write middleware that does that; we have both options. And then here, a file was created using the POSIX interface, and you see root is the owner. At the same time, John has to access it as an object. There are a couple of ways of doing this; it's really based on your design, but you can either make sure that every file created gets ACLs granting swift access, or you elevate the swift user's permissions using the DAC-override capability, not actually making it root, but enough to access the data. Now let's go to mode two. Mode two gets us toward a tighter integration, and that's where we start mandating things. If you're familiar with SMB and NFS, for SMB and NFS to work together you also need the ID management to be synchronized and common. Very similarly, here you have object authentication through Keystone and you have file authentication, and both of them have to point to the same AD or LDAP. And when you point them to the same AD or LDAP, it's important that every user and group is associated with UIDs and GIDs, and hence we have external ID mapping.
And that's why I said a common set of object and file users using the same directory services: AD plus RFC 2307, which a lot of people who work with NAS will be familiar with, or LDAP. Objects created using the Swift API will be owned by the user performing the object PUT. That is what we want: when Bill creates an object, he is the owner of the object; do a stat on it and you'll see him as the owner. Again, we retain file ACLs on PUT and POST; it's flexible, but we retain the existing file ACLs so that the file side doesn't complain. For the initial PUT of an object over nested directories, this is a tricky one, but for the time being we said the object PUT will not set any ACLs on the nested directories. The example here shows a Keystone user, John, doing a PUT, and when you do a stat you actually see John as the owner; the swift user has gone away. So this was a quick overview. In fact, as I said, you could have a half-day design session on how to do unified ID mapping between file and object. With this, I'll hand it over to Bill to summarize and wrap up. Thanks. All right, thanks, Sandeep; thanks, Smita. So, we've shown where we're at with this integration, and we've talked a little bit about it, but I'll say it again: our vision is that, ultimately, we have the same level of access from object and from file. We're working towards that. I think there are still some design challenges to get through, but that's our vision, and that's where we're targeting our development. The underlying storage we're using for this is the IBM Spectrum Scale product, an enterprise-class storage platform for enabling software-defined storage.
What we do is enable all of the OpenStack storage projects on that: Cinder, Manila, Glance, and Swift. And, as it shows here, all of the file access protocols are enabled as well: NFS, SMB, HDFS, and POSIX. It's got a host of other enterprise features that we leverage, both from the object side and from the other OpenStack storage sides, like tiering of data, backup, and all kinds of other enterprise features. But for what we talked about today, the key points are multi-protocol access, enabled using Swift-on-File, so that we have access from the object interface and from the file interface to the same data. No gateways, no copying; it's direct access. The feature is made available as a storage policy, as Smita mentioned. We can have multiple storage policies enabled for your file and object access data, and you can specify different properties on them. I can have one storage policy that's file-and-object access and compressed, another that's file-and-object access and encrypted, and still another that's traditional Swift access because I want that layout. So there's a lot of flexibility in how you lay out your data. Also, as Sandeep mentioned, there are two modes for user identity mapping: the first one I call independent mode, which is mode one, and the second is unified mode, where we have a shared identity backend, so we have the same user on the file side and the object side. The last point I want to make is that with this approach, we support access for analytics and for other file applications from the file interface, and then publishing that information through the object interface. All right, are there questions we can answer? Yes, a few questions. First of all, with Swift deployments in general, it is possible to enable replication between several availability zones or data centers.
With Swift-on-File, do you support a similar type of replication between several clusters? With Swift-on-File, we don't use Swift replication between clusters. We delegate that to the file system: there are facilities within Spectrum Scale that allow us to replicate data between data centers, and that's the approach we take. Okay, and you mentioned this special agent that updates the container DB if a file is created or modified. What if somebody creates a directory at the level of a container or account? Will it be detected, and will the new container be created automatically? Or is it just not supported? How do we handle that? I don't think we support that. I think we just ignore it, but... So if somebody tries to mkdir at the top level, it will fail? It won't show up as a container from the object interface. Okay, so it will be on the file side, but not on the object side. Exactly; one of the constraints is that we require you to create your containers from the object interface, and once you've done that, you can populate them with data. Okay, and do you support hierarchical directories? If somebody creates several levels of directories, will they appear correctly in the container as well? Yes; the container name won't change, but the object name will have that path as part of the object name. Thank you. You're welcome. Glad you got that question in. So I had a question about the inherited ACLs. Is that primarily a problem from the NFS perspective, where you don't really know the directory you're accessing? You mentioned that's a future item that's not supported right now. With respect to inheritance, we support inheritance when you do file-level inheritance. The limitation we talked about is when you do an object PUT over multiple directories and the object write happens, say, on the base directory.
The fundamental problem is that a user who comes in from NFS would probably not have access, because he needs access all the way up the directory tree. So in the current implementation we say: you need to be given access through all of it, because we didn't want to set it from the object side. That's the current limitation. Do you support SSL on the S3 interface? Yes, yes we do. Okay, so could I talk with somebody about... I'm over here. Oh, I'm sorry. Yeah, I'm blinded by the light. Sorry. Thanks. That's fine, yeah. So I have an implementation of Spectrum Scale object, and we've been having some difficulty making SSL work on the S3 interface. Maybe afterwards, could I talk to somebody? Sure, let's talk offline. Great, thanks. You bet. So you did a good job of highlighting how difficult it can be to map the CRUD functions onto a POSIX file system. But the S3 and Swift APIs have a lot more features, right? We have SLOs, we have object expiration. How do you marry up those functions for applications that take advantage of more of the S3 and Swift APIs? Yeah, again, that's kind of a journey. There are certain things that I think won't make sense within file-and-object access, like the SLOs and DLOs. But other things, like versioning, I think we should be able to support. So it's a case-by-case basis. And again, our goal is that ultimately we would have full compatibility: all the Swift features that are in the normal interface, we'd support from the file side. You showed at the beginning how this solves the problem of data typically being copied from an object store to a file directory, or the other way around. And on the previous slide, you showed that you not only support SMB and NFS, but also HDFS.
So I actually have a customer who's looking for a solution where he can kill two birds with one stone in regards to Hadoop: running analytics on the servers that are also the storage servers for the nodes. Cleversafe, I think, used to have some kind of solution for that. Based on what you just showed, would that be a possibility, where a server could have a dual purpose as both an analytics node and a GPFS storage or metadata server? Right, that was use case one that we presented to begin with. Sorry, I didn't catch that. That's exactly one of the USPs, so. Right, that's one of the value-adds that we think we're bringing. Sorry, a follow-up: that would then eliminate the additional three copies in the standalone Hadoop environment. Yes, we think so. Thank you. So, you showed earlier, and back to your question about replication: if I have two regions and Spectrum Scale is doing the replication between the regions, obviously, because you're updating the account and container nodes, I'm going to do a listing, and so if I put a file into region one and it has not replicated over, it's going to show in the container listing, and so now as a user in region two, I'm going to see that. Now, normally, because the ring is able to fetch across regions, how do you mitigate users seeing objects that don't exist, or getting 404s, while waiting for replication? I don't think I gave you a complete answer on the first question. So what we do in that situation is, we're delegating the object replication to the file system, but for the account and container databases, we have it configured so that that would be done with Swift replication. So you would still have your account and container databases replicated at every region. And that I get.
And so now, if I put a file in in region one and it has not yet replicated through Spectrum Scale, because the account and container nodes are cross-region, as a user or a database or a backup application in region two, I'm going to see in the listing that that object exists, but when I do a fetch, I'm going to get a 404. So the way the replication technology works in Spectrum Scale is, when you do the GET on that, when I read that from the file system, that's what causes it to be moved to that site. So we would have to deal with timeouts and make sure we have the timeout settings correct, depending on the network between those sites. But when I do the GET, it would pull the data from wherever it existed and make sure that it was available to the object server. Oh, okay. So even if it doesn't see the file there, it'll do the fetch. And what are the restrictions there? Because obviously Swift can handle high latency: are we talking 30 milliseconds of latency? 50? Is this within a small metro region, or could we do this worldwide? We're working on a project with a customer where it's transatlantic or trans-Pacific, so it can be between geos. Thank you. From the Swift point of view, is it kind of vanilla Swift that you implemented, or is it an IBM internal implementation? It's vanilla Swift, with extensions in the disk file and extensions in middleware. So those are the two extension points for Swift, and that's what we've been working with to make sure we limit our changes to those things. And the Swift on File? Well, Swift on File is another disk file. So we've taken the Swift on File disk file that's in the community, and we've made some changes to it so that it's customized for Spectrum Scale, but it's still just changes within that disk file. Okay, but this is also an open source solution, right? We have not contributed those changes back yet. Okay, but is it something that you're planning to do? Yes, some of the changes.
We won't contribute everything back, because some things are very specific to how GPFS, or Spectrum Scale, works, but the things that are general and can be of value to the community, we will contribute back. Thank you. Any other questions? Okay, thank you everybody for joining, and have a good rest of the day.
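For reference on the two extension points mentioned in the answers above, both the disk file and middleware plug into Swift without changing core code. The middleware side follows Swift's standard WSGI `filter_factory` convention; the sketch below is a minimal, invented example of that shape (the class name, header name, and config key are illustrative, not IBM's actual code):

```python
class AnnotateMiddleware(object):
    """Pass-through WSGI middleware in the shape Swift's proxy
    pipeline expects: it wraps the next app in the pipeline and
    adds one response header. Names are illustrative only."""

    def __init__(self, app, conf):
        self.app = app
        self.note = conf.get('note', 'unified-file-object')

    def __call__(self, environ, start_response):
        def annotated_start_response(status, headers, exc_info=None):
            # Append our header before handing off to the real
            # start_response callable.
            headers.append(('X-Object-Meta-Access-Note', self.note))
            return start_response(status, headers, exc_info)
        return self.app(environ, annotated_start_response)


def filter_factory(global_conf, **local_conf):
    """Entry point that a [filter:...] section of proxy-server.conf
    would name; Swift calls it to construct the middleware."""
    conf = dict(global_conf, **local_conf)

    def factory(app):
        return AnnotateMiddleware(app, conf)
    return factory
```

A filter like this is wired in by adding its name to the `pipeline` line of the proxy server configuration, which is why middleware (together with a custom disk file implementation on the object server) lets a deployment stay on vanilla Swift while still customizing behavior.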