You ready? Hello. Can you hear? Good morning. Thank you for coming. My name is John Dickinson. I am director of technology at SwiftStack and the project technical lead for OpenStack Swift. And with me: My name is Carlos Cabana. I work for IBM in the Cloud Innovation Lab on object storage. What we're going to talk about today is new drive technology and how it works with Swift. Swift turns out to be a really good way to store a large amount of data, and the major cost center of a Swift cluster is the hard drives; they by and large dominate any cost associated with Swift. So when you hear about a new technology coming along, the next thing you want to know is, well, how is it going to work? What do we do? One of the new technologies that has come out recently is this thing called shingled magnetic recording. So today Carlos and I are going to share the results of some of the tests we've been doing at SwiftStack and IBM around this, both at a micro-benchmark level and at an end-to-end level with Swift itself, to give you some recommendations on what the current state of the art is and what you should do and think about going forward. So the first part is: what is SMR? What is this whole thing? Shingled magnetic recording. Think about it this way. When you think about how hard drives work, you picture a spinning platter with magnetic tracks laid down on it. At least in my mind, the way I'd always conceptualized it is that you've got one magnetic track laid down, then the head scoots over a little bit and writes the next one, and the next one comes along, and the next one. And you're like, OK, that makes a lot of sense. It's kind of like lanes on a highway; they line up right next to each other and everything's great. Turns out that's the way they used to do things back in the 1970s, and they do things a little bit differently now. The traditional spinning-media hard drives we're all used to use something called perpendicular magnetic recording. So imagine those lanes of magnetic flux laid down on the spinning disk, but what they've actually done is tip them over so they stand vertically. They do some nice things with the material so the magnetic charge goes vertically into the surface instead of just laterally along it. What that means is they can get a lot more density, because you can start squeezing those tracks together. We've been using that for the past 20 or 30 years. Yay. So those are called PMR drives, perpendicular magnetic recording, and we're going to refer to PMR drives quite a bit in comparing them with SMR drives. SMR drives are the new stuff; PMR drives are what we've all been used to with traditional hard drives. So what is SMR? What's the difference? What they have done is take those magnetic tracks and start to overlap them just a little bit, to squeeze them even closer together. It looks kind of like this; you can think of it very much like shingles on a roof. You lay down one track, then you lay down another one partially on top of it, and another one on top of that. And they'll do this for a certain band or zone, an SMR band, on the drive surface. Then they leave a little gap and start writing down another SMR band.
Now, I think that's pretty neat. They've done some really clever things in the hardware, in the actual read and write heads on the spinning media, so that even though the write head is wide, you can still read just the little sliver of each track that's left exposed. Does that make sense? Everybody with me so far? Now, here's the problem. What happens if you want to overwrite one of those tracks that's in the middle? Because the write head is so wide, overwriting it means overwriting the data that was layered on top of it. So let's say you want to overwrite a byte that's right in the middle of one of these SMR zones. You now have to read, and then write down again using the SMR pattern, all of the data that came after that point in that SMR zone. What this means, and the end result here, is that you end up with something that starts looking like linear, sequential access. The natural expression of this: imagine the entire surface of the platter were shingled, from the very first LBA to the very last LBA on the drive. Then if you wanted to modify one byte, you would have to read every piece of data after that point on the drive and write it all down again, just to change that one byte. That's why you break the surface into zones, and this is the cost you pay for that density. That being said, obviously this is not going to work out of the box for anybody writing a new application. So there are basically two different kinds of SMR drives that are available today, and a third that's not quite available yet. The two that we're going to cover today are drive-managed and host-managed. Drive-managed does something very interesting. A drive-managed SMR drive pretends. It fakes it out and says: guess what, you can use it exactly like you've always been doing. Just plug it in as a replacement for a PMR drive and you're not even going to notice. Well, you might notice, but your applications will still work, you don't have to do anything, and the firmware on the hard drive itself manages the translation; if you overwrite that byte, it's going to do all the right things. Host-managed drives require the application or the operating system to be aware of those SMR zones and to know exactly how to lay down the tracks. We'll cover that in a little bit, but for now we're going to cover drive-managed, because these are the ones you can actually buy today. To make this work, they've got this thing called a media cache. Remember that a spinning-media hard drive is built out of a stack of platters, and you write from the outside edge in toward the inside edge. You can see in this image that you've got these different bands that are the SMR bands, but on the outer diameter, the part that spins the fastest, you've got an area that is treated as a traditional PMR region, and that is the media cache. Now, this image I pulled from the Skylight paper; there's a link at the bottom. That paper was presented earlier this year at the FAST conference. They took some of these drives, literally cut a hole in the casing, sealed it back up, and then watched what the drive heads were actually doing when they put various workloads on them.
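To make that read-modify-write cost concrete, here is a minimal illustrative sketch in Python. The zone size is a made-up example figure, not the geometry of any particular drive:

    # Illustrative model only: overwriting one byte in the middle of a shingled
    # zone forces a read-modify-write of everything after it in that zone.
    ZONE_SIZE = 256 * 1024 * 1024  # assumed zone size, for illustration only

    def rewrite_cost(offset_in_zone, zone_size=ZONE_SIZE):
        """Bytes that must be read back and rewritten to change one byte."""
        return zone_size - offset_in_zone

    for offset in (0, ZONE_SIZE // 2, ZONE_SIZE - 1):
        mb = rewrite_cost(offset) / (1024 * 1024)
        print("overwrite at offset %d -> rewrite %.1f MB" % (offset, mb))

Changing a byte near the start of a zone is nearly a full-zone rewrite, while appending at the write pointer costs nothing extra, which is why the access pattern ends up looking sequential.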
So that paper is a great reference for some of the actual limitations of these drives and the constraints around them, and in our own testing we've confirmed a lot of it empirically. So here's what happens, and why this is really so important, why I wanted to spend some time on it. SMR drives have something fundamentally new that hard drives didn't used to have: they now have state. Imagine you're writing to one of these drive-managed SMR drives. The drive is making a translation between the way we used to talk to drives and this SMR way, and it now keeps track of what is living where on the hard drive. So you've got basically a map: when I write this piece of data down, I have to find the appropriate SMR track, write it into that, remember where my write pointer is in that zone, and so on. Drives have always had some amount of cache, some amount of RAM on the drive itself, which they use to buffer writes, cache some reads, and smooth out performance. The media cache behaves very similarly, but it's actually persisted; it's durable, it is written down onto the platter, and it's much, much larger than the traditional RAM cache. With the RAM cache you might be looking at, say, 128 megabytes or something like that. The media cache, we have tested and found, will fill up enough that it starts impacting performance at around 30 gigabytes. So you've got about 30 gigabytes reserved out of the overall capacity of the drive as this media cache. What that means is the drive has to manage two things about the media cache. One, it has to manage how many bytes are actually stored in it: as it gets too full, the drive needs to slow client requests down, because it can't offload the data to the SMR zones fast enough. The other thing is that it has to keep track of basically the number of writes, and that is something else that can impact performance. So when you start running out of cache space you're going to take a performance hit, and that hit starts being experienced around 30 gigabytes of bytes written or about 200,000 writes, whichever comes first. If you're writing very large files you're probably going to hit the capacity limit first; if you're writing very small files you're probably going to run into the limit on the number of operations being tracked. That limit is basically the state table the drive is managing, saying: I've accepted all of these writes, these 200,000 writes, and now I need to flush them out and figure out how to reorder them so they go out appropriately onto these SMR bands. So that's what the drive's media cache does. That being said, how does this actually affect performance when looking at reads and writes to the drive? The manufacturers will already tell you that SMR drives are going to be a little bit slower than traditional PMR drives. These are the results of some tests that we ran at SwiftStack: random writes against the raw device, seeking to a random position on that raw device and writing out a chunk of bytes. A microbenchmark directly against the drive itself.
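A rough sketch of that kind of raw-device microbenchmark is below. The device path, sizes, and counts are placeholders, it uses buffered block-device I/O rather than O_DIRECT, and it is destructive, so it should only ever be pointed at a scratch drive:

    # Sketch of a raw-device random-write microbenchmark. DESTRUCTIVE:
    # only point this at a scratch drive. /dev/sdX and sizes are placeholders.
    import os
    import random
    import time

    DEVICE = "/dev/sdX"            # placeholder scratch device
    WRITE_SIZE = 4 * 1024 * 1024   # try 4 KB up through 4 MB
    NUM_WRITES = 1000

    fd = os.open(DEVICE, os.O_WRONLY)
    dev_size = os.lseek(fd, 0, os.SEEK_END)   # size of the block device
    payload = os.urandom(WRITE_SIZE)

    start = time.time()
    for _ in range(NUM_WRITES):
        # Pick a random 4 KB-aligned offset and write a chunk there.
        offset = random.randrange(0, (dev_size - WRITE_SIZE) // 4096) * 4096
        os.lseek(fd, offset, os.SEEK_SET)
        os.write(fd, payload)
    os.fsync(fd)
    elapsed = time.time() - start
    os.close(fd)

    print("%.1f MB/s" % (NUM_WRITES * WRITE_SIZE / elapsed / (1024 * 1024)))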
Nothing else was in the way, not even a file system. We tested this with everything from 4K writes all the way up to four-megabyte writes. You can see that when we're dealing with very small writes, we can't sustain very much throughput, because we run into that media cache limit on the number of items and then performance goes down. However, when you get to bigger objects, like four megabytes, you get much more throughput before you're impacted by the performance of the drive. And you look at this and think: oh wow, if current trends continue, let's just write 100-megabyte objects and we'll get infinite speed, right? Not quite. We suspect that generally the limit you're going to see will be at whatever the SMR band size is. So if an SMR band is around, say, 256 megabytes or something like that, you're not going to see any further improvement past that. More importantly, the reason I phrase this as a sustainable daily write limit is that, in this example, if you have four-megabyte writes to an SMR drive, you can sustain on average throughout an entire day about eight megabits a second of writes. That doesn't mean you have to stay at or below eight megabits all the time. You can spike, but within that day you need to average eight megabits a second. And that's really important, because if you do spike above it, you start dealing with the background garbage collection, the defrag process that's happening to move things from the media cache into the SMR zones on the drive. And when that happens, performance tanks, I mean like three orders of magnitude worse. You can move from writing megabytes per second to bytes per second. So that's why you really need to stay below this. Now you look at this and start thinking: okay, what does this mean for Swift itself? Four megabytes, that's not very big; I've got lots of data that's a lot bigger than that. But if you remember back, say, six months ago in Vancouver, when we had quite a few public service providers running OpenStack Swift share a lot more information about their numbers, the object size distribution looks more like this. And this is kind of bad. I'll admit I actually made up some of these numbers just to show a nice big bulge in the curve, because we don't have it at quite this resolution. But what we do know for a fact, at least with some service providers, is that 91% of their objects are less than 100K. What does that mean if you're saying you need four-megabyte or larger objects? At that point you're like: wait, these two things do not work together. So that's going to give you a hint at some of the conclusions we'll come to in a little bit. The point is that in general, Swift workloads, in the public cloud use case and the general-purpose use case, the way a lot of people have been seeing them, are very heavily weighted toward very small writes. And SMR drives are much better at doing large writes. Until these things match up, it's going to be tricky to use SMR drives inside a general-purpose Swift cluster. So thinking about that, we took a cluster and deployed a storage policy in Swift that was only SMR drives.
And then we had another one that was only PMR drives. We ran the exact same workloads against both, and this is what we found. What's important here is not so much the specific numbers; I want to point out the shape of some of these graphs, especially the two wide ones in the bottom right. This is exactly what you want to see with a benchmark: you start your benchmark, load spikes up, you've got this nice plateau as you're stressing the system, and when the benchmark is done, the plateau ends. It's a beautiful shape. What I did was run a benchmark, these were four-kilobyte writes, and I just gradually increased the concurrency. Run the test, it's over, increase the concurrency, run it again. And you can see exactly what you would expect: workload goes up, okay great; increase the concurrency, more throughput; increase the concurrency, more throughput; everything keeps going. This is doing pretty well, I love this. Okay, then I ran the exact same workload against the SMR drives, and this is the shape of the graph. There are two things to notice here. Number one, that first plateau ends up being a lot longer, because with the 4K writes it ends up being a lot slower. And number two, as soon as I started to increase the concurrency and push more workload through Swift to these drives, it started well and then it tanked. The performance became very erratic, it became much, much slower, and frankly just unusable. The exact numbers I got, averaged out, are: with 4K writes, with a concurrency of 200 connections to this particular server with PMR drives, I was able to get 383 PUTs per second. Okay, that's reasonable. With SMR I was only able to get 187. So you're looking at less than half the performance on those 4K writes. The next thing I did was try to figure out: okay, what about reads? Interestingly enough, reads were very similar. My exact numbers: I got 1,287 reads per second on 4K reads from the PMR drives, and then I did 4K reads on the SMR drives and got 1,300 per second. So within the variation of this particular test, they were basically identical. Then I started thinking: well, maybe there's something magic about these 4K requests; if that's actually the physical sector size on the drive, you're not doing anything extra whether it's SMR or PMR, it's really just one operation on the drive anyway. So let's try something different. I chose a size that isn't a nice multiple of 4K, something around 100 megabytes, 93 or 97 megabyte files or something like that, increased the concurrency, did that, and actually saw the same thing: the performance of SMR and PMR on reads was very similar. Which does hint at what these drives might be good for. So all the numbers you've seen so far were for the drive-managed SMR drives. However, we also wanted to know what the performance was for the host-managed SMR drives. As you saw on the previous slide, the characteristics of the drives are different, and with the host-managed drive in particular, all the burden of managing the drive and doing optimizations falls on the user or the operating system.
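For reference, the concurrency-ramp PUT test John described can be approximated with a small client-side load generator along these lines; the auth URL, credentials, and container name are placeholders, and it assumes python-swiftclient is installed:

    # Rough sketch of a concurrency-ramp PUT benchmark against a Swift cluster.
    # Auth details and container name are placeholders; assumes python-swiftclient.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from swiftclient.client import Connection

    AUTH_URL = "http://proxy.example.com/auth/v1.0"   # placeholder
    USER, KEY = "test:tester", "testing"              # placeholders
    PAYLOAD = b"x" * 4096                             # 4 KB objects
    PUTS_PER_WORKER = 100

    Connection(authurl=AUTH_URL, user=USER, key=KEY).put_container("bench")

    def worker(i):
        conn = Connection(authurl=AUTH_URL, user=USER, key=KEY)
        for n in range(PUTS_PER_WORKER):
            conn.put_object("bench", "obj-%d-%d" % (i, n), contents=PAYLOAD)
        conn.close()

    for concurrency in (50, 100, 200):
        start = time.time()
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            list(pool.map(worker, range(concurrency)))
        total = concurrency * PUTS_PER_WORKER
        print("concurrency %d: %.0f PUTs/s" % (concurrency, total / (time.time() - start)))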
One thing John did not mention is that each of those zones on the SMR drive actually has a write pointer indicating where your next write has to go. In the case of the drive-managed drive, it's the drive itself that manages all of that. But for the host-managed drive, it's your responsibility to write only starting at that point. If you try to write anywhere else, the drive will reject your write request. So the disk is transferring the entire burden of this management to the user level, the operating system, or the file system. The good thing about this, however, is that the performance becomes much more predictable, because you are responsible for it. Ideally, we would test the host-managed drives with Swift itself: just swap the disks and run the same tests John ran. However, that's not possible, because now we need to do a lot of things ourselves, and Swift does much more than reads and writes: it operates on directories, on symbolic links, and on metadata on the files. If we were to implement all of that for the host-managed SMR drive, we would be getting very close to some sort of user-level file system. What we really care about, though, is the performance of reads and writes. So for that purpose we wrote a simulator that drives writes, reads, and mixed read-writes in specific ratios, to understand the performance characteristics of this drive. As you can see here, we have a simulator section that we can configure to do reads and writes in any ratio we want. It sends all the requests to a processing module. For reads, that module calls directly into the drive libraries. For writes, it uses an internal buffer that we configure to between two and four zones in size, and when those buffers are full, a separate part of the process does the actual write to the drive. That's there to optimize the writing to the drive as much as we can. At the same time, on the right side, you'll see the defragmentation process. Due to the sequential nature of writing to the zones, one thing that happens over time is that you're not only writing files; at some point you're going to delete some of them, and that leaves you with fragmented zones. You don't want that; what you really want is to keep things as compact as possible. So this defragmentation process reads from the beginning of the drive, compacts the data, removes the holes, and writes it back at the end of the drive. And it runs at the same time as all these other operations. As you see here, all of these operations go through a set of libraries, the ZBC libraries, which come from the manufacturer of the host-managed SMR drives. So we're not calling the drive directly; we're calling the drive through these libraries. These are the absolute numbers that we get for writes, for reads, and for write-heavy mixed read-writes on the host-managed SMR drives. Our intention here is to see the best we can get from these drives, but not under the best possible conditions. We are dealing here with very small objects, which is somewhat detrimental to the reads, because it forces the drive to go and read very small chunks of data all the time.
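Before walking through the numbers, here is a tiny Python model of the write-pointer rule the simulator has to respect. This is purely illustrative; it is not the actual simulator and not the ZBC library API:

    # Toy model of a host-managed SMR zone: a write is accepted only if it
    # starts exactly at the zone's current write pointer.
    class Zone:
        def __init__(self, size):
            self.size = size
            self.write_pointer = 0   # next offset that may be written

        def write(self, offset, data):
            if offset != self.write_pointer:
                raise IOError("rejected: offset %d, write pointer at %d"
                              % (offset, self.write_pointer))
            if self.write_pointer + len(data) > self.size:
                raise IOError("write past end of zone")
            self.write_pointer += len(data)

        def reset(self):
            # Resetting the write pointer is how space in a zone is reclaimed.
            self.write_pointer = 0

    z = Zone(256 * 1024 * 1024)
    z.write(0, b"a" * 4096)       # ok: starts at the write pointer
    z.write(4096, b"b" * 4096)    # ok: sequential append
    # z.write(0, b"c" * 4096)     # would raise: not at the write pointer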
Let's start with the writes, which are at the top. The numbers we get are fairly stable and very close to the rated write speed of the drive. Now, when we begin to disturb these writes a little bit with reads, of course we see the performance degrade a little, because after writing a certain amount of data we introduce a few random reads, which take the head all over the disk, and then we continue with the writes. The intention is to see somewhat more normal usage of the drive under these scenarios. At the bottom you see the reads, and the reads are fairly flat; this will become important in the following slides. One thing we learned from the manufacturer that is very important at this point is that the libraries that handle the drives don't do any caching or any prefetching at all. So for every read operation we do, we are actually issuing a SCSI command, making the disk respond to it, bringing the data all the way back up to the application, and then continuing. The libraries are not trying to be smart and read more than one track, more information, the way the operating system would. The consequence of that is that, in this testing scenario, the reads do not scale. But it's not only about the host-managed SMR drives. We also want to compare with PMR drives, because that's what we have running today, and we want to know how it compares with this new technology. So we did a read, write, and mixed read-write comparison between these two types of drives. What you can see in this chart is the performance results for the reads. The red line, the one at the top, is the performance of the PMR drives in what we call non-direct access. I need to point out that what we're testing here is the same simulator we had before, but running on the SMR drive and on the PMR drives, simulating SMR behavior. And there are two ways we can do that on PMR. We can access the PMR drive like any application, using all the optimizations of the operating system, meaning caching and prefetching. Or we can do direct access to the disk, which is much closer to what the libraries on the SMR drive are doing. So we wanted to make both comparisons. We see the performance of the PMR drive with what we call non-direct access, using all the prefetching and all the caching, scaling very well up to roughly 45 simultaneous processes. Then we see the middle line, the SMR drive, which scales fairly close to the PMR drive up to about 10 processes and flattens out from there. The reason for that is what I mentioned before: these libraries do no caching or prefetching at all. So it's reasonable for them to scale up to a point and then flatten out, because you can't really perform any better than that. At the bottom you see what we call direct access to the PMR drive, where we purposely disable all the caching being done by the operating system when accessing the drive. You can see this is very flat. And the reason for that, which we actually learned only recently, is that all the operations are purposely protected by mutexes at the level of the libraries themselves. So this is not going to scale at all.
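That read-scaling comparison boils down to running more and more concurrent readers and watching aggregate throughput. A rough sketch of such a test is below; the device path and sizes are placeholders, and buffered access like this corresponds to the non-direct case, not to the ZBC-library or direct-access cases:

    # Sketch of a read-scaling test: N processes doing random buffered reads in
    # parallel, reporting aggregate throughput. Path and sizes are placeholders.
    import os
    import random
    import time
    from multiprocessing import Pool

    DEVICE = "/dev/sdX"        # placeholder; a large file works too
    READ_SIZE = 64 * 1024      # small random reads
    READS_PER_PROC = 500

    def reader(_):
        fd = os.open(DEVICE, os.O_RDONLY)
        size = os.lseek(fd, 0, os.SEEK_END)
        for _ in range(READS_PER_PROC):
            offset = random.randrange(0, (size - READ_SIZE) // 4096) * 4096
            os.pread(fd, READ_SIZE, offset)
        os.close(fd)

    if __name__ == "__main__":
        for procs in (1, 10, 45):
            start = time.time()
            with Pool(procs) as pool:
                pool.map(reader, range(procs))
            mb = procs * READS_PER_PROC * READ_SIZE / (1024 * 1024)
            print("%d readers: %.1f MB/s aggregate" % (procs, mb / (time.time() - start)))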
So unfortunately, that mutex protection is a somewhat artificial way of serializing all these operations, which means the direct-access numbers are not really a fair comparison, unfortunately. These are the performance numbers for the writes. What we can see, for both the PMR drive and the SMR drive, is that both perform writes very close to the rated specifications of the drives. Clearly, when we do direct access to the PMR drive, its performance drops, but all of them stay within certain parameters. The one thing we'd like to point out is how flat the SMR drive is. It's fairly constant, and as I said before, it is very close to the rated performance of the drive. And finally, we have the mixed read-write comparison. Notice that the moment we begin to interrupt the writes on the disk with some read operations, both the PMR and the SMR drive perform fairly close to each other. So, some conclusions from the slides I just showed you. First, we're dealing with the first version of these drives and of the libraries themselves. The libraries don't do any caching so far because this was the easiest and simplest implementation they could have right now; they're leaving that burden to the user for the moment, though I'm pretty sure there are lots of opportunities for improvement in those libraries with respect to caching. The other thing we see is that there's a very clear trade-off here: what performance versus what density do you want to have? It's clearly one or the other, and there is potential for very good data density in these drives. However, as we can see, under a write-heavy mixed read-write scenario, the two perform fairly close. So that leaves us with the big question: what do you do? You're a deployer of Swift and you've got drive vendors saying, we've got this new hotness, it's the greatest thing. The question is, should you go buy them and should you deploy them in your Swift cluster? In general, based on our testing, I think you're not going to be surprised by some of what we'll say here. Our recommendation number one is that in a general use case for Swift, you should not use SMR drives. The performance hit you take, especially when you're dealing with smaller objects, and especially given the distribution of object sizes we've seen inside Swift clusters, is not worth the additional capacity you get from the increased density of those drives. But drive manufacturers aren't even selling these as general-purpose drives, so they're not lying to you. These things are sold as archive drives; they are designed for a particular use case. So there are some use cases where this could be very good for you. Specifically: do you have a workload that generates large files, large being four megabytes or bigger, that writes them and potentially reads them back? Reads, remember, can be okay. For example, maybe you have backups. I know there are a lot of different backup vendors out there and I've worked with quite a few of them, but specifically I know that Commvault has the ability to do very large writes. And with backups, you're looking at WORM data: you write it once and you read it many times. You don't try to overwrite.
You don't try to keep updating the data after you've written it once. So it ends up being a pretty good use case for SMR drives. Another one would be video surveillance cameras and things like that. They generally output about one megabyte a second, so that would be a great use case for SMR drives, especially when you configure them to, say, every minute or every 100 megabytes or so, cut it off, write a file down, persist that, and then keep streaming to the next one. Other use cases I've seen are the big data things, genomics and DNA, where you've got these huge data sets. I think the genomics case is very interesting because the data sets are massive. I would not put SMR drives in the path of the high-speed analytics and comparisons, where you've got very small files being read and written, but that full-sequence genome that is gigabytes in size? Not a bad case for SMR. So the point is, there are some reasonable use cases that people have today that would be an interesting fit and something to consider for SMR drives. And if you wanted to start doing that, to start experimenting with this and gradually ingest some of these SMR drives into your cluster, how would you do it? Step one would be to set up a storage policy inside of Swift. Storage policies allow you to isolate a particular set of drives from the rest of them. So you would set up a storage policy that says, this is going to be my backups storage policy, or something like that. Then, as you gradually buy SMR drives, you put those into that storage policy alone. Then you just make sure that the applications writing into that storage policy are the ones doing the large files, without a lot of overwrites. So like I said, I think there are some good use cases out there, but probably not a general-purpose 'great, let's just start buying SMR drives because they have a little more capacity' story today. But that raises the question: what about future work? What can we do to actually improve this and make it a better story for everyone? There are quite a few directions I think are obvious; not all of them are simple, but some are fairly obvious. One of them is something we're very interested in for Swift overall anyway, and in fact some of the Rackspace developers working on their Go-language Hummingbird object server have already implemented some of this: being able to restrict the number of connections to a particular drive. In general, if we could rate limit a particular drive such that every day it only averages eight megabytes a second, then look, we're right in the zone of what's good for SMR drives. If we could bring that in and make it a little more configurable for the operator and the deployer, I think that's something that would be widely usable for a lot of people inside of Swift. So, in all the work we did so far with these SMR drives, we got a certain level of performance, and we had to do a lot of the work of managing the drives ourselves.
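As an aside, the per-drive rate-limiting idea mentioned a moment ago is, at its simplest, a token bucket per disk. The sketch below is one illustrative way to express it; it is not how Swift or Hummingbird implement it, and the eight-megabyte-per-second figure is just the example rate from above:

    # Illustrative token-bucket limiter: cap the average write rate to one disk
    # while still allowing short bursts. Not how Swift or Hummingbird do it.
    import time

    class DriveWriteLimiter:
        def __init__(self, avg_bytes_per_sec, burst_bytes):
            self.rate = avg_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def admit(self, nbytes):
            """Block until nbytes of write budget is available, then spend it."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    # Average ~8 MB/s to this drive; over 86,400 seconds that is on the order
    # of 700 GB of writes per day, with bursts allowed up to 1 GB here.
    limiter = DriveWriteLimiter(8 * 1024 * 1024, 1024 * 1024 * 1024)
    limiter.admit(4 * 1024 * 1024)   # spend 4 MB of budget before issuing a write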
However, the next logical step, which is being developed right now, is file systems that are aware of SMR drives, so they will do all these optimizations themselves. What we need to wait for now is for these file systems, at least the first versions, to become available, get them, run all these tests again, and see how those file systems perform, because many people will certainly use them, and they're going to solve some of the problems that we saw during our tests here. And then finally, the pie-in-the-sky, multi-year horizon of what could be possible: after you've done some per-drive rate limiting, after SMR file systems have been developed and you've tested and deployed those and they've shown promise, then what would possibly be next? At that point you're getting into the realm of questions like, how would we teach Swift itself to speak SMR? And really what you're talking about there is almost substituting SMR for the native media conversation between Swift and whatever that storage media is. Frankly, it's terrifying; it would be a huge multi-year effort inside of Swift that I don't think anybody is chomping at the bit to tackle, partly because a lot of that will be taken care of, as Carlos said, by those SMR file systems to start with. So it's definitely a possibility as a far-future thing, and if you really want to commit some developers and several years of time to work on this in the community, patches are welcome. That being said, to summarize: we've got some interesting ideas for future work, things that we, the developer community, can work on. We've got some good recommendations for what you can take advantage of today, namely: don't use SMR for general-purpose workloads, but use them for a specific thing. Use a storage policy inside of Swift to say, here are large files without much overwrite and without huge ingest bandwidth requirements. Those are the places where you may be able to see some benefit from SMR drives. With that, I think we've got a couple of minutes left. If you have some questions, we've got a couple of mics. There's one in the back, okay. He's bringing it to you, Jerry. Any idea of the improved density that you're seeing with SMR over PMR? Today, SMR drives are about a 20% improvement over PMR drives. For example, where you can buy an eight-terabyte PMR drive, you can buy a ten-terabyte SMR drive. And those numbers are going to pretty much increase along those lines; I've been told it's going to stay at about a 20% improvement in density. Does any of this change when you go to, like, a Kinetic interface instead of a SCSI interface on the drive? In other words, SMR behind Kinetic, have you thought about that? So, SMR behind Kinetic drives: to be honest, I don't know. Nobody's tested that; nobody's built Kinetic drives that are SMR as well. I think we should do one new thing at a time. That being said, I would anticipate that Kinetic would basically be the host-aware piece talking to the SMR drive; it just happens to live on the exact same physical device. So I would imagine you would see similar performance as the host-aware drives, just because there's already an API translation layer. But I don't think those even exist right now.
So I'm just speculating on that point. Anyone else? What do you think about cache tiering from SMR to PMR? Cache tiering, as in saying that I've got my colder data that I'm going to put on SMR, my more active stuff will be on PMR, and my super active stuff will be on Intel's 3D XPoint, of course. You know, all the new hotness in storage media. Yeah, I think that's fine. I think that's outside the realm of the actual drive, but it's something that is actively being talked about, probably every week, within the community; people come in and say, well, I've got this data and I want to move it around. And this week here at the design summit, one of the things we are reserving a large block of time to talk about is exactly that: data movement inside of a cluster. Tiering is one expression of that, of saying I need to move something from this place to that place, and how can we do that as efficiently as possible? So I think they're related, but not particularly dependent on one another. And if there is a good cache tiering or data tiering solution with a Swift deployment, whether that's inside of Swift or external to Swift, then I think SMR drives would definitely have a place inside of it. I mean, it's kind of a generic answer, but I think it's something to consider, yes. Maybe we have time for one more question. Hi, just a quick comment. I'm Rick Wheeler from Red Hat and I've been working on SMR for a couple of years with the drive vendors in the Linux community. So there is a lot of work going on in these special file systems, but more specifically, probably the most interesting one is that Western Digital has a device mapper module, which they haven't quite pushed out to open source yet, that will let you do effectively host-managed SMR on normal file systems. That was talked about at LinuxCon in Seattle and at Vault, so something to watch for, yeah. I think that's great to hear. And one thing we didn't really spend any time talking about were these host-aware drives, because they're not quite available yet, but there's kind of a mid-tier in there. So I'm really excited to see that. And in fact, I'm very grateful for being able to talk directly to those drive manufacturers, and I know Carlos is too; we've received a lot of insights from them. We had a link to the Skylight paper, where you can see a lot of great numbers; there's a link at the bottom here, and these slides are available if you want to study this in more depth. And I know that at SwiftStack, and I'm sure also at IBM, there are quite a few people behind the scenes who were involved in a lot of this testing. I want to say thank you to them as well. Thank you very much for coming. Thank you. Thank you.