Welcome, and thanks for coming. My name is John. I'm the project technical lead for Swift, and this is the Swift project update for this summit. If this is where you want to be, great. If it's not, you'll still learn something today.

This update is going to be slightly different from some of the ones I've given in the past, but as I go through it there will be plenty of opportunity for questions. In a lot of previous updates I've said, okay, here are the big things we've been doing, here's what's going on. If you remember, in the project update I gave six months ago in Vancouver, I was talking about some massive new features that had just landed. It turns out that if you ship a huge batch of massive new features, you don't also have another huge batch queued up to talk about six months later. We do have some things to talk about, but the really cool thing is that I get to talk about the impact those features have had in various production deployments, and I think that's going to be kind of fun. There's plenty of time for questions and feedback, and we can go back and forth. I think there's a mic here, and I'll also repeat questions for the video.

All that being said, welcome. I always want to start by not assuming that everybody already knows everything. So what is Swift? Swift is an object storage system. The whole point of Swift is to abstract away the storage media from the data you're actually storing on it, so that as an application developer you only have to think about: here are some bytes to give to my storage system, and I'd like to get them back at some point. You write data and you read data, but you don't have to think about the hard problems of storage. Your application can focus on making the application great. It doesn't have to think about concurrency of access, optimizing throughput, locking and overwrites, failures in your media and working around them, durably storing things, or keeping them available. All of that is what Swift handles for you. On the operator side, the advantage of this design is that it gives operators a way to more easily manage a large and growing storage cluster: as it continues to grow, you can keep adding capacity exactly where you need it. Overall, the whole point of Swift is to give you very durable storage that is massively scalable and supports a very large amount of aggregate concurrent throughput. That's what we make.

The Swift API looks like this: all requests are basic HTTP verbs and response codes, and there are three key parts to a Swift request: the account, the container, and the object. The account is something akin to a bank account. It's not necessarily tied one-to-one to a particular end user; it's a place where you put stuff, just like your own bank account. You put things in there, hopefully take out less than you put in, and you sometimes give somebody else access to it, and that's fine. The container in Swift is, these days, unfortunately named because of some other little technology that was invented after Swift. It is a subdivision of your account namespace, and it's very similar to Amazon's S3 buckets.
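To make that concrete, here's a rough sketch of what those requests look like on the wire. The endpoint URL, token, and names are hypothetical; in a real deployment the storage URL and token come from your auth system (Keystone, tempauth, and so on).

```python
import requests

# Hypothetical values: a real cluster hands you the storage URL and token
# from its auth endpoint.
storage_url = "https://swift.example.com/v1/AUTH_alice"   # the account
token = {"X-Auth-Token": "AUTH_tk_example"}

# PUT a container, then PUT and GET an object inside it.
requests.put(f"{storage_url}/photos", headers=token)
requests.put(f"{storage_url}/photos/cat.jpg",
             headers=token, data=open("cat.jpg", "rb"))
resp = requests.get(f"{storage_url}/photos/cat.jpg", headers=token)
print(resp.status_code, len(resp.content))
```

The account, container, and object are simply the three segments of the URL path, which is exactly the structure described above.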
So the container is analogous to a bucket, and these days we kind of go back and forth between calling it a container and a bucket, depending on who we're talking to. And then finally you've got the object, which is where you actually store the data. If you're storing backups, movies, cat pictures, whatever the case may be, that goes there. This gives you a very flat namespace that supports multi-tenancy: I can have an account and you can have an account, and we can each have a container called Photos, and that's totally fine. Inside our respective Photos containers, we can each have a cat.jpg, and there's no overlap, override, or contention. This design makes it really easy to code against as an application developer, and you'll see those three parts — account, container, and object — reflected throughout the design of the system the deeper you look.

When you put it into a production cluster, this is basically what it looks like. You've got a client that normally talks to a load balancer fronting what we call proxy servers. The proxy servers are the API endpoints, and they talk to the storage nodes — the ones that have hard drives plugged into them. The proxy server is responsible for taking a request, implementing most of the API, choosing the right storage servers to talk to in order to read or write the data, working around failures, and making sure the proper response code is sent back to the client. The cool thing about this is that it's very modular. Each of these pieces is stateless, which means they can come and go, and the aggregate load is handled by whatever remains in the cluster. It also means you can add new capacity where you need it: if you need more network throughput, you add proxy servers; if you need more storage capacity, you add storage servers. You don't have to add them in predefined size chunks up front — if you need more, you add more, exactly where you need it. This is what I was saying earlier about making it easier for operators to grow, scale, and manage their cluster.

So that's a quick overview of what Swift is. Now that we're all caught up, let's talk about the state of the project. Swift is one of the oldest projects in OpenStack — one of the two founding projects — so we've been doing this a very long time. Last time, in Vancouver, we had just released Swift version 2.18.0, and it was the biggest release we had ever done. It had three major new things in it.

The first is something called container sharding, which is a way to take these logical containers that you see in the API and split them up behind the scenes. A container stores some metadata and a listing of what's inside it, for paginating over your objects so you can discover them. As you add more and more objects into a container — when you have hundreds of millions to billions of objects — the storage requirements for that single container get very large and can in fact exceed the available space on a single hard drive. So it's important to split those up and spread them throughout the rest of the cluster.
That way the entire capacity of the cluster can share the load for a particular container. What we implemented is a way to do that transparently, with no downtime for the end user, as an operator-initiated container sharding. It was a massive feature — it took about four years to design, implement, test, and release, and it alone makes up about 10% of the total code base today. So it's absolutely massive.

And that wasn't the only big thing we did. We also reintegrated the S3 API compatibility layer into Swift, so that anyone who deploys Swift gets, out of the box, an S3 API endpoint that talks to their Swift cluster. You can use either the Swift API or the S3 API. The great thing about this, of course, is that there are so many applications already out there written to use the S3 API against an object storage system, and we can support them. This used to be a project maintained in the OpenStack ecosystem but outside the Swift code base. We migrated it into the Swift code base so that, as we continue to add new features and run tests, we make sure we don't introduce regressions and we treat the S3 API as a first-class citizen, especially for client-facing changes.

The third thing we did was some back-end performance improvements for the consistency daemons — the background processes that make sure the right data is in the right place at the right time and keep the storage servers healthy overall.

I want to go through each of these and then talk about some of the new things we've been doing. The first is container sharding: you take a giant database and split it into a lot of smaller databases distributed throughout the cluster. This has been an amazing success; it has worked wonderfully. The only thing we've come across so far is that occasionally, after it's done sharding, a default config tuning parameter may be a little too low and it uses up some extra CPU while it's looking for new databases to shard. And that's it. It has worked while clusters stay up and while people are actively using those containers, and I've seen it used in quite a few different production clusters.

The biggest database that I know of — measured before we initiated container sharding — I don't know the exact size, but I think it was somewhere in the neighborhood of 300 to 400 million objects in that database. We were able to start sharding and it was taken care of in the background. It goes relatively quickly; moving that amount of data around can be kind of slow, especially keeping it consistent throughout the system, but at that order of magnitude you're measuring in a small number of days from initiating container sharding to everything settling out — less than a week, maybe just two or three days. So it works pretty well. The biggest single container that I know of that has been sharded has approximately three billion objects in it.
Fortunately, we discovered that it was going to have three billion objects in it well before it actually did — it merely had one or two hundred million at the time. This was in conjunction with some people importing and migrating from a different storage system into Swift. We did some analysis on the remaining data and realized the number was going to be three billion later, so we went ahead and initiated sharding. We were able to essentially pre-shard — shard what was already there — and then, as it continued to grow, keep sharding anything that got bigger. Again, this was done in a live production environment with no client impact or downtime at all. It's been tremendously successful, and I'm really happy with how well it's worked.

The other major feature from last time was the S3 API. It's harder to point at exactly what we've seen there, because really it just means we have more client applications talking to us. You hear anecdotal evidence — somebody says, we already have something that speaks S3, can it talk to Swift? — and you just say yes, and the conversation's over. So it's not quite as dramatic in terms of numbers. However, there are a few things that are rather interesting. There was a small bit of news, maybe a month or so ago, about how Splunk can talk to Swift clusters using the S3 API. This is hugely important for people who use Splunk for their indexing and analytics, because it means they can use Swift to store all of their warm indexes — the data that isn't the super-high-performance stuff right now. You never have to offload those to storage where they become unavailable; you keep access to all of your data without having to sacrifice anything just because it's not as recent. And it turns out that when you have access to all of that data, even the long tail of historic data, you can start making some very interesting decisions and discoveries about what's actually happening. So you get a huge amount of extra benefit without having to compromise on storage space, because Splunk can talk directly to Swift via the S3 API. I've seen a few other software projects that have started talking to Swift this way and have played with it a little. So that's where we're seeing success with the S3 API.

We do have plans to continue improving this. A few things we'll probably add in the relatively near term — let's say by the time we all reconvene at the next summit in Denver, I would expect these to be done — include S3 object versioning. Swift actually has a couple of different ways to store historic copies of your data as it's overwritten, and S3 has yet another way; AWS's API for it is subtly different from the way Swift works. Being able to expose that functionality is pretty important for other software vendors that expect to talk to an S3 endpoint — in this case, Swift — and use that functionality.
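Just to give a flavor of what "it just works" means with the compatibility layer, this is roughly what pointing an existing S3 application at a Swift cluster looks like — a sketch using boto3, with a hypothetical endpoint and credentials that a real deployment would issue (for example EC2-style credentials via Keystone):

```python
import boto3

# Hypothetical endpoint and credentials for a Swift cluster with the
# S3 compatibility layer enabled.
s3 = boto3.client(
    "s3",
    endpoint_url="https://swift.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="photos")                      # a Swift container
s3.put_object(Bucket="photos", Key="cat.jpg", Body=b"...")
obj = s3.get_object(Bucket="photos", Key="cat.jpg")
print(obj["ContentLength"])
```

The application never knows it isn't talking to AWS; the same container and object show up through the native Swift API as well.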
So we're continuing to add that functionality, and we'll be adding more in the months to come.

The last thing I wanted to recap from last time, in terms of production experience, is the performance improvements we made to some of the background consistency daemon processes. One of these was a way to subdivide the work inside a single storage server so that we keep all of the hardware active without getting bogged down on one particularly slow device. It's common to have dozens of hard drives inside a single storage server, and it turns out hard drives are slow, and they tend to start failing before they actually fail. The way we used to do this was essentially to scan the data across the hard drives, collect the work jobs to be done, and then start iterating over them. But if some of those work jobs were located on a drive that was becoming very slow, all of the other work would basically pile up behind it. You would end up with a hot spot inside your storage server: one hard drive using every resource it has, and all the other drives sitting idle, making no progress on the other work that needs to be done. The basic idea of the change is to subdivide that work into smaller pools — up to one pool per drive, although I've often seen it deployed closer to one pool per one to three drives. That way, if there is one particularly slow hard drive, the other work can continue to make progress, and you see a lot of improvement in the health of the cluster — or rather, the cluster is simply able to stay healthier.

I've got an example of this improvement here. This is a picture of some metrics we were collecting off a Swift cluster, so let me describe what's going on before you see what happens. As with most status graphs, red is bad and green is good, and you can see this started out with a lot of bad. What this is graphing — apologies, I haven't gone into a lot of detail on this today — relates to the data placement algorithm inside Swift. We place things in partitions of an internal namespace so that data is distributed throughout the cluster, and we can query that partition placement: one drive will have some subset of the partitions assigned to it and a different drive will have different ones. In the case of a failure, partitions are written to another location. For example, in a three-replica cluster every partition has three primary locations — where the data is supposed to be according to the placement algorithm — and if one of those hard drives dies, the data that was on it is distributed throughout the rest of the cluster. When a partition is placed onto a non-primary location, we call that a handoff location. So this graph is showing, for a particular cluster, how many partitions are in their primary locations versus how many are in handoff locations. You want the red to be gone; you don't want red. One of the first questions you should ask when you see something like this is: how did it get so bad in the first place? It's a problem and we need to fix it, but how do we avoid getting into that circumstance in the first place?
And sometimes, honestly, it's just unavoidable. I have to be clear that having handoff partitions is a normal and expected part of running a Swift cluster — it's part of how capacity expansion, failure handling, and things like that work. So this doesn't mean anything is broken, and there's no client impact from this sort of thing. It just means that operationally you need to resolve it, and there's a lot of work to be done in the background.

This cluster got into pretty bad shape with regard to handoffs because the operators had done a very large amount of hardware changes. Data that had been perfectly content sitting on one drive is suddenly supposed to be assigned someplace else as new hardware is added and older hardware is removed, and you have to work around that, so you get a lot of handoff partitions just from that. In addition to all of those hardware changes, they were also doing quite a bit of ingest — an import of data from, I think, an older Swift cluster; they were moving out of that data center or off of that old hardware and into a different storage system. At the time this was captured, this was approximately a six-petabyte cluster, so a small-to-medium-size cluster. That's how they got into this situation. I know it's really hard to see along the bottom, but the far left-hand side is July 16th and the far right-hand side is about a week later, July 22nd, so this whole timeframe is not very long. They took advantage of the new changes we made with the different concurrency pools somewhere around July 20th. You can see they were largely in a steady state, making no progress on these handoffs even though everything was working as fast as it could; on July 20th they rolled out the new settings, and about two days later the handoffs were pretty much gone. That's a massive improvement. And as someone who works for a company that supports quite a few Swift clusters, our support folks loved this change: they can now adjust some config settings for a customer and know that problems that have been plaguing people for months will be resolved in days. So this was a huge success — these back-end changes are making measurable improvements for the life of the operator, the health of the cluster, and ultimately the end users.

So, in summary of those three big things we did last time: we've been using them in production, and all three have been extremely successful. It's all good news, which is great. Now let's talk about some of the new things we've added since then. One of the big things we've done in the last six months is improving some of the interfaces and support for encryption. Remember that Swift supports at-rest data encryption, and the threat model we're specifically targeting is when a data drive is removed from a storage node: it is safe to RMA it, because the user data on it is encrypted. That is the threat model we are specifically protecting against.
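As a rough illustration, this is approximately what the simple config-based setup looks like in the proxy server's pipeline. This is a sketch only — the root secret value is made up, and the exact option names and pipeline ordering are worth double-checking against the current keymaster documentation:

```ini
# proxy-server.conf (sketch)
[pipeline:main]
pipeline = ... keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
# Base64-encoded root secret; per-object keys are derived from this.
encryption_root_secret = VGhpcyBpcyBub3QgYSByZWFsIHNlY3JldC4uLg==

[filter:encryption]
use = egg:swift#encryption
```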
A lot of people really like this sort of thing: they don't have to worry about isolating those hard drives, or about keeping them out of other servers being reprovisioned if the drive turns out to be fine, and so on. To do that, we had two ways you could implement it. One is that in a configuration file you set the master key for the cluster, and then, using some key-wrapping techniques, derived keys are produced for everything that gets encrypted. The other way is to use the Barbican service, via the Castellan library, for your secret storage and keep the key over there. So that's where we were.

We've now added two things. One is that, without the Barbican service, Swift can now talk directly to a KMIP endpoint — KMIP is one of the major protocols that key management systems speak — so you can talk directly to a KMIP server without also having to install and manage Barbican. The second thing we added is the ability to support multiple encryption keys over time, which means you can now introduce a key rotation policy. That was one of the big problems before: you could set the key, but you could never change it. Now it's possible to add more keys, and the most recent key is always the one used for new writes. You still have to keep the older keys around, because you want to be able to decrypt the older data. The reason you can't just re-encrypt everything is that in a storage cluster you've got lots and lots of data. If somebody managing the cluster puts 10 petabytes of data in there, all encrypted, and then says, "I want to rotate the key," you can't realistically read 10 petabytes off the drives, re-encrypt all of it, and write it all back in any reasonable amount of time. So instead, all new data is encrypted with the new key. This multi-key rotation functionality works both with the Barbican keymaster and when talking directly to KMIP services. I'm pretty excited about this; I think it will enable use cases for people who previously considered Swift and chose not to use it because it couldn't support key rotation.

One of the other things we worked on over the last six months is improving our testing — the way we hunt for and find bugs automatically. During the PTG in Denver a few months ago, we spent most of the week improving a lot of the automated testing we do. A lot of this has been about taking advantage of new features in Zuul v3, which means that all of the test definitions can now live inside the Swift repo itself rather than somewhere else. The main benefit is that Swift contributors — the best people in the world to actually know how to test the Swift code — don't have to figure out how to get approval from other teams to write a test definition covering the Swift project. As part of implementing a feature in a patch, we can easily define a new test job in our own repository, and it lands with the patch itself.
Which means that the best people in the world for writing these tests are now also responsible for maintaining the test definitions, and they're automatically picked up and run as soon as the patch lands, thanks to the amazing work the infrastructure team and the Zuul team do with that project.

A few interesting tests we've added: in the past we've had a set of tests that simulates multiple nodes — we take one VM, mount four loopback drives instead of one, simulate four servers on it, shut one down, and test some handoff and failure scenarios. They're kind of fake because they're all on the same virtual machine. We have now introduced true multi-node tests, standing up what I believe is a five-node cluster: one node for the test runner, one proxy server, and three storage nodes. That kind of configuration lets us do some very interesting things, one of which is an automatic rolling upgrade test. If I remember correctly, it installs the previous version — which will already have passed functional tests, since it had to in order to become the previous version — then upgrades the storage nodes and runs functional tests, then upgrades the proxy node and runs functional tests, and ensures that everything continues to work. We have long held that an operator should be able to upgrade to any version of Swift at any time they choose without breaking backwards compatibility — definitely for the user, but if at all possible also for configuration options and things like that. Having this as part of the actual gate testing gives us one extra level of confidence that, yes, this has actually gone through automated testing, in addition to the code review that also looks for those sorts of things.

We've also added another set of tests that we don't run on every single patch but can run, say, on every release. We don't run it on every patch because it's rather expensive and could grow to a very large scope: historic version compatibility testing. It's not just the previous version; now we can cover past stable releases, up to and including everything. You should be able to take a version of Swift that was released in 2010 and upgrade it directly to the most recently released version, which is 2.19, and it will work. So we've added more functional testing to make sure that keeps working.

The last big thing we've been working on is Python 3 compatibility. It is a long, slow process, fraught with peril and the risk of introducing extra bugs. Most of the hard parts have to do with bytes versus Unicode strings and understanding how that actually works. There are problems with some serialization formats that work in one direction but aren't deserializable in the other. And when you have to be able to read data that was written years ago, you have to make sure you're not introducing subtle encoding bugs — or even crashing bugs — when you deserialize old data.
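As a toy illustration of the kind of hazard we mean — this isn't Swift code, just the general bytes-versus-text trap — data written as raw UTF-8 bytes under Python 2 must not be "fixed up" by guessing the wrong encoding on the way back out:

```python
# An object name stored on disk as UTF-8 bytes by an older version:
name = b"caf\xc3\xa9.jpg"

print(name.decode("utf-8"))    # 'café.jpg'  -- the data comes back intact
print(name.decode("latin-1"))  # 'cafÃ©.jpg' -- decodes without error,
                               # but the data is now silently corrupted
```

The second call doesn't crash, which is exactly why this class of bug is so dangerous in a storage system.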
I've heard it said, and I've said it myself a few times, that the most important thing a storage system can do is store your data — that should go without saying. You would then think the worst possible thing a storage system could do is lose your data. That's not actually true. The worst possible thing a storage system can do is corrupt your data: hand you something and say, "here is your data," when it is actually not your data. It's that sort of thing we have to be very cautious about, which is one reason the Python 3 compatibility work is taking a long time.

As we all know, the Python community is ending support for Python 2 in approximately 13 months, at the end of the 2019 calendar year / beginning of 2020, so we have some near-term deadlines there. All of the major Linux distros are going to be shipping Python 3 only by default in their upcoming releases, if they're not already doing so. That doesn't mean Swift is automatically going to break, but what's really problematic is when your dependencies start breaking and can no longer get bug fixes. It's a whole complex effort of an entire industry basically trying to move forward, and we're doing our part. So those are basically the things that have been happening in the last six months.

The question is whether I can forecast an ETA for Python 3 compatibility. The timeline we're working under, and that we've discussed with the broader technical community, is meeting the community's goal: by the end of the T release — the Train release now — we should be able to run under Python 3.

Speaking of the community, I like talking about what's going on there. Swift developers are the best people I've ever worked with — an amazing group. I've talked about some of this in the past, and I've got graphs I make all the time about what's going on in the community; it's interesting tracking that over what's now nearly nine years of history. You can see when companies come into the community and when they leave, and as we know, over the past few years — in all of OpenStack, not just Swift — several very large companies, large in the number of individuals they assign to work on a project, have fallen away from the community. Swift has been hit by that, like many other projects. That being said, looking over the past 12 months, which is what this shows, we are more or less stable, which is good news. There may be a slight decline, but basically we're looking at approximately 15 to 20 unique active contributors every month. That's where we are. It does introduce some risk for the future of the project — and I think most projects in OpenStack are coping with this — but a smaller team means more risk. When someone pulls out of the community, even an individual, much less an entire company, it has much more of an impact than when we had 50 active contributors every month. So it's something I'm paying a lot of attention to, something I talk with people about, and one of the things that keeps me up at night.

Yes — the question is whether there's a list of the companies involved in Swift. There is, and I normally include the logos of at least the major contributors in these slides, but I just didn't this time.
The major corporate contributors to Swift right now — and this is not to exclude anyone, just off the top of my head — are these: SwiftStack is a major contributor, OVH is spending a lot of time contributing, Red Hat is doing a bit of contribution, as is SUSE, and NTT is still actively involved. Together those make up the core of the companies still involved in the Swift project.

Looking ahead, we have some interesting work on the horizon — very large both in the level of effort it takes and in its impact on the overall design of the system. Two of them I want to highlight here. One is something called LOSF, or "lots of small files". Essentially it's a way to optimize the on-disk storage for small files. Swift is pretty good at dealing with data of all sizes, but it's vastly different to say "I have a billion one-kilobyte objects" versus "a million thousand-kilobyte objects" or "a thousand hundred-megabyte objects". Those are roughly similar in total bytes, but they're very different operationally. So OVH is leading some very interesting work on changing the on-disk layout for objects to optimize storage for small files, so that there's less overhead in storing and tracking them. It should also improve the background consistency daemons — replication, reconstruction, auditing, things like that — the processes that have to walk the drive to make sure the right data is in the right place. I'm excited about that work.

The other ongoing piece of work I've mentioned in the past is something we're calling a task queue: an internal system so that we can more easily scale out work jobs that we find in the system. For example, one way this might be used is a future piece of functionality that says: I have all of my data stored according to one storage policy — maybe it's replicated — and I'd like to migrate it, without changing any URLs, to erasure-coded data, for whatever reason: performance, storage capacity, whatever the case may be. A task queue would be able to take that work and fan it out to essentially the entire cluster if necessary, and keep making progress on it, rather than being bogged down on a single server or a very small number of processes. Another place we're looking at using the task queue is expiring objects. It's possible to write data into Swift and say: after 30 days, have it automatically deleted. But think about a cluster that only stores, say, 30 days of data, where everything that comes in will be deleted after 30 days. After 30 days, you've literally doubled your request rate into the cluster, because you have your normal ingest plus the delete requests that are automatically generated. Being able to track and take care of that additional object expiry work is something that can take advantage of the task queue, so that more of the cluster can participate and you don't get operational bottlenecks.
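For reference, this is roughly what the expiring-objects feature looks like from the client side — the same hypothetical endpoint and token as before; X-Delete-After (seconds from now) and X-Delete-At (an absolute timestamp) are the standard Swift headers for this:

```python
import requests

storage_url = "https://swift.example.com/v1/AUTH_alice"  # hypothetical
token = {"X-Auth-Token": "AUTH_tk_example"}

# Keep this object for 30 days, then let the expirer delete it for us.
requests.put(
    f"{storage_url}/logs/2018-11-13.gz",
    headers={**token, "X-Delete-After": str(30 * 24 * 3600)},
    data=open("2018-11-13.gz", "rb"),
)
```

Every object written this way implies a deletion later, which is exactly where the doubled request rate comes from and why fanning that work out matters.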
So, that being said, I believe there are a couple more minutes left, and I would love for anybody who's interested to come get involved. There are plenty of ways: plenty of bugs, and lots of big features and small features where you can make a difference. So thank you for your time. Do we have any questions?

All right. The question is: what is the best way to make feature requests in Swift? They could come to me in an hour with a list of requirements, but otherwise, in the community, do you just come to the chat, or...? Maybe right here, right now isn't the exact best place in the world to do that, but there are a few things I'd say. One is getting involved and talking to people — IRC is a great way to do that. A feature request could also be added to the Launchpad bug tracker, but there's also a wiki page, linked in the channel topic of the OpenStack Swift Freenode channel, called Ideas. The only rule about it is: if you have an idea, write it down so you can share your thoughts, even if it's just requirements, and then link to it. Write it down and link to it — that's it. It avoids all of the complexity of dealing with a review process on specs, or getting buried in a task tracker or a bug tracker or something like that. I don't care where you write it down, just make it linkable, and then add the link to the Ideas wiki page — it's on the OpenStack wiki, and I believe it's Swift/Ideas, probably with a capital S and a capital I, but the link is in our IRC channel topic. That's the best place, but I'd love to talk to you afterwards as well.

The next question is about the container sharding work — they've been playing around with it. So sharding is currently initiated via the CLI, right? It's not enabled automatically by default. That is correct. The comment is entirely right: container sharding is not done automatically right now, and we intentionally did not enable it automatically to start with. It is operator-initiated. An operator needs to identify, through their own monitoring and metrics — and Swift does plenty of reporting for this — that here is a large container, and then initiate sharding for that container. So the follow-up question, which is entirely appropriate, is: when are we going to make it automatic? That's a good question. I don't have a timeframe for you, other than to say you're certainly not the first person to ask for that or to want it, and it is something we're looking at. I doubt it's something we'll rush into; there are other things we're looking at that are either already ongoing or that, for various reasons, have higher priority. The main technical challenge with automatic container sharding is designing a way for all replicas of the container data to converge on the same answer, because you don't want a container to be sharded in different ways by different replicas. So essentially — although I don't really like going into this realm — you need some sort of consensus-based leader election. Swift is an eventually consistent system, so the idea of adding Raft or Paxos or some other consensus algorithm is almost a philosophical question: should we add that sort of thing to Swift at all?
And the answer may be yes, but the answer may also be that there are other, better ways to do it.

Oh, awesome, okay — so you've got something that kind of does this already. Right: if you're writing a cron job to do that already, number one, go ahead and do it — you're going to write that way faster than we're going to write automatic container sharding. But number two, the hint there is that you, as the operator, essentially choose your own leader based on whatever you want — maybe a coin flip, who cares — but as long as you choose one and only one replica and initiate sharding there, that data will very quickly propagate to the rest, and then it will be able to distribute the work out and work really well.

Is there anything else? I believe we're out of time. Great — well, thank you for your time. We can talk here or out front afterwards. Thank you.