Okay, great, thanks everybody for coming. My name is Paul Luse. I'm a software engineer with the communications storage infrastructure group at Intel. I've been developing storage software for a little over 20 years at Intel, ranging from RAID firmware to Windows device drivers to Windows file systems, really all closed-source stuff, so I'm really excited to be part of the whole OpenStack thing here. I've been working on Swift for probably the last 8 to 10 months, and today Tushar and I are going to give you a bit of an overview of what we've been working on in the community along with folks from SwiftStack, Box, Rackspace, and Red Hat. So I'll introduce Tushar now and then we'll get started.

I'm Tushar Gohad, a software engineer in the storage division at Intel. I've been involved with open source for a while; I came to Intel two and a half years ago from MontaVista Software, which is an open source company, where I was involved with the Linux kernel on the networking side. I've been in storage for the last three years at Intel and, same as Paul said, involved with OpenStack Swift for the last year or so.

Okay, we've actually got quite a bit of material, so I want to say just a few words about the agenda and what we're going to be covering, and then we're going to jump right in. If you have questions as we go along, it's probably better to hold them toward the end in case we run out of time; Tushar and I will both be available, and we can step out into the hallway and cover any of this in as much detail as you'd like.

Let me get a quick show of hands: how many folks are familiar with Swift, to even know what it is? Fantastic. How many were able to attend the session on Monday that I gave with John Dickinson, the PTL, where we talked about policies? Okay, cool, a pretty decent number. So this first section is going to be the Reader's Digest version of Monday's talk, but still meant to be informative for those of you who didn't make it. In the second section we're going to talk about erasure codes; this is where Tushar is going to jump in and give you an overview of what we've been doing in that area with the community. Erasure codes are really exciting for a bunch of reasons, and they build right off of storage policies, so if you attended the talk Monday or get a chance to review the materials, you'll see how it flows really well with what the community is working on with these two projects. Then Tushar will talk about COSBench. How many folks have heard of COSBench? Okay, good, so this will be informative, since not a lot of you are familiar with it. COSBench, as Tushar will explain, is an object storage benchmarking tool; it's open source, contributed by Intel, and maintained separately. One thing we want to make sure you don't get mixed up on: COSBench is not Swift-only. It's designed as a benchmarking tool meant to become an industry de facto standard that can run against any object storage system you want, with a plug-in model.
I'll let Tushar go through the details of that, and then, last but not least, I wanted to mention on one foil a public Swift test cluster that was announced this week. If you go to swiftstack.com and take a look at their blog and news section you'll see this activity, and we'll talk about it at the end, because we're pretty excited about it; it's something neat in the community that is a little different from these other development activities.

Okay, so we can't really talk about storage policies without doing a little bit of level setting on Swift. I know there are a lot of folks here who have some good Swift background based on the hands, but we'll at least define what it is, and again, if anybody wants to talk in more detail about what Swift is, we can do that afterwards. Obviously it's an object storage system; it's one of the core projects in OpenStack. I wanted to mention something about the CAP theorem, specifically eventual consistency. There have been a few talks this week that didn't quite get it right when they defined it. John Dickinson had a talk last night and I think he really set the record straight, so I'm just going to repeat what he said. I'll start with what it is not: it is absolutely not the case that writing to Swift is like a posted write; it does not mean that you write an object and it eventually makes its way to the disks. That is not what eventually consistent means. It means that under certain conditions, failure conditions or when something is really busy, there may be a window where the object exists on your servers and is stored durably, but if you go to list the container, you might not see it in the listing yet. There are some other little corner cases where other small strange things might happen, but it definitely does not mean that you write and some day your data will get down there. Another important concept in Swift, specifically for explaining storage policies, is the notion of containers. If you're familiar with S3, containers are kind of like buckets; since a lot of folks here seem familiar with Swift, you'll see how policies leverage the container model for grouping objects of like characteristics and use that to provide a new capability to applications. The RESTful interface is probably fairly well known by everybody, and it's a cost-efficient scale-out system built on standard Intel hardware.
Okay, so what does a real deployment look like? This is the big picture on one slide. You can see that the Swift architecture is set up in tiers. The first box here at the top we typically refer to as the access tier; that's where modules like authentication and the load balancer for dealing with multiple proxy servers live, and then of course the proxy servers themselves, which in the Swift world act like traffic cops. They field all the incoming requests, they route around failures, and they filter things out into the back end, the private side of the cluster, which is what we call the capacity tier. That's your collection of storage nodes, which run all sorts of different services to maintain replication and data integrity, and this provides the scale-out capability on the capacity side. This two-tier architecture is really cool in that it gives you independent scalability: you can scale for concurrency by adding proxy nodes, and you can scale for capacity by adding storage nodes. I should also mention there are probably another 200,000 ways to configure this; the services I mentioned can all live on one box, they can live on multiple boxes, or you can break services out onto their own dedicated servers depending on your usage requirements, so it's incredibly flexible and meets just about any deployment scenario. So an object comes in, it hits the load balancer, the load balancer makes a decision about which proxy to use based on whatever algorithm it's set up for, and then, for our triple replication scheme, the ring, which we'll talk about a little more because it's another fundamental concept behind storage policies, is consulted by the proxy server to determine where to send the three copies of the object. Okay, so the object is now at home on three servers. At some point later a client comes in and requests that object; pretend that arrow goes to the load balancer, because it really does. The request hits the proxy, and the proxy picks one of the object servers to retrieve the object from. There are a few different algorithms for deciding which server to choose; that's not super important for what we're talking about here, I just wanted to mention that it's not one fixed server that it goes to all the time to retrieve data.
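To make that placement lookup a little more concrete, here is a minimal, purely illustrative sketch of how a consistent-hashing ring maps an object path to a partition and then to a set of devices. The device names, partition power, and partition-to-device assignment are all made up for illustration; this is not the actual Swift ring code.

```python
import hashlib

PART_POWER = 10          # 2**10 partitions; real clusters use larger values
REPLICAS = 3             # copies per object for a 3x scheme

# Hypothetical device list; a real ring also tracks regions, zones, and weights.
DEVICES = [f"storage-node-{i}" for i in range(8)]

# Toy partition -> devices assignment; Swift builds this with a balancing algorithm.
PART2DEV = {
    part: [DEVICES[(part + r) % len(DEVICES)] for r in range(REPLICAS)]
    for part in range(2 ** PART_POWER)
}

def get_nodes(account, container, obj):
    """Hash the object path to a partition, then look up that partition's devices."""
    path = f"/{account}/{container}/{obj}".encode()
    # Take the top bits of an md5 digest as the partition number, as Swift does.
    digest = int(hashlib.md5(path).hexdigest(), 16)
    part = digest >> (128 - PART_POWER)
    return part, PART2DEV[part]

part, nodes = get_nodes("AUTH_demo", "photos", "cat.jpg")
print(f"partition {part} -> {nodes}")   # the proxy would send one copy to each node
```

The key property to take away is that any service holding a copy of the ring can compute the same answer for the same object path, which is why the proxy can find data without a central lookup service.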
So with that intro into Swift and what it is, we'll talk a little bit about why storage policies, why we decided to do this as a community. This, again, is not an Intel project that we dumped over the wall; it's an extremely collaborative effort, and like I said it's been a real pleasure to work with this community, with the SwiftStack guys and the Red Hat and Box folks, especially when we get to the erasure code piece. We identified, collectively, a set of technical opportunities. As flexible and usable as Swift is today as an object store, there are some things that we can address with storage policies, and the first one has to do with the replication level. I think most of you know you can choose your replication level in Swift; it doesn't have to be three copies, you can do two or you can do four, but you have to pick one, and that's what your cluster runs at. So we thought, wow, that's probably an opportunity where we can add some improvement. Another one is that today Swift treats all storage nodes as if they're equals: it doesn't matter what processors are in them, what acceleration capabilities they have, whether they have hard drives or SSDs, they're all equals. The one exception is the account and container databases; if you're familiar with Swift, you know you can specify that you want to store those databases on SSDs, and you do that for performance reasons, but today that's really the only differentiation capability you have with your storage nodes in Swift. And then finally, how would you go about adding something like erasure codes, a new durability policy? Let's say plain double or triple replication isn't enough, and it's not enough to say I want some of my cluster at 2x and some at 3x; you actually want to keep up with the Joneses, go home with erasure codes, and get all the benefits Tushar is going to talk about. These are things that, without storage policies, are really not that easy to solve, so that's why we tackled this policies concept. I guess we started on it eight or nine months ago, so it's been a pretty significant effort; I'll show you a little more with a picture in a minute about what all the touch points are so you can get a feel for what we're doing.

Okay, I mentioned some of this already; these are the technical opportunities: the ability to group things by storage, the ability to add additional durability schemes, and the ability to deal with new usage models. I'll spend a lot more time on usage models in a minute, because that's probably the most interesting thing for deployers and users of Swift, as opposed to the design sessions later this week, where of course the most interesting stuff is what's under the hood and how we made all the decisions we made. One of the fundamental concepts behind storage policies is the use of multiple object rings. As I mentioned a few slides ago, you can take account and container databases and place them on SSDs for performance reasons, but you can't do that with objects. Why is that? Because account and container databases have their own rings, and all object storage today shares one ring. So the definition of storage policies can almost be done in just five words: introduction of multiple object rings. There's a nice picture here, and I haven't talked a whole lot about the ring, but we've got tons of material in backup and there's all sorts of stuff on the web; it's a really interesting topic, and if we didn't have so much other material today I would love to go through it in detail. Today, with triple replication, when the proxy or whatever service needs to look up the location of an object within the cluster, it goes to the ring and gets three locations; for reduced replication it would simply get two locations from a separate ring; and for erasure codes we were able to leverage all of the features of the ring code by representing fragments on the ring instead of copies. We'll go into a lot more detail on that when Tushar goes through the erasure code section; there's a small sketch of the per-policy ring idea below.
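As a rough illustration of "one object ring per policy", here is a toy mapping from policy name to its own ring and replica count. The ring file names mirror the per-policy object-N.ring.gz convention the implementation settled on, but everything in this sketch is illustrative, not actual Swift code.

```python
# Each policy gets its own ring; the number of slots returned by that ring is
# whatever the policy needs (whole copies for replication, fragments for EC).
POLICIES = {
    "triple":  {"ring_file": "object.ring.gz",   "slots": 3},   # legacy/default ring
    "reduced": {"ring_file": "object-1.ring.gz", "slots": 2},
    "ec-10-4": {"ring_file": "object-2.ring.gz", "slots": 14},  # 14 slots hold fragments, not copies
}

def ring_for(policy_name):
    """A proxy-like component would load and consult this policy's own ring."""
    p = POLICIES[policy_name]
    return p["ring_file"], p["slots"]

for name in POLICIES:
    ring_file, n = ring_for(name)
    print(f"policy {name!r}: consult {ring_file}, place data in {n} locations")
```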
The second key element to storage policies is that there's just one piece of metadata, one API change, to enable all of the usage models we're going to go through, so it's really exciting how wide a variety of features and capabilities we get out of adding an object ring and one little API change. The API change is a header called X-Storage-Policy, and an application simply specifies it when it creates a container; that's it, you never do anything else. The cloud administrator, the cluster administrator, has to define the policies and of course set things up and get things configured, but then the cluster simply exposes human-readable policy names to applications. An application says, all right, if I want this data stored with triple replication, I'll create a container with the policy named "triple", and anything stored in that container gets triple replication. If it then wants to store a different object using erasure coding, it just writes it to a different container that was created with an erasure code policy name. So it's a really simple interface for applications to take advantage of all of this.

Okay, this is a high-level software architecture block diagram that covers all of Swift, and I won't go through it in painstaking detail to give everybody the lowdown on how Swift is built and what these interfaces look like; the intent of this slide is to show you what we had to go and muck with to get storage policies to work. It's a significant undertaking, and it's taken a long time, with a lot of people and their hard work. In fact, if you made the Monday talk, the title of that talk came from John, our PTL, who called this the biggest thing to hit Swift since the project was open sourced, and "biggest" refers both to its significance and impact on the operation of Swift and, quite frankly, to its size. This is a series of patch sets that we've been working on off to the side; I think we're really close to being done, and I've been saying that for weeks, but I'm going to say it again: we're really close to being done. At the end I think we'll be close to five thousand lines of code on this patch set, and Swift, including test code, totals around sixty thousand, so it's a pretty significant chunk of code. You can see here that on the proxy side we're hitting three of the major modules, and on the object server a significant number of the services that run there need some sort of plumbing or functional update to effectively utilize these policies and guarantee that all of the weird conditions you can get into with the introduction of policies are handled appropriately.

Okay, so let's look at a couple of usage models. These are probably pretty obvious given the way I've described them already, but pictures are always nice for a better understanding of what we're talking about. Let's say you've got a container with a 3x policy in a cluster that's already set up. Like I said, with Swift today, if you wanted to change that to 2x you could, but it would take effect across the entire cluster. With policies, you simply create another container, point the ring associated with that container's policy at the devices you want to use, and now when you write to that container you get double replication; write to the 3x container and you get triple replication. A minimal client-side sketch of what that looks like follows.
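Here is a hedged sketch of the client side of that usage model, using plain HTTP calls via the `requests` library. The X-Storage-Policy header is the one piece of API surface described above; the endpoint, token, and policy names are purely illustrative and would come from your auth system and your operator's configuration.

```python
import requests

# Illustrative values only: a real client would get these from Keystone or another auth service.
STORAGE_URL = "http://swift.example.com/v1/AUTH_demo"
TOKEN = {"X-Auth-Token": "<token>"}

# Create two containers, each tagged with a policy name the operator defined.
requests.put(f"{STORAGE_URL}/important-docs",
             headers={**TOKEN, "X-Storage-Policy": "triple"})      # 3x replication
requests.put(f"{STORAGE_URL}/scratch-data",
             headers={**TOKEN, "X-Storage-Policy": "reduced"})     # 2x replication

# From here on the application just writes objects; the container's policy
# decides how durably (and on which ring/devices) each object is stored.
requests.put(f"{STORAGE_URL}/important-docs/contract.pdf",
             headers=TOKEN, data=open("contract.pdf", "rb"))
requests.put(f"{STORAGE_URL}/scratch-data/tmp.log",
             headers=TOKEN, data=b"transient data")
```

Note that the policy is set once, at container creation; individual object PUTs look exactly the same regardless of which policy is behind the container.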
So we're allowing you to effectively segment your cluster based on whatever criteria you want to come up with, and that example shows a redundancy criterion. Here's another, totally different kind of usage model: we can now create a performance tier, just like you do with account and container databases. It's not related to redundancy, but it's the exact same code. You have a container with your HDD policy, which is what we're now calling the legacy ring, and with storage policies you can go in and create a new container that points to SSDs; maybe they're the same SSDs as your databases, maybe not, it's whatever you want to do, but now you've got a low-latency tier that your application accesses via a container.

Okay, here's another one that again is totally different from the other two but enabled by the same exact code base and the same philosophy: geo-tagging. This is still sort of a work in progress, and I'll explain why in a second. With geo-tagging, let's say you have one cluster, maybe globally distributed, maybe within one site, and you've got certain physical areas that you want to make sure stay isolated; it could be for regulatory reasons, it could be for whatever reasons. There are some capabilities in Swift today with global clusters and regions that help you segment things, but there's nothing today that lets you say: I want a guarantee that when I write to this container, the data never leaves this geography or this data center, regardless of any other changes I make with replication or rebalancing. With policies, you simply create one container and point it at the physical media that meet your isolation criteria, create another one with its own criteria, and that way you have the guarantee built in. Now, I call this a work in progress because we haven't addressed isolation of container and account metadata yet; that's a blueprint we'll be drafting as soon as we get the rest of policies done and merged onto the master branch. Still, at least for me it's a really exciting example because it's so different from performance tiering, so different from durability with erasure codes, and so different from what we get with multiple replication schemes within the same cluster; so many different things come out of one feature, which is just coolness.

And then finally, erasure codes: again, there's our container with the 3x policy, and all we have to do is create a new container and point its ring at a different number of drives in the cluster. Those ring entries, as Tushar will go through, now represent fragments of your object as opposed to the object itself; we're no longer using the ring to track copies, we're using it to track fragments, and as you can imagine that involves some code changes as well; it's not just a mapping exercise. So with that I'm going to turn it over to Tushar, and he's going to give you the overview of erasure codes and where we're at with that.

Continuing with usage models for storage policies, here's the first significant storage policy in Swift that we're working on as the storage policies code gets merged into Swift master. How many of you are familiar with erasure codes? All right, about 25, so I'll go into some detail here.
Erasure coding is an alternate data durability scheme in which an object is not replicated but split, using the standard terminology, into k data fragments and m parity fragments. Essentially you get what's called space-optimal redundancy and higher availability: you take your object, put it through a process called encode, which applies Galois field math to the object and calculates additional fragments, called parity fragments, alongside the data chunks, and those chunks are distributed across your cluster. By distributing the chunks across the cluster you can tolerate m drive or node failures, so this actually gives you higher availability than triple replication, or a replication scheme in general; there are papers that support that statement. In terms of space optimality, if you split your object into 10 data chunks and added, say, 4 parity fragments for data protection, you use roughly 50% less space than you would with triple replication.

Since erasure codes involve Galois field math, they come with a cost, which is a higher compute requirement than replication, where you're mainly just copying data. With erasure codes you encode an object when you split it and spread it across your cluster, and when you reconstruct an object from those fragments you incur a lot more CPU time. I'll quickly explain the word "erasure": these are called erasure codes because unless you know which data chunks were "in erasure", as in lost, you cannot reconstruct them; if you don't know what data you lost, you can't recover it. So it's essential to know which chunks were erased, and when you put it all together into an erasure code scheme it also means higher network bandwidth, because you have to go out over the network to figure out which chunks went missing and pull in enough surviving fragments. The higher compute and network requirements make erasure codes more suitable for archival workloads, where you have a higher percentage of writes and read less often, the workloads traditionally called cold storage, where you essentially "deep freeze" your data and then rehydrate it far fewer times than you write it.

Now let's look at this in the Swift context, the regular replication path in Swift changed into the erasure code scenario. You have a client uploading an object; the object arrives at the proxy, and the proxy passes it through something called an erasure code encoder, which splits the object into, again using the standard terminology, k plus m fragments and distributes them across the Swift cluster. The download process, the reconstruction or decode process, is essentially the opposite. Because you have k data fragments and m parity fragments, the rule with erasure codes is that you can reconstruct your data from any k of the fragments; if you have a total of k plus m equals n fragments, the proxy can do an "n choose k" and gather any k of those fragments to reconstruct the object.
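Before going on to the download path, here is a tiny worked example that puts numbers on the space claim above, comparing triple replication with a hypothetical k=10, m=4 scheme; the exact percentage depends on the k and m you pick.

```python
def storage_overhead(k, m):
    """Bytes stored on disk per byte of user data for a k+m erasure code."""
    return (k + m) / k

replication_3x = 3.0                 # three full copies
ec_10_4 = storage_overhead(10, 4)    # 10 data + 4 parity fragments -> 1.4x

savings = 1 - ec_10_4 / replication_3x
print(f"EC 10+4 stores {ec_10_4:.1f}x the user data vs {replication_3x:.0f}x for 3-replica")
print(f"That is roughly {savings:.0%} less raw capacity for the same user data,")
print("while tolerating the loss of any 4 fragments (vs any 2 copies for 3x).")
```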
So when a client download request comes in, the proxy gets k fragments and puts them back together, again through Galois field math, which is heavy on compute and of course on network, because you're pulling from more than one node, and it reconstructs the object and delivers it to the client. That's the Swift erasure coding scenario, the big picture.

So what does it take to add erasure coding support to Swift? Paul and the community have been doing the great work of laying the foundation, and erasure codes do build upon the storage policies framework. I'll skip through the animation so I can cover this quickly. The erasure coding policy will be available in Swift at container-level granularity, which follows from the storage policy design: if a container is tagged with an erasure code policy, then all the objects inside that container will be erasure coded. We chose to do proxy-centric, inline erasure coding as opposed to offline erasure coding, and I'll go into a bit of detail on that. Inline erasure coding means the proxy does the erasure coding as the object is streamed in and pushes the fragments to the object servers, whereas an offline design would have the proxy upload the data to the object servers as it comes in, essentially replicate it, and then during quiet periods move the objects into an erasure coded container, which would happen at the storage node level. But as I said, erasure coding comes with a higher compute cost, which makes it more suitable for an inline encode design, because the proxy is the focus of demanding services in Swift, so that's where you're likely to find more compute power as opposed to the storage nodes; we're leveraging the current architecture. Repair is a big part, an important piece, of this design: if you have a disk failure, a node failure, a cluster failure, or you hit bitrot where you're missing one fragment of an object, the repair scheme gets pretty involved. We chose to leverage most of the current Swift auditor and to turn the replicator into an erasure coding reconstructor, so most of it is again leveraging the current architecture; I won't go into too much detail on that. Here is a snapshot of what we had to change to make Swift erasure-coding aware. As you can see, in the proxy nodes and the storage nodes there were existing modules we had to change, like the object controller in the proxy and the WSGI application, and on the storage node some additional metadata had to be incorporated; we also had to come up with a new EC reconstructor. One of the important pieces here is that all the intelligence to do the erasure code encode, decode, and repair we chose to move out of Swift into a library, which ended up being a separate project that Swift talks to for erasure coding.
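To give a feel for what "Swift talks to a library" means in practice, here is a hedged sketch using the PyECLib interface described next. The class and parameter names (ECDriver with k, m, and a backend name) follow current PyECLib documentation and may differ slightly in the 0.9/1.0 versions discussed in this talk, so treat it as illustrative.

```python
from pyeclib.ec_iface import ECDriver

# 10 data + 4 parity fragments, Reed-Solomon via the Jerasure backend.
# The ec_type string is taken from current PyECLib docs; other backends
# (flat XOR, ISA-L) plug in the same way.
driver = ECDriver(k=10, m=4, ec_type="jerasure_rs_vand")

obj = b"some object body" * 4096            # what the proxy would be streaming in
fragments = driver.encode(obj)              # 14 fragments to spread across nodes
assert len(fragments) == 14

# Lose any 4 fragments (say, a failed node) and the object still decodes
# from the remaining 10: this is the "any k of n" property.
surviving = fragments[2:12]
assert driver.decode(surviving) == obj

# The reconstructor's job is to rebuild just the missing fragments from survivors.
rebuilt = driver.reconstruct(surviving, [0, 1, 12, 13])
```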
The library implements the API shown here, and we deliberately kept that API simple enough for projects like Swift to incorporate. It's not a Swift-specific library, but it was developed along with the Swift community, with the erasure coding project in Swift as its first user. It's essentially a Python interface wrapper library with pluggable C backends. The primary reasons for creating it were, first, a favorable licensing model compared to some other libraries out there, like zfec, and second, it was designed with performance in mind. It's a convenience wrapper that wraps the C erasure code backends and hides the details of the C implementations from a Python user. In version 1.0, which should be coming out pretty soon, we support Jerasure, which is a very popular erasure coding backend; a flat XOR backend that was recently presented at the FAST conference; and the Intel Intelligent Storage Acceleration Library (ISA-L) erasure coding scheme that I'll talk about in a minute. The library is BSD licensed and hosted on Bitbucket, so this is another open source project; it was jointly developed with Box, another storage company with erasure coding researchers on staff, so we have their blessing for this library, and with the Swift community.

Now I'll introduce the storage acceleration library that we, the storage division at Intel, develop. It's called the Intelligent Storage Acceleration Library (ISA-L), and it provides primitives for accelerating storage functions like encryption, compression, dedupe, integrity checks, and so on; as you can see, as of version 2.10 the library supports these primitives. I should back up a little and talk about licensing: the full library is currently available under an agreement with Intel, but there is an open source version out there that provides just the erasure code support today. Why does this library exist? It was written to take advantage of Intel SIMD primitives to parallelize the erasure coding operations, and it has been shown to be an order of magnitude faster than the normal table-lookup methods. It supports Reed-Solomon with Vandermonde matrices as well as Cauchy schemes. There are several non-Reed-Solomon schemes out there that were designed so that you don't have to go out to your cluster and fetch as many fragments to recover missing data, but with the acceleration provided by this library those methods become much less relevant. It's available on 01.org for download, and it's BSD licensed.

All right, just to summarize project status: PyECLib, the Python EC interface library, is upstream on Bitbucket and PyPI, so it's available for download today; it's at version 0.9, which is stable enough to use. Storage policies are planned for inclusion in OpenStack Juno, and for erasure coding we're actively working on the data path.
There are design sessions this week in the Swift design track, so if you're interested you're welcome to join the session tomorrow at 5 p.m. There's always the OpenStack Swift IRC channel, and we have a Trello board for discussion of the erasure code design, so you're welcome to contribute, and there are several blueprints available for comment or contribution. Finally, for Intel ISA-L, if you're interested in the primitives other than erasure code, you're welcome to check out our storage page on intel.com.

With that, changing gears a little, I'm going to talk about a benchmark tool that was introduced at the Portland design summit last year. It's called COSBench, which stands for cloud object store benchmark. It was brought in to address a gap in the object storage space: there wasn't a single tool that addresses multiple object store backends and exposes the kind of performance metrics that Iometer does for block storage. Iometer is pretty much considered the standard tool for block storage; it supports a distributed client-server model and lets you analyze the performance and scalability of a block storage system, so think of COSBench as the equivalent of that in the object world. It is open source and Apache licensed. COSBench is cross-platform; it uses the OSGi framework and can run on pretty much any Windows or Unix platform that supports Java. COSBench is distributed, with a controller and driver model: the controller is your test control interface, control as in launch, manage, and monitor workloads, and the drivers are essentially test generators. It's a scalable model where a single workload configuration can be distributed to multiple test generators, and you can add test generators as you need. In terms of storage backend support, we support not just OpenStack Swift but also some commercial object stores as well as open source ones like Ceph. We recently introduced CDMI backend adapters for COSBench, and if you need to do tooling or, the right word is calibration, we have some mock adapters, including a "none" adapter, to check the calibration of the tool itself. I'll talk about the flexible workload definitions and the performance monitoring aspects in the next couple of slides. The configuration format for COSBench is XML, so it's extensible: you can define all sorts of complex workflows, object sizes, and object size distributions, combined with read, write, and delete percentages per stage, and you can mix and match stages in a workload, so it's very flexible; you can add your own object store support, extend this format, and it will work.
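To give a flavor of that XML format, here is a small Python snippet that writes out a minimal, hypothetical two-stage workload (prepare containers, then a mixed read/write stage). The element and attribute names, and the little config expressions like r(1,4) and u(1,100), follow published COSBench samples, but treat the exact syntax as illustrative and double-check it against the COSBench version you actually run.

```python
# Writes a minimal COSBench-style workload file. The structure (workload ->
# storage/auth -> workflow -> workstage -> work -> operation) is based on
# published COSBench examples; the auth/storage config strings are assumptions.
workload_xml = """<?xml version="1.0" encoding="UTF-8"?>
<workload name="swift-smoke" description="hypothetical sample">
  <auth type="swauth"
        config="username=test:tester;password=testing;auth_url=http://proxy.example.com/auth/v1.0"/>
  <storage type="swift" config=""/>
  <workflow>
    <workstage name="prepare">
      <work type="init" workers="1" config="containers=r(1,4)"/>
    </workstage>
    <workstage name="main">
      <work name="mixed" workers="16" runtime="300">
        <operation type="read"  ratio="80" config="containers=u(1,4);objects=u(1,100)"/>
        <operation type="write" ratio="20" config="containers=u(1,4);objects=u(1,100);sizes=c(128)KB"/>
      </work>
    </workstage>
  </workflow>
</workload>
"""

with open("swift-smoke.xml", "w") as f:
    f.write(workload_xml)
# The file would then be submitted through the COSBench controller (web UI or CLI).
```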
The COSBench controller exposes a real-time monitoring UI, a web UI that consolidates information from all the drivers into a single interface: the drivers are listed by name and IP address, you can drill into the details, and it shows what workloads are running along with their states, plus the workload history. In terms of performance reporting, COSBench reports response times, throughput, bandwidth, and success ratios for workloads; this is done in real time, typically at a five-second interval. At the end of a test run it will also generate a response-time histogram for you, in a standard CSV format that you can plot. And this chart shows that you can do a scalability analysis of your workload against the number of workers, which is essentially the number of simultaneous connections COSBench opens.

This is the progress report since the OpenStack Havana release: we've added several new storage backends, we've added new authentication mechanisms, and there are new features for job management. One feature I'd like to point out, which takes COSBench a step closer to Iometer, is the batch workload configuration UI: if you check out the 0.4 version of COSBench, it lets you configure multiple object sizes and multiple worker counts, basically a whole set of workload combinations, from a single screen. On the roadmap for this year are multi-part uploads for objects, we also want to turn it into a profiling tool, and Google and Azure storage support is coming up. This slide shows the current industry users of COSBench, and the top chart is probably more important from an open source point of view: COSBench is hosted on GitHub, and this chart shows the activity. I captured this toward the end of April; in two weeks we had 150 unique visitors and 15-20 views, so this is an active project, and you're welcome, rather encouraged, to contribute. Here's some more information: COSBench is hosted under the Intel cloud account on GitHub, and we have a mailing list where you can post questions; it is actively monitored, and we have people within the storage division and other Intel divisions supporting COSBench. With that I'll turn it back to Paul, who is going to introduce you to the Swift test cluster we recently deployed.

Cool, thanks Tushar. One other thing about erasure code status that we didn't mention, and I want to make it super clear: we said policies is just about done, and that will be in a Swift release sometime over the summer, overlapping with Juno of course. Erasure code, as Tushar mentioned, is still in the design phase; we do have some code, but there's still a lot of opportunity for contributors to come to our design session tomorrow, and we're also sponsoring a hackathon two weeks from now, I think the second of June, in Colorado. We've got, I think, 28 core and active Swift community developers coming together for four days; we're going to be working on more than just erasure code, but there are still a couple of empty slots, and we expect to make a lot of progress on the design and come out of there with some real action items to continue building up our erasure code branch.

Okay, and then last but not least, as I mentioned at the top of the hour, you can go to the SwiftStack website and read the full press release, but we did want to mention it here because it is relevant to some of our contributions to object storage.
We've got a public test cluster that SwiftStack owns and operates, and they're bringing it up there in San Francisco. It's using Intel processors, some of our latest processors built for object storage, the 2750; I only know the code, not the part numbers we use. You can see it's a pretty small cluster; this is just to test the waters and to augment the current VM-based testing, and we expect that as this becomes more successful it will grow over time, and we'll really begin to see noticeable improvement in quality and in catching issues that we can only find on real hardware versus VMs. So I encourage you to go check out that blog post; it gives you a bit more information than just what the cluster looks like. That's what we've got for you; any questions?

Question: With regard to storage policies, having all your container databases on SSDs can be very expensive on the system. Have you considered storage policies for container databases?

Answer: Yeah, I kind of mentioned that around the geo-tagging usage model. Our approach to storage policies was to first attack the object side and leave the other rings and that plumbing in place, and we will definitely be talking about that over the summer, but the first priority is getting policies wrapped up, merged onto master, and available for general use. And then, if it wasn't obvious, a lot of the same people doing the policy work are shifting gears and going right into erasure code, and more people are coming on board to work on erasure code, but at the same time we'll start to look at account and container rings for policies as well. Thank you, good question.

Question: Do you think there are concepts and lessons learned from COSBench that can be applied to the Cinder world as well, as far as performance tuning and seeing how things are working under the covers?

Answer: That's a good question. I think the original developers of COSBench, and Tushar contributed to it, and some of our folks in PRC really owned and started it, looked at it the other way around, from the block side of the house. There's a pretty significant history and knowledge base of how to do performance benchmarking across different backends, depending on what your abstraction is, on the block side, and that really wasn't there on the object side. So I'm not sure; it's a good question, but I think we were more focused on how to take BKMs (best known methods) from block and move them into object.

Are we totally out of time now? What does that mean, one more question? We're out of time? Okay, I guess we're out of time, but we'll hang out over here and then we can step outside and answer any other questions. Thanks very much, everybody.