Hello, everyone, my name is Kamesh Pemmaraju. I'm a Senior Product Manager with the Dell OpenStack Cloud Solution Group. Today we're going to be talking about how wicked easy it is to use Ceph and OpenStack and install them using Crowbar. So a quick show of hands: how many of you actually use Crowbar, or have heard of Crowbar? OK. How about Ceph? Excellent, that's fantastic.

So today we're going to talk about a few things. For those of you who don't know what Dell has been doing in the OpenStack community, I'll give a very quick introduction to that, an introduction to what Ceph is all about, and an introduction to Crowbar. We'll start off with that, and then we'll go into why Crowbar and Ceph, and what problems we're trying to solve. There is an OpenStack storage gap, specifically in the block storage area, and we'll talk about how the solution addresses that gap. It's mainly around automation and scale. Then we'll cover what Inktank and Dell have been doing: we've been working together in the community for several months, and now we have a partnership with them. What have we done to enable things for our customers? At the end I have a specific example of a customer installation that's actually going on as we speak, so I'll give a very quick overview of what's happening there, and then very quickly cover what's coming next. So that's the agenda.

Let me quickly jump into a little background on Dell's point of view around OpenStack clouds and what we've been doing. If you look at any cloud installation, it requires hardware and it requires software, but the majority of it is really about operations. Where do you spend most of your money and time? It's in operations. Things like: when you run out of capacity, you have to add more servers, and when those servers are added, how do you make sure they become part of your OpenStack cloud automatically? What happens if a server goes down? What happens if you have to patch an operating system? What happens if you have to roll out changes in production? All of that is operations, and it requires significant focus on operational controls in production. Those operational decisions drive your hardware and software decisions. We have done a lot of work at Dell around this, and we'll talk about it, because Crowbar encapsulates some of those best practices.

So this is what the current solution looks like. On the software side we have OpenStack; we use OpenStack for our cloud solution, and Ceph is the storage piece of that cloud solution. We have a reference architecture and a taxonomy around the Dell hardware portfolio. And a big part of it, as I said, is ops. That's where Crowbar comes in, along with services and consulting. I'm not going to go into the details of this, and you'll see the deck later on, but effectively we have hardware, the PowerEdge C and PowerEdge R series, as part of the solution; storage and compute are part of that. We use our own portfolio of Force10 and PowerConnect switches. The software is the OpenStack software; the installer is Crowbar, which we'll talk about as we go forward. We use Ceph for distributed storage. The operating system we use today is Ubuntu 12.04. We support both Windows and Linux guests. KVM is, as you know, a very popular hypervisor in the OpenStack world, so that's what we use for our solution as well. And there's deployment, consulting, and support.
So when customers want to stand up an OpenStack cloud, it's not just a software piece; it's the end-to-end solution, and that's what Dell has been delivering. We've been working with the community since day one. For those of you who may not know, we have been very active in the community: we run the OpenStack meetups in Boston and Austin, and we have been sponsoring the summits for a long time. So we're excited to be part of the community, and we are doing a lot of work around Grizzly. With that, I'm going to hand it over to Neil to give an introduction to Ceph, and I'll come back and talk about Crowbar after that. Here you go.

Thanks, Kamesh. So I'm Neil Levine, VP of Product at Inktank, the commercial sponsor of the Ceph project. Ceph is an open-source, massively scalable, distributed, software-defined storage system, and it combines three storage flavors. We've got object storage, so if you're looking to do an S3-type service or use an alternative implementation of Swift, we're compatible with both the Swift and S3 APIs. We also do block storage, and I'll come back to that in more detail later on; this is a way of abstracting your block volumes out of your VMs into a separate, dedicated storage system. Finally, we have our POSIX-compatible distributed file system, which is ideal for legacy workloads. But object and block are really where most of the OpenStack effort and integration work has been done in Ceph.

So the key values or differentiators we have around Ceph, other than the unified aspect, which in and of itself is a huge boon to those of you using OpenStack because you only have to deploy one system, are these. First, we have a very clever way of distributing data among the storage nodes using an algorithm called CRUSH. When the clients are writing data to the storage nodes, they use that algorithm to calculate where the data should be put and where it should be read from. This is very efficient and gives Ceph a huge part of its scale story: when you've got thousands of nodes, having a fast way to work out where data should be, without having to go through an index table or do any look-ups, is a core part of getting that scale-out. Additionally, we have what we call intelligent devices, or intelligent nodes. These are self-healing and self-managing, and they use a peer-to-peer mechanism, which again gives Ceph this incredible scale: all of these nodes are talking to each other as a way of working out which nodes are up, which nodes are down, which nodes are serving data, and which ones are not. Every time you add a new node, or take nodes out, whether deliberately or because of failure, all the nodes communicate with each other about the state of the system. So this kind of intelligence, which would otherwise have to sit in the head of the storage admin, is handled by the software. Scale here is not just about the amount of storage you've got; it's about the number of people you need to manage the system, and we work very hard to push the intelligence down into the software so you don't have to do much as an admin. One of the lesser-known features, but probably one of the most powerful ones in Ceph, is the ability to take an object class, if you're writing your application, and attach a piece of your application logic directly into the storage system itself. So this is a way of doing true distributed computing.
So the example is that you could write a piece of software which transforms a piece of data, whether it's an image or a video; maybe you want to create a thumbnail or encode it. You can create the software that does it, the application code, and embed it into the storage system itself. This is application-aware storage development. It means you don't have to have an application take data out, transform it, and write it back again; it all happens directly on the nodes themselves. And finally, and we'll go into this in more detail in a second, we're integrated into OpenStack quite heavily. We're also integrated into the Linux kernel, so on the block side of things, if you just want to mount a Ceph block image, you can do that using any fairly modern kernel.

From an architecture point of view, whether you're accessing the system for object, block, or file system, it's all built on our object storage system called RADOS, which is an acronym I won't go into. RADOS is really the heart and soul of the system. It's where the true scale-out happens, it's where the intelligent nodes sit, it's where you put the object classes if you're writing code, and it's the thing you're going to be scaling out here. The clients on top, whether object or block, are generally very thin pieces of code, sitting on your VMs in the case of the block device. You scale these out as well, but you'll be making different capacity-planning decisions for the clients than you will for the underlying RADOS system itself.

RADOS effectively consists of two components. We have monitor nodes, which are the watchers of the system. These are the nodes that decide which object storage nodes, or OSD nodes, are up and serving data and which ones are not. The monitor nodes themselves aren't part of the transaction between the client and the server; they're merely there to monitor the state of the cluster. They use Paxos as a consensus algorithm to decide what the state of the cluster is and to inform both the OSD nodes, so they know who's up and who's not, and also the client. The client itself gets a copy of the cluster topology from the monitor and uses this as part of the CRUSH algorithm to work out where the data is going to be. So you'll have a bunch of monitor nodes, preferably an odd number so you can get quorum in a split-brain situation, working out the state of the cluster, and then the OSD nodes themselves, which again are the heart and soul of the RADOS system. Typically you'll have one OSD process running per disk, although you can aggregate multiple disks and have a single process managing them as well, and each OSD can also have the object classes attached to it, as I mentioned before.

So a typical Ceph cluster will look something like this: three, five, or seven monitor nodes, a relatively small number, and a lot of object storage device nodes, the OSD nodes. The OSDs themselves consist of just the raw disk, a SCSI disk or a SATA disk or so on; we layer a standard file system on top, and we typically recommend XFS. We're looking at Btrfs, and we do have adventurous people using Btrfs, and ZFS is also beginning to look attractive now. These are standard Linux file systems, and then the OSD process runs on top.
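Before moving on to provisioning, here is a minimal sketch of how a client application talks to RADOS directly through the Python librados bindings. This isn't part of the Dell solution itself; the pool name and object name are made up for illustration, and it assumes a reachable cluster and the python-rados package installed on the client.

```python
# Minimal sketch: writing and reading an object through librados.
# Assumes a running Ceph cluster, a pool named "data" (an assumption),
# and the python-rados bindings installed on the client machine.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()                      # client pulls the cluster map from a monitor

ioctx = cluster.open_ioctx('data')     # I/O context bound to one pool
ioctx.write_full('hello-object', b'hello from a RADOS client')

# The read goes straight to the right OSD -- placement is computed
# client-side with CRUSH from the cluster map, with no central index.
print(ioctx.read('hello-object'))

ioctx.close()
cluster.shutdown()
```

The point to notice is that the client computes placement itself using CRUSH and the map it received from the monitors, which is why there is no metadata server or look-up table in the data path.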
So from a provisioning point of view, these are fairly standard components: just a basic disk presented to the kernel, a fairly standard file system, and the Ceph packages included in the Dell OpenStack solution, all layered on top of each other. It's this layering and the distribution of the OSD nodes where the main effort around your deployment will be focused. So this is where I'll hand back to Kamesh to talk a little bit about how Crowbar handles these layers of deployment.

Thanks, Neil. We'll come back to how Ceph works with OpenStack in a later section. First, let's do a quick introduction to Crowbar for those of you who have not heard of it. I just got out of a session this morning with Best Buy, and Crowbar is open source, by the way, and it turned out that Best Buy is actually using Crowbar and we didn't know about it. That's how open source works, I guess. But anyway, what's Crowbar? The way we define Crowbar is basically this: think of a scenario where a whole bunch of raw servers just shows up at your doorstep, and from that point on you want to get your OpenStack cluster up and running. What does it take for you to do that today? Think about that. There's a ton of stuff you need to do at the networking layer. There's a ton of stuff you need to do to set up your DNS, your DHCP, your IPMI. There's a whole series of things you have to do before your infrastructure is ready, let alone the OpenStack pieces.

There was a user survey that came out this morning; I don't know if you've seen it, it's out there. The user committee ran a survey over the last two weeks and surveyed about 415 people in the community, various users and operators. What was their number one challenge? Any guesses? It was installing, configuring, and managing OpenStack. And I'm talking about one layer below that, at the infrastructure layer, which is even harder. This is the problem that Crowbar solves for you. It automates the entire process of bare-metal provisioning and gets the operating system, OpenStack, everything, up and running for you in an automated fashion. From a marketing point of view we say: take your servers, in boxes, to a fully functioning OpenStack cloud in under two hours. That's sort of the mission statement for us.

It's fast and flexible, and it's a bare-metal install. Even when servers just show up, there's all kinds of stuff inside each server: there's BIOS, there's RAID, there are IPMI settings, and servers have different ways of handling the actual hardware. So somebody has to go in there and do that work; they have to update the firmware, they have to configure the BIOS. If you're doing that manually, that's not cloud. It needs to be automated, and it needs to be done at scale, and that's what Crowbar does for you. It's all DevOps; the whole idea is DevOps, so it's an ongoing operational model. As you learn the things you do manually today, you take those best practices and program them back into an automated system. That's what Crowbar gives you. And it's open. By the way, for those of you who followed the Best Buy story this morning: they're not using Dell servers. That was an entire HP stack they showed in that picture. So Crowbar actually works on other servers, too. It's not Dell-specific, and it's not even specific to OpenStack.
We've actually used Crowbar to deploy Hadoop clusters, and, as Neil will talk about, we use Crowbar to deploy Ceph clusters. So you can use it to deploy any application. It's Apache 2 licensed, it's all open source, so you can go get it out there.

I just wanted to spend a little time on the soup-versus-sandwich analogy. For those of you who are in the virtualization world, you're all familiar with golden images. Golden images are what we refer to as soup. You've got your applications, you've got your utilities, your operating system; you build all of these together and you create this wonderful, great-looking golden image, and that's what you deploy. There are pros and cons to that model. The downside is that even if there's a single change, a little patch to the operating system, and you have 1,000 servers out there, how are you going to deploy it? You create another golden image and you go and deploy that. So that's the soup model, the single-unit model, the single golden image model. The other approach is a layered approach. You have all these different pieces that you see there, the layers, the sandwich model: the operating system, the applications, et cetera. The idea is that you can take each of those pieces and manipulate them in production one layer at a time, so you get a lot of flexibility. Here's how it works in an upgrade. In the upgrade scenario, like I just said, you have to update an operating system, so you create a new golden image and you go off and deploy it to your 2,000 servers. That's not scalable, obviously. This is where the whole DevOps picture comes in. In the layered model, just the one component gets updated, and this is the whole philosophy behind Crowbar, which you'll see in a second. Crowbar uses layers for deployment, all the way from the physical resources to the core components and operating systems, the cloud infrastructure, OpenStack in this case, all the way up to the APIs. We'll get into more detail here. Again, as I said, it's not exclusive to Dell. We've built it in such a way that it's modular; there are things called barclamps, which we'll talk about in a second, but it's all modular and it's all based on this layered philosophy.

So how does Crowbar work? I'm not going to go into too much detail; if you want to drill into the technical details, come by the Dell booth. We have all of our rock stars from Dell here, technical product managers and engineers, and they'll give you a quick tour. But basically, imagine that you've got OpenStack up and running, and now you find that all of a sudden you're out of capacity, not just from a VM perspective but from a physical perspective. Now you go add a new server. How do you do that today? It's a manual process: you configure the hardware, you put it into the cluster. In the case of Crowbar, the moment you add the server, Crowbar discovers it. There's a DHCP server and a PXE boot, and a Sledgehammer discovery image gets downloaded. It figures out what is on that server: how many resources it has, how much memory, what the networking configuration is, the NICs, everything. Then it comes back and asks, what do you want to do with this node? Do you want this to be a compute node? Do you want this to be a Ceph node?
Do you want it to be something else? You can allocate the node at that point, and this is where Chef comes into the picture. Everything Crowbar does uses Chef underneath; Crowbar is a master orchestrator, if you think of it that way. It lays down the operating system bits, it lays down the networking configuration, it lays down all the applications on top, all automated. And there's a whole orchestration cycle: a complete state machine that takes you from bare metal all the way up to a running app. That's basically what it is.

Crowbar is a very configurable system. You can add these things called barclamps, which are effectively modules you use to extend the capability of Crowbar. I'm not going to go into the details, but a barclamp has a Crowbar API, Chef recipes, and components and scripts you can use to extend Crowbar to deploy your own application, which is what Neil will talk about shortly. If you look at the overall picture, there are all the physical resources, which is what I was saying earlier: networks, RAID, BIOS, IPMI. All of these things are easy to overlook in any real cloud deployment, and at scale they become a huge issue. Then at the operating system level, you may be using multiple operating systems; you may be a Red Hat shop or a SUSE shop or an Ubuntu shop, or you may want to have multiple things going on. How do you deal with that at scale? Again, the same kind of problem. And then on top of that, you have all the OpenStack components: Nova, Swift, Quantum, Cinder. You need to get all of those components in, and as I said earlier, configuring all these pieces in the right way, in the right order, so that they all work together is not easy, even today, even with Grizzly. Crowbar does that for you. And at the top, you can use other things like enStratus and Ganglia and so on. So I'm going to hand it back to Neil at this point, and we'll go deeper into Ceph with OpenStack.

To understand why deployment is so critical to this, it helps to understand some of the evolution we've gone through with private cloud technologies, particularly with OpenStack, over the past couple of years. When OpenStack and some of its predecessors began a couple of years ago, everybody said, well, the way to do storage is just to use object storage. It came as part of these solutions, and that was the brave new way of doing it. But of course, the reality was that most applications couldn't use object storage; the most readily available applications you wanted to move to your cloud had no concept of stateless storage. So everyone said, OK, you've got to re-architect, and the answer came back: sorry, we can't re-architect. So the next idea was, let's just use local storage. It's quick and easy, and it's provided as part of the default installations. But obviously local storage is not the cloud way of doing things; it's not really scale-out. Then you started to see the storage admins coming forward and saying, well, we've got our existing proprietary appliances, let's just plug those into these open source clouds. But again, it didn't really make sense: these weren't true commodity scale-out, which is really what cloud is all about. So we've arrived now at a point where people realize that distributed scale-out storage is the way you do things.
And this applies both to new, greenfield applications and to the legacy ones. So we have people saying, I want to do storage like Google, I want to do storage like Facebook, which is what Ceph provides, but the deployment story is a large part of that. We talked earlier about RADOS being the underpinnings, the foundations: there's no single point of failure, it's self-healing, it's got the CRUSH algorithm and so on. So let's talk specifically about some of the block components, and then we'll see how Crowbar helps you deploy all of these pieces together.

From a features point of view, the block storage gives you the standard kinds of services, cloning and snapshotting, but it has some particularly good features that make it suitable for OpenStack clouds. First of all, remember RADOS is an object storage system, so when we have a block volume, we're just chopping it up into lots of objects and then striping those across multiple OSDs, multiple disks and physical servers in your Ceph cluster. Just that abstraction alone makes it very easy, when VMs die, to remount the volumes they were using in a sort of EBS-style way. It also opens up the door for live migration, which is an active topic within the Cinder group.

But let's think about how you spin up images. If you want to spin up 100 images, what's actually happening? Well, one way is to take your boot volume and just copy it 100 times, but that's not very efficient. So we use thin provisioning and a copy-on-write system to make this more space- and speed-efficient. When you have your initial image, which is, say, 144 blocks' worth on the left-hand side, you can just clone it four times. What we're cloning here is actually not data; we're just creating an empty data structure with a whole bunch of pointers back to the original image. So creating those cloned copies is very, very fast; you're not actually copying any data at all, and booting up lots of images at the same time is very, very fast. Combine that with a thinly provisioned initial image, where it says it's maybe two gigs big but you're really only using one gig in practice, and again you get efficient space optimization. Once you actually start writing data, once you've booted your volumes and are making changes on top of this initial image, the pointers get replaced with actual data, but the clients continue to read the unmodified blocks from the original image. So again, it's a very efficient way of doing multiple boots of hundreds of volumes.

Now, in terms of the OpenStack integration and how this all fits together: I mentioned the RADOS Gateway, which I won't go into now, but on the block device side we're completely plugged into the Cinder API. All of the Cinder API calls, with I think the exception of one which isn't so important, are implemented within the Ceph block device. That also allows you to use Glance, and have Glance use the block device with that copy-on-write capability, to store your images in a very efficient way. We've also modified the QEMU/KVM hypervisor so it's RBD-aware; it can see these volumes, it can attach and detach them, and so on and so forth. All of this is natively integrated, and it has been since Folsom. We're tracking all the API changes and making sure they're all compatible with Ceph.
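To make the copy-on-write idea a bit more concrete, here is a small hedged sketch using the python-rbd bindings that ship with Ceph. The pool and image names are made up for illustration, it assumes a format-2 image with layering enabled (which is what clones require), and exact keyword arguments can vary between Ceph releases.

```python
# Sketch: clone a golden boot image several times without copying data.
# Assumes python-rados / python-rbd are installed, the cluster is reachable,
# and a pool named "rbd" exists. Names like "golden" and "vm-0" are invented.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')

r = rbd.RBD()
# Format-2 image with layering, so it can act as a clone parent.
r.create(ioctx, 'golden', 2 * 1024 ** 3,
         old_format=False, features=rbd.RBD_FEATURE_LAYERING)

# Snapshot the parent and protect the snapshot; clones hang off snapshots.
img = rbd.Image(ioctx, 'golden')
img.create_snap('base')
img.protect_snap('base')
img.close()

# Each clone is just pointers back to the parent until it is written to.
for i in range(4):
    r.clone(ioctx, 'golden', 'base', ioctx, 'vm-%d' % i,
            features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()
```

In an OpenStack deployment you would not normally call this by hand; Glance and Cinder drive the same clone mechanism when they are configured to use RBD, which is why booting many instances from one image stays fast.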
Okay, so how does this come into the deployment story? We have these OSD nodes and these monitor nodes, and now you've also got your RBD-aware Nova compute nodes. Ideally you want homogeneous building blocks that just scale out. When you talk about doing storage the way Google or Facebook do it, it's all about getting a completely homogeneous stack that is just cloned repeatedly, and that means cloning the hardware, with identical hardware and a reference architecture where possible, the operating system, and the applications on top, and then scaling them out. Again, by abstracting out the storage, you can scale your VMs on the Nova compute nodes separately from the underlying storage system, which has different characteristics and different points at which you might want to scale it out. So homogeneity, and making this a repeatable process where you just rack up tens or a hundred servers at a time, gives you your scale-out and your scale-down.

From a practical point of view, how does this work with Crowbar? As Kamesh said, today we have these things called barclamps, which are modular pieces of code that let you define specific roles or profiles for parts of the stack. At Inktank we worked on the Ceph barclamp, and it does a couple of things. First of all, the admin node acts as a repository manager for all the software being deployed. With Ceph you can do rolling upgrades, and we've got a three-month release cycle at the moment, so all of those packages, the freshest, latest bits, are pulled into the admin node. Then you can allocate particular machines as monitor nodes or as OSD nodes. It will, as Kamesh described, do all of the hardware work: the IPMI, the networking, and so on. It will get the right disks in place, so it's aware which disks should be holding the data, it will partition them correctly, and it will get the Ceph processes running on all of them. The OSD and monitor roles are all handled by the barclamp. And then there's the standard Nova barclamp, which is now RBD-aware: when the Nova barclamp provisions the OpenStack software, it ensures that QEMU is configured so it can mount and unmount these RBD images. And with that, back to Kamesh to explain a little bit about some of our customers and ongoing work.

Thanks, Neil. So we've been working with Inktank for a long time, even before they were Inktank, back when Ceph was being developed at DreamHost, and we were involved in the community while the DreamHost folks were working in it. Then last year we started working with them on a more formal basis: we partnered with them and built this Crowbar barclamp; the Inktank folks wrote it. So we have a joint solution today that we've brought to market. It's a Crowbar barclamp that deploys Ceph clusters, as Neil was describing earlier. It's automatic, it's all real time; you can expand your Ceph clusters, you can expand your OpenStack clusters, and it all just works. And of course customers still want professional services, support, and training, and that's all delivered through collaborative support from Dell. When Inktank built the barclamp, it wasn't just a software piece.
It has to work with the reference architecture we have in place. So we went through our whole technology partner program, and the barclamp and the whole OpenStack cluster with Ceph were validated against the reference architecture in that program and in our Dell Solution Centers as well. So it's now ready and available in the market. We launched back at Dell World in December, that was the official launch, and since then we have seen tremendous interest in this technology, in OpenStack and Ceph. It's just unbelievable how many inquiries are coming in and how interested people are in this stuff.

We actually have a customer deployment happening next week. It's a university, and the situation, very briefly, without going into the details, is that this university has more than 900 researchers and receives hundreds of millions of dollars in grants as one of the top research institutions doing work on cancer and genomics projects. So it's big data; they do a lot of work with large amounts of data. What the university was looking for was a centralized data repository where they could ensure compliance and know exactly how much consumption is happening, et cetera. The intention was for the university to provide two terabytes of free storage to every researcher, and anything beyond that, they would charge the researcher a nominal fee. That was their model, and this is going to be a very large cluster; by the end of the year they're talking about five petabytes. The university looked at a lot of different solutions out there. They looked at traditional SANs, they looked at public cloud storage solutions like AWS, and they even looked at Hadoop to see if that would be a solution. At the end of all that, they came to a decision themselves; all of this stuff is out there in open source, so they checked it out and came back and said, you know what, those solutions are very expensive, the traditional SANs are not going to help us, and Hadoop was not the right fit for this. So they came back to us and said, we want the Dell and Inktank solution because it fits our needs. It was the right cost point for us, and it was the best of all worlds for both compute and storage. They were interested in OpenStack, they were interested in Ceph, and we had a solution that best fit their needs.

So what's coming next? Let me finish with what's coming from Dell this summer. We are already working on Grizzly; our team is completely focused on that, so we will have an officially supported Dell Grizzly release by this summer. There's a lot of work going on around Crowbar 2.0, which is the next generation of Crowbar; it's all happening in the community, and it's open source. We've been working very closely with SUSE, and SUSE has been a great partner. Their SUSE Cloud solution, by the way, also uses Crowbar as its orchestration and deployment engine, and they've been working very closely with us in the community on Crowbar 2.0 development. We're doing a number of things in Crowbar 2.0. I'm not going to go into the details, but I think the most important thing is this: today we use Chef, and we want to upstream the cookbooks, meaning they'd be available to the community so people can use them to deploy their own OpenStack clusters. So we're trying to decouple the cookbooks from the Crowbar-specific pieces. That's another key thing.
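To tie the barclamp discussion back to something concrete, here is a hedged sketch of a Crowbar-style proposal for the Ceph barclamp, written as a Python dict. The general shape, attributes plus a deployment section mapping roles to nodes, follows Crowbar's proposal convention, but the specific role and attribute names shown here are assumptions for illustration, not the barclamp's exact schema.

```python
# Hypothetical sketch of a Ceph barclamp proposal. Role names ("ceph-mon",
# "ceph-osd") and attribute keys are illustrative assumptions; check the real
# barclamp for the exact schema before relying on anything like this.
ceph_proposal = {
    "id": "ceph-default",
    "attributes": {
        "ceph": {
            # Assumed knob: how many copies of each object RADOS should keep.
            "replicas": 3,
        }
    },
    "deployment": {
        "ceph": {
            "elements": {
                # An odd number of monitors so Paxos can reach quorum.
                "ceph-mon": ["node-1", "node-2", "node-3"],
                # The remaining discovered nodes become OSD nodes; Crowbar
                # handles disk partitioning and service setup per node.
                "ceph-osd": ["node-4", "node-5", "node-6", "node-7"],
            }
        }
    },
}
```

Committing a proposal along these lines through the Crowbar UI or API is what kicks off the Chef runs that lay down the monitor and OSD processes on the selected machines.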
Come by the Dell booth and we'll give you more details on that Crowbar 2.0 work. And Neil will talk about what's coming next on the Ceph side.

So, as mentioned, Ceph now has a three-month release schedule, which is something we've recently changed. Ceph Cuttlefish, which is the next major release, is coming out in a couple of weeks. Major features: we've got incremental snapshots on the block device, which is really important when you're trying to copy a block image from one cluster to another, because you don't have to move the whole thing, you just move the incremental changes. We've also got a whole bunch of APIs coming. At the moment a lot of the functionality of Ceph has to be driven through the CLI; we now have an API-everywhere policy. The first set of APIs is coming for the RADOS Gateway part, but the RADOS system itself will be fully manageable, from both a provisioning and a monitoring and management point of view, through APIs in the Dumpling release coming in August. We've also got geo-replication and multi-site for the object gateway coming then as well. And as mentioned, we're going to be porting the barclamps over to Crowbar 2.0; when Crowbar 2.0 goes GA, we'll start work on that. We've got a session tomorrow, in fact, about the Ceph roadmap, so people can get involved and join us to plan out the next set of work, including the barclamp work, but also the continuing integration of Ceph and OpenStack. We invite you to come and join us for that. And then we've got another session, if you really want to get into some of the detail: one of our engineers, Josh Durgin, is giving a talk on Thursday where he'll focus on a lot of the Cinder work, but also go through the general changes in the previous release, Bobtail, and those coming in Cuttlefish. I think that's it. There's the contact information for us, and the slides will be up on the site, so you can take a look. With that, are there any questions or comments? Yes?

In Crowbar? Great question. So the question is, does Crowbar support Puppet? Today it does not, but the direction we're going with Crowbar 2.0 is to enable both Chef and Puppet support. Yes. Other questions?

There is already a setting for that. Sorry, the question was: is there a way to avoid waiting for all the replicas to be created before success is returned on the write? You can set the mandatory number of writes that have to be completed. There's already a configuration in the CRUSH rules where you can say, if you've got a replica count of, say, five, just give me two or three and return success, and then the rest will happen in a lazy, eventually consistent way. Yes, it is, yeah. Okay.

Well, thank you very much. Oh, there's one more question down here, about Crowbar and Heat. If you have been to some of the sessions from Rob Hirschfeld, who is our chief architect, there have been some discussions around that, but I don't know if there's anything specific going on right now. Okay, thank you very much. Thank you.