Hi everyone, thanks for showing up today even though it's the last day of the summit. Everyone is pretty much tired, but we'll try to entertain you as much as we can. Today we're going to talk about building a Ceph cluster for your first, or maybe not your first, OpenStack cloud. Over the course of the last two years, everyone has been talking about Ceph. Folks like Mirantis are deploying Ceph, and we still get the same questions: how to plan the cluster properly, how to plan for performance, what Ceph does, what Ceph doesn't do. So over the last six months at Mirantis we have established a number of rules of thumb and a couple of magic numbers that make it relatively easy to calculate the sizing and capacity of a Ceph cluster that will power an OpenStack cloud.

My name is Dmitry Novakovsky, and I'm a Solutions Architect at Mirantis EMEA. Today, with a couple of good colleagues, we will do a Ceph overview. We'll talk a little bit about what Ceph is overall, just to make sure, in case someone here is not yet familiar with it. We will tell you how to plan your hardware, with a couple of examples and numbers you can use after you walk away from this session to start planning your cluster and figuring out how much capacity and performance you need. And we will tell you about a few lessons learned, sometimes in a painful way, from the deployments Mirantis has been doing over the course of the last year. With this, I would like to hand over to Udo for the Ceph overview part.

Thanks a lot, thanks Dmitry. As mentioned already, my name is Udo Seidel. I'm not working for Mirantis, but working with Mirantis. After my part of the session, I'll hand over to Greg, who will guide you through the hardware planning. Now that we've mentioned all three names, let's start with Ceph. If you go to OpenStack events these days, as big as the summit or as small as a local meetup somewhere, people are talking, writing, and speaking about Ceph left, right, and center. So it looks like Ceph is really the de facto standard for storage in the OpenStack world these days. However, Ceph is much more, and it can provide much more, even outside the OpenStack world. So I would like to take a little journey with you into Ceph land, apart from the OpenStack part of its universe.

Ceph, as mentioned here, is an open source software-defined storage solution. Interestingly, it's not the only one available. There are more software-defined storage solutions out there, and there are even more open source software-defined storage solutions. If you just look at Red Hat, the company behind Ceph these days, you will see that they actually have two solutions available: before they acquired Inktank in 2014, they already had GlusterFS, which they got via the acquisition of Gluster in 2011. So something must be really unique and different about Ceph for it to be, these days, the de facto standard for OpenStack, and maybe for other bits and pieces as well. I'm quite pleased about that, because I set up my first Ceph cluster in the lab back in 2010, and back then I didn't anticipate this evolution, that it would become the storage solution for an open source infrastructure-as-a-service stack.
When I first looked at it, I was really concentrating on the distributed file system layer. Back then in our data center, we had roughly 2,000 Linux servers available, not all of them highly utilized. So I thought: hey, that sounds interesting. I just put some Ceph daemons on there, they create a common namespace, and all of a sudden I can use those compute and storage resources to dump my data into. Things have changed a lot since then, and the development of Ceph has changed over the years too. It started with the file system layer, but later on they concentrated much more on the core, the heart of the storage engine, which is RADOS. This is where all the magic and all the beautiful things in Ceph happen. With this shift, making RADOS industry-ready, robust, reliable, improved, and enhanced, they built the foundation for exposing this nice storage engine via different functional and technical APIs to clients, to other software, to applications, and so on and so forth. For instance, to OpenStack.

Object storage is something which, of course, is well known in the OpenStack context; it has been there since the beginning. But it also has use cases and business cases outside OpenStack. I recently talked to people working for a hospital who use an object store: they use Ceph as an object store for medical data, like X-ray pictures and things like that. This is totally outside of OpenStack, but they still use the same interface which you would use for Ceph within OpenStack, the object store. And if you are used to running your stuff on, for instance, Amazon S3, with some tricks you can also pretend that your Ceph cluster actually is Amazon S3, and all the tooling you had in place for talking to Amazon can, on short notice, also talk to Ceph and store data there via the object store interface.

Now, for some people, especially in traditional data centers, using HTTP as a protocol to store data is not something they really like or understand, and they would rather talk SCSI, or at least to a block device layer. Ceph can help here as well. On top of RADOS, there's an additional protocol called the RADOS block device (RBD) layer, and you can use it to carve out part of your Ceph cluster and make it available as a block device. On the Linux side, with the corresponding driver, you can actually create an entry in the /dev directory, so it really looks like either a local disk or a disk you would normally get out of a traditional SAN environment via iSCSI or Fibre Channel, things like that. Again, this is used in OpenStack for different use cases, but it's also useful, for instance, if you want to replace a traditional SAN setup in a classical active/standby high-availability cluster with Ceph. I've seen this in small implementations: people have decommissioned (if not thrown away) their Fibre Channel switches and storage arrays and implemented a similar thing with Ceph in the background, using the block device interface and cluster management software on top.

Last but not least, beyond those more functional APIs, there is the file storage. As I said before, this is where Ceph started, as a distributed file system, and then it was left behind a little bit. Back then I thought: yeah, maybe it's not needed anymore, because we have an interface library to talk directly to RADOS, librados.
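As a quick illustration of that librados layer before we move on: Ceph ships Python bindings, the python-rados package, which let an application talk to RADOS directly, with no gateway in between. A minimal sketch, assuming a reachable cluster configured in /etc/ceph/ceph.conf and an existing pool named 'mypool' (both names are placeholders of mine):

```python
import rados

# Connect to the cluster using the local Ceph configuration and keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # An I/O context is bound to a single pool; 'mypool' is a placeholder.
    ioctx = cluster.open_ioctx('mypool')
    try:
        # Store and read back an object directly in RADOS: no S3/Swift
        # gateway, no block layer, no file system in between.
        ioctx.write_full('greeting', b'hello from librados')
        print(ioctx.read('greeting'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

Everything else described in this section (the object store, RBD, the POSIX layer) is built on top of exactly this kind of RADOS access.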
We have the object store, so we can use the fancy HTTP stuff to do those kinds of things, and the people who are, let's say, a little more traditional in their mindset can use the block device layer. But apparently there's quite some demand out there to use a POSIX interface to Ceph as well, and that has its own challenges. First of all, you need additional components in your Ceph cluster. Without POSIX, you only need the storage itself, the object-based storage devices (OSDs), and you need the monitoring system, the monitors. Now, for the additional tasks of the POSIX layer, you need a metadata server, and all the things you have already thought through (how many OSDs do I need? how many monitors? how do I set them up? what about high availability? what about network connections? and so on) you have to think through again at the metadata server layer. Luckily, on the OpenStack side, because Ceph is highly integrated into OpenStack, we don't really need this. It's needed somewhere else, so for today's presentation we're going to skip it.

The point I wanted to make with this little journey through the Ceph universe is: yes, Ceph is a storage solution for OpenStack, a highly integrated one, and it works very well. But that's not the only use case, not the only business case, especially if you have quite a heterogeneous data center, or a data center that has evolved over the years. There might be some traditional workloads which you also want to migrate to a new storage solution, and then you can converge those things. You can have your Ceph cluster for your infrastructure as a service using OpenStack, but to a certain extent you can use the same hardware and the same software to also put traditional workloads on it, either using it as an object store, or as a block device, or, if you really can't help it, even with the POSIX layer.

As for the different ways of running it: you can use the community version, if you have the technology and the technical skills in-house, and your technicians fix everything together with the community whenever there is a problem. Or, if you're less mature in that sense and you want more consultancy, you go for the enterprise products, which have better support for rolling upgrades and things like that. And if you're even more traditional, you can also buy products based on Ceph. There's a Red Hat storage product based on Ceph, of course, and there's a SUSE product based on Ceph as well. So there is a whole variety of ways to introduce and manage Ceph in your environment, regardless of whether you use OpenStack or not.

What is true for all of those use cases is: okay, it looks good on paper, but for your particular workloads and use cases you have to tune things, and there are traps you don't want to fall into. This is actually the main part of the talk, and my friends from Mirantis will cover it: what you have to think about when sizing things, and what the magic numbers are, as Dmitry mentioned at the beginning. With that, I hand over to Greg, who will guide you through the hardware planning. Thank you.

Hi, folks. I'm Greg Elkinbard, Senior Technical Director at Mirantis. We've been working with Ceph for quite a number of years, and it is pretty much the standard storage platform that we start with. Obviously, we support others.
We support extending the storage through the addition of, let's say, NetApp or EMC enterprise arrays, and you'll see why that is necessary later in the slides. We also support Swift as a scale-out, multi-data-center object option. But let's go through Ceph hardware planning.

Ceph is an SDS; it runs on white boxes. So what kind of white box do I want to pick? Well, what you have to do is analyze your storage needs. What are your net storage requirements? Because Ceph uses n-way replication, configurable between two and n; the typical number is two in the lab and three in production. How many IOPS do you really need, both aggregated for your entire cluster and per VM, on average and at peak? What are you optimizing for, cost or performance? There is probably quite an order of magnitude between a low-cost Ceph configuration optimized for bulk storage and a high-cost, high-performance configuration where Ceph can start competing with some of the tier-one and tier-one-and-a-half offerings.

Okay, so there are a few rules of thumb for Ceph sizing. Basically, you do want to provision one 10-gig NIC for about eight to ten hard drives. If they're SATA, you're probably building out a bulk enclosure, so you don't need quite as much network capacity and can go up to twelve. By the way, folks, I believe we're posting these slides later on, so feel free to take pictures, but you don't really have to. It's still nice to see people doing it. Right, so thank you. Thank you, yes, we're photogenic.

All right, so one of the main things you want to do if you're optimizing Ceph for performance is to include SSDs for write journaling. Now, Ceph actually uses SSDs in one of two ways. The default Ceph configuration has write journaling, where all of the writes are logged to an SSD device, then replicated to all the other targets, and then acknowledged. As you can see, since Ceph is synchronous, that is a very important component in reducing Ceph write latency. Now, when would you do that? When a significant fraction of your I/O is writes, because you're sacrificing spindles, so you will reduce your read performance somewhat, right? It's a little bit of a trade-off, but typically 90% of our Ceph environments include a write journal. Ceph has also recently introduced a cache tier, which can be all SSD, and we're currently deploying it in a number of accounts. There, Ceph reads from and writes to the SSDs.

All right, but for your base storage tier, how much RAM do you need? Rough guidance is about a gigabyte per terabyte. If you're not using erasure coding and you're not using SSDs, you can probably turn that down a little, but don't do it too much, because Ceph does use memory to log writes for replication to the other targets, and you probably don't want it to start reading and writing that stuff from disk. How much CPU do you need? The typical guidance is about one gigahertz per hard drive, but we noticed in our testing that SSDs actually need quite a bit more: one to two cores per SSD in your cache tier. Now, this presentation is not going to cover the cache tier so much, but in a blog post after the summit we will explain how to start using and planning for your cache tier.

Ceph monitor sizing? Well, basically, scrape some hardware together, pretty much. You need roughly one monitor per 15 to 20 OSD nodes.
So, with a minimum of three, that means if your cluster is fewer than 60 nodes, you probably don't need any additional ones. If it's bigger, you need to add them, and the number should be odd: Ceph is a majority-quorum system.

All right. So, let's say you wanted to build your Ceph cluster and you wanted to figure out how many IOPS it will deliver. You can make some assumptions about your workload if you don't have it fully characterized. The typical workloads we've seen out there are roughly 70% reads and 30% writes, and it looks like, unless you're running some specialized applications, the generic I/O adds up to small random block writes, which means that in order to get the performance, you're going to need lots and lots of spindles.

So, Ceph is an SDS. It's software; there are going to be packets flying all over the system. So how efficient is Ceph? Well, we and a bunch of other vendors have done some benchmarks and found that in general (and these are very, very general numbers, just use them for guidance) Ceph is relatively efficient on reads. You can consider it to be 88% efficient relative to your maximum spindle capacity. So if your spindle can do 250 IOPS, Ceph can do roughly 220. On writes, because Ceph has to do replication, it is somewhat less efficient, but still pretty good at 64%. You'll see how those numbers add up when we calculate system sizing.

So, let's go through an exercise. Let's say you had to build out a petabyte cluster. That's actually a fairly substantial one; remember, the CERN cluster is, I believe, only three petabytes at the moment, and DreamHost's is ten. But you're going to have lots of big, big VMs that want lots of storage, so you're only going to have 500 VMs. Where do we get 100 IOPS? Well, 100 IOPS is a pretty good guidance target for general storage planning. Obviously, if you're running a database, you're not going to be happy with 100 IOPS. If you're running a web server, you're probably going to be overjoyed if you even get that from your cloud provider; I've seen some of them provide as little as two or three IOPS per VM because they were so oversubscribed on their I/O. But that means you need about 50,000 IOPS aggregated from your storage cluster.

So, let's go through some numbers. Ceph monitors: like I said, get some hardware, something to scrape off the ground. One CPU, 64 gigabytes of RAM, a little bit of hard disk, and make sure you have a 10-gig NIC. One per every 15 to 20 OSD nodes, don't forget the minimum is three, and add them so that you maintain an odd number of monitors for reliability.

All right, so you're going to be building a low-density, performance-optimized cluster. Now, obviously you could dump everything on a bunch of SSDs. I've seen people do that. It's a little expensive, and you need the Giant release, because right now, without Giant, the SSD performance is not that hot. But our standard server (and this is no endorsement of HP; you can use Dell, UCS, or any other one- or two-U rack-mounted server) is a 2U box; we typically stick with that. It forms a relatively small failure domain in a larger cluster, and it has decent enough performance that the network and the memory do not get saturated, because in larger systems those are going to be big problems. Ceph is relatively high performance; you can get two or three gigabytes per second out of your OSDs with the right I/O pattern. So, you're going to be building out using basically six-core CPUs, whatever is relatively inexpensive.
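Before we get into the per-node details: since both the monitor rule and the per-VM target are now on the table, here's a minimal sketch of that arithmetic. The numbers (one monitor per 15 to 20 OSD nodes, a minimum of three, always odd, and 100 IOPS per VM as the planning target) come from the talk; the helper functions themselves are mine:

```python
import math

def monitor_count(osd_nodes, nodes_per_mon=15):
    """One monitor per 15-20 OSD nodes, never fewer than three, always odd."""
    mons = max(3, math.ceil(osd_nodes / nodes_per_mon))
    return mons if mons % 2 == 1 else mons + 1  # quorum wants an odd count

def aggregate_iops_target(vm_count, iops_per_vm=100):
    """Total IOPS the cluster must deliver for the planned VM population."""
    return vm_count * iops_per_vm

print(monitor_count(98))           # 98 OSD nodes -> 7 monitors
print(aggregate_iops_target(500))  # 500 VMs x 100 IOPS -> 50000
```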
Remembering, of course, the CPU-to-storage ratio that you need. How much RAM? Well, if you're going to have, what is it, 30-some terabytes per box, that means you need roughly that many gigabytes of RAM. However, remember that Ceph runs on Linux, and the more RAM you have, the more disk caching you get, so the more reads are going to be served from RAM. This is your only opportunity for read caching unless you're deploying a dedicated cache tier. So you probably want to bump it up to the next power of two, or even increase it further if you want to optimize reads slightly. You do need some drives for the OS, and they should probably be separate from the mainline drives.

So, now let's talk about the bulk of the performance drives. We've chosen twenty 1.8-terabyte SAS drives at 10K RPM to get a decent number of IOPS through the system. Like I said, this is performance-optimized, not cost-optimized. Obviously, the 1.8s are relatively expensive right now. You can do 1.2s, you can do the 900s, or, if you don't have the performance needs, you can probably step down to around 700 or below. I would probably not go to 3.5-inch drives in a performance-based system; you're just going to lose too many spindles.

All right, so, SSD write journals. Ceph is not terribly hungry for journal space; 20 gigabytes per OSD should be quite enough. And remember, the ratio should be four to six hard drives per SSD, so that Intel drive on the slide should be quite sufficient. Now, the SSDs get a little tricky. Some of the vendors actually cheat and do not include the full number of write channels on the lower-capacity SSDs. So take a look at your specs; you may have to up-spec your SSDs if you do not have enough write performance. Remember, this device will be heavily utilized for writes. So look at the spec and find the sweet spot: at some point the right number of chips will be there, the right number of controllers will be there, and that's the point where you want to buy. Which means you will probably have more SSD capacity than you need, just to get the write performance.

Now, like I said before, Ceph is going to be relatively bandwidth-intensive. In an optimized system, you should be able to push out at least a gigabyte per second, probably more like two, out of this cluster. In most of our 10-gig environments, we've seen Ceph under stress be network-saturated, not disk-I/O-saturated, especially in a system like this or larger. So why not 40 gig? Well, it's nice, but typically 40 gig is a little too expensive for most people. And honestly, we haven't done one-gigabit Ceph deployments outside of the lab, so most production environments do use multiple 10s at this point.

Okay, so, how many servers do you need? Well, yes, I know 1,000 terabytes is not exactly a petabyte, but let's round up or down, shall we? So: 1,000 terabytes net, you have 20 drives in the system, the size of the drive is 1.8 terabytes, and there's a magic number in there. What does it mean? Well, Ceph is going to be using a regular file system on Linux underneath it, and XFS, which we typically use, does not like to be full. So we're going to leave some space, both for Ceph to reallocate blocks in case of a failure, and also so XFS does not have to start fragmenting its extents and losing its performance. So I've chosen 85%. Conservative folks choose 75%; some people, I've seen, push it up to 95% and don't leave any reserve for anything in the system. But hey, it's not my money, right?
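The per-node rules of thumb from the last couple of slides (about a gigabyte of RAM per terabyte, rounded up to a power of two; roughly one gigahertz of CPU per spinning drive; 20 gigabytes of journal per OSD; four to six HDDs per journal SSD) also reduce to a few lines. A sketch under those stated assumptions; the helper and its output format are mine, not a Mirantis tool:

```python
import math

def osd_node_specs(hdd_count, hdd_tb, hdds_per_journal_ssd=5):
    """Rough per-node sizing from the rules of thumb in the talk."""
    raw_tb = hdd_count * hdd_tb
    # ~1 GB of RAM per TB, bumped to the next power of two for read caching.
    ram_gb = 2 ** math.ceil(math.log2(raw_tb))
    # ~1 GHz per spinning drive; an SSD cache tier would add 1-2 cores each.
    cpu_ghz = hdd_count * 1.0
    # 4-6 HDDs share each journal SSD; 20 GB of journal space per OSD.
    journal_ssds = math.ceil(hdd_count / hdds_per_journal_ssd)
    journal_gb_total = hdd_count * 20
    return {'raw_tb': raw_tb, 'ram_gb': ram_gb, 'cpu_ghz': cpu_ghz,
            'journal_ssds': journal_ssds, 'journal_gb_total': journal_gb_total}

# The performance node from the example: 20 x 1.8 TB SAS drives.
print(osd_node_specs(20, 1.8))
# -> 36 TB raw, 64 GB RAM, ~20 GHz of CPU, 4 journal SSDs, 400 GB of journal
```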
Okay, so 98 servers are necessary to serve about a petabyte net. That means they're going to have roughly three petabytes raw of drive capacity hanging around, consuming power. And yes, it will consume power; most of the time we do not spin down the drives.

So, all right, where did I pull 250 from? Well, from my hat. I went to Tom's Hardware and looked at the drive model numbers. Actually, this particular one has not been tested yet, because it's brand new, so I looked at one model down. It was hovering somewhere between 290 and 300 across all of the IOmeter random I/O benchmarks, so I figured 250 looked like a good number. I do use Tom's Hardware fairly extensively for I/O benchmarks. Honestly, I have not found a site that spends as much time and money on testing various hardware components; if you have a better one, please recommend it. Unfortunately, most of the manufacturers seem not to bother including IOmeter specs, presumably because they don't want to show how badly they suck, so we have to go to independent third parties.

All right, so this cluster, remember, with an 88% read factor, will be generating somewhere north of 430,000 IOPS. That's an awful lot of IOPS. Put differently, you're going to get about 800 IOPS per VM. That is really great, but don't jump for joy yet. Ceph does have a latency issue, and its latency is typically 10 to 40 milliseconds. So even though the IOPS count is high, I would not use it to run your high-performance database. Medium ones can go ahead.

Yes, you had a question? [On the math:] it's 250 times 20 times 98 times 0.88. [Audience: JBOD?] So, when would you need RAID for Ceph? We typically try to make sure that our cluster does not have more than a few thousand OSDs, and we typically configure one OSD per hard drive in our normal environments. So if you have lots and lots of hard drives and a very, very big cluster, it makes sense to combine them using some sort of RAID. So where did the 1,000 number come from? It's an old limitation based on the efficiency of the Ceph monitors and the CRUSH algorithm. It's been removed since, but we're just slowly, gently ramping our guidance up from 1,000 drives to 2,000 drives to 3,000 drives as we test bigger and bigger clusters.

So, we typically do not use RAID. But there is an interesting point: you have a RAID card in there, so what do you do with it? If you have an older 6- or 3-gigabit card, use it in JBOD mode, turn it off, get rid of it, do something, get it out of the data path. If you have a 12-gigabit card, especially one of the newer LSIs, you can use RAID 0 mode on the individual devices, just to get a little bit of write caching and smooth out the XFS barrier writes.

All right, so the write cluster rating is a little bit lower, and you're going to have about 600 write IOPS per VM available. Did I make a math error? Yeah, looks like I did, sorry. That should be a 0.64, not 0.88. Sorry about that, and thanks for catching it.

All right, you have to understand your applications. As a storage guy, I cannot tell you what your peak applications are. On average, I would assume that you need about 100 IOPS, but if you're, let's say, running database workloads, those are going to set your peak requirement. No, I think you misunderstood me. All right, I see. So, currently, you shouldn't plan on being able to deliver more than a few thousand IOPS via Ceph to an individual VM.
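To tie the worked example together, here is the arithmetic as I read it off the slides: server count from net capacity, replication, and the 85% fill factor, then per-VM IOPS from the 88% read and 64% write efficiency factors. The function is my sketch, not an official calculator:

```python
def size_cluster(net_tb, replicas, drives_per_node, drive_tb,
                 fill_factor, drive_iops, vm_count):
    """1 PB net, 3-way replication, 20 x 1.8 TB per node, 85% fill."""
    usable_per_node = drives_per_node * drive_tb * fill_factor
    # The slide rounds this to 98; conservative planners would take the
    # ceiling and buy one extra box.
    servers = round(net_tb * replicas / usable_per_node)
    spindles = servers * drives_per_node
    read_iops = spindles * drive_iops * 0.88   # Ceph read efficiency
    write_iops = spindles * drive_iops * 0.64  # Ceph write efficiency
    return servers, read_iops / vm_count, write_iops / vm_count

servers, rd, wr = size_cluster(net_tb=1000, replicas=3, drives_per_node=20,
                               drive_tb=1.8, fill_factor=0.85,
                               drive_iops=250, vm_count=500)
print(servers, round(rd), round(wr))  # 98 servers, ~862 read, ~627 write
```

Those outputs match the slide's "about 800 reads and about 600 writes per VM" once you round pessimistically.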
Right, so if you need 10,000 IOPS, there are some other systems that use zero-copy magic and other tricks to be more efficient. I've seen a few systems, right now in beta, that are capable of delivering roughly 30,000 IOPS to an individual VM, but they make extensive use of zero copy. Now, Ceph does run on iSER, on Mellanox iSER cards. We typically do not yet add that additional third-party card or the 40-gig switches to the environment. So if you are using the iSER branch of Ceph, you can probably increase your IOPS, both to the whole host and to the individual VM.

All right, okay, so... Folks, let's keep the questions for the question section, because we're running out of time and we still have some material to go through. We'll be here after the session, so you can come up and ask questions. Yeah, sorry about that.

All right, let's run through this. So, let's say you have bulk storage. You want to use Ceph as your object store, or you're just squirreling away data and you do not care about the performance. You'll be using a higher-capacity chassis; you're probably going to go to a 4U chassis with, let's say, 36 3.5-inch drives, or probably even higher. How ridiculous can you get? Remember, Ceph is a modified DHT ring, which means any system failure requires replication in order to restore read resiliency. That means no single system should be more than about 10% of your total pool, right? So larger clusters can take larger systems; smaller clusters require smaller systems. Do not put Ceph on two super-giant boxes and call it a day.

Okay, so, since you have lots of hard drives in there, you're going to need more CPUs. You're going to need a little bit more RAM, because you have lots and lots and lots of terabytes in there. Again, two drives for the OS; sometimes people put the OS on the SSDs in order to save the two drive slots. So you're going to have six SSDs in there, and you're going to go with 4- to 6-terabyte drives, depending on your needs: 3.5-inch, slow drives. By the way, they're not that slow nowadays. The performance is decent; the enterprise versions are pushing above 100 IOPS. You're going to bond the NICs. In this environment, you're not going to get the maximum streaming capacity of your drives. If you do the testing, you'll find your NICs getting saturated on the public side, serving all of the read I/O. So if you need full streaming read capacity, you probably need to go to 40 gig in a system of that size.

All right, so you're going to need only 32 servers to serve one petabyte. The magic number, 85%, still applies. And then, again, I went to Tom's Hardware. That drive sits somewhere between 170 and 190 IOPS in various random I/O benchmarks, so I said 150 would be a safe number. You can be as conservative as you want and simply overbuild your cluster; there's nothing wrong with that. So, that means this cluster has roughly a quarter of the performance of the previous one, even though it has the same capacity. It will serve about 200 IOPS per VM on reads and about 150 IOPS per VM on writes. Look, no typo on this one. All right, so that means it's barely good enough to meet your performance needs. Now, we sized it ridiculously low, because normally a petabyte cluster will be used by more than 500 VMs. So you can see that this cluster will not satisfy the primary block storage needs of a larger cloud, but it will work very nicely as an object store. All right, okay.
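The same arithmetic works with the bulk cluster's inputs swapped in. One assumption here is mine: I'm treating the 36-slot chassis as holding 28 data drives of 4 TB each, once the six SSDs and two OS drives are subtracted, since the exact drive count isn't stated in the transcript. With that reading, the 32-server figure and the rough per-VM numbers fall out:

```python
import math

# Assumption (mine): 36 slots minus six SSDs and two OS drives leaves
# 28 x 4 TB data drives per chassis; 150 IOPS per drive as quoted above.
drives, drive_tb, drive_iops = 28, 4.0, 150
usable_per_node = drives * drive_tb * 0.85          # 85% XFS fill factor
servers = math.ceil(1000 * 3 / usable_per_node)     # 1 PB net, 3 replicas
spindles = servers * drives
print(servers)                                      # -> 32
print(round(spindles * drive_iops * 0.88 / 500))    # ~237 read IOPS per VM
print(round(spindles * drive_iops * 0.64 / 500))    # ~172 write IOPS per VM
```

Rounded down to be safe, that's the "about 200 reads and 150 writes per VM" from the slide, roughly a quarter of the performance cluster.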
So, what about SSDs? If you're doing all SSDs on Firefly, you are limited to about 5K IOPS, so you're not going to get the maximum amount of performance. Giant takes a giant step forward; we saw roughly 30K IOPS. But it's not available out of the box, because Giant is not an LTS release and we only include LTS releases. So right now we are not delivering the maximum-performance clusters possible, but by the end of the year there is going to be an LTS release that will land around 30 to 40K IOPS per SSD. And then cache tiers will start being very important in Ceph, especially if you want to compete with the tier-one systems. All right. So, I'll hand it over to Dmitry for lessons learned. Sorry about taking the time.

That's fine; we'll actually have some time for questions. So, lessons learned, right? First of all, how do we deploy Ceph clusters at Mirantis? We have a software called Mirantis Fuel. It's an open source tool which we use as the basis for deploying Mirantis OpenStack clouds, including a Ceph layer to support object storage, image storage, and block storage. The steps are quite simple, right? We discover the servers. We define that it will be a Ceph-powered OpenStack cloud. We assign roles to the servers we have discovered and decide which of them become Ceph OSD nodes. We allocate disks: we specify where the journal will be, where the OSD will be, where the operating system will sit. I mean, you need to make sure that the SSDs will actually be used for the journals you bought them for.

A couple of other options to consider: whether you want to put Nova ephemeral drives on Ceph and get a fully Ceph-backed system, with all the VM disks living on Ceph, which natively enables live migration for you, right? What else? Block storage goes to Ceph. Object storage: we can expose a Swift API that you can later give to your tenants to consume object storage, so it looks like there is a Swift cluster in the cloud. Images get backed by Ceph as well. As we mentioned earlier, this gives some neat features, such as copy-on-write for images at VM boot: instead of doing a full copy of the image out of Ceph, you just do a copy-on-write clone, which allows for quite a rapid spin-up of large batches of Ceph-backed VMs (there's a small sketch of that clone flow at the end of this part).

So, here is what we now know in practice from building clusters over the last, not couple, but year and a half, and actually testing the performance. In reality, with the current release we are using (we're on Firefly, right?), in most clusters the effective performance is limited to somewhere around 100 IOPS per VM, as long as your OSDs are HDDs, spinning disks. For a fully SSD-built system, even though the theoretical IOPS will be sky-high, in reality a single I/O test per VM will land somewhere around 10K IOPS. That's why the earlier slide said this would be disappointing: you would expect much more from an SSD system. So if you're building a full-SSD Ceph cluster today, expect to be a little bit disappointed until you upgrade to Giant or the next LTS release. As Greg mentioned, there is a latency problem: the more loaded the Ceph cluster gets, the higher the latency may get. We typically estimate it at 10 to 40 milliseconds, building up as the Ceph cluster gets saturated.
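Here's that small sketch of the copy-on-write clone flow, using Ceph's Python rbd bindings: snapshot a base image, protect the snapshot, then clone it for a new VM disk. The pool and image names ('images', 'vms', 'golden', 'vm-disk-0') are placeholders, and this illustrates the mechanism rather than what Fuel or Nova literally execute:

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    images = cluster.open_ioctx('images')  # pool holding the base image
    vms = cluster.open_ioctx('vms')        # pool holding VM disks
    try:
        # Snapshot the golden image once and protect the snapshot so
        # clones can hang off it.
        with rbd.Image(images, 'golden') as base:
            base.create_snap('boot')
            base.protect_snap('boot')
        # Each new VM disk is a copy-on-write clone: near-instant to
        # create, and only diverging blocks consume new space.
        rbd.RBD().clone(images, 'golden', 'boot', vms, 'vm-disk-0',
                        features=rbd.RBD_FEATURE_LAYERING)
    finally:
        images.close()
        vms.close()
finally:
    cluster.shutdown()
```

Booting 100 VMs this way creates 100 thin clones of one protected snapshot instead of copying the image 100 times, which is where the rapid spin-up comes from.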
And if you go the full-SSD-cluster route anyway and say, "I don't care, I will upgrade in a year and it will be a beautiful storage platform, I won't need to buy anything else," then get ready to spec more CPU power, because the CPU gets saturated in that case. And as this slide says, there are certain gaps in how the Swift API is implemented today on top of Ceph. For example, you cannot really do object versioning, while the Swift folks have had that in for a while. So if you're just going to use it as a backend for Glance and for RBD disks, you're probably fine. If you're going to expose it to your tenants, well, at some point they may ask you why the hell your Swift API does not support object versioning.

A couple of other things. As I was mentioning, we do the deployments using our lifecycle management tool called Fuel, and some configuration options we include by default. When we figure out that something is not really working and we need to change it, we propagate it to customers, fix it in the next release of Fuel, publish the patches, and so on. So, the first thing we found out (it was in the timeframe of roughly OVS 1.x, something like that, I guess) is that you don't really want to assemble bonded interfaces with OVS. As soon as you get them saturated, the CPU load goes high and Ceph performance is not very good. So our recommendation today, and this is how we do it out of the box with the product, is to assemble bonds using native Linux bonding. That works much better (there's a tiny sanity check for this sketched below).

Another thing is the bridges which we put together on the Ceph OSD nodes: bridges for Ceph public traffic and bridges for Ceph replication traffic. You need to make sure that you do not put them through OVS. Again, in our case it was OVS 1.x. Some folks tell me that if you go with OVS 2.3, I guess, or something like that, it gets fine, but really there is no reason to introduce one more switching layer in front of the storage network, which you want to be as performant as it can be.

Nova evacuate, the holy grail for many people trying to do VM HA on OpenStack: it hasn't really been working with RBD-backed VMs. In Kilo, they landed the patches. Honestly, I haven't tested it yet; maybe someone at Mirantis has. If it really works with the Kilo code base, it means that with an external monitoring system it will be quite neat and quite easy to implement automatic HA for Ceph-backed VMs.

Network collocation: in some cases we've been collocating, and we've seen folks collocate, the Ceph management network with Ceph public traffic. It is possible; in some situations you want to do it to reduce the number of ports and network interfaces you need. If you're free not to do it, it's better to have a dedicated NIC pair for Ceph public traffic, not used for anything else, unless obviously the cluster is very low performance. And on tuning the Ceph configuration: obviously, whatever defaults come with the distribution or the software you're using are usually not the best optimized. If you track the Fuel project's progression, we raise bugs from time to time and enable this and that, like the RBD cache and other things, and make sure that they are shipped out of the box to customers using Fuel. So, the rule of thumb: just make sure you're using the latest version of Fuel.
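Circling back to the bonding recommendation, here's the tiny sanity check mentioned above. The native Linux bonding driver exposes one entry per bond under /proc/net/bonding, while OVS-assembled bonds do not show up there, so on an OSD node you can quickly verify which kind you got. A minimal sketch:

```python
import os

def native_linux_bonds():
    """List bonds created by the Linux bonding driver.

    The kernel driver exposes one file per bond under /proc/net/bonding;
    bonds assembled inside OVS live in the OVS database instead and will
    not appear here.
    """
    path = '/proc/net/bonding'
    return os.listdir(path) if os.path.isdir(path) else []

# On a Ceph OSD node, the storage-network bond should be in this list
# if it was built with native Linux bonding, per the recommendation above.
print(native_linux_bonds())
```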
If you're a Mirantis customer, check with your SE or with your consultant on the latest best practices for Ceph, because this is still quite a dynamic area to optimize in. Okay, I guess we're on time. Let's open it up for questions. Go ahead.

[Audience question.] We only tried Giant. Could we repeat the question for the recording? The question was: have we done performance testing on Hammer? We have not run Hammer through the lab yet; it's still on the backlog for the performance testing lab. So far, we've gone as far as Giant, and Giant did improve the performance very substantially. Hammer, I believe, is the LTS release that's coming up, so we'll probably move to Hammer as soon as it's available, and we expect Hammer's performance to be a little bit better than Giant's. For the rest of the questions, we'll have to take them offline, because we're actually out of time. Out of time? Yeah. Okay, keep them and ask us afterwards. Thank you.