All right, well, my clock just kicked over here to 4:40. Are we ready to go in the back? We're ready. OK, excellent. Well, thank you very much for coming, everybody. We're going to talk about performance tuning a cloud application. And like I mentioned earlier, if you weren't in the room a little earlier, we have the QR code here. The presentation is posted on slideshare.net, so you don't have to scribble notes or take pictures unless you really want to. You're certainly welcome to. It's already posted up on SlideShare, so you can take a look at the presentation there and refer back to it later.

Today's agenda: we're going to talk a little bit about Symantec and what we're doing at Symantec with the Cloud Platform Engineering team. We have a really cool division that started up about a year and a half ago. We're going to talk about key value as a service, and key value as a service is the pesky application that brought about this presentation for us. It's a service we deployed within Symantec's Cloud Platform Engineering team. We're going to talk a little bit about how it's architected and how it works, because that has a significant impact on how you do performance tuning. We'll talk about the problem that arose when we did the performance testing, when we went from a bare metal environment to a virtualized environment. We'll talk about how we went about resolving the problem and some general performance tuning recommendations, summarize, and take a break. For those of you that are interested in some additional performance tuning and specific detail about OpenStack performance tuning, right after this in the same room I have two of my coworkers, Raj and Gabe, who'll be doing a presentation on OpenStack performance tuning for large enterprise deployments. So that's a good opportunity to learn some more about performance tuning.

About Symantec: Symantec's Cloud Platform Engineering team was formed officially less than a year ago. Unofficially, it was formed about 18 months ago, right after the Portland Summit. I was one of the original team members on a proof of concept to determine if OpenStack was a viable product and a viable platform solution to do an IaaS build. And we are in the process of growing from a relatively small seed. We started in a couple of different growth spurts. We started originally with a 56-node OpenStack cluster that I built. We grew that to a 200-some node cluster, and then we quickly started scaling from there. Today, we're just about to go live with just short of a couple thousand nodes. And ultimately, we're looking to scale to somewhere in the tens of thousands of nodes. So we have a pretty big challenge ahead of us in terms of performance, given the scale that we're looking forward to. Small changes that we make now, 5% performance here, 10% performance there, will have a long-term and lasting effect on the environment for us. So it's critical that we do some good performance tuning of the environment.

One of the things that I really enjoy about the new team that I'm in with Cloud Platform Engineering is we have some really cool open source technologies we're playing with. This is just a summary of sort of the big names, right? So OpenStack, Hadoop, Storm, Cassandra, MagnetoDB, Puppet, Salt. There's all kinds of things that we're playing with. And we've developed a really strong commitment to contributing back to the open source community.
We have a number of team members that have been contributing back to OpenStack and to some of the other projects. We have a couple of team members that have their own standalone open source projects, like Range++, which is a classification system that's a really interesting product. Am I accidentally forwarding the slides? Yes.

Myself, I started in computers in the Marine Corps. I started with mainframes, mainframes that were 15 years old in the early 90s, so my computing background goes way back to the era of mainframes. I left the Marine Corps relatively quickly, went into the corporate world, and I've done a lot of things: UNIX and Linux systems administration, network architecture, security, stuff like that. Those are all good backgrounds for my role today as an infrastructure architect for a cloud platform within CPE.

Moving on, we're going to talk about key value as a service. We are using a solution that's an OpenStack project called MagnetoDB. MagnetoDB is a key value store; it has an OpenStack REST API as well as an AWS DynamoDB compatibility layer. Originally it was conceived as a DynamoDB-like key value store solution, and it's quickly grown from there, and it was designed from its inception to be part of and embraced by OpenStack. It uses a pluggable backend store, so you can tune how you store data and the performance characteristics, in terms of the CAP theorem, that you get out of your platform, depending on what kind of database solution you use as a backend driver. They're considering integration with the OpenStack Trove project, which is one of their integration points with OpenStack, as well as non-Trove environments. You don't have to integrate with Trove if you don't want to.

Essentially, it's a composite service. It's made up of the front-end APIs and the Cassandra backend, which in this case is the solution that we're using for the key value store. And then there are some additional pieces that integrate tightly with OpenStack: Keystone for authentication, and a messaging bus. RabbitMQ is obviously one of the more popular ones used in OpenStack, but it doesn't have to be; you can use Qpid or ZeroMQ. And the messaging bus does not necessarily have to be your OpenStack messaging bus. You can keep those separate if it makes sense for your application and your environment. And then obviously, since it's got an API front end, an HTTP/HTTPS REST service, you're going to need to load balance it. My presentation is eager to move me along here. I guess I'm boring my own presentation. So you need some load balancing capability, either a hardware load balancer or HAProxy.

MagnetoDB specifically, in the API services, has a data API, a streaming API, and a monitoring API, which is interesting because you can hook that into Ceilometer if you'd like. You can inject MagnetoDB information into your Ceilometer infrastructure and use it from that point of integration, or you can just inject it into your RabbitMQ bus and something else can deal with it. It's up to you how you want to do that integration. And obviously there's the DynamoDB API that goes along with that. There's the Keystone integration, and the notifications integration is part of that monitoring API solution. And the MagnetoDB database driver, like I mentioned before, Cassandra is the solution that we're using.
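To make the DynamoDB compatibility layer a little more concrete, here is a rough sketch of what talking to a MagnetoDB endpoint through a standard DynamoDB client could look like. The endpoint URL, credentials, and table name are hypothetical placeholders, and the exact set of DynamoDB operations the compatibility layer supports may vary.

```python
# Minimal sketch: a standard DynamoDB client pointed at MagnetoDB's
# DynamoDB-compatible endpoint. The endpoint URL, credentials, and table
# name are hypothetical placeholders, not values from the talk.
import boto3

dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://magnetodb.example.com:8480",  # placeholder VIP and port
    region_name="us-east-1",                           # required by the client, not the service
    aws_access_key_id="EC2_STYLE_ACCESS_KEY",
    aws_secret_access_key="EC2_STYLE_SECRET_KEY",
)

# Create a simple key value table, then write and read back one item.
table = dynamodb.create_table(
    TableName="kv_demo",
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
)
table.wait_until_exists()
table.put_item(Item={"id": "user:42", "payload": "hello"})
print(table.get_item(Key={"id": "user:42"})["Item"])
```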
When you pull that together, you get something kind of like this, failure to use the slides appropriately aside. You get the VIP ports and the proxy service, which load balances to your MagnetoDB API services, whether that's your straight data API or your streaming API service. So you have a very traditional REST API solution architecture.

Looking at Cassandra, some of you may not know what Cassandra is. I didn't know what Cassandra was until I started working on this project. I was very fascinated to learn about some of the aspects of Cassandra in relation to some of the other NoSQL data store solutions. In this case, we're using it as a key value store. It's a massively scalable system. I don't know if you've heard of the Google one million queries per second project; they did that with Cassandra, proving out Cassandra in a virtualized environment at a million concurrent queries per second. That's a pretty big scale. It's very highly available. There's no single point of failure. There are no masters, no active/active masters, no controllers. There's simply a Cassandra node, and it communicates with other Cassandra nodes via a gossip protocol, kind of like a little old biddy sitting on the fence gossiping about her neighbors. You learn about everything in the neighborhood that way. And that's what Cassandra uses as its protocol to communicate and learn about other Cassandra nodes. And if one goes down, it communicates that via the gossip protocol.

Some of the cool things: it's got tunable consistency in terms of the CAP theorem. Depending on what you'd like to do with it, you can tweak and tune how you want it to behave and the performance characteristics that you need for your application. It uses a ring topology, and it has very predictable high performance and fault tolerance. You scale linearly. You just add new nodes and you get linear scale. And that was proven out again with the Google test at a million queries per second; I believe it was about 200-some servers that they used within that test to get to a million queries per second. So it gives you kind of a baseline of what you'd need for that. Yes. Indirectly, yes. And we can discuss that a little bit further in the Q&A session at the end; if you have some more specific questions about that, we can talk about it.

And then within the Cassandra service, we've got essentially a proxy service for the administration capabilities if you want to talk to the Cassandra service, and then you have your Cassandra nodes. And this is where we have the gossip protocol and replication in the center that just talks out to all the different nodes as you scale them up. Some other stuff you need with it: you need a load balancer, HAProxy or a hardware load balancer. In this case, we're using HAProxy for a lot of our testing. We use a lot of hardware load balancers in our environment, so we may shift our API services over to hardware load balancers. Keystone integration, and RabbitMQ in this case is what we're using. So those are some of the other actors in the whole service.

Bringing it all together, we've got our MagnetoDB cluster. And the primary thing to take away from this slide is that you've got your MagnetoDB cluster, and it talks to your Cassandra cluster, and it uses a protocol very similar to Cassandra's gossip algorithm to be able to learn about the cluster.
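That tunable consistency is expressed per request in Cassandra, which is a big part of how you dial in your spot on the CAP trade-off. Here is a minimal sketch using the DataStax Python driver; the contact point, keyspace, and table are hypothetical placeholders.

```python
# Minimal sketch of Cassandra's per-request tunable consistency using the
# DataStax Python driver (cassandra-driver). Host, keyspace, and table are
# hypothetical placeholders.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["cassandra-node1.example.com"])  # one contact point; gossip reveals the rest
session = cluster.connect("kv_keyspace")

# Stronger consistency: the write must be acknowledged by a quorum of replicas.
write = SimpleStatement(
    "INSERT INTO kv (key, value) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(write, ("user:42", "hello"))

# Looser, faster read: any single replica may answer.
read = SimpleStatement(
    "SELECT value FROM kv WHERE key = %s",
    consistency_level=ConsistencyLevel.ONE,
)
for row in session.execute(read, ("user:42",)):
    print(row.value)
```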
So you basically point MagnetoDB at Cassandra, tell it to start learning about it, and it starts talking to the Cassandra nodes to read and write data for the service. So they're loosely coupled in that respect, using the Cassandra protocol, which hopefully answers the gentleman's question about how it's integrated within OpenStack. MagnetoDB is integrated as a front end of the Cassandra cluster, and you could use a different database driver, a different key value store, if you wanted to.

All right, so the pesky problem. Why are we here? What did we come up against that we needed to solve? In this case, we initially deployed our key value service on bare metal, and we were testing and learning how to operate it. We were writing code in conjunction with some of the other open source community members that have fostered the MagnetoDB project. And as we learned, as we tested everything, we started layering the service together, putting our MagnetoDB API services on the same physical nodes as the Cassandra data storage nodes. In most cases people wouldn't do that, because you want to keep your services separated. In this case, we found that MagnetoDB was very CPU hungry, very CPU intensive, and the way we drove Cassandra was very disk IO intensive. So they were very complementary services that didn't impact each other too terribly much on the same node. That allowed us to use less hardware and use it more efficiently from a bare metal perspective. And we had Cassandra directly managing the disks via JBOD, which is good, because that's one of the things Cassandra is very good at. I bring that up for a specific point you'll see in a moment.

And from that, our KVaaS team sort of had a performance expectation set: this is how our KVaaS should perform. And when we moved it to OpenStack, a fairly untuned OpenStack Nova environment, we ran into some issues. One of the primary problems was that our OpenStack nodes' ephemeral disks were RAID 10, not an uncommon configuration. And we were using an SDN layer with OpenContrail, which introduced some additional dependencies and performance caps as well. And when we did a comparison, it wasn't exactly an apples to apples comparison, but when you boil it down, it came to about 66 percent. We based the comparison on a hyperthread core, so each hyperthread core in the virtualized environment gave us about 66 percent of the bare metal capacity. And that's a relatively drastic cut. It means you're going to have to scale your service up much, much bigger virtually. So it may not make much sense at that point to run it virtualized if you have the ability to run it bare metal, because bare metal would be more performant and use fewer resources. So that was the baseline that we wanted to start from and make better, because clearly at 66 percent we should be able to do a lot better than that. And we had some very unhappy KVaaS team members. They were sad. So essentially, when it boiled down, on bare metal we were getting about 250 requests per second per hyperthread core with a list-tables operation, which is what that metric's about. They were happy with that. And then virtualized, we got 165 requests per second per hyperthread core, roughly that 66 percent, which made them unhappy.
And so our goal: we wanted to deploy the KVaaS service in a virtualized environment, because we wanted to obtain the benefits of flexibility, auto scaling, and all of those capabilities, to allow us to use a single control plane and to manage it with the OpenStack integration it was essentially designed to run with. We also wanted the ability to provide tenant-managed KVaaS solutions as a possibility in the future. We wanted that to be well tuned, so a tenant could deploy a private KVaaS solution for themselves, as well as our managed platform service for KVaaS. And we did not consider any containerization during this testing. Containerization is a whole big topic itself. We're considering it; it's definitely something that's on the Cloud Platform Engineering team's radar as a general solution. Whether it applies or not to this, we'll see.

So how did we go about resolving the pesky problem? We decided to set up a test bed environment, a smaller environment than our initial test, where we had a much more controlled platform and could test, tweak, tune, and test again. We initially needed to set that up, give ourselves a baseline on the bare metal, then deploy OpenStack on top of that, deploy OpenStack Nova compute on the nodes where we were originally running the KVaaS service, test that implementation, and then tweak, tune, test, and go through that whole process. And as we did that, we kept three things in mind with our performance tuning. We wanted to do performance tuning of the Linux OS and the hardware as one separate exercise that we could play with; KVM, the hypervisor, and the guest VMs as a separate performance tuning exercise; and then MagnetoDB and Cassandra themselves, because those each have their own performance tuning mechanisms.

Some of the tools that we used: there's an enormous amount of tooling out there you can use for performance testing. The biggest thing to take away from this is that MagnetoDB test bench is a specific tool written by the MagnetoDB team that exercises MagnetoDB very well. That's sort of our baseline tool that gives us the thumbs up or thumbs down on whether things are ultimately good. We can tune things in the Linux layer that make Linux go really fast, but if it creates contention with what's actually happening in the application stack, it may actually hurt the performance of the application overall. So the final proof of any of the performance tuning that we've done comes via the MagnetoDB test bench. Obviously there's any number of other tools here. My favorite is dd, and a lot of people say, really, dd? I'm like, dd rocks, it's a really good test tool. If you want to exercise disk IO, it's good; it's old school, but it works really well. So there's a lot of different stuff there. There's specifically a Cassandra stress tool, so if you want to stress your Cassandra environment independent of the entire MagnetoDB cluster environment, you can do that as well. And there are some big test suites out there. Phoronix is an open source test suite that's really good. We didn't specifically run it in this case, but it's one that was on our radar as well.
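Since dd keeps coming up, it's worth noting that it's easy to wrap in a little script so the same sequential write test is repeated the same way every run. This is only a sketch: the target path, block size, and count are arbitrary placeholders, and oflag=direct assumes GNU dd on Linux.

```python
# Rough sketch of a repeatable sequential write test built around dd.
# Target path, block size, and count are arbitrary placeholders; oflag=direct
# assumes GNU dd on Linux so the page cache does not flatter the numbers.
import subprocess

def dd_write_test(path="/var/tmp/dd_testfile", bs="1M", count=4096):
    cmd = [
        "dd", "if=/dev/zero", f"of={path}",
        f"bs={bs}", f"count={count}",
        "oflag=direct", "conv=fsync",
    ]
    # dd prints its throughput summary on stderr.
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    summary = result.stderr.strip().splitlines()[-1]
    print(summary)
    return summary

if __name__ == "__main__":
    dd_write_test()
```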
The test lab looked a little bit like this. We had our KVaaS loaders, which were the test loaders loading the data for us, essentially acting as our clients. Up here we had HAProxy, which was the interface to the API service, and then of course our supporting services up here and our Cassandra cluster. Now, this lower layer changed depending on the test environment, whether it was OpenStack Nova compute, individual services, or a mixed MagnetoDB/Cassandra cluster. On the network side we had 20 gig LACP bonds for each of the individual nodes and an 80 gig link between the two racks that were housing the test bed environment.

And MagnetoDB test bench itself has its own sort of unique requirements and tooling that it uses. It uses a test bench server which drives Locust as its load generation platform; it has the master which drives the slaves and loads the test scenarios for Locust against the actual cluster, plus some additional tooling which allows you to collect your metrics, monitor, and take a look at what's going on: collectd, RRDtool, Graphite, et cetera. Some pretty standard test bench tools.

When we pulled all of that together, we had a number of parameters and knobs and tweaks that we were starting to make. One of the big things in these next few slides I want to note is that none of these is specifically the right solution for every application. These are just various ones that can provide you with some significant performance tweaks and tunes. This whole process for us is still in progress, and we're still tweaking, tuning, tweaking, tuning and getting to the point where it's a fully tuned application stack, but these are some very good general performance tuning guidelines that you can take a look at.

On the Linux side you have host things that you want to look at, things like vhost-net, transparent huge pages, and high-resolution timers; these are all elements within the Linux kernel that can make significant performance gains and impacts in your environment. Obviously, depending on your workload, they may or may not help you, and they may or may not help your application in the long term, which is why you need to look at tuning each of these individual elements separately and then testing the whole system and seeing what the result of the whole system is at that point. One of the big things a lot of people tend to miss, although it's relatively well known if you're a good Linux sysadmin, is file system mount options: the noatime, nodiratime, and relatime mount point options can make an enormous impact. I should also note that some of these things can be bad in certain situations, so you want to be aware of that as well. If you have any questions, can we hold those to the end? Thank you.

When we pull that together, these are just a few of the highlight metrics that you can pull out from some of these performance gains. The wmem and rmem buffer changes can give you almost two times the throughput in terms of your performance gain and capacity. Like I mentioned, noatime, nodiratime, and relatime can make as much as a 30% impact on your IO workloads. vhost-net can give you a lot better latency and a lot better throughput, assuming your guest VM is Linux based. And paravirtualization is pretty well known now; you should be using paravirtualized drivers in general within your guest if you can get away with it.
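As a rough illustration of a few of those host-level knobs, here is a small read-only sketch that reports the current socket buffer limits, transparent huge page mode, and mount options on a Linux host. It only reads what is already configured; nothing in it should be taken as a recommended value.

```python
# Read-only sketch: report a few of the Linux host settings discussed above
# (socket buffer maximums, transparent huge pages, filesystem mount options).
# Paths are the standard Linux locations; no values are changed.
from pathlib import Path

def sysctl(name):
    return Path("/proc/sys", *name.split(".")).read_text().strip()

print("net.core.rmem_max =", sysctl("net.core.rmem_max"))
print("net.core.wmem_max =", sysctl("net.core.wmem_max"))

thp = Path("/sys/kernel/mm/transparent_hugepage/enabled")
if thp.exists():
    print("transparent_hugepage:", thp.read_text().strip())

for line in Path("/proc/mounts").read_text().splitlines():
    device, mountpoint, fstype, options = line.split()[:4]
    atime_flags = {"noatime", "nodiratime", "relatime"} & set(options.split(","))
    if fstype in ("ext4", "xfs"):
        print(f"{mountpoint} ({fstype}):", sorted(atime_flags) or "default atime behavior")
```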
One thing that a lot of people tend to do is turn on system performance metrics gathering and statistics capabilities within the virtual machine itself. If you don't own the underlying infrastructure, you have to do that. But if you do own the underlying infrastructure, don't do that. Do it through your libvirt tools or your other hypervisor tools to gather your VM statistics. That will make a pretty big impact on the performance of your platform in general. A lot of these can be taken together as composites, and turning some on will actually negatively impact others. Again, you need to be aware of that when you're doing your iterative tests.

On the KVM Nova compute side there are a number of things you should do. Transparent huge pages is actually both a host and a guest option. You can turn it on on the host, you can turn it on in the guest, or you can turn it on in both, and turning it on in both generally makes a significant performance gain. SR-IOV is for dedicated hardware; it allows a virtual machine guest to talk directly to the NIC. If you really need high performance network capability, SR-IOV is a good option. However, it makes your virtual machines less modular, less portable. You can't move them off the hardware as easily because they're tied directly to the hardware, or at least your aggregates need to have the same hardware profile for you to be able to migrate guests if you need to do that. If you're in an environment where you're fostering cattle and you don't care if you lose a few cattle here and there, giving up migration capabilities and portability of your virtual machines is probably less of an issue.

When we start taking a look at some of these, another big one a lot of people have started getting into is the guest image format. KVM generally uses QCOW2 or raw image types, and QCOW2 and raw are your better options for performance than emulated formats. You can get a significant amount of performance out of QCOW2 with some of the newer features in KVM's QCOW2 support, and in the last year or so they've started making significant strides in making QCOW2 almost as good as a raw image for a virtual machine, so they're getting very close in terms of parity of performance. With transparent huge pages, when you do it for host and guest you can get a significant gain in memory utilization and performance. Multi-queue virtio-net is an interesting new solution. I don't think it's hit mainline in KVM yet, but it's a patch that you can apply, and it can give you a pretty darn good gain in the performance of your network environment underneath the hood. If you need to really drive some better performance, you might consider taking a look at this individual patch and consider it as a solution. Async IO: if you set your async IO to native, you can get a pretty good gain in transactions per minute throughput. Pre-allocate metadata: that's a significant gain for a QCOW2 image option there. It pre-allocates all the metadata for the image so it doesn't have to do as much work on the IO path, which should significantly help with your IO workload.
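To make the preallocation point a little more concrete, here is a small sketch of creating a QCOW2 image with its metadata preallocated via qemu-img; the path and size are placeholders, and wiring such an image into Nova or Glance is a separate exercise. The async IO setting mentioned above typically surfaces as the io='native' attribute on the libvirt disk driver element.

```python
# Minimal sketch: create a QCOW2 image with preallocated metadata using
# qemu-img, as discussed above. Path and size are hypothetical placeholders.
import subprocess

image_path = "/var/lib/images/guest-disk.qcow2"  # placeholder location

subprocess.run(
    [
        "qemu-img", "create",
        "-f", "qcow2",
        "-o", "preallocation=metadata",
        image_path, "40G",
    ],
    check=True,
)

# Inspect the result to confirm the image was created as expected.
subprocess.run(["qemu-img", "info", image_path], check=True)
```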
And then when we take a look at MagnetoDB and Cassandra, there are a couple of things you might look at tuning and tweaking there. vm.dirty_ratio and vm.dirty_background_ratio have an impact on how the workload's writes are cached. The commit log directory and data file directories are Cassandra elements: that's where the commit log for your Cassandra solution lives and where the actual data lives. If you can separate those onto different spinning disks, or maybe an SSD, the preference would be an SSD for the commit log and a spinning disk for your data file directories.

Cassandra is very good at doing compaction and ordering of data before it writes it to disk, so it can be pretty IO efficient; you don't necessarily need really fast spinning disks, depending on your workload. You would want to test that. Obviously, well, maybe not obviously since I didn't mention this, but Cassandra is a Java application, so there's a lot of Java performance tuning that you'll need to take a look at with Cassandra itself: garbage collection, the heap size, and the new-gen heap size. You can make some significant improvements in the performance and the latency of individual queries from those changes. Cassandra has bloom filter, data cache, and compaction tuning options as well that can make some big performance gains, and within your application itself you can make some significant performance gains by tuning the actual data and how you represent it and handle it. In this case, if you use compression for a column family where the data has very similar characteristics, you can make a pretty good performance gain. If your column family types are very different and your data has very different characteristics, you could actually hurt your performance significantly, so you really need to know what your application looks like from that perspective. Some of these changes can give you some pretty significant gains, particularly on the application side. You can see some pretty significant gains if you have very similar data sets within the key value stores that you're providing, and some pretty good read and write performance gains as well, which are pretty interesting gains that you can make.
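As a hedged illustration of that per-column-family compression tuning, here is what switching compression on or off for a table can look like in CQL through the Python driver. Keyspace and table names are placeholders, and the exact compression option names differ between Cassandra versions; the sstable_compression key shown here is the Cassandra 2.x era spelling.

```python
# Sketch: toggling per-table (column family) compression in CQL through the
# Python driver. Keyspace and table names are placeholders, and the
# "sstable_compression" option name is Cassandra 2.x era; newer versions
# spell it differently.
from cassandra.cluster import Cluster

session = Cluster(["cassandra-node1.example.com"]).connect("kv_keyspace")

# Rows with similar, repetitive values: compression tends to pay off.
session.execute(
    "ALTER TABLE kv WITH compression = {'sstable_compression': 'LZ4Compressor'}"
)

# Highly dissimilar values: compression overhead may hurt, so turn it off.
session.execute(
    "ALTER TABLE kv WITH compression = {'sstable_compression': ''}"
)
```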
So that's kind of the bulk of the technical content. If we take a look at the summary: clouds are really best composed of small modular units. That gives you the option and the ability to tune them individually and to scale them horizontally, separately from each other, so you can grow the different services even within an application stack like our KVaaS environment, the composite service of MagnetoDB and Cassandra in our case. Each of those can be scaled independently; if you pull those apart in a virtualized environment, you can scale your Cassandra store solution and you can scale your MagnetoDB API interface to that solution. Obviously some of the supporting elements, Keystone, RabbitMQ, have their own scale metrics as well.

One thing that was interesting from this exercise was that the expectations of our KVaaS team were kind of set from the first bare metal test, and so there was this emergency, this panic, like oh my God, the performance is terrible when we moved it to a virtualized environment, we can't do that, we can't run it virtualized. So that kind of kicked off this whole process of let's fix the problem, and how much more can we fix it? The biggest thing with performance testing is that it's an iterative process. You need to have a very scientific mindset, you need to have a playbook that you operate by, you need to go through each step and stage, and it's very repetitive work. You're doing a lot of the same things over and over: make one change, run the whole test bench, gather all of that information, and pull all of those metrics together.

And all of that means automation. Automate your test bench, because otherwise you'll go crazy and probably won't get the results out of your testing and tuning operations that you would like to get. Did we advance automatically again? Yes, we did. So going back to that test, tune, test cycle: when you make changes, you want to test the individual component that you're making the changes to as well as the overall performance characteristics that you're getting with the actual application itself. Like I said earlier in the presentation, if you make changes in one area, it may show a performance gain at the Linux kernel level, say in CPU, but still have a negative impact overall; if CPU is not the right thing to solve for, it may not actually give you any performance gain, or it may actually hurt your application in the long run if it's not appropriate for the application itself. And like I said, automation is absolutely critical to this. If you can't automate your test bench environment, your QA people will be unhappy, your test people will be unhappy, your ops people will be unhappy, and you just won't get the performance gains that you're looking for.
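In practice that automation tends to look something like the loop below: apply one change, run the benchmark, record the result, move on. This is purely a shape sketch; the apply and benchmark commands are hypothetical placeholders, not the actual MagnetoDB test bench invocation.

```python
# Shape of an automated tweak-test-record loop. Every command and tunable
# below is a hypothetical placeholder; the real MagnetoDB test bench and
# configuration management calls would go where the subprocess calls are.
import csv
import subprocess

TUNABLES = [
    {"name": "baseline", "apply": None},
    {"name": "thp_always", "apply": ["./apply_tuning.sh", "thp=always"]},
    {"name": "noatime", "apply": ["./apply_tuning.sh", "mount=noatime"]},
]

def run_benchmark():
    # Placeholder for the MagnetoDB test bench / Locust run; assume it prints
    # a single requests-per-second figure on stdout.
    out = subprocess.run(
        ["./run_kvaas_benchmark.sh"], capture_output=True, text=True, check=True
    )
    return float(out.stdout.strip())

with open("results.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["tunable", "requests_per_second"])
    for tunable in TUNABLES:
        if tunable["apply"]:
            subprocess.run(tunable["apply"], check=True)  # one change at a time
        writer.writerow([tunable["name"], run_benchmark()])
```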
It is still absolutely a worthy investment, and virtualization is definitely an area where there are a lot of very complex moving parts and pieces that you need to consider tweaking and tuning, and understanding each of those layers and how each of the changes you make affects the entire system is critical. It's not just your Linux kernel guys that need to be involved. It's not just your KVM virtualization guys that need to be involved. It needs to be someone that can either span all of those disciplines or bring those guys and gals all together as a team and get them working together, so you're doing the right solve in the right places. And again, this is one of those basic tenets of cloud that seems to keep surfacing in so many presentations: you really need to know your application workload and your application characteristics and what you're trying to solve for to be able to do that. I did not, I'm not pushing anything, I swear. This thing has a mind of its own. We don't have final numbers, unfortunately, to share with you. Like I said, it's a work in progress; our teams are actually this week still working through the iterative process and going through that tweak, tune, tweak, tune cycle. But as you saw from the original 66% of performance, we have a ways that we can go in terms of making things better.

Absolutely, so that's what I've got. If any of you have questions... I think we had a gentleman back here who had a question. Yes, I'm sorry, can you say that again? Yes, so the question was what was the base operating system we used on the hypervisors. In this case, Ubuntu 12.04 with KVM. I think I implied that indirectly, but just to be clear, we're using KVM as our hypervisor. Any other questions? Yes, sir. Watch this space, we don't know yet. That's the work in progress, yes. To be honest, I hope that we get somewhere between the sort of 85 to 95% mark. I think that would be a win for the solve. I think we can get there from some of the initial performance testing we've seen. Come to Vancouver, maybe we'll have some answers for you. Hey, that's the same boat I'm in right here. My boss is right behind you there, so give him a good word. Actually, my boss and my boss's boss, how many bosses do we have? I've got three layers in the management chain there. So, any other questions? Yes.

Absolutely, and that goes back to that human expectation that was set when they chucked the key value service over the fence, so to speak, onto the Nova compute OpenStack cluster environment and then ran the benchmarks against it. Essentially it was an untuned OpenStack cluster. There were some performance characteristics and changes that we made to it in general, but it was just a general OpenStack environment and configuration. One of the specific things in this case was that the RAID 10 arrays had a big impact on the Cassandra performance, because originally Cassandra was using JBOD to talk to 24 disks. And then when we chucked it into a virtual machine guest and it was talking to its ephemeral storage, it was a single QCOW image spread across the RAID 10 array. Instead of being able to drive 24 spindles, it's now trying to drive a single QCOW file system instance that the controller is then handling. Cassandra is very well tuned specifically for being able to talk to multiple drives, very much like Swift or Ceph having the ability to drive JBODs efficiently. So that was an inefficiency. So one of the things that we're looking at as a possibility is, can we provide a class of compute nodes that maybe have six or ten drives that are not part of that RAID 10 ephemeral storage, and can those be allocated to services like Swift or Ceph, or in this case Cassandra, as JBOD devices that can be bubbled up to the individual virtual machine, so we can get better performance by driving straight down to the hardware? Cassandra can control the hardware a little more closely. Obviously we lose that portability and that migration capability outside of that class of equipment and that class of hardware, so that creates an operational problem that we have to deal with. Do we want a different class of compute node for this kind of workload? In the case where maybe we're driving Swift, maybe we're driving Ceph, and we're driving Cassandra, it might make sense from a performance perspective to have two different classes of compute nodes that would allow us to do that. Those are still investigations and tweaks and tuning options that we're considering.
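For reference, bubbling an individual spindle up to a guest usually ends up as block device passthrough in the libvirt domain XML. Here is a rough sketch of what that disk element could look like, with the device path, target name, and domain name as placeholders; it illustrates the idea, not what we ultimately implemented.

```python
# Rough sketch: the kind of libvirt <disk> element that passes a raw JBOD
# spindle straight through to a guest, built as a string purely for
# illustration. Device path, target name, and domain name are placeholders.
DISK_XML = """
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/disk/by-id/scsi-EXAMPLE_SPINDLE_07'/>
  <target dev='vdc' bus='virtio'/>
</disk>
"""

# One way to attach it, assuming the libvirt Python bindings and a running
# domain named "cassandra-guest-01" (both hypothetical here):
# import libvirt
# conn = libvirt.open("qemu:///system")
# conn.lookupByName("cassandra-guest-01").attachDevice(DISK_XML)
print(DISK_XML)
```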
There was another question, yes sir? Yeah, well, absolutely. We've considered pulling Cassandra apart and doing the commit log on SSD and maybe the data storage on spinning media, but again, that gets into the issue of having a different compute node class specifically for that, unless Cassandra itself is sitting on bare metal or a containerized solution where it can be a little closer to the hardware than going through an abstracted, virtualized environment. So that's absolutely something that we've considered, but it's not one of the things that we're tuning for right now. We're tuning for the environment and the hardware that we currently have and the best performance we can get out of it from that perspective. Does that answer your question? Yes, absolutely. So, operational cost: when you start looking at a service that's 66% as good as it can be on the same hardware, you're going to need to scale it much wider, and scaling it much wider means we're going to have to spread it across much more equipment, and then you have a trade-off that you have to look at.

The cost of that equipment, running that equipment, operating that equipment: is it worth it, versus pulling out a bare metal set of services within our OpenStack environment, where we'd drive that service on a non-OpenStack, bare metal profile? The goal is we'd like to have the ability to be flexible, to scale those layers independently, to be able to migrate services if we need to, to be able to have a control plane driven through OpenStack with a single set of management tools and APIs that we can use, and non-differentiated services that we have to manage. But to do that, we need to get much closer to that bare metal performance barrier to make it worth it.

Yeah, I've got time for maybe one or two more questions, yes. In the case of doing the testing and the benchmarking, that's embedded within the MagnetoDB test bench solution itself. It uses collectd, and it collects all the information through RRD files and propagates that up through a Graphite console. That's part of the actual test bench, which is our final measure of good or bad in terms of the tuning of the different layers that we do. So the performance monitoring that we do within the test bench is different from what we do from an operational aspect. We have different tooling and solutions that we use operationally versus what we use within a test bench environment. Does that answer your question? Yeah, last question here, yes.

Absolutely. So the question was, for the tuning, we didn't mention OProfile, and there are actually dozens of things I didn't mention here in terms of performance tuning. There are a lot of knobs and features and tuning options that we can adjust. These are a highlight of some of the options that make significant changes, not all of the ones that make significant changes. If you look into performance tuning, it's both a science and an art, and there's an overwhelming amount of things that you can change. And if you look at OpenStack, there are some 3,000 different configuration parameters within an OpenStack environment, and all of those will have an impact on the infrastructure that you're running and the performance you get out of it. So it's a very complex environment, and there are absolutely a lot of things that you can do that may tune your workload very well but not someone else's workload. Does that answer your question? I can't speak to it directly. I don't have a whole lot of experience with it. Yeah. Okay, thank you very much, everybody. I appreciate your time. Hopefully you're enjoying your summit; have some fun this evening. Like I said earlier, please stick around. We have some of my colleagues here, Raj and Gabe, who will be talking about OpenStack specifically, performance and scale; some really interesting stuff that they have to talk about there as well. Thank you, everybody.