Hi folks, I'm Scott Kelso from Lenovo's Cloud Technology Center, and I'm going to talk today about a small project we did: building a prototype hyper-converged configuration. I'm pleased to see so many folks here this late at the end of the summit, so thank you. Let's dig in.

We had a requirement from our China geo to define an architecture, really a product, that could start very small, implementing an OpenStack cloud at small scale, but with the potential to grow large. That naturally led us to look at a hyper-converged architecture. So we put together a design that combined a converged, virtualized control plane with hyper-convergence on the data plane, or the workload plane, depending on what you want to call it.

Whenever you're working with hyper-convergence, it's important to protect the compute from the storage and the storage from the compute; you don't want one starving the other. We didn't want to reinvent the wheel if approaches for doing this were already out there, so we looked around and found that Red Hat had published a reference architecture describing the best ways, or at least their view of the best ways, to do this, and we adapted it to our configuration. It consists of three steps. First, reserve RAM for the OS and also for Ceph; the mechanism was designed just for reserving memory for the OS, but you can extend it to cover Ceph as well. Second, set the CPU allocation ratio so that the right number of cores is held back for Ceph; you don't want Nova consuming them all. And finally, it's best to pin the OSDs to the CPU sockets that are actually taking the disk interrupts, just for efficiency's sake. A rough sketch of what those settings might look like appears at the end of this section.

Once you've built this and taken an initial setting, you want to test it and see whether your configuration is good or whether it needs tuning, so you need some sort of testing environment. We chose the Phoronix Test Suite (PTS), one we have some experience with internally. You want something that gives you a really rich set of benchmarks to choose from, so you can model the workload you're trying to put on the system, and Phoronix does that: it has almost 1,000 benchmarks already pre-packaged. It's also very easy to use; you can define your workload quickly, gather the results, compile them, and graph them. And finally, you can scale the load by packaging workloads in VMs: whether you want to run one, three, six, or 50, that's how you ratchet your workload up and down. That's the technique we employed. We will, by the way, eventually move on to other benchmarks and other benchmark frameworks; Browbeat and pbench are ones that have been talked about a lot here. But PTS is where we began.

For the specific benchmark we assembled, we used fio for storage, which is pretty common, and packaged it in a VM. Then we used a suite of compilations, compiling Apache, compiling the Linux kernel, compiling PHP, to name some examples; we had four or five of these, put them all together, and ran them simultaneously on the system.
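The talk doesn't show the actual values used, so here is a minimal sketch of the three resource-partitioning steps described above, under assumed hardware (16 physical cores, 4 OSDs, socket 0 handling the disk interrupts). `crudini` is just one convenient way to edit nova.conf, and every number here is an illustrative assumption, not the speaker's configuration:

```sh
# Sketch only: partition a hyper-converged node so Nova guests cannot
# starve the OS or the Ceph OSDs. All values are assumptions.

# 1. Hold back RAM for the host OS *and* the OSDs. Nova's knob was meant
#    for OS overhead, but nothing stops you from sizing it for Ceph too,
#    e.g. ~4 GB for the OS plus ~3 GB per OSD on a 4-OSD node.
crudini --set /etc/nova/nova.conf DEFAULT reserved_host_memory_mb 16384

# 2. Scale the CPU allocation ratio so the scheduler never hands out the
#    cores set aside for Ceph: with 16 cores, 4 reserved for Ceph, and
#    no overcommit on the rest, (16 - 4) / 16 = 0.75.
crudini --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 0.75

systemctl restart openstack-nova-compute

# 3. Pin each OSD daemon to the socket that services its disk interrupts
#    (assume socket 0 = cores 0-7 on this box); numactl or a systemd
#    CPUAffinity drop-in would work just as well as taskset.
for pid in $(pidof ceph-osd); do
    taskset -pc 0-7 "$pid"
done
```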
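For the PTS workflow, the profiles pts/fio, pts/build-apache, pts/build-linux-kernel, and pts/build-php are real Phoronix Test Suite tests matching the workloads named in the talk, but the sequence below is an illustrative sketch, not the team's actual harness:

```sh
# Run PTS unattended inside each workload VM. batch-setup is answered
# once; after that, batch-benchmark runs without prompts.
phoronix-test-suite batch-setup

# In the storage VM: drive the Ceph-backed disk with fio.
phoronix-test-suite batch-benchmark pts/fio

# In each compilation VM: the compile suite.
phoronix-test-suite batch-benchmark \
    pts/build-apache pts/build-linux-kernel pts/build-php

# Pull the numbers out for graphing ("hci-run-1" is a hypothetical
# result name chosen when saving the run).
phoronix-test-suite result-file-to-csv hci-run-1
```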
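And since the load is scaled by changing the VM count, a hypothetical loop like this one, using the standard OpenStack client with made-up image, flavor, and network names, shows how the demand could be ratcheted from one compile-suite VM on up:

```sh
# Sketch: boot N copies of the compile-suite VM to raise compute demand.
# Image, flavor, and network names are hypothetical.
N=6
for i in $(seq 1 "$N"); do
    openstack server create \
        --image pts-compile-suite \
        --flavor m1.medium \
        --network demo-net \
        "compile-vm-$i"
done
```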
As an example of how this worked, here are some results from our compilation suite. We would run fio at a constant level of demand, trying to drive it fairly high, and then start with just one compilation-suite VM, then three, then six, then 15, and on up. You can see what this generated on our system: at the lower VM demand levels, early on, things weren't too bad; then you got the linear scaling you'd expect; and then the performance starts to break a little as you get higher up. But what's obvious is that we never got quite far enough to really break the thing. Somewhere further out to the right, I would expect to see this curve really shoot for the sky, so that's the next thing we have to do.

One of the things we learned out of this, though, is that it's actually not as easy as you might imagine to create resource pressure between the storage and the compute on a modern, well-provisioned server. For example, looking at the storage, we hit the limitations of the storage itself pretty quickly, before we ever saw resource pressure from the compute; you can see the graph drops very quickly. In this case we're holding the compilation steady and increasing the storage demand. So here's the read test, and likewise the write test, where you can see it even more quickly heads for a fairly flat line.

All right, those are the results I have. Does anybody have questions? Unfortunately, I don't have the compute data to show, but that is the next thing I want to do: go back and graph the constant load and see how it was affected by the varying load. So, future work. Any others? All right, well, thank you very much.