Good morning, folks. I'm Greg Elkinbard, Senior Technical Director at Mirantis, and I'm here to describe the effort that Mirantis, Intel, and other partners are making to get NVMe over Fabrics technology into OpenStack.

A quick agenda: I'm going to give a very quick technology overview. Obviously NVMe over Fabrics is a large topic, but we'll be happy to answer questions at the end. Then there's a quick demo of the Cinder driver that we wrote. If there are a couple of minutes left, I'll be more than happy to take questions, and I'll stick around afterward just in case somebody has more.

All right, technology overview. First of all, what is NVMe? NVMe is Non-Volatile Memory Express. It's the standard for attaching fast, low-latency devices, primarily storage devices: devices that look more like memory than like legacy magnetic storage. It replaces the old SAS/SATA interface, and it's targeted at devices that are ultra low latency, have a higher degree of parallelism than a traditional disk drive with its single stack of heads, and offer very high throughput. It's the standard interface right now for PCIe flash devices, but other devices will come online that use the interface, just as the SAS interface began to be used for practically everything once it got standardized.

NVMe over Fabrics is a relatively short extension to the NVMe protocol. All it does is provide alternative transports: it allows you to extend your NVMe devices over Ethernet, over InfiniBand, or over Fibre Channel. It uses RDMA protocols over Ethernet (there are a couple of them: RoCE and iWARP), as well as RDMA over InfiniBand, and it uses the FC-NVMe protocol over the Fibre Channel transport.

What are the key values it offers? It gives you the ability to deliver disaggregated compute and storage, which means your compute cluster can grow at the rate you need to add compute, and your storage cluster can grow in proportion to the storage you need. You lose no flexibility: you can evolve either side, storage or compute, completely independently of the other. Just as the traditional SAN freed us from relying on small storage pools in local systems, this does the same thing with super-fast NVMe devices. While the current generation is not quite there, future generations are going to deliver access latencies of 10 microseconds or less over 100 gigabit fabrics. It also provides a data-center-wide storage pool; again, imagine a SAN on the cheap.

All right, so here is a quick picture of NVMe over Fabrics. It is an industry standard. It reuses most of the NVMe spec and just replaces the transport with more flexible options, so the NVMe device no longer has to be attached strictly via the PCIe bus.

OK, so here's what the data center fabric looks like. You have disaggregated compute nodes, and you can scale them independently. You have a storage fabric: you can start with 40 gigabit Ethernet or 40 or 56 gigabit InfiniBand, and you can advance to 100 gigabit for higher throughput and much lower latency.
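To make the transport idea concrete, here is a minimal sketch of what attaching a remote NVMe device over RDMA looks like from a plain Linux initiator, outside of OpenStack. This assumes a kernel 4.8+ initiator with the in-tree nvme-rdma module and the standard nvme-cli tool; the address and subsystem NQN below are placeholders, not values from our lab:

    # Load the RDMA transport for the NVMe initiator (in-tree since kernel 4.8)
    modprobe nvme-rdma

    # Ask the target which subsystems it exports; 4420 is the standard
    # NVMe over Fabrics port, and the address is a placeholder
    nvme discover -t rdma -a 192.168.0.10 -s 4420

    # Connect to a subsystem by its NQN (this one follows SPDK's naming
    # convention); the remote namespace then shows up locally as an
    # ordinary /dev/nvmeXnY block device
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.0.10 -s 4420

Once connected, the device behaves like a local NVMe drive, which is what lets Cinder, os-brick, and Nova treat it as ordinary block storage.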
All right, so what have we done to date? Our goal was to create a Cinder driver based on the Intel SPDK in order to popularize this technology and get a preview of it out to folks, with the goal of integrating it into Ocata. The Intel SPDK includes Intel's reference implementation of an NVMe over Fabrics target, and Intel is distributing it to all of the NVMe over Fabrics vendors to jumpstart the technology. Like I said, our goal is to get the basic demo out on Newton and then get the code into Ocata. This is based on OpenStack Newton right now, using Ubuntu 16.04. We had to update the kernel because a lot of the NVMe over Fabrics hooks only arrive in kernel 4.8. We extended Nova a little, changed os-brick slightly to define the NVMe over Fabrics devices, and created a Cinder driver to demonstrate the basic functionality. The goal for Ocata is to commit at least the Nova updates and to try to develop the fully functional driver. Ocata is a relatively short cycle, so we don't know if we'll get the driver in, but at least the Nova changes should be.

So who are the partners in crime on this? The project got started with Mirantis and Intel, and Mellanox and Supermicro are joining to help out: Mellanox with their gear and software expertise, and Supermicro with their hardware.

All right, here's the quick demo setup before we switch to the demo itself. The hardware is Supermicro servers; I didn't put down the particular model because it doesn't really matter. All they need is a spare PCIe slot to fit the Mellanox cards, which are necessary for RDMA. A lot of Ethernet cards still don't support RDMA properly, or aren't supported by the kernel code, so Mellanox is probably your best bet if you want to bring up this environment yourself. We used ConnectX-3 cards simply because we have a really, really old 40 gig switch in the lab; obviously you can use ConnectX-3 Pro or ConnectX-4 cards, whatever matches your underlying networking gear. The Supermicro NVMe target that was graciously given to us by Supermicro can fit up to 24 cards, which means it can hold up to 48 terabytes of NVMe. That's actually quite impressive; it's probably worth a lot more than my car. The target right now is equipped with only a couple of devices, mostly because we're not testing performance, just basic functionality. And again, it's using the Mellanox ConnectX-3 cards, again a switch limitation: we didn't want to find out whether the newer cards would talk to the older switch, and we had very little time to prepare the demo.

The software is Newton using DevStack on Ubuntu 16.04. We did upgrade the kernel to 4.8 because the 4.4 kernel does not have the necessary hooks, and we're using the standard Intel SPDK as the target. Intel is actively developing it: they've built a series of API calls for our driver to use, and they're going to keep improving those calls.

All right, let's see if I can get to the demo. We're going to wire the NVMe back end into cinder.conf to activate the driver and configure a few NVMe over Fabrics parameters so we can reach the device. And here it is. We've created one small volume on the back-end target just to try things out. Now we're going to create another one through OpenStack: a volume on our NVMe target, 2 terabytes in size.
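For reference, the wiring step amounts to a standard Cinder back-end stanza plus an ordinary volume-create call. The driver class path and option names below are hypothetical placeholders (the real names are whatever the prototype driver defines), but the overall shape is standard Cinder configuration:

    # cinder.conf -- illustrative sketch; option names are assumptions,
    # not the prototype driver's exact settings
    [DEFAULT]
    enabled_backends = nvmeof

    [nvmeof]
    volume_backend_name = nvmeof
    volume_driver = cinder.volume.drivers.spdk_nvmeof.SPDKNVMeoFDriver  # placeholder path
    nvmeof_target_ip = 192.168.0.10    # placeholder address of the SPDK target
    nvmeof_target_port = 4420          # standard NVMe over Fabrics port

After restarting cinder-volume, the 2 terabyte volume from the demo is a regular create call; Cinder sizes volumes in gigabytes, so 2 TB is 2048:

    openstack volume create --size 2048 nvmeof-demo-vol
    openstack volume list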
Then we're going to make sure it got created properly and launch an instance to attach the volume to. OK, now we're going to attach this volume to the instance. Great, it actually worked. Magic of video. No, the demo actually works. The code will be available on GitHub shortly, so if you want to go play with it, you're more than welcome. As you can see, we created a 2 terabyte volume on the back end, and it's now completely visible.

What we're going to do next is try to put a file system on it. We're logging in to the CirrOS instance that we launched, and we're going to fdisk it and format it with mkfs in order to put a file system on it, do a little bit of I/O, and verify that the I/O is there. Right, so we've created a 2 terabyte disk. Now we're going to mount it and do some dd to it, just to make sure we can do basic I/O. Next we mount the same file system on the back end, just to make sure that what we wrote got persisted. And voila, we have our test file that was created in the VM; we can see it.

So now we're going to detach the volume and destroy it, to demonstrate those Cinder operations. As you can see, the volume has been removed from the back end, so the detach and destroy operations were successful. All right, folks, that's the end of the demo. I'll now take any questions you may have. Or not, in which case, enjoy the rest of your afternoon.
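For anyone replaying the demo afterward, here is a rough recap of the guest-side checks and the closing Cinder operations. The device name and the server/volume names are assumptions (a Cinder volume typically appears as /dev/vdb in a KVM guest), and CirrOS ships a minimal toolset, so the exact mkfs variant available may differ:

    # Inside the guest: put a file system on the attached volume and do some I/O
    sudo mkfs.ext4 /dev/vdb                                # /dev/vdb is an assumption
    sudo mount /dev/vdb /mnt
    sudo dd if=/dev/zero of=/mnt/testfile bs=1M count=100
    sudo umount /mnt

    # Back on the controller: detach the volume and destroy it
    openstack server remove volume demo-vm nvmeof-demo-vol
    openstack volume delete nvmeof-demo-vol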