Greetings, good morning. My name is John Mark Walker. I'm the Gluster community leader, and I work for Red Hat. With me is Eric Harney, one of the OpenStack engineers, who also works for Red Hat. We're going to talk today about different ways that GlusterFS can be integrated with OpenStack. We've done a lot of work in the last year, especially on integrating with all the different storage interfaces in OpenStack, from Swift to Cinder to Glance. We're going to go through what we've done on all those interfaces, in addition to giving you a little bit of a heads-up on some of the roadmap that we're working on, specifically around the Manila and Savanna projects. We've done a lot of work, we're really happy with the results so far, and so we wanted to share that with you, talk about how it's being used today, what you can do with it, and also give you an example of a real-world use case of someone who's using this in production. With that being said, when we started out, oh so many years ago, in 2006, we had come across a lot of different storage solutions that were complex, costly, black-box, proprietary solutions sold by vendors who built complexity into their business model so that they could charge more money and keep you locked into their solution. We wanted to do away with all of that. We thought storage should be just like any other application you install in the data center. It should be easy to use, easy to manage, and if you need to, easy to get rid of without destroying your data. If you look at the past 10 to 15 years, the storage market has kind of been lagging the rest of the market, because the rest of the market, especially on the compute side, has been going all virtualized, all abstracted away from the hardware, all redundant in software, whereas storage took a while to get to that point. It took a while for a very good reason: nobody wants to destroy their data. Virtualization of storage is a scary concept when you're thinking about all the petabytes of data that you have that you don't want to lose. That was our starting point. Over the years, we built GlusterFS to be an easy-to-install, easy-to-manage distributed file system that was great for scale-out file storage, for file sharing, essentially. Over the years, and especially in the last two years, we've been adding to that. Not only is it a scale-out NAS solution, but it's also something you can use for virtual block storage, thanks to the KVM integration that we've done. It's something you can use for object storage, thanks to the Swift integration that we've done. There are lots of things that we've added to the mix now, and I wanted to make sure that everyone's aware of the work that we've been doing and how you can use it. When you think about our core design principles, there are three key ones that we follow when we do anything in GlusterFS. Two of those have been with us since the beginning, and one is fairly recent. When you think about no data silos, I mean, we always had the concept of a global namespace, of being able to present a Gluster volume to many different clients, but the idea of no data silos really came about with the addition of multi-protocol support. Up until two years ago, we were exclusively a POSIX-compliant shop. It was all about NAS. The GlusterFS client was POSIX compliant, and I think it was actually after a couple of years that we added the NFSv3 support, so it was all POSIX-based.
Everything was a file from top to bottom, from server to client. Everything was treated as a file, and everything was POSIX. Then in the last two years, we added more to the mix. We did the Swift integration. We've done Hadoop HDFS integration. We've done the QEMU integration for virtual block storage. We've done all these different things, but we kept the idea that everything should be a global namespace. Everything should be available to all the different avenues of access that you have, and they shouldn't be siloed from each other. It's one of the very key design principles that we follow now. The other design principle was no single point of failure. Again, when we first started back in 2006, we realized that a lot of solutions had a metadata server. At first, we tried to mimic that, but we realized it kind of got in the way, for two reasons. One is that it could limit scalability, because once you scaled beyond a certain point, metadata servers would tend to fall over, or at least lose some of their performance and reliability; but also because metadata servers were a single point of failure. We wanted an architecture that didn't have a single point of failure: a shared-nothing architecture that was redundant and consistent and reliable. Then the third principle we follow is the global namespace. That is, no matter where you deploy GlusterFS, whether it's in a public cloud on AWS, or on an OpenStack cloud, or on a virtualized environment, or on the guests of a virtualized environment, or on a bare-metal system, it should look the same and behave the same no matter how you're interacting with it. Every application you use to access it should be able to reach the global namespace through multiple methods and access the same data. At our core, we're a unified distributed storage system in user space. We don't have any kernel-space technology except on the client side, where we integrate with the FUSE kernel module; that's how we mount a file system over a network using the GlusterFS client. But other than that piece, we're strictly user space. Stackable architecture: we borrowed a lot of design terminology, as well as the design architecture, from the GNU Hurd project. How many of you have heard of GNU Hurd? Okay, cool. One of the contributors to the GNU Hurd project was the co-founder of the Gluster project and of Gluster, Inc., AB Periasamy, and he borrowed a lot of the terminology and the architecture. So we have stackable user-space translators where all of our features are implemented. And then ultimately, everything is treated as a file. GlusterFS serves as an overlay, an aggregator that sits on top of disk file systems, and those disk file systems can be anything that supports extended attributes. So they could be ext4, XFS, Btrfs, almost exclusively on Linux, but anything that supports extended attributes. We also have a maintainer who keeps our NetBSD port going, so it is a portable architecture, but I think pretty much everyone uses Linux these days. So we're in the process of adding functionality to GlusterFS that we didn't have before. And as we do this, and as we approach the cloud storage market, we're realizing that some people tried to use GlusterFS for things that it maybe didn't do so well a couple of years ago.
And as a result of that, sometimes there's the impression that it may not be so good for the things we're trying to do now. And in truth, maybe a couple of years ago we were a little bit too forward thinking; maybe we thought the technology could handle things that it couldn't actually do until we added new things later. But in spite of that, we now have to address the elephant in the room, which is the ghost of Gluster past. You can see all these things I've heard at various conferences. I just want to tell you that it's a lot different story now than it was two, three, four years ago. If you look at the differences in the project now compared to then, it's a vastly different picture, from the governance model to the engineering team to the support that came with Red Hat acquiring Gluster, Inc. and adding more engineers to the mix. So the comparison is really stark, and I hope this slide helps put it in a bit more relief so that it's easier to see that difference. It's since the Red Hat acquisition that our engineers started adding in the multi-protocol story. That's when we started moving to shorter release cycles, and when we started taking a more multilateral approach to recruiting developers and organizations to work with the project. Before we were acquired, we were what I would call an open-core project; we happened to release software under an open source license. But since then, we've changed and made it more inclusive, more of a big tent, to the point now where our premier feature in the last version, 3.4, was contributed by IBM engineers. So, a slide about performance. This just serves to show you that we've come a long way in two years. Two years ago, I would not have been able to put this slide up. Now, it says Red Hat Storage, but what it really means is GlusterFS. And you could have said two years ago that maybe we didn't perform so well in certain workloads, but we've spent a lot of time and effort over the last two years fixing that, fixing bugs, making it perform better, so that you can actually use it for a lot more things than you could before. Now, I think we all know that with benchmarks, there are lies, damned lies, and benchmarks. I can't really say for sure that every workload is going to behave like this. But I can say, and this kind of serves notice to those who wanted to write us off, that going forward, I think it's safe to say we're pretty competitive. And I'll leave it at that. What kind of performance you see really depends on your workload, your environment, and the other variables in the mix. But I invite you to compare. This kind of gives you an overview of the architecture. I'm going to finish giving you a bird's-eye view of the project, and then we're going to go into more of the OpenStack-y goodness. But just from an architectural standpoint, again, you can see at the bottom you have your storage servers. A volume encompasses multiple storage servers, and inside a volume are bricks. A brick, in the Gluster nomenclature, is any file system that you export, any file system that you want to share.
So you can have multiple bricks in a volume, just like you have multiple servers in a volume. And you present that volume in a variety of ways. You can connect to it via the GlusterFS client, which again mounts using the FUSE module. About three years ago, we implemented an NFSv3 server in GlusterFS, so NFSv3 clients can connect to it. There's also NFSv4 work going on, just so you know. And then there's libgfapi, which is a recently released client library. That's how we did the QEMU integration, which is also how we did the Nova integration, which we'll get into, and that's how we're going to do all future software stack integrations. Any questions about the architecture? In the same vein, this is an overview of how things connect to each other. On the GlusterFS server side, you have the volumes that are replicated and distributed. In this particular case, we have two-way replication, and we have a volume that's distributed over two servers and then replicated on two more servers. And then you can see the way the clients connect to it. The GlusterFS native client, when it connects to a group of Gluster servers, downloads the translator stack, so it can understand the failover and distribution pathways from the client side. So you don't have to route everything through a single server at the bottom; it can fail over to another replica from the client. The same thing with libgfapi: when you integrate libgfapi, same story. The only case where it doesn't apply is with the NFSv3 client, because NFSv3, simply by definition of the protocol, doesn't have that kind of support. Any questions? Here's an overview of all the different possibilities for interfacing with GlusterFS. You can see on the file side, we've got the FUSE module that works with the client. SMB support via Samba exports, via a recent Samba integration: if you're using Samba version 4.1 and higher, this integration is baked in; if you're using Samba 3.6, we have a project on the Gluster Forge that you can download the source for and compile in Gluster support that way. HDFS: we have a plugin that you can put on the Hadoop server, and it implements the HDFS API, or I guess now it's called the HCFS API, Hadoop Compatible File System. And then the NFS piece, which I went over. On the block side, we have an integration with Cinder, and Cinder, via an integration with Nova, will also work with the QEMU integration; this is how we do the virtual block storage. On the object side, we have a collaboration with the upstream Swift project, and we have two committers to the Swift project, and so we've implemented a pluggable architecture for Swift so you can plug in GlusterFS on the back end. And then we have libgfapi, which is the access point for all the software stacks you want to integrate with. On the transport side, we feature IP as well as RDMA support. I think with the previous release, 3.3, it wasn't necessarily the best implementation, but it's significantly improved for 3.4, and we have several community members using GlusterFS with their InfiniBand cards. How many of you use InfiniBand? Okay, is it just me, or has InfiniBand gotten really cheap over the last year? I don't know, I see a lot more of it popping up now. And on the back end, again, everything is a file ultimately, but it can be projected as something else.
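To make the server/brick/volume/client picture above concrete, here is a minimal sketch of standing up a small distributed-replicated volume. It is not from the talk: the host names, brick paths, and volume name are made up, it assumes glusterd and the gluster CLI are already installed on each box, and it simply wraps the standard gluster commands from Python for readability.

```python
# Hypothetical example: build a 4-brick, replica-2 Gluster volume.
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Join the other servers into one trusted storage pool (run from gluster1).
for peer in ["gluster2.example.com", "gluster3.example.com", "gluster4.example.com"]:
    run(f"gluster peer probe {peer}")

# Create a distributed-replicated volume: 4 bricks, replica 2, so data is
# distributed across two replica pairs, then start it.
run("gluster volume create myvol replica 2 "
    "gluster1.example.com:/bricks/b1 gluster2.example.com:/bricks/b1 "
    "gluster3.example.com:/bricks/b1 gluster4.example.com:/bricks/b1")
run("gluster volume start myvol")

# Clients can then mount the same volume over FUSE or NFSv3, for example:
#   mount -t glusterfs gluster1.example.com:/myvol /mnt/gluster
#   mount -t nfs -o vers=3 gluster1.example.com:/myvol /mnt/gluster-nfs
```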
There's a virtual block device translator, which works on the QEMU and KVM side, and I think there's a DB translator somewhere, although no one really uses it. Some of the features of GlusterFS specifically: there's no metadata server. Again, as I mentioned, our developers decided to take a different route. They weren't file system experts per se, so when they approached the storage problem, they had a different method of solving the scale-out piece of it. Instead of having a metadata server, where you route all connections to the metadata server and then to the data, they created a solution where you calculate a hash value based on the file name and the file metadata, and then you store that hash inside the extended attributes of the file. That's why we only work with disk file systems that support extended attributes. And then we calculate that hash to look up the location for both reads and writes. Multi-protocol access, which I mentioned. Replication, synchronous and asynchronous: by default, you replicate synchronously when you define a volume as replicated, but we also have an asynchronous replication piece called geo-replication, which is master-slave. And as of 3.3, we feature proactive self-healing. And again, when we talk about multi-protocol access, a lot of solutions these days have implemented multi-protocol access, but we really have a unified storage back end that is unique to GlusterFS. We don't believe in data silos. We think that if you store files and you store objects, you should be able to access them no matter where you're coming from, and I have an illustration of that coming up in a second. So how do we do all this great stuff? Well, it's about being a modular architecture. When you look at that middle translator stack, distribution, replication, that stuff can all live on the client and the server. Again, when you connect to a Gluster volume from the client, the GlusterFS client reads the volume definition and downloads the translators that are needed to route the data and to implement the features that are needed for that particular volume. So those translators have to run on both the client and the server. And when you look at the data path, you see how it goes through the client side, through the different translator stacks, to determine where it's replicated or distributed to, then through the RPC communication down to the server side, and on down to the local storage on the disk drive. And when we say stackable: you implement features with translators, and you can also remove features by removing a translator, so you can have fewer features if you really don't need them. And there's a great way to build translators; we have a really good, solid API, in place for many years now, for building new features with the translator stack. What do you generally use it for? Lots of things. We aim for the unstructured data market, and that's the one that's pretty much doubling in size every year. So all the things that you need to store that require many petabytes, that require you to scale out and grow as needed.
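To illustrate the no-metadata-server idea described above, here is a toy sketch, not Gluster's actual DHT code: hash the file name and use the result to pick a brick deterministically, so any client can locate a file without asking a central server. Real GlusterFS uses its own 32-bit hash and per-directory hash ranges stored in extended attributes, which this deliberately glosses over; the brick names are made up.

```python
# Toy illustration of hash-based file placement (not GlusterFS's real algorithm).
import hashlib

BRICKS = ["server1:/bricks/b1", "server2:/bricks/b1",
          "server3:/bricks/b1", "server4:/bricks/b1"]

def brick_for(filename: str) -> str:
    # Hash the file name and map it onto one of the bricks.
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    return BRICKS[int(digest, 16) % len(BRICKS)]

for name in ["invoices/2013-10.pdf", "vm-images/web01.qcow2", "logs/app.log"]:
    print(name, "->", brick_for(name))
```

The point is that placement is a pure function of the name, so every client computes the same answer independently; there is no lookup service to scale or to fail.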
GlusterFS is intended to be not just scale-out but easily scaled out, so that you don't have to pre-allocate how much space you're going to devote to it. If you need to add more data or more space to your storage, you can just add more servers to the mix and expand that way. Again, we pride ourselves on the global namespace and being able to behave the same and interact the same with all applications regardless of where they're deployed, whether it's AWS, an OpenStack-based cloud, a guest of a hypervisor, or bare metal: it should behave the same. You should be able to consistently interact with it with your existing application stacks. And this shows you, again, a diagram of what we mean by multi-protocol access, the single global namespace, and our no-data-silos philosophy. If you look at the top, you can see the connection between the Swift client attaching to the Gluster volume, and when you see the Swift proxy box there, understand that that is the same proxy server that's used by the Swift project itself. We've been collaborating with that project to make that proxy server pluggable, so we take their proxy server and map it onto the GlusterFS back end, and that's how we implemented the Swift API support. But in addition to that, when you access data via the Swift API, you also make that data available to other connection methods, whether it's over NFSv3 or Samba or some other POSIX-compatible way of interacting, such as the GlusterFS client; it's the same data. So you can actually mount the volume over NFS and look at, interact with, and manage the same data that you're interacting with over the Swift API. There's a lot of value there if you need to make data available to your existing tool sets and you don't want to rewrite your applications. That's a very powerful feature, and conversely, if you have a bunch of data sitting in your storage that you haven't made available via the Swift API, it's fairly easy to do that as well. The same concept applies to our Hadoop integration. If you have a lot of data that you want to be able to run MapReduce jobs on, it's very easy to make that available to your Hadoop cluster, or you can make it so that when you run your MapReduce jobs, they store the data on GlusterFS servers, thus making that data available to your other analytics toolkits. Any questions? Yes. I could not tell you, but there are two engineers sitting here who can address your question after I'm done. The other thing that we've implemented, especially with 3.4, and again this feature was contributed by engineers from IBM's Linux Technology Center, was for the virtualization use case: the integration with QEMU/KVM, which is the basis for the Nova integration that came as of OpenStack Havana. In this case, assuming you're using QEMU 1.3 or higher, which has native support for the gluster protocol, you can spin up a VM that goes directly to the Gluster volume. And we're able to do this via three layers of integration. There's the QEMU protocol support, which was contributed by IBM engineers. There's the block device translator, which allows GlusterFS to present a file as a virtual block device to KVM. And then there's the libgfapi client library that sits in the middle.
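As a hedged sketch of the "same data, many protocols" point above, the snippet below uploads an object through the Swift API and then reads the same bytes from a POSIX mount of the backing Gluster volume. The auth endpoint, credentials, and mount point are made-up placeholders, and the exact on-disk layout depends on how the Gluster-Swift deployment maps accounts to volumes.

```python
# Hypothetical example: write via the Swift API, read via a POSIX mount.
from swiftclient import client

conn = client.Connection(authurl="http://swift.example.com:8080/auth/v1.0",
                         user="test:tester", key="testing")
conn.put_container("photos")
conn.put_object("photos", "cat.jpg", contents=b"...jpeg bytes...")

# Elsewhere, with the tenant's Gluster volume mounted at /mnt/gluster
# (over FUSE or NFS), the object shows up as an ordinary file:
# container -> directory, object -> file.
with open("/mnt/gluster/photos/cat.jpg", "rb") as f:
    print(len(f.read()), "bytes visible over the POSIX mount")
```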
We did the client library; IBM contributed the block device translator and the QEMU integration. This is a very bad diagram, done by one of the engineers of the solution, showing you the difference between the standard QEMU stack and the Gluster-backed QEMU stack. And that brings us to the libgfapi client library. We had started this project a couple of years ago, I think with the 3.0 series of releases, and then due to resource constraints we had to sort of drop it; then we brought it back when the QEMU integration started happening, because we realized that if we're going to start seriously dealing with the use case of people managing and hosting VM images on GlusterFS, we had to do something about latency. I'll show you a diagram in a minute that tells you exactly how it deals with the latency issue. But by using libgfapi integration, you bypass the FUSE mount and go directly to the Gluster volume. When you look at our standard, traditional client, and you look at the number of context switches between user space and kernel space along the data path, as it goes from the application through the networking layer and then onto the server side, the total number of context switches from user space to kernel space and vice versa is, I think, 14 if you count them all up. When you think of random I/O use cases, this will kill your latency. And so for things like VM image hosting and management, it's a non-starter once you get beyond a certain number of VMs that you're trying to host. Hence the libgfapi solution. In comparison, when you integrate with libgfapi, the number of context switches drops dramatically; I think 8 is the final count. So as you can imagine, the latency of this solution is much better than previously. It doesn't mean, by the way, that the GlusterFS client is going away, because there's still a need for general-purpose file serving, just general-purpose mounting over a network. But for particular use cases, this is the preferred method. And now we get to the good stuff: OpenStack integration. Two years ago, we had just started on the Swift integration piece, and that was the only OpenStack integration that we could really do, or lay claim to. For everything else in the OpenStack realm, there was Glance, and I don't think there was Cinder at the time, two years ago. But in order to interact with OpenStack, you would essentially just mount the Gluster volume using the FUSE module and do all the interaction that way, just as another path on the file system. But as I mentioned, due to some issues, once you tried to scale beyond a certain point, latency would go up, and it would not be a good solution for random I/O. And so for the last year, we spent a lot of time, because we were working on the QEMU integration, and we also thought, well, since we're going to do that, let's also work on the Cinder integration. Let's implement gluster protocol support within Cinder directly. Let's get the Nova support going and make use of the great work that we were doing on the other layers of the stack. And a lot of that is thanks to the gentleman to my left, who will describe in detail what he did with the Cinder drivers.
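To show what "bypassing the FUSE mount" looks like in practice, here is a minimal sketch that drives libgfapi's C interface from Python via ctypes. It is not from the talk and not the bindings OpenStack uses: the library name, volume name, and host are assumptions, and only the basic glfs_* calls are shown.

```python
# Hypothetical example: write a file on a Gluster volume directly through
# libgfapi, with no FUSE mount involved.
import ctypes
import os

api = ctypes.CDLL("libgfapi.so.0", use_errno=True)

# Declare return/argument types so pointers are not truncated on 64-bit.
api.glfs_new.restype = ctypes.c_void_p
api.glfs_new.argtypes = [ctypes.c_char_p]
api.glfs_set_volfile_server.argtypes = [ctypes.c_void_p, ctypes.c_char_p,
                                        ctypes.c_char_p, ctypes.c_int]
api.glfs_init.argtypes = [ctypes.c_void_p]
api.glfs_creat.restype = ctypes.c_void_p
api.glfs_creat.argtypes = [ctypes.c_void_p, ctypes.c_char_p,
                           ctypes.c_int, ctypes.c_uint]
api.glfs_write.restype = ctypes.c_ssize_t
api.glfs_write.argtypes = [ctypes.c_void_p, ctypes.c_char_p,
                           ctypes.c_size_t, ctypes.c_int]
api.glfs_close.argtypes = [ctypes.c_void_p]
api.glfs_fini.argtypes = [ctypes.c_void_p]

fs = api.glfs_new(b"myvol")  # volume name, not a mount point
api.glfs_set_volfile_server(fs, b"tcp", b"gluster1.example.com", 24007)
if api.glfs_init(fs) != 0:
    raise OSError(ctypes.get_errno(), "glfs_init failed")

fd = api.glfs_creat(fs, b"/hello.txt", os.O_WRONLY | os.O_CREAT, 0o644)
data = b"written without a FUSE mount\n"
api.glfs_write(fd, data, len(data), 0)
api.glfs_close(fd)
api.glfs_fini(fs)
```

Because the I/O never crosses into the kernel for a FUSE round trip, this is the path that cuts the context-switch count described above.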
If you look at what was previously released under Grizzly: Grizzly was the first OpenStack release that featured Gluster support in Cinder, but it was still using the FUSE mount; it was not going through libgfapi. And that's because Grizzly was released before we released libgfapi. I think at first it was geared for GlusterFS 3.3, and then we released 3.4. But the Glance and Cinder integrations were both via the FUSE-based GlusterFS client mount. Fast forward to Havana, and that has changed, because by the time Havana was released, we had released GlusterFS 3.4, which included the QEMU integration, and we wanted to make sure that the new release of OpenStack could utilize the new QEMU integration. So the Cinder integration was built out a bit further to expand the support, and we had the Nova integration, which allowed Cinder to make use of the QEMU integration. Also with Havana, we finished a lot of the collaboration that we were doing with the Swift project upstream to make it more compatible with what we were using. Previously, at the beginning, two years ago, we kept a series of patches against the Swift release, and so it was a real pain to carry those patches forward to every new release. We thought that was a really stupid way to do it, so we worked a lot more closely with the Swift project, and I'm proud to say that collaboration is bearing fruit for both parties. So it's a vote for open source collaboration. Glance integration: thanks to a lot of the new features that have come out with Havana, we don't have to do a lot as far as Glance integration. A lot of that is handled via the Cinder driver now, since with Havana, I think Glance can point to the same location as Cinder. And then Nova integration: as I mentioned, we now have Nova integration with the Havana release, which does make use of the work that we did with the KVM and libgfapi integration. As I mentioned, we've been very, very happy with the results of all the work that we've done with the Swift project and the Swift developers. It's been useful for both parties. And we spent a lot of time working with them to make Swift pluggable, not just for GlusterFS, but also for other storage back ends, so that we can all become Swift API advocates, and there can be multiple implementations of Swift, and then you use the one that's best for your particular use case. Yeah, as I mentioned, we have two contributors now to the Swift project, two committers. So why would you do this? Well, there are a variety of reasons. It's still really early days when you start talking about open source software-defined storage back ends for virtual machine management and for object storage. And so there's a lot of work to be done yet, but we think we're at a point now where we can start going forward with real-world use cases, which I'll get to at the very end, at least one in particular. And we think it's ready for a bit more usage from the general audience, for people who want to check it out; I think it can be useful for several workloads now, as opposed to previously, where I would really only recommend it for one. But in general, you want to look at our modularity, our extensible architecture, the fact that you can add new features via our translator API, or via our libgfapi client library if you want to integrate with your particular software. You have multiple choices of transport. You're not just limited to IP; you can also use RDMA.
Data locality, which came in handy when we implemented the HDFS API on GlusterFS. And then the fact that it's transparent: you can run applications on your storage servers, so you can have the whole storage-resident application thing going. And it's easy to manage and maintain. We pride ourselves on the fact that GlusterFS is the easiest to install and get running of any of the distributed file systems. In about four commands, you can have a cluster of four machines running a distributed, replicated Gluster volume. And then we'll turn over to Mr. Harney, the gentleman to my left. Any questions about this? Yes. The root file system for the virtual machine is mounted from some Gluster volume, is that what you mean? Yes. You can do this with Cinder boot-from-volume. So if you're using the Cinder GlusterFS driver, the standard boot-from-volume support will give you that ability. By default, I'm not sure what the distinction is there. OK. Sorry. Without further ado, I'm going to turn it over to Eric Harney. He'll go through some of the details of the implementation. Yeah. So this is about the GlusterFS Cinder integration. We have a Cinder GlusterFS volume driver. It was initially added in Grizzly with basic functionality for creating and attaching volumes. Havana added a number of features, primarily snapshot support for Cinder volume snapshots, as well as the ability to clone Cinder volumes, copy to and from Glance images, and the new Cinder extend-volume operation. So this is now a more robust, full-featured Cinder volume driver that is comparable to most of the other ones in Cinder. I'm going to go over a summary of how snapshotting actually works. For clarity here, Cinder volume versus Gluster volume: this is a many-to-one relationship. A Gluster share can host any number of Cinder volumes; a Cinder GlusterFS volume is a file on a Gluster share. And so when we create a snapshot, we are using qcow2 external snapshots, which basically means you get an additional file that contains your snapshot delta for each snapshot you create. When you delete them, the snapshot is merged into its backing parent in the snapshot chain. One reason this is a little bit different from a lot of Cinder drivers is that, this being a file-based solution, we coordinate with Nova to create and delete snapshots while the VM is running, rather than having the storage back end just do it itself. So when you create a Cinder GlusterFS snapshot, it actually coordinates through Nova and tells libvirt and QEMU to create a snapshot and handle the manipulation of the snapshot chain. In Havana, as John Mark mentioned, we also added libgfapi support to Nova. You basically just turn on one option in Nova, and whenever you attach a Cinder volume, it will use libgfapi to connect to your Cinder GlusterFS volume, so you get the QEMU performance benefits that he described. The main limitation at the moment is that we still need work done to support libgfapi with Cinder volume snapshots. So for the moment, it's only for Cinder volumes without snapshots that you can use libgfapi. Since we're coordinating with Nova, no other Cinder driver currently really works this way.
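As a rough illustration of the qcow2 external snapshot chain just described, and not the actual Cinder driver code, the sketch below drives qemu-img by hand: each snapshot adds an overlay file whose backing file is the previous image, and deleting a snapshot commits the overlay back into its backing parent. File names are made up, and in the real driver the live chain manipulation is delegated to Nova, libvirt, and QEMU as described above.

```python
# Hypothetical example of a qcow2 external snapshot chain on a Gluster share.
import subprocess

def qemu_img(*args):
    subprocess.run(["qemu-img", *args], check=True)

# The Cinder volume itself is just a file on the Gluster share.
qemu_img("create", "-f", "qcow2", "volume-1234.img", "1G")

# "Create snapshot": add an overlay; subsequent writes go to the new top file,
# so the overlay holds only the delta since the snapshot.
qemu_img("create", "-f", "qcow2",
         "-o", "backing_file=volume-1234.img,backing_fmt=qcow2",
         "volume-1234.snap1.img")

# "Delete snapshot": merge the overlay's delta back into its backing parent.
qemu_img("commit", "volume-1234.snap1.img")
```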
But what we can do, since we're having libvirt and QEMU handle snapshot creation for us, is, going forward, leverage some of the QEMU guest agent capabilities that have recently been added to Nova to try to drive toward guest quiescing for Cinder volume snapshots. And that's something I'm going to be looking at in the near future. The other primary Cinder feature that I want to add to the GlusterFS driver is the ability to back up GlusterFS-backed Cinder volumes. For those of you that have looked at RDO or Packstack on any Fedora or Red Hat setup, it has options built in to set this up for you. You can basically just set a couple of options, and as long as you have the required versions of QEMU and GlusterFS, which are present in Fedora and coming in RHEL 6.5, I believe, this will basically set up Cinder GlusterFS integration for you out of the box as a deployment option. And for any developers, there's a comparable set of options in DevStack that will also handle the same deployment configuration for you. So I'm going to let John Mark talk about Swift deployment a little more. So as I mentioned, the goal is to have the same Swift code for everything, whether you're talking about a traditional Swift deployment or Swift on GlusterFS. But I think as of Havana you still have to install the gluster-swift package if you want to make use of the Swift API with GlusterFS. I'm thinking that should change for Icehouse, but I want to confirm that before I can say for sure. But again, like I mentioned, we are working upstream with the Swift community to enhance the pluggability of the Swift API and the Swift proxy server. And one thing to understand, at least for now: when you deploy Gluster-Swift, there is one Gluster volume per tenant. Meaning that when you map it out, and I think I'm going to go back to the previous slide where I showed the diagram. No, not that one. Yeah, this one. When you look at the way we map the Swift API to GlusterFS, the account, container, and object get mapped to volume, directory, and file. So a single tenant goes to a single volume. We know that going forward, that's not something we'll continue, so we're going to change that. We just have to figure out the implementation details of how we go about it. Moving right along. So on the roadmap, what's in the future? I've told you what you can do now, but what's coming up? I'm very excited about a project to implement file shares as a service within the OpenStack pantheon. Right now it's codenamed Manila. There are a bunch of engineers from NetApp and Red Hat working on this, and I think a few other companies; I'm not entirely sure who all is working on it. But I'm thinking that at the design summit they're trying to get it incubated for the Icehouse release. I don't know the status of that. Do you want to say something about the status, or is there anything to report there? Stand up. A lot more NAS vendors and distributed storage systems, including Ceph, are interested in this project. So we'll have to figure out what happens on the 19th. Okay. So the status is we don't know yet if it's incubated for Icehouse, but okay, fair enough. But yeah, the whole point is to provide multi-tenant file sharing in an OpenStack context, so you can just spin up file shares on the go for each tenant. You can see the URL there for more details. I think there's actually some code that works.
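Here is a small sketch of the mapping just described, with made-up paths: one Gluster volume per tenant (the Swift account), containers become directories, and objects become files under them. The root directory name below is an assumption about where a Gluster-Swift deployment mounts its volumes, not a documented constant.

```python
# Hypothetical example of the account/container/object -> volume/directory/file mapping.
import os

GLUSTER_OBJECT_ROOT = "/mnt/gluster-object"   # assumed mount root for tenant volumes

def object_path(account: str, container: str, obj: str) -> str:
    # account -> volume (one volume per tenant), container -> directory, object -> file
    return os.path.join(GLUSTER_OBJECT_ROOT, account, container, obj)

print(object_path("AUTH_tenant1", "photos", "2013/summit/keynote.jpg"))
# -> /mnt/gluster-object/AUTH_tenant1/photos/2013/summit/keynote.jpg
```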
I'm not entirely sure where they are in terms of implementation or how well it works, but it's something that is certainly worth checking out, and something we're very excited to be contributing to and to be taking some leadership on. In that same vein, as far as incubating projects go, there's Savanna. How many of you actually use Hadoop for anything? Just wanted to check, okay, fair enough. It's a collaboration between Red Hat, Mirantis, and Hortonworks to make Hadoop able to scale out with OpenStack, or to use OpenStack to scale out Hadoop deployments. This gives you a basic diagram. I don't want to go into too much detail here, but the ability to spin up Hadoop clusters on demand is a really nice feature to have. If you're an Amazon user, you have the benefit of Elastic MapReduce; this is an attempt to recreate the EMR experience in an OpenStack context. I think it'll be a very powerful thing going forward, and it shows the flexibility of OpenStack as an app deployment environment for more things than just simply spinning up VMs in a cloud. Then I wanted to go into some real-world usage. There are, in fact, people using GlusterFS with an OpenStack integration today, and the one I want to highlight is Amadeus. I didn't know who Amadeus was until I learned they're using the Gluster-Swift integration, but they're a large travel website, apparently very well known globally, though not to American audiences. They're deploying GlusterFS using the Swift integration that we did. There are many reasons they went this route. One of them is the mapping between the metadata that you store in an object and how that goes into the extended attributes on the file side, so that when you're accessing the data, either over a POSIX mount or through the object storage layer, you're still able to access the metadata that gets stored in the extended attributes. The co-location with other workloads gets back to the no-data-silos design principle that we follow, which is very important to them. It allows them to make data available via the Swift API as well as over a POSIX mount, and you can access it either way. That was one of the reasons why they chose the solution. They've been using this since the Grizzly release, and they've been exploring other ways to use GlusterFS integrated with other pieces of OpenStack. Yeah, okay. And I'm going to wrap it up now. Just one final thing I wanted to show you: if you want to start or look at some integration projects in the Gluster community, or one with GlusterFS, go to the Gluster Forge. We have a lot of interesting things there that you can find, and it is the central clearinghouse for developers and users of Gluster software. Thank you. Do we have time for questions? Yeah. You know what, I didn't realize that we had actually gone past the allotted time. I thought it ended at 9:50, but apparently it's 9:40. I apologize. But if you want to talk to me outside, I'm happy to talk to you. I apologize, I didn't realize we had gone past time. Thank you.