So, if you want to come in and take a seat, we'll start the presentation here in the next minute. We're going to talk about unified data storage management for OpenStack. That's the topic. Come on in and grab a seat.

Okay, well, thanks for attending. My name is Paul Speciale. I am the Director of Product Management for Scality. Scality is a data storage company; we specialize in creating a software solution for petabyte-scale data storage. And we're going to tie that in and talk about OpenStack, some of the OpenStack data management challenges, and how we can provide a solution for that.

So, I think one of the key things we have to look at, of course, is what's difficult about managing storage. In any cloud framework, what do we make easy? We make it easy to spin up many instances. We need instances, of course; that's the whole point of this, to run applications. So it very quickly gets into this ability to dynamically spin up lots of instances that can create storage. We have a sort of virtual machine storage problem. All of those virtual machines are there for a purpose: they're there to run applications, ranging from test/dev to more production applications. And a lot of those production applications are now going to start dealing with big data. They're going to deal with file system data. They're going to need to store rich media for a lot of the applications that we're creating. So ultimately we have two problems already to deal with: the number of instances and the pace at which they're coming at us, and the variety of data, since I have block data, file data, and object data that I need to deal with. Moreover, one thing we're starting to see is that, just as in the traditional enterprise, people are starting to deploy lots of different silos of storage. I have an optimal solution for doing file systems.
I may need to start thinking about doing Swift for OpenStack object storage, but I still need to be able to boot; I still need an image repository. So this gets to be rather complex. It's sort of the enterprise storage problem, but magnified by the agility and the on-demand nature of the cloud with OpenStack. And I think the other key thing we look at is, if we talk to customers, there's definitely a set of hot-edge, tier-one applications. People are deploying high-IOPS applications that are transactional or analytics-driven, but they're not all like that. There's another set of applications that are more capacity-optimized, that don't need the super hot-edge performance. So putting absolutely everything on high-IOPS storage doesn't get us to cloud economics, and cloud economics is really what we're after here. So the argument we're going to make is that there is a convergence to two tiers: a hot-edge tier and a capacity tier. And if we can start managing things at that two-tier level, that's going to simplify the problem quite a bit.

For those of you that are managing storage in OpenStack, you know the story. There's the idea of transient or ephemeral storage: when you spin up a Nova instance, you get some onboard storage. The instance can use that, but when the instance terminates, that storage doesn't live on. So if you want your storage to outlive the instance, you need some form of persistent storage, and there are a variety of services in OpenStack to provide that. We have Cinder, which provides data volume storage. A lot of instances use that to format a file system, so you can use it for local file storage. But a volume is only associated with one instance; it's not a shared mechanism for file systems.
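As a concrete sketch of that Cinder workflow, assuming the standard OpenStack client and illustrative volume, server, and mount-point names:

```shell
# Create a 100 GB Cinder volume and attach it to a single instance
# ("app-data" and "my-instance" are illustrative names)
openstack volume create --size 100 app-data
openstack server add volume my-instance app-data

# Inside the guest: format and mount it as a local file system
mkfs.ext4 /dev/vdb
mount /dev/vdb /mnt/app-data
```

The device name (`/dev/vdb` here) depends on the hypervisor; the key point is that the resulting file system belongs to one instance and cannot be shared across instances.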
Swift, as you know, is the larger-scale, very scalable, petabyte-scale object store, a perfect fit for things like media data and long-term archives. There's also the Glance service, where you maintain your library of images. These are your operating system images, the flavors that you want to spin up, but they can be application stacks as well: databases that you're going to deploy. There's a lot of talk here, and in a few days at the summit, about Manila, which is the upcoming shared file service. So you can see that there are already four different services and projects related to persisting storage in OpenStack.

To paint the picture a little bit about the various capacities of these: boot images, or boot volumes, are typically pretty small, measured in megabytes to gigabytes. However, you want to have a library of your images, and that library represents your repository. That is something we're hearing from customers can grow into terabytes, even up to tens of terabytes and slightly beyond, but still in a manageable capacity range. Where you start getting more interesting capacities, into the terabytes and even into the petabytes, is when you start thinking about file systems. Here you have the classic document repositories, your content for web servers; a lot of applications deal with design data and design files; you need to capture your logging data; and even messaging, sort of unified messaging data. However, when I start thinking about really big data, at the petabyte and tens-of-petabytes level, it's for storing things like video content. With 4K video now, the numbers are pretty incredible: it's something like 12 terabytes per hour of 4K video, and more than that when we start getting to 8K. So these are really enormous requirements that require a different approach to data storage.
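Taking the talk's rough figure of 12 TB per hour of 4K footage at face value (actual rates vary widely with codec, frame rate, and bit depth), a quick back-of-the-envelope calculation shows how fast this reaches petabyte scale:

```python
TB_PER_HOUR_4K = 12  # figure quoted in the talk; real rates depend on codec and bit depth

def archive_size_tb(hours: float, tb_per_hour: float = TB_PER_HOUR_4K) -> float:
    """Raw capacity needed to keep `hours` of footage online."""
    return hours * tb_per_hour

# A 1,000-hour 4K library is already 12 PB of raw footage:
print(archive_size_tb(1000) / 1000, "PB")
# One year of continuous capture (24 * 365 hours):
print(archive_size_tb(24 * 365) / 1000, "PB")
```

Multi-year retention multiplies these numbers again, which is why the capacity tier has to scale without forklift upgrades.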
What we're also starting to see is that people want to keep this data longer term. So it's not just for a year; it's an archive, for example, that you might want to repurpose over the coming years. You may want to pull assets out, repackage them, and monetize them somehow. The idea of keeping multi-year archives in a lot of domains is becoming quite popular.

Okay, so how are we going to store all this? Well, the OS images, that's the home of local hard disk drives. If I have a boot storm problem, where I have a lot of users coming in needing to boot images from Nova at one time, I probably want a flash array; that's going to offer the IOPS I need to tolerate the boot storms. Glance is obviously the right solution for creating an image repository; that's the OpenStack Glance service. What choices do we have today for doing file systems? Well, clearly we have the ability to spin up data volumes in Cinder and format them as file systems today. It's a solution, but it's not a shared solution. That's something we're looking to Manila to help us solve going forward, as a shared file repository. And then, of course, we have Swift at the top for this very big, petabyte-scale data.

So hopefully that helps characterize the different services. What does it lead to, though? It leads to many deployments we've seen with the classic problem: multiple silos of data storage. NAS solutions tend to be quite popular for the file data; block storage sits underneath Cinder volumes; we have Swift, of course, as a solution for the bigger-scale object storage; and then perhaps a purpose-built boot volume. The problem this creates is the silo problem. You need administrators to think about each one of these, and you don't have a single pool to manage across the different islands.
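To make that mapping concrete, which backend each service uses is largely configuration. For example, pointing Glance at an existing Swift cluster instead of a separate file silo is a small change in `glance-api.conf`; a minimal sketch using the standard glance_store options:

```ini
[glance_store]
# Keep images in the existing Swift cluster rather than a dedicated silo
stores = file,swift
default_store = swift
```

This is one small way the silo count comes down: the image repository rides on the same capacity tier as the object data.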
And then, tying that in again with the argument we made: a relatively small set of applications are really what I would call hot edge, tier one. These need thousands to tens of thousands of IOPS, something to tolerate boot storms: low-latency, quick storage, probably flash-based. We would argue, however, that there's a relatively larger set of applications that are more capacity-centric, capacity-optimized. These are the things you can leverage lower-cost storage for, because the IOPS demands are lower while the capacity requirements are there. So our view is that the world is converging from the multi-tier storage approach, where I had SAN, NAS, some object storage, and even some tape for long-term archives, down to two tiers: the hot-edge tier and the capacity-optimized tier.

Okay, so what do we need to really solve that back-end problem? I think it boils down to four things. First, something that can accommodate my capacity growth over time: I need something that doesn't hit interruptions at a few hundred terabytes; it needs to be able to grow and grow. Furthermore, it needs to do that with constant uptime: it needs to stay seamlessly online as I upgrade the software, as I add capacity, as I perform management operations, whether planned or unplanned. I want to be able to do this with a single pane of glass so I can manage things simply. And it would be very nice if I could combine all of the different storage types we talked about earlier into one system. We're starting to get to the point now where we can do this in software, and that's a big enabler for cloud economics. It means the customer, the user, the administrator can choose the optimal platform and optimize it for the type of deployment they're thinking about: a truly hardware-agnostic approach. So this is something where we're really trying to steal the ideas of the internet giants,
the Googles and the Facebooks, to be able to enable this in software. So the product that I represent, the Scality Ring, is a product that addresses these requirements. It's a software-defined storage product; think of it in three layers. At the top layer, you have a set of interfaces that can present file-based applications with things like NFS or SMB protocols, a variety of REST APIs for object-based applications, and a set of drivers for OpenStack, notably for Cinder, Glance, and Swift, as we've talked about. In the middle is the core engine, a very scalable engine that can grow a single system across many, many nodes. It provides the redundancy and durability of the data, both in the form of replication and erasure coding. So this is another way you have to start thinking about multiple workloads: I may have small objects that benefit from being replicated, but then I have big media objects where I don't want to create three copies; I want to do something more efficient with erasure codes. There's a set of management interfaces, graphical, command-line, and API-based, and then the hardware-agnostic tier at the bottom: I can choose whether I want to run this on very inexpensive boxes or something a little more purpose-built, something very dense with a lot of drives in a single chassis, or something more compute-intensive. So it's truly hardware-agnostic at the bottom layer.

What we've offered in the last several months is the ability to host a Cinder driver on top of the Ring, so you can use it for local data volumes from Nova. This is in the upstream submission, so it's something we've supported going back to the Grizzly release. Last fall in Paris at the OpenStack Summit, we introduced a Ring Swift driver. The news we are presenting at this summit is that we support storage policies within the Swift driver.
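For context, Swift storage policies are defined in `swift.conf` and selected per container. A minimal sketch of a two-policy setup might look like this (the policy names and erasure-coding parameters are illustrative; note that a 10+4 erasure code stores data at roughly 1.4x raw overhead versus 3x for triple replication):

```ini
[storage-policy:0]
name = standard-replication
policy_type = replication
default = yes

[storage-policy:1]
name = media-ec
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 10
ec_num_parity_fragments = 4
```

A client then chooses a policy at container creation time via the `X-Storage-Policy` header, which is the hook a backend driver can use to map small objects to replication and large media objects to erasure coding.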
This lets us carry down the knowledge of whether the user wants to store an object under a replicated storage policy or an erasure-coded policy, or even to pinpoint the location. I might want to say that I need one copy of this object stored in a data center in New York and another stored specifically in a data center in London, perhaps. So, very fine-grained control over the policies. We're also announcing a new Glance driver, so that gives us three of the repositories, and then, coming soon, the Manila shared file system approach. So ultimately this is a converged solution. We get rid of the multiple silos, and we try to get to the point where we can consolidate about 80% of our workloads. And again, we do this with a very, very scalable platform that grows with you over time without any disruption, without any downtime.

Okay, just a little bit on us. We are about a five-year-old company, and we have many, many large-scale production deployments; a large number of the big consumer email deployments are based on the Scality Ring. We're right here at booth T-13, just beyond the black curtain across from us. We're a very viable, growing company with 130-plus employees today. Please come and talk to us and get a little more information about our OpenStack solutions. Thank you very much.