So I think we're ready to start here. Thanks for attending. This is the Scality presentation; we're going to talk about storage for OpenStack: scalable storage to address the wide demands and requirements of the software-defined data center and OpenStack. I'm Paul Speciale, director of product management for Scality. I'm joined by Giorgio Regni, who's the CTO and founder, and we'll try to make this interesting and make it a dual presentation: I'll take a little bit of the intro, then I'll have Giorgio come in with some of the technical details about how we integrate with OpenStack, and at the end we'll leave a little room for questions.

So let me start this off. Data storage: why do we need a different architecture and a different approach for OpenStack? I think the key thing to recognize is that the demands are changing dramatically. One of the things we're going to start seeing now, as the software-defined data center evolves with OpenStack, is that we're going to need storage systems that are much more agile and able to handle diverse types of data. What we've seen in the industry is systems that can address block storage adequately, or file storage, or object storage. But what about creating a system that merges these capabilities and provides a very scalable, durable foundation for multiple workloads, multiple applications talking to the storage, and really provides an agile layer that can address these changing requirements?

One of the things we're starting to see, and I think we've all heard it here, is that people are planning OpenStack deployments that address a wide spectrum of workloads. We're going to see hundreds and thousands of VMs in these clouds, and they're going to want storage for the VMs themselves, the VM images, but also the application data. So that's really the challenge here: to design a system that can address that with durability, but also to do it at the
economies that the cloud demands, right? We have to figure out a way to do this in a very cost-efficient manner.

Okay, so we think the right answer here is software-defined storage. We've all heard the term software-defined data center. We know that Nova represents the compute side of the equation in OpenStack, and there's Neutron for software-defined networking, but we really need the storage foundation that complements the other components of the stack.

The way we've addressed this is through the Scality RING. It's a software approach to doing scale-out storage, and the idea is that it's entirely hardware agnostic. This gives you the freedom to select the hardware platform tier that's most optimal for your needs: you might choose highly dense servers for an archiving application, or higher-performance servers if you have high-IOPS demands. So that's the hardware layer, which you get to choose and scale out. We layer on top of that, and what do we provide? A very scalable architecture with a routing protocol that can do fast storage and fast retrieval of millions and billions of objects as you run out of resources.
You're able to add more resources on the back end to complement that.

On top of that we layer a variety of data protection schemes. One approach that's been taken in the industry is that every time you store an object, you keep multiple replicas, and that's one of the policies we can enable; in fact, the application can choose the class of service, so you can say "I want two replicas," or perhaps even more, "I want four replicas." The other approach being taken is erasure coding. Erasure coding is an optimal data protection strategy for larger data: if you need to store image data, video data, or large documents, it ends up being a much more cost-effective, lower-overhead approach to storing that data with high durability, with the ability to protect against multiple failures in the system. So the system really provides those as choices, as policies, and lets the application make the right determination for how to store these things.

On top of that, we provide a set of connector interfaces, which gives applications connectivity to the storage system, and as we've said before, we see a variety. Most of today's legacy applications are file based, doing things over NFS, or SMB on the Windows side; these are very common mechanisms, and we support them, as well as local file-system-type adapters over things like FUSE. Then there are newer-style object applications using REST APIs; this is today's scalable, very simple addressing mechanism for putting objects into and retrieving objects out of a storage system. We have a number of object connectors, and the one we're going to talk about here is the Swift connector, specifically for OpenStack. And then there are VMs: we need to be able to store our VM images and our VM data stores in the storage system, and really to do this for the class of virtual machines that make up the bulk of the demand, the 80% of our VMs that don't need
super high-end IOPS or the ultra-high-end requirements that might demand an all-flash array. To be able to consolidate all of these into a single infrastructure is what we're really after.

So where are the use cases? What kinds of deployments are customers using the Scality RING for today? It's really quite a variety; let me describe a couple of them. One of them is the active archive. This is something we're starting to see as an emerging category, a use case in industries like media, where people generate lots of content, lots of video content and image content. Typically they've put these on tape in the past. They would like a more online, accessible archive, something that builds over the course of many years but allows them to do quick retrieval and really have instant access to the data. We're starting to see deployments at customers in the hundreds-of-petabytes scale for some of these active archives. For example, we have a customer at Los Alamos National Laboratory which is planning for the day, here very shortly, of half an exabyte.

In the web and cloud use case, we have a lot of customers doing consumer messaging and consumer email deployments, but now we also have video services inside the cloud; for example, customers like Dailymotion are storing their user-driven content in the Scality RING. I think the common element here is customers that today see the need for petabytes and quickly see their demands growing to hundreds of petabytes. In some cases they need to ingest the files or the data over file APIs and then retrieve them over object APIs, so this intermixed access pattern is something that's critical for them.

Okay, so let's start talking a little bit about OpenStack. We are very much in line with the vision of OpenStack. Our world is all about software defined, and we believe that's where OpenStack is as well, so there's a really nice alignment and synergy. We've actually been involved in the
community for a couple of years. We actually introduced our first Cinder driver almost two years ago, with the Grizzly release, and have maintained and supported it for all the incremental releases going forward. What we're announcing this week is our support for Swift: we provide a Swift connector, and Giorgio will also talk to you about some of the work we've been doing in contributions to the open source. Much more work is planned to integrate with all of the other services, but we want to tell you about what's real today and the work that we've done. So with that, let me hand it over to Giorgio. Thank you.

Not sure it's working. Oh yeah, it is, perfect. So I'm going to walk you through some of the integration points with OpenStack. There are two main pieces: Swift for the object storage, and Cinder for the block, VM-type storage. You install a standard Swift deployment, it can be Red Hat, it can be Ubuntu, it can be Mirantis, in the end it doesn't really matter, and then you plug in the Scality connectors. You can mix and match with all the other types of connectors in your environment.

So, to zoom in on Swift first. The way we've done the integration, we didn't want to change Swift itself, because Swift has connections to Keystone, connections to a lot of other OpenStack projects, and we wanted to keep that intact.
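To make the shape of that integration concrete, here is a minimal conceptual sketch, not Scality's actual code, and all class and method names here are invented for illustration. The idea is that Swift's upper layers (containers, listings, authentication) stay untouched, while the connector swaps in a different store for the object data itself:

```python
# Conceptual sketch of a pluggable blob store beneath an unchanged Swift-like
# front end. RingBackend stands in for the Scality RING; ProxyLayer stands in
# for Swift's proxy/container/auth machinery, which is left intact.

class RingBackend:
    """Stores raw object data (blobs), keyed by (container, object name)."""
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]


class ProxyLayer:
    """Container metadata and listings stay here; blob data goes to the
    pluggable backend, so the front-end behavior is unchanged."""
    def __init__(self, backend):
        self.backend = backend
        self.containers = {}  # container name -> list of object names

    def put_object(self, container, name, data):
        self.containers.setdefault(container, []).append(name)
        self.backend.put((container, name), data)  # data bypasses Swift storage

    def get_object(self, container, name):
        return self.backend.get((container, name))

    def list_container(self, container):
        return sorted(self.containers.get(container, []))


proxy = ProxyLayer(RingBackend())
proxy.put_object("photos", "cat.jpg", b"...jpeg bytes...")
print(proxy.list_container("photos"))  # ['cat.jpg']
```

An application talking to the proxy never sees which backend holds the bytes, which is the property Giorgio describes next: nothing above the storage layer has to change.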
So what we do is interface below the proxy layer of Swift, and we're a method to store blobs: for those who know Swift, we are part of the object controller. We basically provide a way to push the data into Scality directly, but keep the containers and the authentication on an existing Swift ring. This way we don't change the way Swift interacts, but by going through the Scality RING you benefit from our erasure coding and the scale at which we can be deployed.

Okay. Yeah, if you look at this, this is a screenshot of a Scality back end to Swift. That's a container that's backed by Scality, and it's transparent to the application, so any application that can work with Swift can then work with Scality.

Cinder: so Cinder is for compute, for VM storage. As Paul said, we are not trying to be the fastest VM back end. What we're trying to do is to be capacity optimized, and if you deploy us for the object side, you can point your slow, large VMs at us without having to deploy any extra hardware. The way it works is that we are tied into Cinder, so you will see the back end, and that back end creates and operates on files in our system; the files can be of any size, and they look like a VM to the outside. We're working on another type of Cinder driver, and I think we have a slide about that in the coming slides. So that's an example of a volume inside of Horizon with Scality as the back end: the volumes appear as files in the Scality file system, the files can be of any size, and we take care of distributing the load on the back end.

So this is the REST block driver. It's the second version of our Cinder integration, and it's going to be in Kilo; it's not in Juno yet. What it is: to get a little bit tighter and faster integration, we wanted to create a block driver in the kernel that can talk REST on the back end. So think of it as a block driver in Linux that can talk to any object store as a back end for the actual pieces of VM storage. So this is an overview.
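The core idea of a block device backed by an object store can be sketched in a few lines. This is purely illustrative, not the actual kernel driver: the chunk size and naming are assumptions, and a dictionary stands in for the REST object store (whole-object GET/PUT):

```python
# Sketch: a linear block device whose storage is a set of fixed-size objects.
# Each CHUNK-byte slice of the device maps to one backing object; the dict
# `store` stands in for an object store accessed with whole-object GET/PUT.

CHUNK = 4096  # bytes per backing object (illustrative choice)

class ObjectBackedDevice:
    def __init__(self, store):
        self.store = store

    def _chunk(self, index):
        # GET the backing object, or all zeros if it was never written
        return bytearray(self.store.get(index, b"\x00" * CHUNK))

    def write(self, offset, data):
        while data:
            idx, pos = divmod(offset, CHUNK)
            chunk = self._chunk(idx)
            n = min(CHUNK - pos, len(data))
            chunk[pos:pos + n] = data[:n]
            self.store[idx] = bytes(chunk)  # PUT the whole object back
            offset, data = offset + n, data[n:]

    def read(self, offset, length):
        out = bytearray()
        while length:
            idx, pos = divmod(offset, CHUNK)
            n = min(CHUNK - pos, length)
            out += self._chunk(idx)[pos:pos + n]
            offset, length = offset + n, length - n
        return bytes(out)


dev = ObjectBackedDevice({})
dev.write(4090, b"hello world")  # this write spans two backing objects
print(dev.read(4090, 11))        # b'hello world'
```

A real in-kernel driver adds caching, concurrency, and error handling, but the collapse of layers Giorgio mentions is visible even here: block offsets translate directly to object operations, with no file system in between.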
It's REST-based; it does GET, PUT, DELETE. It works with our CDMI connector as well. What it does is: you install it on the host side and create as many block devices as you need, and the block devices are backed directly by object storage. You kind of collapse all the different tiers, because you don't need a file system, you don't need all the different layers; you go directly from a kernel block device to the object storage. This code is open source, it's available at that URL, and it's an announcement that we made yesterday.

So thanks, Giorgio. I think one of the key things we wanted to convey to you is that we're really trying to provide a series of services for data storage underneath OpenStack. All of these follow the core theme of being scalable, to really support big workloads; we're talking about things that will eventually get into the tens and hundreds of petabytes range. The key idea here is that this is a single system that unifies the ability to do VM storage and application data together. The closest analogy,
I think, in the industry, is of course Amazon Web Services with EBS. They have a variety of flavors of EBS, magnetic and now flash. The way to think about the performance we offer is in the range of the IOPS that EBS magnetic offers, but doing this in conjunction with the application data as supported by S3, all of it with these multiple data protection mechanisms. And it's key to note that our system is also optimized for a range of application sizes, or data sizes. Some data storage systems tend to be optimized only for big data, megabyte- and gigabyte-sized objects; what we see is that typically these use cases have a mixed workload, and it's often more challenging to address the needs of small-file use cases. So the key thing is that this is a system that can really enable low total cost of ownership. If you have very high IOPS requirements, there are purpose-built systems that do thousands and tens of thousands of IOPS per VM, but then the TCO goes up commensurately. So we think this is a really perfect match for the types of deployments we expect to see here, with big server deployments and wide varieties of workloads in the cloud.

As Giorgio mentioned, there are three specific offerings that we have available, all current and available today. The Scality Cinder driver can be downloaded right now from the OpenStack Cinder drivers webpage. The OpenStack Swift connector for object storage is available from Scality. And our REST block driver is also available as an open-source offering, so that's something that's here today.

We as a company are very, very committed to OpenStack.
We're investing in a dedicated team; we have that team working on these open-source projects, as we've talked about, and we're providing resources and time for doing code reviews as well. We really want to be involved in the community and help advance the state of the art, in storage but also in the related services, things like Ceilometer for telemetry and the various other services in OpenStack.

A little bit on us: the company has been around for about five years. We're privately held, R&D is based here in Paris, and we have our US operations in San Francisco. We're about 90 people and experiencing very rapid growth, driven by some of the use cases we talked about earlier, and we're really starting to see people pick up on this trend of a software-enabled storage system that decouples their choice and gives them freedom on the platform side.

We have a booth just over here down the hall, so come and see us if you have more detailed questions, but if you have any questions right now, we're certainly happy to take them. Anybody with any questions for either Giorgio or myself? Okay, well, we'll stick around, so come up and see us, and thanks very much for your time.