All right, great. Thanks, everybody. I'm Leo Leung, VP of Marketing for a company called Scality. We provide software-based scale-out storage for petabyte-scale applications. Today I'm going to walk through our OpenStack approach, what we're seeing in the storage environment, and some use cases and case studies of Scality deployments. As much as we'd like to believe we're now in a world of pure flexibility, with the cloud-like properties we're all looking for, the market is falling into the trap we've been in for decades: the assumption that you need a different type of storage for every use case or every application requirement. While some vendors here talk about one system that's good for every single application, what we see in practice is people splitting their storage into silos. So even as OpenStack requirements grow, with thousands of VMs and enormous amounts of application data, we're falling back into the trap of building many storage silos, which brings back all the problems people have had for decades managing those silos. In addition, while you're seeing more and more adoption of software-defined storage architectures, plenty of technologies are still tied to hardware and proprietary stacks. I'll talk about that as well, versus an approach that gives you the economics you're really looking for. So in some ways OpenStack is trying to support different requirements, but it also adds complexity: different APIs, different ways to manage your storage environment.
You're familiar with many of these projects and services for handling both ephemeral and persistent storage, from boot volumes to block storage, to the image repository, and now the file storage service called Manila. So there are lots of services and lots of things to think about as you build your storage environment. When we look at the requirements, though — another way to look at why you would need different types of storage — the real drivers are the applications and the data types. Talking to customers, images tend to be smaller scale, in the megabytes. When you start storing whole catalogs, you get into the gigabytes. But what we're really seeing is an explosion of application data: not only retention of more and more data from these applications, but larger and larger data sets. A classic example is media and video, where a single 4K frame can be around 50 megabytes, which adds up to terabytes for an hour of content. So large content, from documents to video to IoT data, is driving the storage requirements. And when you look at the block diagram I showed earlier, there have been various approaches to these different data types, whether you're dealing with the boot and ephemeral data, the application data, object storage, different types of storage underneath, and now the file environment as well. It's interesting that file wasn't a big discussion point until very recently, but there are use cases that require it.
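To put that media example in perspective, here is a back-of-the-envelope calculation. The 50 MB per frame figure comes from the talk; the 24 frames per second rate is my own illustrative assumption:

```python
# Rough storage estimate for an hour of uncompressed 4K content.
# Assumptions (illustrative): ~50 MB per frame, 24 frames per second.
FRAME_MB = 50
FPS = 24
SECONDS_PER_HOUR = 3600

total_mb = FRAME_MB * FPS * SECONDS_PER_HOUR
total_tb = total_mb / 1_000_000  # decimal terabytes

print(f"{total_tb:.2f} TB per hour of content")  # 4.32 TB per hour of content
```

At higher frame rates or bit depths the number only grows, which is why a single production can push well into the tens of terabytes.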
One approach is to layer lots of different storage technologies underneath all these different data types, protocols, and requirements. And again, this is what we're seeing: falling into that trap of building many different silos to service the different applications and requirements. We fundamentally believe it doesn't have to be that way. There are now technologies that can encompass multiple use cases and multiple types of data sets, and reduce complexity rather than increase it. What we see is a world moving away from purpose-built storage for every use case and every application. Among the customers we talk to today, it's typical to see three or four different types or tiers of storage — and in some cases many more, if you count all the locations and facilities: the classic SAN, NAS, multiple tiers of NAS, some object storage, and even tape. We're seeing a movement away from that toward essentially a two-tier world. You're always going to have very latency-sensitive applications, and that tier is moving away from the SAN toward products such as all-flash arrays. There's certainly a need for that, but it's usually a small amount of data relative to the overall data set, and sometimes it needs custom hardware. For much of the rest of the data I just described — documents, images, backups, large media — you can handle it with what we'd call capacity-driven storage. Encompassing all those different file types and use cases up to exabyte scale is what's available now.
Five years ago, maybe not so much, but today you can absolutely do it with various technologies that are available. I laid out some of the challenges earlier. First, massive capacity growth. People typically cite around 50% growth, but it varies greatly among the customers we deal with. In classic enterprises we actually see more like 10% to 15% growth every year, whereas certain industries — genomics research, for example, media, oil and gas — see more like triple-digit growth in data. So there's a big spread, but it's certainly a challenge, and the desire is to scale out a solution without a lot of work: continuously growing that environment without new administrators and without taking the system down. Second is the silo problem, which still exists. As much as we'd like to say it doesn't, go into any enterprise today and you'll see many tiers of storage, many islands that don't talk to each other and require separate maintenance. We believe it's possible to consolidate a lot of those into something that can handle all the capacity you need without adding overhead — actually making the environment simpler. Third is the requirement to be always on. The classic three-nines or five-nines availability, which typically doesn't count maintenance, is no longer adequate. There's not a single one of you here who will tolerate a service being down for even a few minutes, whether in your day-to-day work life or your personal life. So the infrastructure that supports it has to change; it cannot be three-nines or five-nines infrastructure any longer. And then finally, the economics.
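To see how different those growth rates really are, here is a small sketch that compounds each annual rate. The rates are the ones mentioned above; the five-year horizon is my own illustrative choice:

```python
# Compound annual capacity growth over a five-year horizon.
def capacity_multiplier(annual_growth: float, years: int) -> float:
    """Total capacity multiple after `years` of compound growth."""
    return (1 + annual_growth) ** years

YEARS = 5
for label, rate in [("enterprise 10%", 0.10),
                    ("enterprise 15%", 0.15),
                    ("often-quoted 50%", 0.50),
                    ("triple-digit 100%", 1.00)]:
    print(f"{label}: {capacity_multiplier(rate, YEARS):.1f}x in {YEARS} years")
```

A 10% enterprise grows about 1.6x in five years; a 100%-growth environment grows 32x — which is why the two call for very different scale-out strategies.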
I'm happy to talk about that in depth, but what we see is that proprietary storage hardware uses commodity components with a very large margin built in, versus buying software and running it on standard x86 servers. A large storage vendor typically has margins in the 50% to 60% range, versus a server vendor whose margins are only around 10%. So there's a huge cost difference between proprietary storage hardware and standard x86, even name-brand x86. So, I represent a company called Scality. We provide a software-based storage solution that addresses the things I just talked about. First is the ability to run on standard commodity x86 hardware and standard Linux to build an environment at massive scale. One of the interesting things you'll see as you roll out these solutions is that your hardware environment becomes very heterogeneous, even if it's all commodity. Having experience with those scenarios is what gives you the reliability you want. For example, we have customers running five different generations of hardware, in completely different form factors, under one system in production — a common scenario, particularly over time. Second is the ability to create a very large pool of storage, and that's not just capacity but also the number of objects in the system. For example, we have a customer with over 60 billion objects in a single system — so not just capacity, but many, possibly small, objects. Third, reliability: not just from a data protection perspective but also availability and fault tolerance, across both software and hardware failures, which is very key to this type of storage.
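Returning to the economics point for a moment, here is a simple sketch of how gross margin translates into the price a customer pays for the same underlying commodity components. All dollar figures are hypothetical, chosen only to make the arithmetic concrete:

```python
# Hypothetical illustration: how gross margin inflates the customer price
# of the same commodity hardware. All dollar figures are made up.
def customer_price(component_cost: float, gross_margin: float) -> float:
    """Price the vendor charges so that margin / price == gross_margin."""
    return component_cost / (1 - gross_margin)

COST_PER_TB = 100.0  # hypothetical component cost per terabyte

storage_vendor = customer_price(COST_PER_TB, 0.55)  # ~55% margin
server_vendor = customer_price(COST_PER_TB, 0.10)   # ~10% margin

print(f"proprietary storage array: ${storage_vendor:.0f}/TB")
print(f"standard x86 server:       ${server_vendor:.0f}/TB")
```

At a 55% margin the same $100 of components sells for about $222/TB, versus about $111/TB at a 10% margin — roughly a 2x difference before you even factor in software licensing.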
Performance: you can certainly get performance in many different ways, but we believe the architecture itself has to be high performance and able to serve many applications in parallel. When you're talking about consumer clouds or video streaming, for example, the performance requirements on the storage are very high, and we can deliver that natively. And finally, the platform supports not just archive use cases but also tier-one, higher-performance use cases. The way we've framed it with customers: there's a low-latency requirement and a high-capacity, high-bandwidth requirement, and we fall into that second category. Okay, so we've taken this ring architecture, the Scality RING product, and applied it to OpenStack. We've been working with OpenStack services for close to three years now, starting with Cinder, then building up Swift support — we fully support the Swift API, including all the storage policies. We announced Glance support earlier in the year, and we now have a technology preview of a Manila driver as well. So all the APIs and protocols you're familiar with can be leveraged on top of the storage platform I just described. Let me talk through a few use cases, and then more generally about what Scality has done. Recently we've been working with a large service provider on a very big file environment: a data-as-a-service environment of over 12 petabytes across multiple sites. They started with a different solution, and as they moved toward production they moved to us — a seamless ability to keep taking advantage of the OpenStack APIs while changing the backend. They're now going into production with us, multi-site, using erasure coding across those sites, and supporting many different clients hitting that same environment.
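On the erasure coding point, a quick sketch shows why erasure coding is attractive for multi-site protection compared with plain replication. The 9+3 layout below is my own illustrative choice, not necessarily what this deployment uses:

```python
# Storage overhead of k+m erasure coding vs. n-way replication.
def ec_overhead(k: int, m: int) -> float:
    """Raw bytes consumed per byte of user data for k data / m parity fragments."""
    return (k + m) / k

# Illustrative 9+3 layout: data survives the loss of any 3 fragments
# (e.g., disks, servers, or even a whole site, depending on placement).
print(f"9+3 erasure coding: {ec_overhead(9, 3):.2f}x raw storage")  # 1.33x
print(f"3-way replication:  3.00x raw storage")
```

Both schemes tolerate three simultaneous fragment or copy losses, but erasure coding does it with roughly a third of the raw capacity, which matters enormously at the 12-petabyte scale mentioned above.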
So again, what we've seen is that you need the ability to scale capacity, but you also need the ability to scale performance: handling multiple clients hitting the environment, with reads and writes at large scale. A second customer is Numergy, a public cloud IaaS provider, currently using Swift and looking at other implementations using OpenStack as well. Moving to a broader set of use cases: Scality came from the service provider space, and we have over 30 customers using us in production for email storage. That's a very tough workload — small files, large files, many objects (customers with tens of billions of objects), and often multi-site. Going left to right, you have large archives, which often come with very high bandwidth requirements. Video production, for example: you're dealing not just with larger content, but, if you're handling video on demand, with supporting binge watching — producing 13 or 22 episodes at a time. That's massive scale in the storage needed to support those environments. In the second category, you have enormous amounts of video on demand. In every geography we work in, end users are saying, "I don't want to watch broadcast anymore; I want to watch content whenever I want it" — which means storage. And you're not just storing the titles available today; potentially you're recording for network DVR, recording personal programs people want to watch later. All of that requires storage, and a huge amount of bandwidth capability out of the storage in addition to capacity. Deluxe OnDemand is a customer like that. As I said, there are lots of web and cloud service providers, many of them in the email space, but more and more in other areas. Recently we signed a document provider.
They have billions of documents to store for their customers, most of which are law firms. Another customer is building a big radiology cloud, serving many private practices and hospitals with radiological images, where in the past people were literally still shipping things around — sometimes through FedEx, sometimes through very basic file stores. And the last category, which I think is the most interesting, is the notion of leveraging a unified platform across many different use cases. Often that first use case is backup, archive, or sync and share, but once you build a platform with those economics that can handle that much scale, you start to get great efficiencies of scale. A little plug for the company: we've been around since 2009, so we're not a brand-new entity, and we've had many customers in production for years. Our oldest customer is from 2010 and has had 100% availability that entire time. They've grown 20 times in capacity, to over two petabytes, with many generations of hardware live at the same time, and hardware retired, all while staying available — and I think that's a testament to what we're able to do. We have over 170 employees and just recently got some additional funding, so check us out if you're interested in working for a company like this. We're over at booth T60, right down the hall here. Happy to talk more about your challenges, your use cases, your applications, and try to figure things out. We're not a fit for everything, unlike what some other people here might say, but happy to talk through the use cases you might have. I'll also point you to scality.com/trial. It's completely free: you can sign up and run the software in a completely hosted environment, so you don't have to stand up VMs or hardware. You can try the technology yourself.
Absolutely free: scality.com/trial. Thanks very much, and I'll take any questions on the side.