All right, I think we should get started. We've got a very tight schedule here; this talk is supposed to end at 12:45. Neil and I are co-presenting, but before that, let me introduce myself. I'm Tushar Katarki, and with me I have Neil Levine; we're both going to talk about enterprise storage for OpenStack. I'm a product manager at Red Hat in the storage business, and Neil comes to us from Inktank, the acquisition that was talked about, even in the previous panel. Since this all happened really quickly in the past two weeks, what Neil and I decided was that I'd go through a quick introduction of OpenStack in general and all the storage technologies from a Red Hat point of view, and then Neil will focus on the Ceph part, because Ceph is very popular within the OpenStack community. So we'll split it up that way, and hopefully we'll have a few minutes at the end for questions.

So without further ado, really quickly, a bit about me. I've been at Red Hat for three years, this is my second OpenStack Summit, and it's always great to be here and see all the people; I'm sure you feel the same. Neil will say a little bit about himself when he comes up. Here's what I want to talk about: a quick introduction to OpenStack, as well as Red Hat's take on OpenStack, and then we'll dive into the storage use cases in OpenStack. A lot of you might be aware of Cinder and the volume service, but there are a few other storage services you've got to consider in OpenStack. So we'll talk about those: specifically Cinder, as I said; Manila, which is the newest project, for file sharing; and finally OpenStack Object Storage, Swift, which is probably one of the oldest projects in OpenStack. And then we'll have a quick summary, and then Neil.

With that, let me go over what OpenStack is. OpenStack really is an infrastructure platform for cloud-enabled workloads, and one of its very basic tenets is a modular architecture. Because of that, you'll see all these different services, which we'll talk about in a bit. The modular architecture is there for flexibility, not only from a user but also from a developer perspective: developers can hide implementation details and make progress independently, while from an admin perspective you can expect these various services to talk to each other. So there is a lot of value in that modular architecture. It's also designed for easy scale-out. The idea with cloud is that you want it to be elastic, and for it to be elastic you want your services to scale out seamlessly: start small, scale out, scale down, that sort of thing. And finally, it's based on a core set of services, which you'll see here, and it's ever growing. That's one of the exciting things about OpenStack: there's a basic set of core services, and then all these projects which help you add value on top of it.
So at a very high level, for those of you who don't know: Horizon is the dashboard, which gives you self-service, and then you have all these core services below that, starting with Nova, which is your compute service. That's where you get your virtual instances; you request one and Nova gives it to you. And then you have three storage services. One is Glance, the image service; that's where you store your ISOs and OS images, if you will, to boot your guests from. Then you have the volume service, Cinder, which gives you what you can think of as virtual block devices. And then you have the object store, which is Swift. Rounding it out, you have networking, Neutron, which was formerly called Quantum, and finally on the right you have the identity service, which is Keystone. So that's a very quick, 30-second tour of OpenStack so that we're all grounded.

Next, a quick introduction to Red Hat's offering with OpenStack. We call it Red Hat Enterprise Linux OpenStack Platform. One of the things we realized is that OpenStack is innovating and moving fast, while at the same time enterprises are always looking for stability; they want to deploy it in production and manage it. So what we did was take OpenStack and ship it on top of Red Hat Enterprise Linux. With that, what you get is OpenStack integrated with the world's most trusted and proven Linux operating system, so you get the hardening, the stability, and the life cycle associated with it, which enterprise customers care about. The other advantage is that Red Hat Enterprise Linux already has an extensive partner ecosystem, so you're able to leverage that with OpenStack and extend it. So it's bringing the best of both worlds: OpenStack innovation and Red Hat Enterprise Linux stability. That's basically what you're getting with Red Hat Enterprise Linux OpenStack Platform. I mention this because this talk is about the storage choices in such an environment; this is the Red Hat point of view, if you will, though the descriptions are fairly generic too.

So what are the OpenStack storage use cases? We talked about Cinder, which is by far the best known, I would say; that's your volume service. Once you create a guest, a virtual instance, you really need some virtual block devices so that you can put file systems on them and do whatever you want. That's your volume service, and you do it with Cinder, the multi-tenant block storage service. The other one, which doesn't receive much attention, is local storage. Think of it as ephemeral storage: the disks attached to the hypervisor. Local storage tends to be LVM on local disks. But if you're looking at live migration, and your hypervisor requires shared storage for that, as KVM does, then you will need some shared storage beneath it, so bear that in mind. The next one really is Glance, which is the image repository. Like I said, this is where you put your ISOs, and your Nova instances are provisioned from the images stored there.
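As a quick illustration of that image-repository role, uploading a guest image with the Glance CLI of that era looks roughly like the sketch below (the image name and file are placeholders, and exact flags vary by client version):

    # upload a qcow2 guest image into the Glance image repository
    glance image-create --name rhel7-guest \
        --disk-format qcow2 --container-format bare \
        --file rhel7-guest.qcow2

    # confirm it is registered and available to boot instances from
    glance image-list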
And then you have the object store, which we'll talk about a little later; it can be used for things such as backup and a couple of other use cases we'll get to. So those are the current use cases; these are projects which are already within OpenStack. Now there are a couple of emerging use cases. One is Manila, which is file sharing, and we are extremely excited about that, because we're working with people like NetApp to basically create what we call multi-tenant network-attached storage as a service. That's project Manila, and I'll talk about it in detail as well. And then finally you have Hadoop in OpenStack. Think of it as elastic Hadoop, if you will; that's how I look at it. There you need storage too, object and file services, and that's project Sahara, which Red Hat is also contributing to. I'm not going to spend too much time on Hadoop or on local ephemeral storage, since we don't have time, so I'll focus on Cinder, on Manila, which is the file share service, and on Swift, which is the object service.

With that, let's look at Cinder, the volume service for block. This is a high-level representation of what Cinder is all about from a user perspective. There are a couple of workflows you see at the top. The first one is: I want to request a Nova instance, a virtual instance; then I want to request a volume, a virtual block device; and then I want to attach that volume to the instance so that I can do whatever I want with it. That's one. The second workflow is slightly different: I want to request a bootable volume, as opposed to just a regular volume, and then boot a Nova instance directly from that volume. Those are the two high-level use cases. In both cases you could do it through the Horizon self-service portal, or, since they're all RESTful APIs, through the Nova API and the Cinder API directly. Anyway, when you request a Nova instance it goes to Nova, scheduling and all that good stuff happens, and ultimately what you get is a guest. Next you request a volume, so it goes to the Cinder API; there's a queue, and then it gets picked up by the Cinder scheduler. The scheduler is policy-driven, and you can have different kinds of policies based on filters and weights; I'll talk about that. Cinder has this concept of storage back ends, and you can have different ones: shown here are Ceph and Gluster, but it could be LVM or other vendors' storage back ends. So that request for a volume, based on your filters and weights, gets scheduled to one of those storage back ends, and the volume is created. The next thing you do is attach that volume to a guest, so that it shows up as, for instance, /dev/vdb inside the guest. That's a virtual block device; that's one use case.
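Mapped to the command line, those two workflows look roughly like the following sketch (instance and volume names are placeholders; flags vary slightly by client release):

    # workflow 1: create a 10 GB volume and attach it to a running instance
    cinder create --display-name data-vol 10
    nova volume-attach myvm <volume-id> /dev/vdb

    # workflow 2: create a bootable volume from a Glance image,
    # then boot an instance directly from that volume
    cinder create --image-id <image-id> --display-name boot-vol 20
    nova boot --flavor m1.small --boot-volume <volume-id> myvm2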
The other use case, like I said, is that you request a bootable volume, it gets created, and then you just boot from it. So those are the two use cases with the Cinder API, at a very high level. Next, let me start talking about the different back ends for Cinder. Ceph is obviously one of the most popular ones, and Neil is going to go into that, so I'll leave it to him; I'll talk a little bit about Cinder with Gluster. Gluster is a distributed file system, but what we have done, with contributions from others in the community, is what is known as block-on-file: virtual block devices backed by files. For those of you who don't know Gluster, let's start there. You start with your disks, you create LVM volumes, and you put an XFS file system on top; that forms a brick. You take multiple bricks like this and stitch them together into a global namespace, and that becomes a Gluster volume. You then export that Gluster volume, and on the compute side Nova goes through libvirt and QEMU/KVM, and it can access the Gluster volume either via FUSE (file system in user space) and the Gluster client, or through a contribution that IBM made about a year ago, known as the block device (BD) translator, which provides a translation layer from QEMU/KVM into libgfapi, our POSIX-like library interface for Gluster. Either way, you get to Gluster, and your guest sees what is basically a block device backed by a file on the Gluster file system. That's how we do it with Gluster.

The final thing I want to talk about with Cinder is tiering. One of the advantages I mentioned is that Cinder can have different back ends, and there are really two advantages to that. First, it allows for scaling: not only can the underlying storage systems scale out, Gluster can scale out, Ceph can scale out, but you can also think of Cinder itself scaling out by having different storage back ends. The other thing it allows you to do is tiering, and I want to touch on that real quick. You're probably familiar with Amazon, where you can get provisioned-IOPS EBS volumes or regular EBS volumes. As you start implementing Cinder, you've got to start thinking about tiering in the same way. One way to look at it is as the SLA you could offer to your internal customers if you're doing a private cloud with OpenStack. For instance, you could have a performance-optimized tier, a throughput-optimized tier, and a capacity-optimized tier; the performance tier would be based on all SSDs with certain IOPS guarantees, whereas the capacity-optimized tier on the right-hand side is more cost- and capacity-optimized. You can do that with Cinder by defining your storage tiers and then choosing your filters and weights. Currently there are a couple of filters and weights available in Cinder, mostly capacity-based: you can say spread it around, or fill one up first, that kind of capacity policy.
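A hedged sketch of what that multi-back-end tiering setup can look like; the back-end names, pool, and share file here are hypothetical, and option names may vary by release:

    # /etc/cinder/cinder.conf (excerpt) -- two example back ends
    [DEFAULT]
    enabled_backends = ceph-ssd,gluster-capacity

    [ceph-ssd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    volume_backend_name = ceph-ssd

    [gluster-capacity]
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares
    volume_backend_name = gluster-capacity

    # define tiers as volume types and pin them to back ends
    cinder type-create gold
    cinder type-key gold set volume_backend_name=ceph-ssd
    cinder type-create bronze
    cinder type-key bronze set volume_backend_name=gluster-capacity

    # a volume request carries the tier with it
    cinder create --volume-type gold --display-name fast-vol 50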
But obviously, for the developers among you, that's an area of OpenStack Cinder which could use some more innovation; maybe an IOPS-based filter and weigher, for instance. You then create a corresponding volume type, and through the Cinder API you just say: create a volume of this volume type. For instance, you could have volume types gold, silver, bronze, platinum, and the volume then gets placed on the tier that gives it that kind of SLA. The Cinder scheduler does the rest: when you create a volume of the desired volume type, it filters away the storage back ends that don't match that profile, picks the ones that do, and then, based on a weighing scheme or policy, schedules the volume there.

Next up, let's talk about file. At a very high level, Manila is multi-tenant, secure file shares as a service; think of it as Cinder for shared file systems. In fact, it started within Cinder, then it was split out, and now it's an independent project. Right now the NFS and CIFS protocols are supported. Let me quickly illustrate what file sharing means here. Think about all these guests, one through eight, and you want a marketing share, an R&D share, and a finance share, shared across those different guests. This is the service which enables that; that's project Manila, currently with NFS and CIFS. Why are we doing it? Because customers are asking for it: there's still a lot of file out there, applications requiring file access, and a lot of storage is still file-based, so we needed something, and in true OpenStack fashion we wanted to do it in a platform-agnostic way. Who are the contributors? NetApp is leading the effort, and Red Hat is a contributor along with EMC and IBM, among others. You can find it upstream today, and on Wednesday there's a talk which goes into the details of the Manila implementation, for those of you who want to dig deeper.
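As a rough sketch of what that share-as-a-service workflow looks like from the client side (Manila was still incubating at the time, so commands and flags may differ by release; the share name and IP below are just placeholders):

    # create a 1 GB NFS share for the marketing team
    manila create NFS 1 --name marketing

    # grant a specific guest access to that share by IP
    manila access-allow <share-id> ip 10.1.1.1

    # list shares and their export locations for mounting inside guests
    manila list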
So what are we doing with Gluster for Manila? Right now we have a single-tenant GlusterFS driver, which uses NFSv3 via Gluster NFS, the user-space NFSv3 implementation, and in the future what we want to do is expand this into more of a multi-tenant driver. This is the crawl-walk-run approach, if you will, and we're at the crawl stage. To illustrate that real quick: again, you want these different shares, say sales, marketing, and engineering, and you create those shares on a Gluster volume managed by Manila (the blue in the diagram is Gluster). Then, using the Manila API and the GlusterFS driver, it keeps track of each share, /marketing for instance, behind Gluster, and when you say "manila access-allow", it grants access for a particular guest, for instance 10.1.1.1, to the marketing share. So it maintains that access list, if you will: marketing can be accessed by 10.1.1.1, and only that guest is allowed in. That's the file-sharing piece, and you can obviously attach a share to multiple guests so that you can actually share the files.

The next one is object storage: Swift. For those of you who don't know it, in five seconds: it's a RESTful-API, highly available, distributed, eventually consistent object store. The bottom line, and I've taken this from the Swift wiki, is to store data efficiently, safely, and cheaply. Object is certainly an emerging use case. So what are the use cases for Swift in OpenStack? The main ones are: a backup target, so if you have Cinder and you want to back up your volumes, you can use Swift (Ceph also provides that, so you can use either); a Glance image store, so instead of the image store being LVM or a file system such as Gluster, you can use Swift; Hadoop, which is an emerging use case with project Sahara; or simply, as in any cloud, standing up an object store so that tenants can store objects in it. So what are the storage options available for OpenStack Swift? Swift started off with XFS on LVM as the storage back end it could offer. When we looked at Swift from the Gluster side, what we did was contribute something known as Gluster-Swift, which basically allows you to have Gluster as a storage back end for Swift: Swift on top, and underneath either the traditional local file systems or Gluster. We've actually since renamed it Swift-on-File, because what we found is that the contribution can let generic file systems be used underneath Swift, not just Gluster. And the other work we did was around the DiskFile abstraction, which again enables multiple storage back ends; potentially even Ceph could, at some point, plug in through something like that.
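To make the backup-target use case concrete, here's a minimal sketch using the standard Swift CLI, plus the Cinder backup path, assuming the Swift backup driver is enabled in cinder.conf (the container, file, and volume names are placeholders):

    # create a container and push an object into Swift directly
    swift post backups
    swift upload backups volume-snapshot.img
    swift list backups

    # or, with backup_driver = cinder.backup.drivers.swift configured,
    # let Cinder back a volume up to Swift for you
    cinder backup-create <volume-id>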
So the other exciting thing happening in Swift, which is in flight right now, is what's known as storage policies, and what that allows you to do is policy-based data placement for objects. With storage policies you could choose performance tiers: again, like we talked about with Cinder, you could have an SSD tier and a tier based on SATA drives, where the SSD tier is the performance pool and the SATA tier is the capacity pool. Or you could choose different SLAs for resiliency, durability, and availability: three-way or four-way replication on one side because you want a lot of availability, or erasure coding on the other if you want durability at lower cost. So it allows you to build these tiers.

With that, I want to summarize. I know I went very fast because I want to give Neil good time, but the point really is: we talked about the Red Hat Enterprise Linux OpenStack Platform offering, and then we talked about the different storage choices. For volumes with Cinder, Ceph is definitely very popular here, and I also illustrated how we do it with Gluster, and we talked about tiering and how to start thinking about it. Then we talked about file sharing with Manila, with Gluster as a back end for that, and finally we talked about object with Swift, currently XFS and LVM underneath, and soon, with the help of the DiskFile work, Swift will be able to have multiple storage back ends. So with that, I'll hand it over to Neil. Thank you; we'll take questions a little later.

So, I'm Neil, the product manager for Ceph at Inktank, which as of last week is part of Red Hat. How many of you know about Ceph, or have used it? Quite a few. Ceph is now a sibling of Gluster inside Red Hat, and obviously, given the acquisition, we get lots of questions about where Ceph goes from here. I'll explain a little bit about the background of the technology, a little bit about the product we have, and a bit of the roadmap, and I'll try to leave some time for questions. I'm sure a lot of people have questions about what the future is; I don't have all the answers, but I'll give you the best ones I can.

Okay, so Ceph, the technology, the upstream project, is about 10 years old now; it turns 10 in June. It's very similar to Gluster in that it's also an open-source, massively scalable, distributed storage system, but it's very different architecturally, and it has a slightly different set of features. The core point here is that we do object, block, and file all on a single technology; not necessarily a single platform, and you don't have to run all of this in a single cluster, though you can, but it's a single common technology for running all of these different types of storage. The object service is an Amazon S3-like system, so you can go and build a public or private cloud storage service with it, and it's very heavily integrated into the OpenStack ecosystem through Keystone and so on, so forth.
Importantly, Ceph re-implements the Swift API within its own codebase, so it's not just a plugin; as Tushar was indicating, it works slightly differently currently, but there are experiments to get RADOS, which I'll describe as our back-end layer, working behind the Swift proxy. The block storage, which is the thing most people in the OpenStack community know us for, is very, very well integrated with OpenStack through the Cinder driver, and it supports what are loosely called enterprise features: snapshots, cloning with copy-on-write, and other such things, and all the use cases around those, which I'll come back to. And then finally, the oldest part of Ceph, but actually the piece we prioritize last, is the file system, CephFS. Again, it's a slightly different architecture from Gluster; it uses distributed metadata, but it similarly gives you a scale-out file system. So: a lot of features; that's the short snapshot.

Architecturally, though, Ceph is an object store at its lowest level. It has an object store called RADOS, which is a very, very scalable, distributed object store, and on top of that object store we expose the different services you consume. The object gateway, which provides the S3 and Swift APIs, called RGW, sits on top of RADOS. The block device, called RBD, which is what Cinder uses, again sits on top of the RADOS object store, and so does the file system, both its data and its metadata. So sometimes analysts get confused: Ceph is an object store, but people are using it as block? Architecturally, we are an object store, but you can consume the storage in all of these different ways. And over 10 years, a lot of the development has really been focused on the RADOS layer, making it fast, reliable, and resilient. It's a completely decentralized system with no single point of failure, designed to run on commodity hardware.

The two major components of RADOS, for a quick technical overview, are the monitors and the OSDs. The monitors are just there to keep an eye on what's going on in the cluster: what's up, what's down, what's in the cluster and what isn't. They're not in the data path at all; they just keep an overview of the state of the cluster and the cluster map. The actual data sits on what we call OSDs, which are just processes, each commonly associated with a disk, or a RAID set if you want. You can have as many OSDs in a single chassis as you want, so more disks in the chassis means more OSDs. And those are the things which you just scale out: you keep adding and adding servers with these processes. The system probably has more in common with BitTorrent than it does with, say, NFS: the OSD processes talk to each other through a gossip protocol to make sure they know who's up and who has what data, and they automatically rebalance; if nodes go away, they start moving data around to restore the distribution. So it's a very, very decentralized, peer-to-peer-style storage system, and a lot of the hard work over the past 10 years has gone into making that layer incredibly robust. Once you get that architecture right, you can add features on top.
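For a feel of how you'd observe those monitors and OSDs on a running cluster, a few standard Ceph CLI commands (the pool name is just an example):

    # overall cluster health, monitor quorum, and OSD counts
    ceph -s

    # OSDs laid out by host, as the cluster map sees them
    ceph osd tree

    # create a pool for RADOS objects (128 placement groups here)
    ceph osd pool create volumes 128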
Let me show some of that. This came out of the last user survey, I think from the Hong Kong Summit, and it shows that Ceph, after LVM, which is basically just using local disks, is one of the most popular storage technologies in the community. What's great for us, and I think a credit to the technology, is that this is greenfield for the most part: Ceph is being chosen for new deployments, whereas a lot of the other technologies are there because they're already in the business or already in the data center, even if that means repurposing them. So Ceph has really been adopted throughout the stack in a fairly short time, and we're very interested to see what the next user survey shows.

Okay, the product. We created a product based on the technology, just like Red Hat does with its upstream projects going into downstream products. We call it Inktank Ceph Enterprise, and it will continue to be called Inktank Ceph Enterprise for now; there's a new version coming out very, very shortly, which I'll get to. First of all, it's a subscription-based product, charged by capacity currently, and it's a single price for all the protocols, with no particular tiering there. It consists of the upstream technology at its base, the object and block pieces from the Ceph project; we don't include the file system piece, which we don't consider enterprise-ready yet. We also have a management platform called Calamari, which provides monitoring and graphing and has recently become open source; we're very excited about that and about building a community around it. We're going to have some enterprise plugins coming out soon, which will let you hook the same storage back end into your existing enterprise ecosystems. And then obviously the support as well. The important thing to note about the product is that it contains less than the upstream technology, not more: the hardened packages, all the dependencies, all the things you need to run it, having been tested. There's a lot of stuff in the Ceph community which we do not bring into the product because it's not at the right level of stability yet; so again, very similar to how the Red Hat productization process works, and a very familiar model if you know Red Hat.

How does it work in OpenStack? There's very heavy integration work. On the object side, as I mentioned, we support the Swift API and we're integrated with Keystone, so if you want a replacement for the reference Swift implementation, perhaps for a bit of performance, or honestly just so you only need to run a single storage cluster, which is a real advantage if you're doing a proof of concept, that's a common use case. But it's the block side that most people know us for. We hook into Cinder: give me a volume, delete that volume, snapshot that volume, that's all going through the Cinder API. But importantly, we hook into both Cinder and Glance simultaneously. So why is this important? Because we do copy-on-write cloning, which means if you want to start up 100 or 1,000 VMs, you press the button and it happens almost instantaneously. We're not actually provisioning 100 or 1,000 real images; you take your base image in Glance, say "I want to use that", and to start using it we just do a copy-on-write clone.
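A minimal sketch of the Glance and Cinder settings that enable that copy-on-write path, assuming an RBD-backed setup of that era (pool names, user, and secret UUID are placeholders; option names may differ slightly by release):

    # /etc/glance/glance-api.conf (excerpt)
    default_store = rbd
    rbd_store_pool = images
    show_image_direct_url = True   # lets Cinder/Nova clone the image rather than copy it

    # /etc/cinder/cinder.conf (excerpt)
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid>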
The actual write only happens as changes start to occur; until that point it's really just a virtual construct. So you get very, very fast boot times with lots of VMs. That's the cloning capability with copy-on-write. And then, again, we do snapshots of the volumes: you can snapshot a block device, say a data volume, and put that back into Glance, or if you want to move it over to the object store, you can do that too. And the nice thing now is that we support both the boot volumes and the data volumes, so you can actually get to the point where you have completely diskless, low-cost compute nodes on the front end, and all of your storage managed by Ceph on the back end. Ceph handles both the boot volumes and the data volumes, as well as the Glance images: a single storage platform for everything you need for block. And then, similar to how the KVM/Gluster integration works, the compute hypervisor talks to the storage back end directly; we've coded one of our libraries straight into QEMU/KVM, so it knows how to attach and detach these devices. Currently this works with KVM; other hypervisors aren't supported yet, but for KVM users it's all there.

The good news is that even before the acquisition we had already done the certification work, so the combination of Inktank Ceph Enterprise and RHEL-OSP 4 (Red Hat Enterprise Linux OpenStack Platform 4) is all certified; it's all been blessed. And when RHEL-OSP 5 comes out in a month or two, again it all works seamlessly between the hypervisor and Cinder, all running on RHEL, and Inktank Ceph Enterprise will be supported on RHEL 7 when it goes GA, likely next month.

Very quickly, a couple of other use cases before we open up for questions. Beyond OpenStack, people do generic storage-as-a-service or cloud storage offerings, and in very common deployments they're essentially just using the Swift or S3 protocol against the object store. And for those of you who are really running high-performance, web-scale applications, you can actually drop the Swift/S3 API layer and use our own native protocol, which is extremely fast: you remove a layer of abstraction, which is convenient for developers and end users. If you're willing to code this into your application, the native protocol gives you direct client-to-cluster communication, so far fewer hops between your application code and the storage on the back end. And we have a couple of very, very nice customers and partners using it this way at the moment.
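As a simple illustration of that native path: applications would normally link librados directly, but the rados CLI shows the idea (pool and object names are placeholders):

    # list pools and store/retrieve an object over the native RADOS protocol
    rados lspools
    rados -p data put report-2014 ./report.pdf
    rados -p data get report-2014 ./report-copy.pdf

    # per-pool usage statistics
    rados df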
Okay, very quickly: we have a new release coming out, Inktank Ceph Enterprise 1.2, hopefully out in the next month, and it's got some good features. Erasure coding and cache tiering are the two headline features, so let me say a little bit about each.

Cache tiering, for those of you who are running Ceph within OpenStack, is designed to speed up your I/O performance. You provision one pool of storage on spindles and one pool of storage on SSDs, and the SSD pool is transparently placed into the data path, so the client doesn't know whether it's hitting the cache or the backing data. It works in two modes. In write-back mode, the write hits the cache first, is acknowledged to the client, and is then flushed to the backing or base pool; if data goes cold, it's flushed out of the cache, and if it goes hot, it's promoted again. Pretty standard caching. It also works in a read-only mode: if you know that the data is mostly going to stay cold, say stuff you keep around for compliance, you can read it straight off the backing pool, but if something does get hot, you can still get it up to the hot tier pretty quickly. So we're pretty excited about this feature, and obviously you can use it to boost performance.

But also, we have erasure coding coming. We're proud to be the first open-source storage product to ship erasure coding. This is a different way of ensuring data durability: instead of keeping full replicas, which is expensive, you split the data into chunks plus parity and spread them across the cluster. Initially we see this as useful for the object storage use cases; the more adventurous of you might want to try it with block, although that's not recommended for production yet. But when you combine it with the cache tier, you have a very compelling option for cold storage.

I'll run through the roadmap quickly, and as the product manager I can't make any guarantees; I guess no promises are perfect. We've got RBD mirroring coming, very similar to SnapMirror from the NetApp world, so if you want to do geo-replication of block devices between multiple data centers, you can do that. There's also the RBD kernel module, which is not typically used with OpenStack, but gives you native access to block devices from your host, through your kernel; that will hopefully be available with RHEL 7 support in the next release. And I will skip all the other stuff; the slides will be available.
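Before the questions, here's a hedged sketch of what setting up an erasure-coded pool with a cache tier in front looks like with the Ceph CLI of that era (pool names and the k/m profile are hypothetical, and a real cache tier also needs hit-set and sizing parameters not shown here):

    # erasure-coded "cold" pool: 4 data chunks + 2 coding chunks
    ceph osd erasure-code-profile set ec42 k=4 m=2
    ceph osd pool create cold-pool 128 128 erasure ec42

    # replicated SSD-backed "hot" pool placed in front as a write-back cache
    ceph osd pool create hot-pool 128
    ceph osd tier add cold-pool hot-pool
    ceph osd tier cache-mode hot-pool writeback
    ceph osd tier set-overlay cold-pool hot-pool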
Have we got any questions? Yes; I think there's a microphone coming to you.

Question: erasure coding, what's the problem it solves and how does it solve it? The problem it solves is the cost of data durability. Today, to keep data available when you lose a disk or a node, the standard approach is replication: you store two or three full copies of the data. That's simple and fast, but it's expensive; you're paying 2x or 3x the raw capacity. With erasure coding, you instead split the data into a number of data chunks plus some coding (parity) chunks, conceptually similar to RAID 5 or RAID 6, and spread those across the cluster. You still get resiliency in the event that you lose a disk or multiple nodes, but the data is reconstructed through an algorithm rather than just being read from another full copy, so the capacity overhead is much lower.

Follow-up: so that basically means that at the time of recovery there's some cost, compared to the traditional method of replication? There can be, but you save on the cost of storage; it's a trade-off, and it depends on your use case. It also depends on the size of the objects: with very large objects it's a great fit. The cost of storage can be debated, because if we keep talking about commodity hardware, it's already cheap, but I take the point: erasure coding saves you a lot of raw capacity.

There was also a question about the term itself: "erasure coding" is simply the industry name for this family of techniques, erasure codes; nothing is being erased from your data.

And finally, a question about the acquisition: Gluster wasn't in all of the announcements, so should we read between the lines on that one? No. Red Hat has invested a lot in Gluster and intends to continue developing it; if anything, the acquisition shows Red Hat is serious about storage. Inktank was obviously part of the OpenStack ecosystem, and Red Hat has ambitions in enterprise storage well beyond just the block use case; the file system is an important part of that, and remember Red Hat is going all in on big data as well, with some interesting things happening on the database side. So technology-wise, I don't think much is going to change; product-wise, the timing and how the two products come together are the kinds of things we'll have to work out and come back to you on later. All right, thanks. Okay, I think that's it, thank you very much. Thank you.