Enterprise Storage Management for Mixed Clouds with CoprHD. Not working? Yeah. How's it now? All right. Good morning to all of you; very excited to see you all. Welcome to my session, Enterprise Storage Management for Mixed Clouds with CoprHD. My name is Parasharam Halur. I work in CoprHD Development Engineering as a principal software engineer. With me I have Yura, a consultant software engineer who works as part of the CoprHD Solution Delivery team, and we have a CoprHD partner from Intel, Kurt Bruns.

I have broken this session into three parts. First I'll run you through an overview of CoprHD, and we'll see how CoprHD can be used in a mixed-cloud environment. Then, this being the OpenStack Summit, we'll look in more detail at how CoprHD and OpenStack interoperate; in the middle, Kurt will demo one of the solutions we have with OpenStack. Towards the end, Yura will explain the CoprHD community: how you can get started, how you can be part of it, what's coming up in CoprHD, and so forth. Having said that, let's move on.

Let me ask this question: how many of you have heard of CoprHD? A good number of people. So what is CoprHD all about? It's an open source SDS controller that discovers, pools, and automates the management of a heterogeneous storage ecosystem. In one of the keynote sessions, Jonathan mentioned a user quote saying that OpenStack is all about providing a flexible framework for compute, network, and storage. CoprHD is no different: it provides that flexible framework for storage. Look at the wide variety of storage it manages, from EMC storage systems (VMAX, XtremIO, VNX, and so on) to NetApp, IBM, and others. All of these heterogeneous storage systems are managed through an intelligent layer called storage automation.

Two abstractions are built into CoprHD: the virtual storage array and the virtual storage pool. The virtual storage array is all about where my storage comes from. Your physical storage could span multiple data centers, or it could sit within a single data center; with a virtual storage array you define where the physical storage is picked from. The virtual storage pool, in turn, is CoprHD's way of defining your storage settings. You have various application workloads running; some are mission-critical and some are not. For mission-critical workloads you want high-performance storage, while for non-mission-critical workloads low-performance storage will do. CoprHD lets you express all of this by defining a virtual storage pool: the kind of storage you require, whether you need replication, whether you need data protection, and so on. These are the two things you define when you want CoprHD to manage a heterogeneous storage ecosystem; the sketch below shows roughly what defining a virtual pool looks like through the REST API.
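As a minimal sketch, assuming the CoprHD/ViPR REST conventions (a /login call that returns a session token in the X-SDS-AUTH-TOKEN header, and a /block/vpools resource), defining an SSD-backed pool might look like this; the host, credentials, and virtual array id are hypothetical, so check the API reference for your release.

```python
# Minimal sketch: defining a block virtual pool through the CoprHD REST API.
# The host, credentials, and varray id are hypothetical; endpoint paths and
# payload fields should be verified against your release's API reference.
import requests

COPRHD = "https://coprhd.example.com:4443"  # hypothetical endpoint

# 1. Authenticate: CoprHD returns a session token in the X-SDS-AUTH-TOKEN header.
login = requests.get(f"{COPRHD}/login", auth=("root", "secret"), verify=False)
headers = {"X-SDS-AUTH-TOKEN": login.headers["X-SDS-AUTH-TOKEN"]}

# 2. Describe the storage settings for a mission-critical workload:
#    high-performance SSD storage, carved out of a given virtual array.
vpool = {
    "name": "platinum",
    "description": "SSD-backed pool for mission-critical workloads",
    "protocols": ["iSCSI"],
    "varrays": ["urn:storageos:VirtualArray:..."],  # id of your virtual array
    "drive_type": "SSD",
}
resp = requests.post(f"{COPRHD}/block/vpools", json=vpool,
                     headers=headers, verify=False)
resp.raise_for_status()
print("Created virtual pool:", resp.json().get("id"))
```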
Going up a layer, CoprHD exposes all of these services through a self-service catalog, which I would call a shrink-wrapped service catalog. You might want to create a volume and export it to a host, or create a volume, build a datastore out of it, and present it to an ESX host. All of these things are available as blueprints: you just click them and make use of them in a very minimal number of clicks.

CoprHD is a very open and extensible platform. It provides a rich set of REST APIs that your management applications can integrate with, and that's not all: there are also SDKs for Java, Ruby, and Python, so depending on the language your management applications are written in, you can pick the SDK you're interested in (a short REST sketch follows at the end of this part). Now, we are in the era of clouds. The latest applications coming to market have to adapt to workloads that run in the cloud, and in that sense CoprHD provides multi-tenant and multi-site management. With all of these capabilities built in, CoprHD makes storage infrastructure management agile and efficient.

Moving on: how can CoprHD be used in mixed clouds? As I said, we are in the era of clouds, and it's fair to say that every CIO in the industry has their own cloud story; if he doesn't have one, his job is probably shaky, right? Do you all agree? So let's take the example of an organization that already has a private cloud, maybe built using VMware. Now the CIO comes up with a requirement: he wants a public cloud as well, but he has budget constraints on his storage expenses. He doesn't want to add more budget for storage; he would like to use the same storage to build the public cloud. So the problem statement is: using the same storage infrastructure, he wants to keep the private cloud running and also run a public cloud, and maybe for the public cloud he decides to use OpenStack.

To do this, there has to be an orchestration layer that understands all of your storage infrastructure and can, at the same time, interoperate with the leading cloud stacks: OpenStack, VMware, and Microsoft. By interoperating with all the leading cloud stacks available, CoprHD gives you a flexible choice: you can pick whichever platform you want to deploy your public cloud on. It has plug-ins for VMware (vCO and vRO) and for OpenStack (we'll get into the details of those integrations shortly), and it has a plug-in for Microsoft SCVMM as well. And that's not all: we are continuously evolving and adding plug-ins for more platforms. We have developed an integration with Flocker, which is storage management for Dockerized containers, and Kubernetes support is coming in the future. This is also an opportunity for all of you to contribute: if you're looking to use some new platform or leading cloud stack on the market, you could build the plug-in for CoprHD and contribute it, so the community can use it and you can use it.
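To make the REST integration concrete, here is a minimal sketch of a management application listing every storage system CoprHD manages with a single call. It reuses the token pattern from the earlier sketch; the /vdc/storage-systems path follows ViPR/CoprHD conventions but should be verified against your release, and the endpoint and credentials are hypothetical.

```python
# Minimal sketch: list every storage system CoprHD manages, regardless of
# vendor, through one REST call from a management application.
import requests

COPRHD = "https://coprhd.example.com:4443"  # hypothetical endpoint

login = requests.get(f"{COPRHD}/login", auth=("root", "secret"), verify=False)
headers = {"X-SDS-AUTH-TOKEN": login.headers["X-SDS-AUTH-TOKEN"]}

# One call covers VMAX, VNX, XtremIO, NetApp, ... whatever is registered.
systems = requests.get(f"{COPRHD}/vdc/storage-systems",
                       headers=headers, verify=False).json()
for system in systems.get("storage_system", []):
    print(system["name"], system["id"])
```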
Well, getting into more detail about how CoprHD and OpenStack interoperate: there are three ways they work together. The first is CoprHD as a Cinder driver, which is typically similar to writing a Cinder driver for a physical storage system. The second is Cinder itself used as a CoprHD driver, which enables the use of Cinder in non-OpenStack environments. And the latest addition to the list of solutions is direct support for an OpenStack-compatible API within CoprHD, which enables storage orchestration for OpenStack; Kurt will run the demo of that one for you.

So, our first offering: CoprHD as a Cinder driver. As I mentioned, it's a Python driver, like any driver written for one of the physical storage vendors. What we offer with the CoprHD driver is an FC driver, an iSCSI driver, and a ScaleIO driver, all through a single CoprHD driver. We support all of the core volume operations: create, delete, attach, detach, expand, snapshot, volume clone, and snapshot clone.

Now, you might ask: I already have drivers for all of the storage systems available on the market, so why would I need a driver for CoprHD as well? I have an answer for that. When you use the CoprHD driver in an OpenStack environment, you get the manageability of every storage system supported by CoprHD through a single driver. That way you get away from adding a configuration stanza to cinder.conf for each storage system you want to manage in your environment: you have one driver and a single configuration entry in cinder.conf. That makes your life a lot easier, because configuring a new Cinder driver means a manual cinder.conf edit, which is very error-prone; here you only ever put in the single driver's information. Pretty awesome, right?

And it's not only that; there are quite a lot of features built into CoprHD. For example, it can pick a storage port based on performance, meaning the I/O running on a particular storage port: CoprHD has the intelligence built in to pick the least-loaded port on the storage system, which you get when you use the CoprHD driver. You also get a rich set of features like high availability and disaster recovery, which come with CoprHD. As you know, these products are evolving; Cinder is evolving, CoprHD is evolving, and quite a number of features are being added. In that sense, QoS, replication, and a couple more features coming in Cinder are still to be supported in the CoprHD driver. The driver has been approved for the N (Newton) release, and we are going to be upstream in that release. Where would you use this kind of solution? Typically in traditional OpenStack deployments. A sketch of what that single cinder.conf stanza looks like follows.
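This is a minimal sketch of that single stanza, generated here with Python's configparser purely for illustration. The option names follow the upstream CoprHD Cinder driver, but the addresses, credentials, and names are hypothetical, and the exact option set depends on your Cinder release.

```python
# Minimal sketch: the one cinder.conf stanza that covers every array CoprHD
# manages. Rendered with configparser for illustration; normally you would
# just edit /etc/cinder/cinder.conf by hand.
import configparser

conf = configparser.ConfigParser()
conf["coprhd-iscsi"] = {
    "volume_driver":
        "cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver",
    "volume_backend_name": "coprhd-iscsi",
    "coprhd_hostname": "coprhd.example.com",  # hypothetical address
    "coprhd_port": "4443",
    "coprhd_username": "root",
    "coprhd_password": "secret",
    "coprhd_tenant": "Provider Tenant",
    "coprhd_project": "openstack",
    "coprhd_varray": "openstack-varray",      # the dedicated virtual array
}

with open("cinder.conf.snippet", "w") as f:
    conf.write(f)  # paste the result into cinder.conf, restart cinder-volume
```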
The second solution: Cinder itself used as a CoprHD driver. I would say it's quite the reverse of the solution we looked at first: there, Cinder was consuming CoprHD; here, CoprHD is consuming Cinder. CoprHD emulates Nova, acting as a client for Cinder. What this essentially enables is storage provisioning to non-OpenStack heterogeneous compute: you want to give a volume to ESX, or to a standalone host, or to OpenStack. So it enables storage provisioning for OpenStack as well as non-OpenStack heterogeneous compute hosts. The way it is built is with a limited OpenStack installation; by that I mean a bundled appliance with just the Keystone and Cinder services. Cinder is there to get the manageability of all the drivers that ship with Cinder, and Keystone is there for authentication and authorization, because all OpenStack services rely on that core service. Typically you would use this kind of solution when you want the expanded manageability of third-party arrays in CoprHD, and non-OpenStack compute storage provisioning.

And this is the latest addition to our offering with CoprHD and OpenStack, which we call SOFO: Storage Orchestration For OpenStack. It is essentially a Java implementation of the block storage API. It looks a bit like the first solution we saw, in that it is also a northbound integration with CoprHD, but it is a new choice: if you're looking for a Cinder-like solution, you can use CoprHD there instead. You just keep Cinder aside and use CoprHD itself as your Cinder. We have implemented support for the Cinder API versions v1 and v2, and we have a very tight integration with Keystone (again, to leverage all the authentication and authorization features that come with Keystone), supporting the v2 version of the Keystone API. Typically you would use this solution when you want the direct block storage API within CoprHD, and when you want features like inbuilt HA; that's just an indicative item, because with this you get all of the rich features built into CoprHD. When I say inbuilt HA: if you have to build HA for Cinder, it takes more time to deploy that HA configuration, but with CoprHD, just by deploying a CoprHD instance you get HA built in. With that, I'll hand over to Kurt Bruns, and he will run the demo.

Thanks, thanks Parash. Yeah. Let's bring this up here, onto the right screen. It's there, all right, cool. So like Parash said, when do you want to use this? When you have existing storage in your environment that's supported by CoprHD, and you want to bring in OpenStack, like a fresh install of OpenStack, and you want to continue using CoprHD as your management control plane for all your storage. This is where you would use the SOFO environment; plugging it in essentially amounts to pointing the volume service in Keystone at CoprHD, roughly as sketched below.
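A minimal sketch of that wiring, using python-keystoneclient's v2 API (matching the Keystone v2 support mentioned above): the CoprHD address, port, region, and service name are hypothetical, so check the SOFO documentation for the actual URL layout.

```python
# Minimal sketch: register CoprHD's Cinder-compatible (SOFO) endpoint as the
# volume service in Keystone v2, which is what the demo effectively does.
from keystoneclient.v2_0 import client as ks_client

keystone = ks_client.Client(username="admin", password="secret",
                            tenant_name="admin",
                            auth_url="http://192.168.100.5:5000/v2.0")

svc = keystone.services.create(name="coprhd-block",
                               service_type="volume",
                               description="CoprHD SOFO block storage")

# %(tenant_id)s is substituted by Keystone per project; the host and port
# of the CoprHD endpoint are hypothetical.
url = "https://coprhd.example.com:8080/v2/%(tenant_id)s"
keystone.endpoints.create(region="RegionOne", service_id=svc.id,
                          publicurl=url, adminurl=url, internalurl=url)
```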
So I know you guys didn't want to see me type live, so I recorded this demo. We have a fresh OpenStack install running DevStack, and you know, if it works in DevStack, it's going to work everywhere. The first thing I'm going to show you is that all the services are coming from this node, 192.168.100.5: you can see both the volume one and volume two services are coming from DevStack.

So now we log into CoprHD. It's already provisioned with some storage assets, providers, and virtual arrays, and now we want to add Keystone as our authentication provider. All we need to do is point it at the Keystone instance inside DevStack, and there's an option to bring in all of the projects and tenants that Keystone currently knows about. So right here I'm adding the Keystone IP, and you can see we check this box that says automatic registration and tenant mapping. What this will do is call into Keystone as the admin, because those are the credentials I've provided, and it will bring in every project and every tenant that OpenStack knows about. If you're familiar with DevStack, at the beginning it only has a few projects and tenants, but it will ingest all of those automatically, and then those will be available in CoprHD so you can provision storage against those tenants. There's the callout to tell you what I just told you. So now we've added Keystone, and here you can see all those got ingested; the Provider Tenant is the default one inside CoprHD.

Now, we have a couple of virtual arrays already set up, but we want to dedicate some storage just for OpenStack. As Parash was saying, if you're not familiar with CoprHD, you basically aggregate several storage systems into a virtual array, and that provides a kind of separation of storage: you can dedicate storage to certain tenants, so even though you're using multiple back ends, you can aggregate them together and then separate them based on policies for who gets access to them. In this case we're bringing in five arrays, VNX block systems, to this virtual array. I think I hit pause again. So now we have one dedicated to OpenStack, and now we create a block virtual pool, which is more of a quality-of-service or class-of-service definition for the storage in your systems. And since this is OpenStack, we're going to give it platinum (gold and silver are so outdated). We select the virtual array OpenStack, which is where it's going to be available; the protocol, iSCSI, which is all that's available on these arrays; and Solid State Drives. If you notice, it says four matching pools even though we brought in five systems: it has figured out that only four of the storage systems have SSDs in them. The other one doesn't, so the scheduler will know it can't schedule on that fifth system. Then we go down and look at the storage pools it discovered (these are physical pools), and it says they all have SSDs, so the scheduler knows it can schedule on these.

Now we've set up storage, a virtual array, and a virtual pool specifically for OpenStack, so we can go back into OpenStack. We'll log out here and log back in to show that the services for block storage provisioning are now pointing to CoprHD; the sketch below shows what the same flow looks like from the command line.
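As a minimal sketch, the Horizon clicks that follow amount to something like this with python-cinderclient against the CoprHD-backed endpoint. The credentials, auth URL, and project mirror the demo setup but are otherwise hypothetical; the "platinum" type is the one mapped in from the virtual pool.

```python
# Minimal sketch: list the volume types mapped in from CoprHD virtual pools
# and create a typed volume, the CLI equivalent of the Horizon steps below.
from cinderclient import client as c_client

cinder = c_client.Client("2", "admin", "secret", "demo",
                         "http://192.168.100.5:5000/v2.0")

# The volume types were pulled in from the CoprHD virtual pools, extra
# specs and all (e.g. the SSD drive type).
for vtype in cinder.volume_types.list():
    print(vtype.name, vtype.get_keys())

vol = cinder.volumes.create(size=1, name="demo-volume",
                            volume_type="platinum")
print("created:", vol.id, vol.status)
```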
So you can see the IP address has changed; that happened when we integrated with Keystone, to say: when you want to provision block storage, go ahead and use CoprHD. Now we go into volume creation. I'm going to use the demo project under the admin and create a simple volume here, and when it comes up with volume types and asks about the availability zone, you'll see that it shows the platinum type and the OpenStack virtual array we set up before. Notice that gold and silver weren't available, because we didn't provision those for this tenant to access. So now this is creating a volume on those back ends we ingested into that virtual array, out of that virtual pool, so it's an SSD-backed volume. If you go into the volume types, you can see we sucked in some of the parameters from the virtual pool, including the drive type of SSD. We switch back over to CoprHD, go to Resources, Volumes, and you can see that it was created not in the Provider Tenant (that's not where we gave it access) but in the OpenStack demo tenant, and there's our demo volume. It's only available in the OpenStack demo project; if you went to, say, the OpenStack admin project, it would not be available there. That matches the pull-down we selected in the Horizon dashboard for the demo.

So that's the end of the demo, but I just want to reiterate what happened there. We created storage specifically for OpenStack: we brought five arrays into a virtual array, so we consolidated and aggregated them together. Then we created a virtual pool (these are all CoprHD semantics, right?), and out of that pool we specifically said, this is an SSD pool, allowed the OpenStack tenant access to it, and then we were able to create a volume in the OpenStack dashboard. Using Horizon we created that volume, and we could see it in both the Horizon dashboard and in CoprHD. This is just an outline of what we just went through. And now I'm going to let Parash talk about what's happened since Tokyo.

So, just to give an overview of the talk we did at the Tokyo OpenStack Summit: it showed how you can manage storage using both the northbound solution and the southbound solution. We looked at the two offerings, the first one being CoprHD as a Cinder driver and the second Cinder as a CoprHD driver, and essentially made use of those two solutions in combination to provide use cases like high availability and disaster recovery, making use of VPLEX, Cinder, and any of the supported storage systems in CoprHD. For more details you can look at the YouTube link I have put up here. Thank you. Over to Yura.

Hello. Oh, these guys are really paying attention. Yes, yes. Yeah, it's exactly the same thing, yes. So, I may forget to say everything I'm supposed to say here today, but there's one thing you have to remember. I would like everybody to bring out your phones right now, or your laptops, open up your browser, and go to coprhd.org, because that's really going to cover everything I'm about to say, and you can look at it later. Okay, so this is kind of like the landing page for the community. From here you can access everything we have to offer in terms of documentation: how to use it, how to download it, how to build it. If you just want to download it to use it and you're not interested in building it, that's fine too.
You can get the links from here. On the right you're going to see the weekly community meetings, which we do in HipChat, kind of like IRC; a lot of companies don't let people onto IRC, and HipChat was the one that every company involved at the time was allowed to access. That's all there. Now, the important part: under the Get Help section you have links to the Google Groups (that's all the mailing lists, et cetera), the HipChat, the FAQs, and the videos. The videos will point you to YouTube, where there's a whole bunch of videos, and eventually this talk will be one of those videos, so if you're watching it there, it's kind of a weird loop. Hi. In terms of getting information, just remember coprhd.org. Okay, how do we change? So now we need to make this visible. Yes.

So what's been the story of CoprHD so far? CoprHD became a project basically a year ago next week; I think we came out the first week of May of 2015. Since then we've had two dev summits, developer gatherings kind of like this one, with all the developers getting together here in the US, and we had two in India and one in China. We've grown to about 150 contributors giving code. We currently have Intel and OSU (Oregon State University) very active in the community right now, and there are about seven universities in India also in the process of helping out. In the beginning we just released ViPR Controller; it was a single-node, EMC thing that we sold. Now we have the CoprHD community, which has everything ViPR had, yes, but it's not just an EMC thing: you've got EMC, you've got Intel working with Oregon State, we've got our repositories out in the open, and we are currently in talks with other companies to see how we can all collaborate. Because at the end of the day, no customer will ever want to be just an EMC shop, or an HP shop, or an IBM shop, or a NetApp shop. They're going to want a little bit of everything, because otherwise you've got the problem of vendor lock-in. That's the driver behind why ViPR became CoprHD: we realized it's kind of weird to tell other companies to just work with this one vendor's thing, it's harder, but if there's an open community, it's not an EMC thing, it's just CoprHD. That's what we would like to be, and Intel is with us in that idea.

As of right now, here's a little map of the world showing where all the different contributions have been coming from. You can see the spread, because Intel is a multinational company and so is EMC, and then there are all the different universities all over the world; we have some universities in China we're currently talking to as well. So it's gaining steam, and most importantly, it's at that initial stage where there's a lot of excitement and a lot of possibility: you can have a very big impact on the direction of CoprHD, and therefore of ViPR on the back end, going forward by being a part of this, and our current work with the OpenStack community is part of that. I mean, we're all in this together, and there's a lot of extra functionality you can get by just using CoprHD under the covers and letting CoprHD do all the storage-side things. So if you have any questions, we're going to be here.
There's a raffle I'm supposed to run; the tickets are coming, and Alexa is here. Sorry, an Echo is here, but you call her Alexa. Yeah, we'll go to any questions. If you ask questions, you get an extra ticket. No, not really. Please use the microphones.

Actually, I have three questions, all interrelated. If you can go back to the first slide, with CoprHD as Cinder. This one? Yeah, no, not the SOFO one; two slides before this. You'll find it. There you go, CoprHD as the Cinder driver. Two more back. Yes. So, if my understanding is correct, this is the solution meant for an OpenStack cloud that is already there, where you bring in a new storage solution. So if this is the way to go, I see the three dots, and I understand that would be the world of other storage solutions coming in, but the understanding is they would have to provide a CoprHD plugin to play in this space. Is that correct? Yes. Okay. Yes, we have a bunch of them, yes.

So is there a certification process that goes with that, or is that completely open? Yes, one of the big projects this last year was what we call the Southbound SDK project, which is precisely that: a very good, simple, ready-made process for doing exactly this. If you are interested in making a plugin that's a direct ViPR plugin (sorry, CoprHD plugin, I'm supposed to pay five bucks): if I have my own new array and I want to make it available to CoprHD, I use this SDK, I can do that, and then you just work with the community, yeah. So, and that has all the options, the Java, the Ruby, the Python? I think it does, right? Because it's all RESTful. It's RESTful, okay.

And the second option, the one after this, Cinder as the CoprHD driver. This one? This one. So this is not for the pure OpenStack play, is that correct? Yeah, well, you could use it in OpenStack as well as in non-OpenStack environments, in one mix. Okay. So how does this keep up with the OpenStack version? Does it lag behind a version or two? Because this would mean there is active CoprHD development that has to go on, which means it's one version behind OpenStack, correct? We are compatible with at least two versions of OpenStack; right now it's compatible with Kilo and Liberty. Okay. Now we have Mitaka, so that part is going to be done in the next version of CoprHD.

Okay, and here it would be any generic storage; there is no active development needed on the part of the storage vendors to play in this? Yeah, because the idea was: let's say there's someone out there who wants to use CoprHD with some specific piece of hardware that CoprHD does not yet support, but OpenStack does support it and has a driver for it. Then, by using just a generic OpenStack-based Cinder library to connect to it, we can support it up to whatever functionality OpenStack supports through that driver. And the benefit of being on the open source side is that, yes, it may officially arrive in the next release, but you can always just grab the latest code, which may already have it. So we'll get it much faster: as Mitaka comes out we can work on it quickly and get it in a week or two or three or four, not months. That's the benefit there. Let's go to the other microphone. Thanks.
You're referring in many cases to Cinder and block storage; do you plan integration with Manila and file share management as well in CoprHD? Yes, I believe so, right? Because we already do a lot of the file stuff, so it makes sense for us to also integrate with Manila. Yeah, we have that on the roadmap; we would probably be creating the blueprint for a Manila driver for CoprHD in the next release. Thank you. I imagine it'll work similarly.

I understand that CoprHD is basically the ViPR code base moved to an open source license, is that correct? Yes. So the question is: will ViPR now basically be a result of new CoprHD versions, and will you build proprietary versions of CoprHD as your own product? So, does this one work too? Okay, good. The idea is that there's the open source product, and then with the ViPR version you get things like support, like Fedora and Red Hat. Let me rephrase the question. I mean, to answer your question: anything that is part of CoprHD is going to be part of ViPR Controller as well.

Okay, so what kind of governance model do you have, and what are the requirements for me to contribute code to CoprHD? Sure. On governance: we have a technical steering committee made up of people from EMC, Intel, OSU, the different organizations, and there's a whole governance document that explains the whole thing, a whole page on the website, coprhd.org. Remember that. In terms of contributing, it's pretty lightweight. What I would recommend is: come to the Wednesday meeting, the weekly meeting, get in there, say "hi, I'm here," or send a message to the Google group. To clarify: do you need to sign a contributor license agreement, or can you just send the code? Go ahead. We use the Developer Certificate of Origin. Certificate of Origin, okay, with a signed-off-by; that's all you need. Okay, just one little line. To keep the answer short: just get hold of Kurt Bruns; they're actively contributing in the community already, and you'll get all the answers. Thanks. Other microphone first.

Hi, thank you for the presentation. So for CoprHD there are two legs: one is OpenStack-based solutions being part of it, and the other is non-OpenStack solutions being part of it. In both cases, among your partnerships there are no OpenStack distro vendors or operating system vendors participating in this effort. Meaning, if this thing is to fly, it has to be part of, and verified with, an OpenStack distro vendor on one side or an operating system vendor on the other, right? Are you guys partnering with, say, Canonical or Microsoft? That's definitely part of the process we're looking into. We've talked to some of the distribution people. I mean, we're working on it; the community is open to working with anyone and everyone who wants to help out and be a part of it, absolutely. Yeah, that's definitely on the roadmap, but I can't give you the dates for when we'd get there. Yes, we are going to be part of the distros; we just can't name names, because we don't know them yet, not because they're secret.

So when I last looked at the roadmap, you went only up to the next release. What's going to happen further down, after the next six months, for CoprHD? We'll make another one. Well, that's part of what's being figured out, right? We've only been here for about a year, so that's what the technical steering committee is going to do.
And basically, a lot of people have a lot of ideas about a lot of things we could be doing, and based on what the community wants to work on, that's how we figure out what will be on the next release. So when can we expect to see an extended timeline and roadmap? We could bring that up and say, hey, we need more than just one release ahead. There's also, excuse me, as part of coprhd.org, a JIRA board with roadmaps in it: there are design specs that people are submitting, so you can see what's been approved for the next release and what's coming up. We don't have a single foil that captures all of that, but it's all in the open. Right, and through JIRA you can see all the things that haven't been incorporated yet; those are the candidates for the next one. And if you want one, you can file one: you can say, hey, I would like you to do this. And you could do it yourself too; it's great. Yeah, there are a couple of pages on coprhd.org: one is unapproved designs, and then you have approved designs for the forthcoming releases. If you look at those pages you'll see what is coming up and what is actually planned for a particular release of CoprHD. And again, coprhd.org will take you to the wiki; you have everything there.

For non-EMC storage vendors, if they want to get started with CoprHD, where do we start? Talk to us; we'll get you in there. Basically it's the same thing: we talk at the weekly meetings, get the technical steering committee working with you, and we just start working on it. There's the southbound SDK, which was made specifically to help with this. So if it's some specific array (hey, we have this array, we want to make it work with CoprHD), you would come in and I'd say, okay, let's get you talking to this guy, because he was the guy who implemented the southbound SDK, and he would help you and we'd go from there. Okay, sounds good. It would be very easy. Thanks.

Speaking of storage back ends: so far I only see the big commercial vendors, but what if I would like to use some kind of free, open source storage solution as the back end? Do you have any drivers or modules in that respect, or do I need to buy some big storage silo in order to use it? No, no, they're coming; we're working on it. I mean, Ceph is in the process of getting there, and again, it's a community thing. Well, the question is whether something is already available, or is this currently being worked on and will be available? Right, available. There's a branch: there is an active pull request out there for Ceph support in CoprHD. It's not merged into master yet, but it's in the pull request review process; they're doing code reviews and making small changes, but a Ceph driver definitely exists, and you can download it. I have a Vagrant environment that has Ceph support in it.

I won't extend this too long. In your SOFO demo, you did not show the HA features, right? You create a virtual pool out of four VNX boxes. Correct. So do you do HA on the virtual pool, or do you have an implicit dependency on the underlying storage? Yeah, so there are a couple of pieces to HA. There's CoprHD HA: in my demo it's a single-node installation, but CoprHD has HA integrated, and you do a three- or five-node installation to make sure you maintain quorum, with the Raft algorithm and all that.
But yeah, the HA for CoprHD comes from there, and then the back-end systems have HA as well; that's kind of under the control-plane area.

I just set it up in the lab, and so far so good, but why the insistence on using IP addresses and not DNS for everything? It's kind of awkward. Yeah, it's something to work on, yeah. I mean, it literally errors out if I put in a DNS address; why can't it just resolve it? Yeah. Good feedback. Yeah, thank you. That's life.

To follow up on his question: if I'm a highly resilient storage solution, are the drivers multipath drivers? How do you handle that; is that all under the covers of CoprHD? How does that work? For multipath support: okay, let's say you have a host, a switch, and the arrays, and you've got enough cabling for it. In the declaration of the virtual pool, where you're saying what kind of storage you want, you can specify "I want a minimum of two paths and a maximum of four," or you could say minimum two, maximum two, so that you guarantee exactly two paths. You can play with that. So it's in the details, got it. And yes, it'll just do it for you.

Any other questions? They're waiting for the Echo. The next question is: are you going to give away that Echo? Yes. Thank you for those questions. Here, sir, you pull it, because otherwise it's weird, you know. What? No. Okay, that was strange. Okay, ma'am, would you please come here and pull the number? You already put yours in, right? I'm not going to do anything weird like that guy; I'm just mixing it here. Okay. Wouldn't it be weird if it's yours? That would be so weird. So, the number: nine, seven, zero, which is probably nobody's, so nobody's excited. Four; a little more excited. Six; super excited now, because there are like ten of you left. Two. There it is. Okay. You are the proud owner of an Echo device, but call her Alexa, because she won't listen to you otherwise. Thank you very much, everyone. And coprhd.org. Ah, and if you come to a summit, you get a cool shirt like this. Ooh. Thank you.