Welcome everyone to the Data Services office hour. I'm your host Michelle De Palma, and I'm here with Daniel Parks, not a surprise. Hi, hello. So talk to me about what we're going to do today. So the idea for today is to cover a question that comes up quite a lot inside Red Hat, and also from services and consulting: whether we can use Fibre Channel and iSCSI storage appliances as the backing store for ODF. It's a question that comes up quite a bit, so we can go through the recommended ways of actually doing that deployment, and then we also have a demo of one of the ways you can follow, just to give a guide on how to do it. Perfect. OK. So are you going to share the screen? You want to give me an overview of the details? Yeah, we can just have a chat about this in the beginning, and then we can go into the demo. It won't be too long, but it gives a little bit of context. So the idea is that when we deploy OpenShift Data Foundation, as you know, Michelle, we normally just give it a storage class. You tell ODF, OK, here's your storage class, and ODF doesn't really care whether that storage class is a thin storage class provided by VMware, or gp2 or gp3 on AWS. It will go ahead and use PVs from that storage class. So really, ODF is a consumer of the persistent volumes offered by OpenShift. Whatever is there, it's going to use, or whatever we select in the deployment in the UI. In that sense, another option that is perfectly fine for ODF is a storage class backed by a storage appliance using Fibre Channel or iSCSI. There are certain things that we have to take into account when doing that kind of deployment: how we have to configure the storage appliance and how ODF consumes that appliance.
For example, you have to take care of multipath; we will go deeper into what it is and how we can actually configure it. So there are certain things to take into account, but in the end it is supported, and we can deploy ODF using iSCSI and Fibre Channel. Before going into the recommended ways of actually doing that deployment, I also wanted to mention the benefits and drawbacks of deploying ODF on a storage appliance, a Fibre Channel appliance or something of that kind, versus going the more traditional way of deploying Ceph, which is bare-metal nodes with disks directly attached to them in the traditional way. Going back a little bit to when Ceph was designed, the architecture was laid out quite a long time ago; it's more than 10 years now that we have had Ceph around as an upstream project. It was built from the beginning with resiliency of the data in mind; data safety was always the first thing taken into account when designing the architecture. In that sense, if people have already played with ODF, we have to say in any case that the heart of ODF, the storage layer, is Ceph. That's why we're speaking about Ceph now: when you do an ODF deployment, you're going to get, under the curtains or maybe transparently, Ceph deployed on your OCP cluster, on your ODF cluster. And as we mentioned, when you deploy Ceph in ODF, you're always going to get a pool with replication. By replication, what we mean is data safety, in the sense that if a user uploads a file or an object, that object is going to get replicated three times, four times, whatever we have configured; the replication that we have by default in ODF is three.
Where those three copies of the object get stored depends on your configuration: it can be three hosts, maybe three racks, three availability zones. So as we are speaking, the safety of the data is really important with Ceph. And the thing is, if you have a storage appliance and you want to use it for ODF, enterprise storage appliances also have data safety. They also have replication; they may be using RAID 5, RAID 10, whatever kind of RAID. So the thing to take into account when you use a Fibre Channel or iSCSI storage appliance is that you are going to have the replication provided by Ceph, by default replica three, and you also have the replication going on in the storage appliance. This translates into needing more total raw storage, because you are losing capacity in the different places where you have replication. So that's one thing to take into account: maybe you have more data safety than you need relative to the amount of storage you're actually using. That's one thing. The other thing I wanted to bring up is, again, that the architecture of Ceph is really built to be able to scale out to a huge number of nodes, without any bottlenecks. That's a fundamental part of the architecture that is very important for Ceph: you can increase the number of nodes that you are using to very high numbers.
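To put rough numbers on that double replication, here's a small sketch; the figures are hypothetical, not from the session, but they show how each layer divides the usable capacity again:

```shell
# Hypothetical figures: 100 TB raw on the appliance, RAID 10 (every
# block stored twice), then Ceph replica 3 layered on top by ODF.
raw_tb=100
raid_factor=2      # RAID 10 capacity overhead
ceph_replicas=3    # ODF default pool replication
usable_tb=$(( raw_tb / raid_factor / ceph_replicas ))
echo "${usable_tb} TB usable out of ${raw_tb} TB raw"
```

So roughly one sixth of the raw appliance capacity ends up usable in that scenario, which is why a lower Ceph replica count becomes attractive on appliances that already protect the data.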
And that's really possible because with Ceph, a client that wants to write or read an object doesn't have to go through a storage controller the way it would with a traditional storage appliance, where you always have your one or two controllers, and every I/O you do, every write, every read, has to ask the controller where the metadata is. So there's always a conversation going on there when it wants to access an object. That's not the case with Ceph. With Ceph, you do go to the monitors now and then, or when there's an update, to get the cluster map update, but that's just a very small query. Then the client, by itself, using a thing that is called the CRUSH map, is able to speak directly to the disks, to the OSDs, start the data path and do all the I/O it needs. So that really provides an architecture that is able to scale to very big numbers. Can I ask you a quick question? Yeah. So for the demo and the best-practice stuff that you're going to lay out now, we're always talking replica three. I thought in one of the versions of ODF we could specify a replica two. So my question would be: imagine a data engineer sitting back and looking at just how many times they're actually replicating the data, and the associated loss of storage volume because they have to replicate it so many times. Is anything you're going to do today always assuming that Ceph is doing replica three? Do you see customers ever asking, you know what, that's too much, that's too much data safety, let's go to two, or I don't want to lose that much of my storage between my RAID array and the replication? Just wondering if customers are doing that, and what we're going to see today. Is today basically all replica three?
Yeah, that's a very good point, and we get that question quite a lot. Let's say that the default pools that get deployed when you install ODF are replica three. But what we do allow now, at least with internal ODF (we can also have external ODF deployed, where we connect to an external Ceph cluster), but going into the standard internal deployment: you always get a default pool that is replica three, but you are allowed to create a new pool with replica two if you want. So the minimum that we support currently, outside of the default deployment and the default pool that goes with three, is replica two. Somebody could perfectly well go ahead and say, OK, I want a new pool with replica two, then configure a storage class pointing to that replica-two pool, and have their developers and applications consume storage with only replica two. There is also work on replica one, although replica one really goes a little bit against the architecture of Ceph; but there's still work going on there. That's because some applications, as you may know, MongoDB for example and many more, already have replication going on in the application. So if you add the replication in the application to the one that you have in Ceph, and then we also add the one on the storage appliance, you're losing a lot of capacity to copies of the data. So there is work going on, and we're going to have news soon regarding ODF and replica one for those use cases where the application is taking care of the safety of the data and of the replication. Awesome. So going very quickly into the actual recommended ways.
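As a sketch of what that replica-two pool and its storage class could look like on an internal-mode cluster; the names `replica2-pool` and `ocs-replica2-block` are made up for the example, the provisioner and secret parameters assume the default `openshift-storage` internal deployment, and the exact fields should be checked against the ODF documentation for your version:

```shell
# Create a replica-2 CephBlockPool and a StorageClass that points at it.
cat <<'EOF' | oc apply -f -
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replica2-pool
  namespace: openshift-storage
spec:
  failureDomain: host
  replicated:
    size: 2        # two copies instead of the default three
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ocs-replica2-block
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  clusterID: openshift-storage
  pool: replica2-pool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
reclaimPolicy: Delete
EOF
```

Applications would then simply request PVCs with `storageClassName: ocs-replica2-block` to land on the replica-two pool.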
So let's say somebody wants to deploy ODF and they already have an enterprise storage appliance, and they want to make use of it because it's there. The recommended way that we suggest is to use the storage provider's CSI driver. These storage providers, take Dell for example, have a CSI driver that will take care of deploying the CSI bits on all of the worker nodes. It will also take care of configuring multipath, which we'll do manually in the demo afterwards, but the driver does all of that work for you. And the CSI drivers are normally certified by the vendor you work with, so you're also going to get support for it. Maybe you have an issue with multipath on one of your workers; you can always open a case with whatever vendor you have and they will also give you support. So you really get a good support relationship going through that kind of deployment. Once... So just so we're clear: CSI driver, talk to your vendor, talk to your storage provider. Yes, yes, because it really depends on the storage appliance. For example, we just mentioned Dell: you would use the Dell CSI drivers and deploy that operator from Dell. Another thing you get, because we normally deploy with an operator, is that the operator manages the lifecycle for you. So if there are updates to the driver, the updates get automatically installed for you. So day-two operations are also quite comfortable going through the CSI driver approach. And that's why, if you use Fibre Channel or iSCSI appliances with ODF, that would be the recommended way. But as with anything, there are always edge cases: situations where, for example, your storage provider doesn't have a CSI driver, or a specific model, maybe a little bit of an old model, doesn't have one, so you can't go down that path.
Then the next option, and that's what we're going to show today, is the Local Storage Operator (LSO). The Local Storage Operator is really an operator that makes use of storage that is already visible on your workers. The way it works is that the storage team manually presents the LUNs from the storage appliance to the workers. They will also configure multipath: once they see the disks, they take care of configuring multipath, which we will speak about in a minute, and then they install the Local Storage Operator. The Local Storage Operator creates persistent volumes, and those persistent volumes are used in the end by ODF. So we can also deploy in that way. When we choose this kind of deployment, there are things, which we're going to see in the demo, that you have to take care of yourself. One is the multipath configuration on the OS of the worker nodes we're going to use; that configuration has to be done by you. And the other is that the support you get when doing that deployment comes from the OpenShift storage team, because there's a team in charge of LSO, in case you have an issue with multipath. The only requirement that we have right now if you want to go down this route, using LSO with a Fibre Channel or iSCSI appliance, is that we request opening a support exception. The support exception is a way for Red Hat to see the configuration you are doing, check that everything makes sense and that all the options are right, so that you won't have surprises when you go into production. It's just double-checking that your configuration is fine. Okay, that's awesome. Yeah, so that's the introduction, and if you like we can go into the demo. I will just share my screen.
You can do everything from scratch, like the local storage, and then you can do multipath, or... okay, hang on a second, let me share. There you go. Yeah, there are certain things that I already have in place, and I will explain them in a moment. The only thing I wanted to show, or mention in case somebody hasn't heard of multipath before, and that's why I have this image, is to explain a little bit why it is so important to actually configure multipath. With multipath, in Fibre Channel or iSCSI, what we are providing is different paths from your server to the drives in your storage appliance, to the disks that are going to be consumed by your server. When we are working with databases, with production applications, the goal is to never lose access to your I/O, to your drives, and multipath provides availability by removing single points of failure. The example in this diagram is simple. As you can see, we have a server, which in our case would be an OpenShift worker node, that has two HBAs; an HBA is just the interface that gives us access to the Fibre Channel fabric. So the server has two HBAs, we also have two different SAN switches with different ports, and the storage appliance has two different controllers. What this provides is that we could lose an HBA in our server; say we see a failure and lose the full HBA, and the server is going to be able to go through the other path and still have access to its data, almost without noticing. Maybe you see a very brief pause in I/O in your application while the failover is going on, if you're working with an active-passive setup, but the application barely notices it.
But you could also, for example, lose a full SAN switch, or maybe you're just doing an upgrade to the firmware of your SAN switch and you need to very briefly switch it off. Or even at the controller: the ports on the controller and the actual Fibre Channel cables are quite delicate, so maybe one gets broken, or you get an issue on your controller; you're still always going to be able to reach the storage through the other path. That's why it's so important to have multipath configured: if you don't use multipath and you go straight through one path, even if you have two HBAs but only use one of them, then it doesn't matter how many HBAs you have; if you lose the path, you lose access to your storage. That's why it's important. Now, going into the demo, just to give a little bit of the layout we are using. The demo environment is just a lab, so I'm not claiming this deployment is perfect or generalizable. A deployment that would be more production-ready would be something like this, where each of our worker nodes, as we said, has two interfaces, each going to a different switch, and the storage appliance is also connected to both switches, so everything is redundant. You could survive the failure of almost any of the components and still have access to the storage. But the actual lab we have is a little bit simpler, because the main goal of what I want to show today is how to configure multipath, and then how to use LSO to consume the multipath device we have configured. So this is our demo diagram: we have three worker nodes that are going to be our storage nodes, and we want to assign a LUN to each of them, because that is what ODF is going to consume later on.
A LUN is really a drive, and we are going to use Red Hat Enterprise Linux: this is a RHEL 8 running on a virtual machine that we have already configured as an iSCSI target server. On this server we have three 100 GB LUNs, and we are going to map each of these LUNs to one of the servers. So this one is going to have access to LUN 0, that one to LUN 1, and the third to LUN 2, so we are going to have 300 GB in total available for ODF in this example. Why are we going to have multipath? Because, as you can see here, the RHEL box that we're using as an iSCSI target has two NICs, two interfaces connected to the switch. So each server is going to see the storage through the two different paths we have here, and that's why we need to configure multipath. If we have time at the end, we will also fail one of these NICs and see how the storage keeps working without interruption. Is that more or less clear? Michelle, do you have any questions before we begin? Yes, okay, can you flip back to the one that shows the HA pairs and everything, the other slide? Okay, I just want to make sure we go through the differences again. So the worker storage ODF nodes only have one NIC in our case because we're in the lab. We only have one switch, but our target has two interfaces, and if we have time we get to fail one over. The end result will show one LUN exposed to each matching node, like LUN 0 will go to the first one, just like you have in your diagram. And that's perfect. I just wanted to go through it one more time. Okay, awesome. Yeah. The good thing about the more production-ready layout is that you have two NICs in your workers and you have two switches, so you remove single points of failure; here, the single switch is a single point of failure, and each of the worker NICs is a single point of failure. The only thing that is really highly available is the storage appliance.
But it's good enough for our example, because it's going to give us the chance to configure multipath for both of these paths, for both of these NICs that we have. Okay. Very great. Perfect. So let's go a little bit into the demo. Here I have an OCP cluster: we have three master nodes and three worker nodes. Let me also go into one of these worker nodes, so we can run commands from inside it. What I'm doing here is going into the operating system of one of the worker nodes that you can see here; this is RHCOS, and I have just logged into this one. We will leave this ready, and then we can go from here. The other thing that I have ready, let me just make this a bit bigger, is the iSCSI target server that I showed you; this plays the role of the storage appliance. I just want to show you the configuration we have here. I don't want to go into the details of the iSCSI target configuration; it's really very simple because it's using the iSCSI target support that's available in RHEL, but I won't go too much into the details. The only thing I wanted to mention, so you can see it, is that we have this IQN, which is really a unique identifier for one of our worker nodes, and I will show you in a moment how we can find it on the worker node. It has LUN 0 mapped. For the second worker node we have LUN 1, and on the third one, LUN 2. So, just to show you the configuration, very similar to what we have in the diagram: I'm really mapping this LUN to this server, this one to this one. So we're doing a one-to-one relationship using the unique identifier, which is typical of iSCSI. And if we go here into the worker node we entered before, we have here the unique initiator name. I just want to show you that this name, let's just take the last characters, 7575E, is actually this one here.
So really what we are doing is mapping that LUN to that worker node; that's the relationship we're making there. In any case, I wanted to mention before going on that I have a document in GitHub, which we will share at the end, with all of the steps, even for configuring the iSCSI target server, just in case someone wants to go through the full demo. I have all of the commands and all of the outputs, so if someone wants to give it a test later on, or maybe go a little bit slower through the demo, we are going to share that at the end. Okay, so let me just show you then. Come on. Okay, that font's really small. Well, you know. Yeah, I'm just getting the command; I'm not going to run anything from there. So what I have already done is that all of my worker nodes have already logged in to the iSCSI target, to the iSCSI storage appliance, and they are already seeing the LUNs that are configured. What you can see from this command, which is the iscsiadm command, is that I'm seeing the storage through two different paths, which is what is called the portal here: two different IPs. I am seeing LUN 0, which is what we have configured here, as you can remember; we just mentioned that we have LUN 0. So what we are checking right now is that it's actually working, that the configuration we have done is perfectly fine. An important thing to note is that you can see here that this is called disk sdb, an iSCSI disk, and sdc. The important thing to take away is that the Linux operating system is going to show you each path to the storage as a separate disk. Let me also show it with lsblk: as you can see here, we have sdb and sdc, each of 100 GB. The big mistake is if you go ahead and say, for example, okay, I want to use ODF with sdc.
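For anyone reproducing this step, the login from each worker looks roughly like the following; the portal IPs are placeholders for the two NICs on the target (the real values are in the GitHub document shared at the end):

```shell
# Discover the target through both portals, then log in to both paths.
iscsiadm -m discovery -t sendtargets -p 192.168.100.10
iscsiadm -m discovery -t sendtargets -p 192.168.100.11
iscsiadm -m node --login
# Each path now appears as its own SCSI disk (e.g. sdb and sdc):
iscsiadm -m session -P 3
lsblk
```

The session listing is what shows the same LUN 0 reachable through both portals, which is exactly the duplication that multipath will hide behind one device.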
Then you are not using multipath, because you are going directly through one of the paths only. What we have to do, and why we need multipath, is configure a virtual multipath device on top of the paths that we have. So really we don't want to use sdb, we don't want to use sdc; we want to use the multipath device when we actually configure LSO and ODF. That's the important thing to take into account. That's the part that is a little bit confusing and that can lead you into an error: just going ahead and using these two disks. This disk, if you run a command on it, is exactly the same as this one, because as you can see here, the backend for both of these drives is LUN 0. Each is just one of the two different paths to the same disk. Okay, so what we want to do now, as we just mentioned, is configure multipath, so that we actually have that virtual device-mapper device ready to be consumed by LSO. So let me also show it from here. This is really not specific to CoreOS; this comes from RHEL, from Linux in general. When you want to configure multipath, you use a multipath.conf file. The one I have here is just an example, almost a default example, with user-friendly names enabled. I really won't go into the options, because what normally happens is that the storage provider you are using will recommend the multipath options that you should set in this file. For example, if you are using HP, a certain storage appliance from HP will need one or two different options; you would go to whatever storage provider you have, and they will recommend what to set in this file. That's why we're just using a very basic default; just take into account that you should ask your storage provider what to set in multipath.conf to get the optimal configuration in your case. So, okay, we have the multipath.conf file.
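A minimal sketch of that kind of near-default multipath.conf; the vendor-recommended tuning mentioned above would normally go into an additional `devices` section, which is deliberately left out here:

```shell
# Write a minimal multipath.conf; vendor-specific options would go in a
# "devices" section that your storage provider should supply.
cat > multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    find_multipaths yes
}
EOF
```

With `user_friendly_names` enabled, the virtual devices come up as /dev/mapper/mpatha, mpathb, and so on, instead of WWID-based names.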
How do we configure it on our worker nodes? We have our three OpenShift worker nodes; how do we configure that conf file? From OpenShift 4, the thing that we normally recommend is that you don't make modifications on the operating system directly. So don't go here, for example, into this node and then use vi to modify the file or change whatever, because those changes are going to be lost once you do an upgrade; they're not permanent. There are recommended ways of making modifications to configuration files on the OS of worker nodes in OCP, and that is using what is called a MachineConfig and the Machine Config Operator. A MachineConfig is really an object that is going to take care of implementing whatever changes or whatever files you want to introduce into your operating system. It takes care of having the same configuration files on all of your worker nodes, so there's really no drift between them. If you make a change, it takes care of always having the same configuration on all of them. That's why you have the MachineConfig object and why it's so nice and easy to use. A good thing about using the Machine Config Operator and the MachineConfig object is that it will roll out the changes in a rolling fashion. What I mean by that is that it will go node by node, making the modifications that you have declared in your YAML file. It will take care of cordoning the node, draining all the pods, then making the changes and rebooting that worker node. Then it will make sure that everything is running before going on to the next one. So it really takes care of our pods, and we don't have any application downtime while we make these changes. Are you going to show us how to do that now? Like, we'll actually see how you make these changes.
And I was just going to ask you: don't we need certain things to be specific to each node? Does the Machine Config Operator handle that? Like, one's going to go to LUN 0, one's going to go to LUN 1; they're not 100% identical, right? Okay. Yeah. So the good thing with multipath.conf is that the modifications we have to make are not specific per node. The only configuration we want to implement, as you can see here, is really this one, and it is generic for all of them. If you have node-specific cases, you can also do a MachineConfig per node, but then it loses a little bit of its power, because you have to go around specifying profiles or pools for each node, which is a bit of an inconvenience. But what normally happens is that you use MCPs, MachineConfigPools: you can see here that we have two pools, one for our masters and one for our workers, but you could make more pools; this is configurable. So, for example, you could have a new pool called infra nodes or storage nodes or whatever, and then the changes would only be applied to those nodes. Here we're doing a simplified configuration, and we're only going to apply modifications to the MCP for the worker nodes. Okay. So let me just clear this out, and what I will do, just a second because this takes a while, is apply the file, and then we can cover a little bit of what we have applied. Okay. So we said we have a MachineConfig object, and we are going to apply it only to worker nodes; the role is worker, as we just mentioned. So we are only going to apply these changes to all of the machines that are considered workers. This is the name of the MachineConfig, which I will show you in a moment. And the first thing that we want to add is a file: the multipath.conf file I just showed you a moment ago. How do we pass the contents of the file? Because this is using Ignition.
Ignition is also from CoreOS, and it's the way we deploy configuration to the OCP nodes. It uses base64, so what you're seeing here, this string, is really the multipath.conf file, only in base64. As you can see, this is exactly it; I have just run this command, this part here. That's how the Ignition files are passed. They are working on making this easier, and that improvement will come, so you won't have to be so involved; it has really improved the user experience a lot. But just for this example, know that here is our file, and it's going to be placed on all of the worker nodes at this path. Then we are also enabling these systemd units: we are making sure that if we reboot the CoreOS server, iscsid, which we need for the connection, and also multipathd will run. Okay, so this is our MachineConfig. If we now do `oc get mc`, what we can see here is that the worker nodes have a new rendered config. The Machine Config Operator on each worker node has detected that there's a new hash, a new version, and it has started updating the nodes. As you can see here, the update is taking place. Okay. And it has already done one machine: here you can see that we have three and one is already ready, one already has the new version. And if we do `oc get nodes`, you will see one of them is SchedulingDisabled; this is the one where the updates are happening right now. As we said, it will go one by one doing the upgrade of the different nodes. While this is going on, what we can take care of, just to move forward, is actually installing the LSO operator. This is the UI of the cluster that I just showed you before; let's install the Local Storage operator that we have here. This is version 4.10. This is just the normal procedure for when you would deploy the LSO operator.
This is just the operator itself; we haven't done any configuration, this is just to get ready. Another one that we can also deploy, to move a little bit forward while the updates are going on, is the ODF operator: OpenShift Data Foundation here, and we can also leave this at version 4.10. Let me make this a little bit bigger. We can leave this deploying. We could deploy the operators via the CLI; I'm just following the UI way of deploying them, that's why this is going on here. Back to the machines. So we can see now, okay, we have two done, so we're just missing one. So let me go back to my worker node, to one where the changes have already happened. What I really want to check now is that the multipath configuration is actually working on this node, so let me very quickly run the command, `multipath -ll`, to show the configuration of the multipath device. What we have here is this virtual device-mapper device. That really is the one that we want to use to configure LSO: when you want to reference a drive that comes from Fibre Channel or iSCSI, you always have to use this name. And as you can see, what it has underneath is the two paths to our storage appliance; we can see the two sd drives, because our storage appliance, what we're using, is really active-passive. You can see here that one path is active, the status is active, and the other one is just enabled: it's waiting, so that if the active path fails, multipath will do a failover and the other one will become active. It's ready to act if needed, if we have a failure here. Okay, so this is looking great; we have multipath configured. Let's see if it finishes on all of the nodes, it should be getting there. So the virtual device on each node is going to have the same name, even though they're involving different LUNs?
mpatha, okay. Yeah, and as you saw in the configuration file, we used user_friendly_names. If you go with user_friendly_names, then the devices are going to be called mpatha, mpathb, and so on, in order, as you add more drives. Okay, so let's just check the MCPs, the machine config pools, just to mention it; this is the one that we have deployed. What I wanted to check is that everything is updated. As you can see here, the three worker nodes have been updated; everything is looking fine, multipath is in place. What we can do now is actually deploy LSO. So let me move to the LSO namespace and see if our operator is okay. We can see here that, look, everything is looking good. The next thing that we need to do is label our nodes, and I will explain in a moment why; let me just get them. You thought we were already done... I mean, we had the worker label. You're going to give them an additional label? Okay. Yeah, let me run it here. Here I'm running the commands. So this was the label, and I'm sure this will make sense now; I want to explain it. What we are adding is the openshift-storage label. If you remember, when you deploy through the UI, this gets done automatically for you; but if you're going through the CLI, you really have to apply the label yourself. It tells ODF which worker nodes are going to be your openshift-storage nodes. That's really what we are doing now with this label: we are not just making them workers, we are saying, okay, I want to use the nodes with this label to deploy ODF. And here we can see that all three of our worker nodes now have this label. Now we can also do oc get nodes filtered by the label that we have just set, and we should have three. So as you can see, three worker nodes now have this label. Okay, so what we want to do now is use a LocalVolume YAML.
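The labeling step can be sketched like this. The node names are hypothetical (list yours with `oc get nodes`), and the guard means the snippet only talks to a cluster when a logged-in `oc` is available:

```shell
# Label the worker nodes that should host ODF. Through the UI this label is
# applied for you; via the CLI you apply it yourself.
LABEL="cluster.ocs.openshift.io/openshift-storage="

if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  # worker-0..2 are hypothetical node names.
  for node in worker-0 worker-1 worker-2; do
    oc label node "$node" "$LABEL" --overwrite
  done
  # All three workers should now show up under the label:
  oc get nodes -l "$LABEL"
else
  echo "no logged-in oc CLI; run this against a live cluster"
fi
```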
What I wanted to mention here is that when deploying ODF with LSO in a normal deployment, when you're not using multipath — maybe you're on a bare-metal node or on AWS or whatever — you can just go through the UI, and the ODF UI will really guide you through: it will install the LSO operator for you and then do all of the LSO configuration for you. The thing to take into account is that when you use multipath, you can't use that standard approach; you can't use the UI. If you want to use multipath, you have to use the LocalVolume object, and I'll show you that LocalVolume here. Normally what you get with LSO is that you just tell it, okay, go to these three nodes, or whatever nodes, scan all the storage that you have, and then you set filters. For example: I want to use all of the drives bigger than a certain number of gigs, or I want to use all of the drives of a certain model from Hitachi, or whatever. It really does a lot of the work for you, because you don't have to go disk by disk saying, I want to use this drive or that drive; you just set a generic filter and it makes the deployment easier. That's when you go through the UI or when you use the discovery approach that is the default now in LSO. So just so I'm clear: you're saying that, as I remember, LSO will go and do a little discovery, and then you put a filter on that discovery to choose which drives you want. But in this case, because we have mpatha, which is virtual, we have to specify it; it won't discover it on its own. That's right, that's right. The current implementation of LSO — this will change in the future — is not able to discover multipath devices; it doesn't understand them, to put it simply. So really, we have to go and use this LocalVolume object and give it the specific device, so it's able to consume it.
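For comparison, the discovery-and-filter approach Daniel describes (which does not work with multipath devices today) looks roughly like this LocalVolumeSet; the object name and the 100Gi size filter are illustrative, not from the demo:

```yaml
# Sketch of the filter-based (discovery) approach -- NOT usable with multipath.
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: local-disks
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: Exists
  storageClassName: localblock
  volumeMode: Block
  deviceInclusionSpec:
    deviceTypes:
      - disk
    minSize: 100Gi        # generic filter: "drives of at least this size"
```

With multipath you skip this object entirely and name the device explicitly in a LocalVolume instead.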
Otherwise, the discovery side of things right now just doesn't understand multipath devices, no? So that's the issue, why we can't use it; in the future it will be implemented. But right now, if you want to go down this path of using storage appliances with multipath, you have to create this LocalVolume. So let me just run it, and I will explain in a second what we have in the file. Okay, in this file, what we are really setting is the LocalVolume object. What I'm telling it is: on all of the nodes that have this label — if you remember, a moment ago we set this label on our worker nodes — consume, or use for LSO, the device mpatha, our multipath device. As we explained, that's the one we want to use for ODF. And then we are telling it: with all of these devices that you find, create a storage class called localblock. So if everything runs correctly, when this LocalVolume process finishes, we will have a storage class called localblock that we can use in our ODF deployment. So let's see if we have... okay, this is perfect. Let me also run with wide output so I can explain a hint here. As you can see, we have a diskmaker pod for each of the different worker nodes. Each diskmaker pod really goes to its node and does the configuration for that multipath device. If I now list the persistent volumes — let me just clear here so we can see it better — as you can see, we have three persistent volumes available, and they are 100 gigs, which, if you remember, is the size of the LUNs that come from the storage appliance. And just to make sure that we are actually using the multipath device — you know that it's very important — what you can do is, for example, here, let's see if this works... yes: what you can see here is that the path used for this persistent volume is our mpatha, no?
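The LocalVolume object described above is shaped roughly like this; the object name is illustrative, while the node label and the /dev/mapper/mpatha device path are the ones used in the demo:

```yaml
# LocalVolume pointing LSO at the explicit multipath device on labeled nodes.
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: Exists
  storageClassDevices:
    - storageClassName: localblock   # storage class created for ODF to consume
      volumeMode: Block
      devicePaths:
        - /dev/mapper/mpatha         # the virtual multipath device, not sda/sdb
```

Applying it (for example with `oc apply -f`) is what spawns the per-node diskmaker pods and, once they finish, the three 100Gi PVs in the localblock storage class.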
So that's perfect, exactly what we needed. And finally, we can just list the storage classes that we have, and we now have a new storage class called localblock, the one that we want to pass to ODF. So let's go into ODF and now just follow the deployment. Having done the part I just showed, we have already done the hard work. Now we really just have to go into our Installed Operators. If you remember, I already installed the ODF operator, and now we want to create a StorageSystem. Here, we're not going to do any modifications; we're going to do a full deployment. The difference is that the storage class that we are going to use is localblock. So here I'm telling ODF: okay, go ahead and use my LSO devices, which are actually our multipath devices from our storage appliance. And the rest is just the same. Here it shows a raw capacity of 300 gigs, which makes sense because we have three drives of 100 gigs, as I mentioned before. So just go ahead. It also asks if we want to use encryption. This is just to show that it actually deploys, so we can go ahead with all of the defaults, and now ODF is going to get deployed for us, using the persistent volumes that we have created. This is normal when deploying ODF. Once it comes up, we can refresh the browser, and what I'm going to do is go into the namespace where it gets deployed, openshift-storage, and let's see if this is going ahead. Okay, so as you can see, this is ODF getting deployed. What should happen here is that once the OSDs — the ones that consume our drives — come up, we should see that the persistent volume claims are no longer available but are being used by ODF. This may take a while. Okay, yeah, it's still deploying. The other thing that we can do while this gets deployed is take a look at the test application that I wanted to run.
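A rough CLI equivalent of what the UI wizard creates is a StorageCluster consuming the localblock class. Treat this as a sketch rather than a verbatim manifest — the device-set name is made up, and the values simply mirror the demo (three replicas of 100Gi, giving the 300 gigs of raw capacity):

```yaml
# Sketch of a StorageCluster backed by the LSO-provided localblock class.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: ocs-deviceset-localblock   # illustrative name
      count: 1
      replica: 3                       # one OSD per labeled worker node
      dataPVCTemplate:
        spec:
          storageClassName: localblock # the class created by the LocalVolume
          accessModes: [ReadWriteOnce]
          volumeMode: Block
          resources:
            requests:
              storage: 100Gi           # the size of each appliance LUN
```

Once this reconciles, the OSD pods bind the three localblock PVs, which is the moment the PVCs stop showing as Available.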
And another thing I wanted to check — I don't know how we're doing on time, Michelle. Oh, we have like 10 minutes, but... Yeah. So I won't do the full demo with the application and everything, because it's going to be too long. Just to mention what I wanted to do: we have a test application — we will also leave the link to the repo so people can use it if they like — and what that application does is perform IO every second, so you can very easily see if you lose IO for a certain number of seconds. And then what I wanted to do is bring down one of the iSCSI paths and show how the IO was not interrupted, let's say. But we're not going to have time, and maybe it's too long. The only thing is, I just want to wait until ODF is deployed so we can see that the actual PVs have been claimed. Okay. So while this is deploying — just to recap at a high level, because you mentioned that this is a question you get from customers about how to configure it with multipath: from the ODF point of view, it's really just another LSO configuration, right? Nothing really changed at that high level. Underneath LSO, you have to have multipath configured. And a confusing point, at least when you were talking before, is that when you go into your worker nodes, you actually see what is really the same disk show up twice, because you've got two paths to it. You have to not use those, and instead create the multipath disk properly. And you are doing that through the MachineConfig so that it gets applied the same way across each worker node. And we just went through the detailed steps to get there. But I'm just thinking, as a customer thinking about this: if that's how you come in, ODF is basically the same; you're just going to choose the right storage class.
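The test application itself isn't shown here, but its idea can be sketched in a few lines of shell: append a timestamp every second, so a gap in the timestamps reveals any window of lost IO during a failover. The output path and the loop count are illustrative — the real test would loop indefinitely against a file on an ODF-backed volume:

```shell
# Heartbeat writer: one timestamp per second; gaps expose lost-IO windows.
OUT="${OUT:-$(mktemp)}"       # in the real test: a file on the ODF volume
for i in 1 2 3; do            # the real test loops forever
  date +%s >> "$OUT"
  sleep 1
done
echo "wrote $(wc -l < "$OUT") heartbeats to $OUT"
```

After failing a path you would inspect the file: consecutive timestamps mean IO was never interrupted.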
Underneath that, you've got to make sure LSO is using the virtual device, that is, the multipath device. And underneath that, you want to make sure your machine config pool has been configured properly, so that you are using the correct disks and the node sees them as one virtual disk. So, okay, multiple steps. Multiple steps. Yeah, that's perfect. The only small thing I would add to what you have just said is that you have to take into account that you can't use the UI for deploying LSO; that's the only thing. Know that you have to go with this LocalVolume through the CLI. But otherwise, what you just mentioned is perfect. Do we have this in the training docs, or is there a knowledge base article? Maybe I'll look for that after, and if there is one, I'll post it in the description so that someone can see the steps laid out — like, don't forget, you can't use the UI for LSO in this step, or something like that. I bet we have it as an example. We might. Okay, awesome. Go ahead. There is a knowledge base article being written with these instructions, so customers will also have access to it and can take into account these little details when configuring LSO. In any case, again, the preferred route, if possible, is the CSI provider of the storage appliance. That's going to make it easier, because instead of all of the steps that I showed here today, you would just deploy that operator, the operator from the storage provider would do all the work for you, and then you would just deploy ODF normally, pointing to the storage class that has been created by that operator. And that's really easy. If you are in a case like we just showed here, where you can't use that path, then you would have to follow these steps, which, as we mentioned before, we will have with all of the details in the notes for the show.
Just in case someone wants to go through them, they can help. Okay, so let me show you here: we have our Ceph cluster created successfully and healthy. The final thing that I wanted to do is remove one path. So what I'm going to do here is remove one IP, and I just want to show you how this shows up in multipath. Let me open... this is just the final command. I just want to show you how this now looks in our multipath configuration, while the IO is still fine and everything is still working. If we look here, we had one path that was active and one that was passive. Let me... oh, I forgot to actually apply the change. Now, if we run the CLI again, we have applied the change, and you'll see that one of the paths — I don't know which one of them — should show as failed. As you can see here, one of the paths is now faulty, but the other one is still active, so we've seen the failover. If you try to write to mpatha right now, the application — and let's say Ceph and ODF — will keep working without even noticing that you have had this issue. This shows why it's so important to use multipath: here we have just dropped an interface, but it could be a switch, it could be anything in the data path. And that's why it really is so important to have multipath when you're using Fibre Channel and iSCSI storage appliances. Okay, well, that was fantastic. Wow, lots of detail. Maybe you want to refresh that screen. Oh, here we go. Lots of detail there, and we are definitely going to apply it. I'll put some stuff into the description so that people can see it step by step, maybe a knowledge base article. And we wanted to give the repo that has the application, so you can see writes being done, and then maybe you can fail a path over, and things like that.
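The failover exercise can be sketched like this. The interface name and portal IP below are hypothetical, and the guard keeps the snippet from touching networking unless it is run as root on a node that actually has the multipath tools:

```shell
# Drop one iSCSI path by removing its IP, then watch multipath react.
PORTAL_IP="192.0.2.10/24"   # hypothetical address on one of the two iSCSI subnets
IFACE="ens4"                # hypothetical interface carrying that path

if [ "$(id -u)" -eq 0 ] && command -v multipath >/dev/null 2>&1; then
  ip addr del "$PORTAL_IP" dev "$IFACE"   # drop one path
  sleep 5
  multipath -ll mpatha   # one path should now show 'faulty'; the other stays active
else
  echo "run as root on a node with device-mapper-multipath installed"
fi
```

Restoring the IP (or the switch port, in a real failure) brings the path back, and `multipath -ll` shows it healthy again.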
So that was really wonderful. Thank you so much. As always, a pleasure. Take care, everyone. Bye.