host of the Data Services office hour, and I'm back with Daniel Parks. Nice to see you. How are you? Hi, hello, doing okay. So tell me, what are we going to do today? Yeah, so the idea today is to cover a few different topics, but try to put them all together. We're going to go through a little bit of what single node OpenShift is: that's our OpenShift offering with just one node, for when we want to work at the edge and we don't have a lot of room for compute power, where we have a reduced space to put our servers. And then, on top of single node OpenShift, let's say we need storage for our stateful applications. So we have also come up with the logical volume operator, and we're going to show today how to deploy it. It really offers dynamically provisioned storage for single node OpenShift, no? It gives us access to block storage with a CSI driver, the normal experience you would get with PVs and PVCs, offered through this logical volume operator. So this is like micro OpenShift and OpenShift Data Foundation, everything collapsed onto a single node. Did I hear you correctly? Yeah, that's right. Wow. So when this requirement comes up... well, edge covers a lot of things now, edge is almost everything, but this requirement is also very strong on the telco side, where you want that far edge, where you need your radio access network for the 5G networks we are moving toward now, and you have very little space for your compute, for your racks. So you can really work with this reduced footprint on the server side, and you still have your OpenShift experience: you have your API, it's exactly the same. The only thing is that the masters and the workers, everything, is running on a single node. This has been available, I think, since OpenShift 4.9, and it's really, really nice. Now in ODF 4.10 we have, as a tech preview, the logical volume operator, which is really a way of consuming the local storage we have in this single node OpenShift node in an easy way, with our usual PVs and PVCs. And the idea, and this is going to be a little bit long, but adding on top of that: let's say we also have a need for object storage. What we can do, once we have deployed our logical volume operator, is also deploy the multi-cloud gateway standalone deployment. So besides offering dynamic block storage for stateful applications, we can also offer an S3 endpoint where we can work with object storage. And because the multi-cloud gateway standalone is not the full-blown ODF deployment, it uses a small amount of resources, so it can be a good fit when we need object storage. Fantastic. And finally, if we have time after all of that, we will just run an application on top: a demo application that we have, a photo album, where we can upload a couple of photos and see how they get saved through the S3 endpoint and how, in the end, they land in the local storage of our single node OpenShift. Awesome. So, okay, let's get into it. Do you want to show a demo? Do you have slides you want to go through?
Yeah, we can just go into it, and because we have to wait now and then for deployments to finish, we can talk about things as we go along. So I will share my screen, just a moment. Entire screen, share. Okay, hang on. Okay. Great. Can you type something? I just want to check the font size. Yeah, that's a good idea. Fantastic. Is it okay? Yeah. Okay, so today, just to make it a little bit different: in the last show, or the last couple of shows, we spent a lot of time in the terminal and also did some automated deployments using Argo CD and the GitOps approach. Here we're going to go a little bit the other way around and use the dashboard as much as we can, the OpenShift dashboard, and go through the UI to do the configuration and the deployments. So... Can I ask you one question before we begin? Given that we're doing this edge, completely collapsed OpenShift and solving storage with ODF: at this stage, if you were talking to a customer, are there any questions I have to answer now, any design decisions? I know with MCG you can do things like run it on the edge taking advantage of caching, but it still needs a backing store. Are there questions that have to be answered up front? Because OpenShift Data Foundation can be like that: there are upfront design questions you have to answer. Does anything come to mind, or do you want to go through your example and we can talk at each point? Yes, just very briefly: the biggest thing we have to take into account is that with this kind of architecture, everything is a single point of failure, in the sense that, because this is single node OpenShift, the applications we deploy here, all the object storage we're going to consume, all the storage, is local. Well, that's the configuration we're going to use today; it doesn't mean you couldn't use something else as a backing store for the multi-cloud gateway. But in the example we're going to show today, you have to be very clear that on this single node OpenShift, if the node fails, you're going to lose everything, because everything is local, everything is running inside. So you really don't have the HA features you would have running OpenShift with three nodes, or ODF on three nodes. Here you're working without HA, and you have to know that if the node fails, you need some kind of backup, where another system can take over from somewhere else, or whatever DR way of working you have for that far edge, so you can absorb the loss of this single node. That's the biggest thing to take into account. But as we do the deployment, we can talk about other things we also have to consider, for example regarding the resources of the nodes and all of that. Okay. So the only thing I wanted to show here about single node OpenShift is that we already have it deployed, because it takes around 15 to 20 minutes and we don't have time to wait for that. I just wanted to do an oc get node to show you that we have a single node and that its roles are master and worker; it's doing both roles. So we're going to mix, on the same node, our control plane workloads and the actual application workloads, everything mixed up on the same node. And just to get the cluster version: this is OCP 4.10, as we can see here.
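For anyone following along, a quick sketch of the equivalent checks from the terminal (the output comments reflect this demo's cluster):

```
oc get nodes            # a single node carrying both roles: master,worker
oc get clusterversion   # reports the OCP release, 4.10 here
```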
And one final thing I wanted to show: if we do an oc describe node on sno4, which is the name of our node, I just wanted to show the resources we have. If we scroll up a little in the describe output, you get loads of information about your node and all of its resources. Here we can see that in this deployment we're using 16 CPUs, 16 cores, and we have 64 gigs of RAM available. The minimum we support is normally half of this node: eight cores and 32 gigs of RAM is, let's say, the minimum supported to get things working with single node OpenShift. Okay, so that's single node OpenShift. We have it deployed, we're already logged in, everything is working. So, moving on to the logical volume operator. As I said, in ODF 4.10 it's tech preview, and for the time being, in this release, we need to deploy the operator in the openshift-storage namespace, the same one ODF uses. So that's the only thing I'm not going to do from the UI here. I'm just going to show you, one second, a YAML definition; it's just creating a namespace. In the labels we have cluster monitoring enabled, and then we have the name of the actual namespace, or project, that we want to create. Okay, so at this point, your cluster is new. It's new, it's small. You haven't deployed ODF yet. You're creating a namespace, you're going to install the LVM operator, did I say that right? And then we'll deploy ODF after. That's correct. At the moment we don't have anything operator-wise, as you can see here; we don't have any of them, just the Operator Lifecycle Manager, the standard thing that gets deployed. So it's a completely clean cluster, just deployed with OpenShift and nothing else. And the first thing we have to do is create this namespace for the... openshift-storage. Yeah, for the logical volume operator, and it's actually openshift-storage. At the moment this is a requirement, because some of the service accounts used when the operator is deployed are hardcoded to this namespace; that's why we have to use it. In future releases this will be removed, no? But for the time being, it's here. So, okay, we have the namespace created. Now we can do oc project into this namespace. There isn't going to be anything in it, but we can see that we don't have anything. Okay, so this is ready. Let's move into our dashboard. This is the same cluster's dashboard, the UI we have available. And we're just going to follow, more or less, the standard procedure: go to OperatorHub and look for the LVM operator. We can just type LVM here and we get the ODF LVM operator. It's going to be version 4.10.1, and we can just hit install. There's nothing special here: stable channel, version 4.10. Installed namespace: here is where we want to be careful and select openshift-storage, the one we created before. Okay. So let's choose it from the list, openshift-storage, and the update approval is just the standard automatic or manual, whatever we want to use. And now the LVM operator is getting deployed.
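A minimal sketch of the namespace YAML being applied here, assuming the standard OpenShift cluster-monitoring label:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    openshift.io/cluster-monitoring: "true"   # enables cluster monitoring for this project
```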
So while this is getting deployed, what I wanted to mention, speaking about the ODF LVM operator, is that what it really uses underneath is the logical volume manager. LVM stands for logical volume manager, and it has been around for a long, long time on Unix and Linux servers, and it has been used quite a lot. What they have done now with this logical volume operator is use LVM as a backend that can take the local storage of our nodes and then, using logical volumes, slice our volume group into a logical volume for each PV we ask for. So when we say, okay, I want a 10 gig persistent volume, it's going to go ahead, make a logical volume of 10 gigs, and dynamically provision it, so it's served from the local storage. That's a little bit of what the logical volume operator does underneath. Let's see if it's done. Almost, no? This is just the operator getting deployed, and it will take care of creating the containers and the pods. Another thing about the logical volume operator, since it's a tech preview: if you go to the main documentation site, the official Red Hat site, you will see instructions on how to deploy it, but the instructions are for deploying it with ACM. Because, as you know, Advanced Cluster Management, ACM, is really focused on managing huge fleets of clusters, so it fits really well with single node OpenShift and the far edge, where from a single point we can deploy all the clusters we need. And ACM, the advanced cluster manager, works with policies, and we can enforce policies. So what you get in the official documentation is: okay, here you have a policy to deploy and configure the logical volume operator, and then whatever clusters you select are going to get it deployed for you. That's what you have in the official documentation. Here we are going to go through a manual deployment, so we can also cover the different topics we have. Let's see if it's... oh, okay. This is running. Let's check here: if the phase is Succeeded, that's a good sign that the operator has finished deploying. Okay, so let's go back to our UI. Now that we have deployed the operator, we need to deploy our operand inside the operator, and that's the LVMCluster you can see here. So I'm just going to press create instance. And here we have a couple of options to select. This is just the name; I'm going to use lvmcd, but whatever name will be valid. This is the actual name of the LVMCluster, and inside storage we have a thing called device classes. Let's say that the upstream project, which is called TopoLVM and which we'll cover in a moment, has a lot of options you can use, but in the downstream logical volume operator, currently the only thing you can really do here is select the name of the volume group that is going to be created. The name is not really important. The only place it takes on any importance is that the name you set here is also going to be the name of the storage class we use when we request a persistent volume. But it doesn't do anything else. So right now, okay, you're creating the volume group, mcg. Are you going to go back and create another one, just for general use, or...? And I have to say it's been a long time since I've done anything with LVM, so you're going to have to refresh my memory in a while. Yes, yes. The good thing is that it does almost everything for us.
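For reference, the instance being created in that form corresponds to a small custom resource. A minimal sketch, assuming the 4.10 tech-preview API group lvm.topolvm.io/v1alpha1; the name and device class match this demo:

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcd
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: mcg   # the volume group name, which also names the storage class
```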
But speaking about LVM: you have this volume group, which is a grouping of physical volumes, let's say of disks. So you create a volume group and then you attach whatever... You attach everything to it. Okay. The volumes that you want. And once you have those physical volumes, you have logical volumes, which are really the slicing of your volume group into smaller parts. Everything is a logical separation that you have there to get the sizes you need. Okay. But at this point, the only thing we are doing is setting the name of that volume group, nothing else. We don't have to touch anything else. Regarding your question about whether we can have more than one volume group: at the moment we can only have one, no? In this release, the LVM operator in 4.10, the tech preview we spoke about, you can only have one volume group, so you have to use this volume group for everything. And I mean, we're on the edge, how many volume groups do you really need, right? It's one node. Yeah. Create one big volume group, LVM is going to sit on top of that, and everything will just attach to it. Okay. Yeah. So we have the bits and pieces that we need for CSI getting deployed here, and I'll show you that in a moment. But I also wanted to show you, going into the actual node, using oc debug and getting into our worker node, where we have our local storage. I just want to show you, one moment, the disk we are going to use. As you can see here, we have two drives, sda and sdb. sda is already used by the CoreOS deployment of the node that's currently running, and then we have this 200 gig disk free, which you can see here. So another important thing to take into account in this tech preview release: when you deploy the LVMCluster that I just showed you, it's going to go into our single node OpenShift node and use all the drives that are empty. It's going to go looking for all the drives you have, and the ones that are empty are going to be used by the logical volume operator for this volume group we created. In the next release, in ODF 4.11, you are going to be able to choose which drives you want to use for LVM and which you don't. Currently it will use all of them, so that's just another note to take into account.
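A sketch of the node inspection being done here, assuming the node name sno4 from earlier:

```
# open a debug shell on the node and list its block devices
oc debug node/sno4 -- chroot /host lsblk
# sda: the CoreOS system disk, already in use
# sdb: the empty 200G disk the LVM operator will claim
```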
All right, let's see how this is going. Everything needs to be ready and running. Just a brief mention here of what you see with TopoLVM: as I said before, TopoLVM is the upstream project. The upstream project has been going for a while, and there are all kinds of configurations you can do with the logical volume manager, mirrors, a lot of things. For now, with the logical volume operator, what you can do is what we just saw: creating a volume group. If we take a look at our storage classes, we now have a storage class. Before, we didn't have any way of working with dynamically provisioned storage; we didn't have any way of using storage at all. Now we have this storage class available to us. And as I mentioned, the only really important thing about configuring the name of the volume group is that it's actually going to be the name of the storage class, as we just said. So we have it added there. And going back to what you mentioned before, Michelle, regarding not having to touch or know anything about the logical volume manager: now that we are on this node, and just to refresh a little bit, it's true that our logical volume commands are available here and we can just run them. Let me run this. Okay, so, just to restate for people listening: this is upstream, this is TopoLVM, right? And currently it selects all the disks that are free. And in the future it would support someone going in and saying, let's create a mirror at the LVM level and then consume that, right? But not at this time; that's future, beyond the tech preview. So, okay, is that correct? Okay, all right, good to know. So in this ODF 4.10, just as you said, it's a little bit bare bones: everything works, you can use it, and we're going to see now that everything works great, but the options you have are limited, no? A couple of things you're limited on, and that are coming soon: right now, when we create the logical volumes, which is really when we ask for a persistent volume, the logical volume is always thick provisioned. So currently we don't have thin provisioning; as I said, that's coming in the next release, in 4.11. And the biggest thing coming in 4.11 that is not working right now is snapshotting and cloning. So we are going to have CSI snapshots and clones, just like you would with any other CSI driver, also in the next release. So there are a lot of things coming our way, but right now we can use it, it's available to test out as a tech preview, and we're going to see now how it actually gets created. So, things that I wanted to show you here: this is the actual physical node, we are inside the node, and here we can go back to the logical volume commands we've been working with on Linux, and maybe Unix, for a long, long time. This is the volume group list, and here we can see that we have one volume group created; it's 200 gigs in size and it has 200 gigs free, because we haven't used it. Okay. Yeah, and we can also list the physical volumes we have, and we can see that we're using sdb, which we checked before; it belongs to this volume group, volume group mcg, with the same information as before. And finally, we can check which logical volumes have been created, and we can see that currently we don't have any, because we haven't really requested any PVCs, so we don't have any logical volumes. Okay, so with this, the logical volume operator is ready. We could ask for a persistent volume and it would get created for us.
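The listings just walked through map to the classic LVM commands, run inside the node's debug shell; a sketch, with comments reflecting this demo's state:

```
vgs   # volume groups: one VG, ~200G total, ~200G free
pvs   # physical volumes: /dev/sdb assigned to volume group mcg
lvs   # logical volumes: none yet, since no PVCs have been requested
```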
So what we're going to do now is deploy ODF, and really the part of ODF we're going to deploy is the multi-cloud gateway, standalone, just the multi-cloud gateway side of things. So again, we follow more or less what we normally do when we want to deploy an operator: we go in here and type data foundation. Yeah, so here we have it, the OpenShift Data Foundation operator. Just click install. Again, we're going to use version 4.10, and we're using openshift-storage, so it's really the same namespace; there's no issue with it being the same namespace where we installed the logical volume operator. And here everything is left at the defaults; we don't really have to make any modifications or changes. And this is going to deploy our typical ODF operators, the ones we'd get on a normal deployment. This is the whole thing, correct? Yeah, well, the operators, yes. You get the full set of operators, but we are not deploying the operands per se. What we're going to see here is the ODF operator, the Rook operator... You can't get rid of them; at a minimum you need all the operators in place. And then we are going to decide that we only want to deploy a small part of ODF, which is going to be the multi-cloud gateway. Okay. We can take a look here at the pods, just a second, let me clear the screen. And we can see, as I said, the operators getting deployed: the OCS operator, the NooBaa operator, but nothing else. No Ceph is being deployed or anything like that; we're not going to configure it. Okay. Okay. Yeah, so another thing I wanted to mention here is regarding NooBaa and the multi-cloud gateway, just to give a little bit of background. NooBaa is the upstream project behind the multi-cloud gateway, and it's going to provide us an S3 endpoint we can use, an S3-compatible endpoint, we could say. And then we can configure different backing stores. By backing stores, to keep things simple, we can say that NooBaa needs a place to save our objects: we save the objects in buckets, and we need somewhere to actually store them. In this deployment, because we are using single node OpenShift and the logical volume operator, the backing store we're going to use is actually a logical volume that comes from our local storage, provided by our logical volume operator, if that makes sense. So it's all embedded in this system. But if we use MCG this way, we could do a nice combination of things. So you have your LVM backing store here, you can then go on and create, say, federated buckets for information you might need, you'll take advantage of your cache on the edge; there's a bunch of stuff we can actually do with MCG in this situation. That's kind of nice. Can I ask you one question while things are still deploying? When you went through the UI installing the ODF operator, it had a console plugin option, which I don't... do you know what that is? I don't remember seeing that before. So if you happen to remember, it was right before you... so what's the console plugin? Yeah, so, I think also in OpenShift 4.10 there's a new feature, dynamic console plugins, by which you can develop your own plugin and insert it into the dashboard, into the OpenShift dashboard. As you saw when we were deploying (I can't go back now), we have the option of enabling it for OpenShift Data Foundation. And that's why you can see on my current screen that a web console update is available and it's telling us to refresh the web console. What's going to happen when I refresh is that this plugin starts up and then we have it available here. So really, when you go to Storage here, this Data Foundation entry is the custom plugin that has been enabled, the one you were asking about. If you disable it, or you don't accept the enablement, you won't see this here. So it's really providing a dedicated Data Foundation view in our UI. Okay, let's check if this has finished deploying.
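The "has it finished" check that follows is usually done against the operators' CSV phase; a sketch (the CSV names in the output are illustrative):

```
oc get csv -n openshift-storage
# NAME                        PHASE
# odf-operator.v4.10.x        Succeeded
# odf-lvm-operator.v4.10.x    Succeeded
```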
Seems so. Let me just double-check; I normally like to see that the phase is Succeeded for everything that gets deployed here. We just deployed one operator, but let's say it's a big umbrella: underneath, ODF deploys other things, like the NooBaa operator and the CSI add-ons, et cetera. Okay. Okay, so we have the operator ready. Now we need to deploy what I normally call the operand, which is really what the operator provides, the special CRDs, the custom resources. In this case, in ODF, it's called a StorageSystem. So we select create StorageSystem. And here you can see that we have Full deployment, which would be the full-blown ODF deployment with Rook, NooBaa, multi-cloud, everything; or we can select MultiCloud Object Gateway. When we select this option, it's a standalone multi-cloud gateway: we get all the bits related to NooBaa, to MCG, deployed, but nothing else. Nothing else, okay. And then we tell it that the backing storage we're going to use, where we're going to store all of our objects, is the storage class that the logical volume operator created. So everything is going to reside on this node. Okay. And that's really all we have to configure. Then we also have the questions about encryption here. In this case, especially when we're speaking about the multi-cloud gateway, as it says here, we have encryption in transit and at rest, especially if you're using the data buckets, so everything is provided for us. So we can just select next and create the StorageSystem. Okay, we can see this is going to take a little while, but it doesn't take long, because it deploys just the NooBaa bits you can see here. A full ODF deployment can take a while, but this is quite quick. A couple of things I wanted to mention: NooBaa, for it to work, uses a database, a PostgreSQL database, which you can see here: this db is for database and this pg is for Postgres. So, well, let me just clear for a moment and do an oc get pvc so we can see this better. If I do an oc get pvc, you can see that the NooBaa database has already asked our storage class, the logical volume operator storage class we just created, for a PV where it's going to create the database in which all of the metadata for NooBaa and the multi-cloud gateway is going to reside. So if I run the same command I ran before... yeah, you see it, okay. Here we have our logical volume. Nice. It's from this volume group, and as you can see, it's 50 gigs in size; that's what was asked for in the persistent volume claim. And that size... I've never changed the size of the database for NooBaa; is that configurable? I don't think it is, right? At least not out of the box, and I've also never tried to resize this persistent volume. Maybe you can resize it if you're running out of space, but I have never tried. Okay. Let's see if everything is running here. So at least the NooBaa bits are there, but now we also need the backing store. When you are using persistent volumes as a backing store, you get a new pod created here, so we have to wait a while until that gets created. Okay.
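While that pod spins up: under the hood, the standalone choice in the wizard boils down to a StorageCluster that only reconciles the multi-cloud gateway. A rough sketch, assuming the ocs.openshift.io/v1 API; treat the field names as illustrative rather than the exact object the wizard writes:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  multiCloudGateway:
    reconcileStrategy: standalone   # deploy only the NooBaa/MCG components
```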
And while that's getting created, just a quick shout-out for what I have here. We are doing everything through the UI, but in case someone wants to take a look, we will also add to the description of this video how to do everything we're doing here today following the GitOps approach, using Argo CD. So I have a repo, which we will share later, called mcg-standalone. If you do a git clone of that repo, like I have done here, you have the Argo CD Helm chart that helps you bootstrap Argo CD, then another chart for deploying the logical volume operator in an automated fashion using Argo CD, and then the multi-cloud gateway, also as a Helm chart, in case you want to take a look and use it; it will help you deploy. The only thing I wanted to mention about this chart, which is a little bit important, is that the values used here, especially the limits and requests for each of the NooBaa components, are for a demo kind of use case; let's say the resources and the limits have been reduced. So even if you are working, like we currently are, on a single node OpenShift where maybe you don't have a lot of resources, we have lowered the default values that would normally be used for NooBaa. If anybody wants to use it in production or whatever, you can use this same Helm chart; you would just need to increase the limits to whatever you need, because, as you can imagine, if you give more resources to NooBaa, it's going to give you better performance, especially the NooBaa endpoints, which are the ones doing all the heavy lifting. And another quick mention: NooBaa also does horizontal scaling, especially for the NooBaa endpoints. By default, if you have high CPU usage on your NooBaa endpoint pods, NooBaa is able to scale the number of pods horizontally and increase the number of pods available, so you don't lose performance and you don't have a bottleneck there. To make that work it uses an HPA, the Horizontal Pod Autoscaler that is available in OpenShift, and the way that works is that you set a threshold for CPU usage, 80% by default, and if a NooBaa endpoint pod goes higher than 80% for a certain amount of time, new pods get spun up. And there you have a minimum and a maximum number of pods you want created, as you can see here. Because this is, as I mentioned before, only for demo purposes, I have also limited the number of endpoints to one, so we don't want it to increase; but if you wanted the horizontal scaling to work automatically, you could just increase the maximum you have here and you would get everything working.
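For reference, the endpoint scaling bounds described here are exposed on the NooBaa custom resource; a minimal sketch mirroring this demo's one-endpoint cap (assuming the minCount/maxCount fields of the NooBaa CR):

```yaml
# excerpt of a NooBaa resource: endpoint autoscaling bounds
spec:
  endpoints:
    minCount: 1
    maxCount: 1   # raise this to let the HPA scale endpoint pods out under load
```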
Random question. I know in here you're, like, down at the NooBaa endpoint, under memory; obviously that's for the whole thing. Is there a way to impact cache size? Would that be the NooBaa command line? If it exists, I haven't used it. I'm wondering whether, in an edge situation, you may actually want quite a large cache, but I don't know if that's something you can configure at this point. So, off hand, do you know? Yeah, so really, when we are speaking about the cache we have in NooBaa, when we want to use caching and the cache buckets we have, all of that information doesn't go into memory. It actually goes onto the PV, the persistent volume, or whatever you are using as the backing store for the NooBaa cache. So that's really the important side. We could maybe talk about, as you said, the NooBaa database being PostgreSQL, and PostgreSQL has a cache for the database to work better; so increasing the memory you give Postgres is going to increase the size of that cache, and you'll have better performance in the database, too. So that's also related. But if we are speaking about NooBaa caching and the NooBaa cache buckets, the important thing to take into account is the size of the actual bucket you set up as the cache. Okay, okay. Thank you. That's an awesome explanation. So let's see if everything is working here. We can see that we have a new pod... oh, there it is, okay. That is our default backing store, and this looks quite good. So, the final check I always do with NooBaa: this is the NooBaa CLI, which is available for download from Access, from the Red Hat CDN; you can install it using dnf, or you can just go to the website and download it. And it's quite good, because you just run noobaa status and you get a lot of green or red checks, which are quite easy to follow in case you have an issue. It really checks all of the objects it should have and compares them with their ideal state, and as you can see, it's marking all of them in green. This is good news: everything is in place. If something fails, you'll see a really big red cross here, and then you should take a look and see if you have any issues. And you also get a lot of information on whether your backing stores are ready. Okay. As you can see here, we have a persistent volume backing store, and it's ready, so this is good. And our bucket class is also in phase Ready. Everything here is looking really nice. Another thing, if you don't want to download the NooBaa CLI for noobaa status: I also like oc describe noobaa, which is really nice. It will also give you detailed information on the status, which we can see here with the phase Ready, and the version of NooBaa we are using, 5.10. And I really like the getting-started steps you get from the describe; I think they're really helpful. The first one is how to connect to the NooBaa UI, the NooBaa manager. And then we also get an easy way of testing, via an S3 client, whether we can reach the S3 endpoint and whether everything is working. So let's do a quick test with this, just a second. Let me paste it, and I'll walk you through it in a moment. Okay. So the idea here is that you could also use the external route, but what this example gives you is port forwarding: we are mapping port 443 of the NooBaa service to this port on our local computer, our local laptop or whatever. Okay. And once we have that port forwarding in place for the S3 service, the NooBaa service, we actually get the access key and the secret key to authenticate against that S3 endpoint.
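The snippet being pasted follows the getting-started steps from the describe output; a sketch, assuming the default s3 service and noobaa-admin secret names and an arbitrary local port, including the alias discussed next:

```
# map the NooBaa S3 service onto a local port
oc port-forward -n openshift-storage service/s3 10443:443 &

# read the default admin credentials from the noobaa-admin secret
AWS_ACCESS_KEY_ID=$(oc get secret noobaa-admin -n openshift-storage \
  -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
AWS_SECRET_ACCESS_KEY=$(oc get secret noobaa-admin -n openshift-storage \
  -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)

# wrap the standard AWS CLI so "s3" talks to the forwarded endpoint
alias s3='AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3'

s3 ls   # a fresh deployment lists first.bucket
```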
All S3 endpoints normally work with an access key and a secret key to authenticate, and to check whether you have access to a given bucket, to write or to read. So this is our authentication, and by default, when you deploy NooBaa, you get a noobaa-admin secret with a user you can use to see what's in the NooBaa buckets. So all of this is right here. Let's also create an alias; let me just move this out of the way. With the alias, what we are doing is setting, via environment variables, the access key and the secret key we just retrieved, and then we use the AWS CLI, the normal, standard AWS CLI you can download from AWS, with its s3 subcommand. Now, if everything is working, we can just run the s3 alias with ls and it's going to list... first.bucket, yeah, there it is, okay. This first.bucket, as you just mentioned, Michelle, gets created by default. But it's a nice way of checking that everything is okay before we, let's say, hand it out to our developers or start using it; it's an easy check. The bucket is going to be empty now, but at least we can see that we can access the buckets with no errors; everything is working fine. Okay, so we have the multi-cloud gateway, and to finish it off, let's deploy an application, just a moment. Again, this is a GitHub repo, a photo album application for demo use cases, and we will leave the link to the GitHub repo in the description. And this is as simple as doing a bash of demo.sh, and it gets deployed for us. What it does first, if you have anything left over from another installation, is run oc delete to get rid of it, and then it deploys the application. Why I wanted to show this is to speak very quickly, because we don't have that much time, about object buckets and object bucket claims. I think they're a really nice feature. If we go here into our app definition, I just want to talk about this object we have here, an ObjectBucketClaim. In the same way that with persistent storage we have persistent volumes and persistent volume claims, with object storage we have object buckets and object bucket claims when we use ODF and NooBaa. This makes life really easy for developers, for somebody running applications; it makes it really easy to consume the object storage. Normally, when you work with object storage, you have to take into account, or be aware of, two main things: first, the URL of the S3 endpoint, where you have to connect to get to your buckets, and then your credentials to authenticate, which are going to be the access key and the secret key. And the really cool thing about the object bucket claim is that it automatically creates a bucket for this application, and once it has the bucket created, it creates a config map inside the namespace that gives you the S3 endpoint, and it also creates a secret holding the access key and the secret key. That's a really nice feature, because it's then really easy to consume them inside our deployment, or our deployment config, via environment variables or whatever you want to use. It really makes it easier to use object storage.
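An ObjectBucketClaim like the one in the app definition is only a few lines of YAML; a minimal sketch (the claim name and bucket prefix are illustrative; openshift-storage.noobaa.io is the bucket storage class NooBaa exposes):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: photo-album
spec:
  generateBucketName: photo-album   # prefix for the generated bucket name
  storageClassName: openshift-storage.noobaa.io
```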
So, quick question. In this case, if you're developing an edge application here and you use object bucket claims, this is all temporary. You're not concerned about it being backed up anywhere; it's going to go away as soon as the claim goes away. If we have time, it would be nice to talk about how to use the different buckets MCG offers, depending on what you need and how important your data is. But in this case, this is totally throwaway, right? Like the photo album. Okay, just making sure, right? Okay. That's completely right. As we mentioned, everything we are building here today is really a single point of failure, in the sense that if the node goes, all our applications go, all our object storage goes. But that's just for this deployment, because, as you said, we could very easily, in this same scenario, say we have access to AWS S3, use bucket replication, use a different backing store, and then use data buckets and do mirroring. There are many different ways we could make our data available. But this kind of use case is one where, if you lose it, you don't worry: you're able to automatically deploy everything again. And it's more or less, let's say, a read-only mode, where you can tolerate the full failure of what you have here. Okay. Or maybe just demo use cases: you want to do a demo, you need object storage, and you don't have a lot of resources. It's also a very nice way of getting this working. So anyone planning on doing this, using MCG on the edge this way, would have to sit down and think: what do I need, right? What kind of tolerance do I have for failure, and am I pulling in data? I keep thinking of data federation in some cases, where you get to decide where you want to write, whether you write locally or far away, and all the other kinds of buckets NooBaa offers. You have to sit down and figure out what you're going to need from your data first. But this is the disposable version. That's completely right. This is where you can say, okay, I can just recover from, like you said, a central location where you have all the actual data that is being used, and you can recover everything and put it back in place on the far edge. So all of the things you have near that far edge can consume that data, but they're not actually going to write data there, because otherwise, if you lose that single node, it's going to get lost. That's something to take into account. And as you can see here, the photo album has been deployed. Okay. If we do an oc get obc, we can see... this is the object bucket claim, and we can see that it's bound. This means it should have created a new bucket. We were running the command here, so let me clear so we can see it a little better, but we should have a new bucket in... this is the s3 ls. We should now have a new bucket here. As you can see, we now have this new bucket called photo-album, and it's ready to consume. If we have time, we can also grab the route, go back to our browser, and try logging in here to see if it's working.
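As an aside, the wiring the claim generated for the app can be inspected directly; the config map and secret share the claim's name (a sketch, names matching this demo):

```
oc get obc photo-album                               # phase should be Bound
oc get configmap photo-album -o jsonpath='{.data}'   # bucket host, name, port
oc get secret photo-album -o jsonpath='{.data}'      # base64-encoded access/secret keys
```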
Okay. So this is our great application, where we can select a photo. Let me see what I have here. Just try and upload this as an example, and it gets uploaded into this front end, and we can see this kind of thing here. And the way the workflow goes: this is really uploaded through the S3 endpoint, through NooBaa, and NooBaa uses the persistent volume backing store we created with the logical volume operator and stores it there. So very quickly, because we are almost done, I'd just like to show, just a second, how you can actually see that this is stored, how you can check your backing store PV. If we go into... and this is the default backing store pod I mentioned before, the one that gets created when you are working with a persistent volume backing store. What you get here is the logical volume: this is our TopoLVM logical volume, 50 gigs, as you can see, and inside this mount point, the NooBaa storage, you are going to have all the bits and blocks of all the objects you upload to NooBaa. Because this is a data bucket, as we mentioned, it's going to be encrypted, it's going to be deduplicated; there's a lot going on there with NooBaa, so you're not going to be able to access the objects through the file system. But just... Through the endpoint only, okay. Yeah, through the endpoint only. This is just to show how everything is tied together. And it's true that you have a small file here which, if you cat it, shows the number of files you have in the bucket and the size they're using. But anyway, this is just so you have an idea of what a PV backing store looks like when it's actually used by the multi-cloud gateway.
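A sketch of poking at that backing-store pod; the pod name is a placeholder, and the noobaa_storage mount path is an assumption based on what's shown on screen:

```
# find the PV-pool backing store pod, then check its mounted volume
oc get pods -n openshift-storage | grep backing-store
oc rsh -n openshift-storage <backing-store-pod> df -h /noobaa_storage
```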
So NooBaa, MCG on the edge: it's the same, nothing has changed; you wouldn't even know whether you were at the edge or in a massive business. No, no, exactly the same. More than MCG at the edge, this is really MCG standalone, in the sense that we are not deploying all the Ceph bits, or Rook, or anything else; we're just deploying MCG standalone. But no restriction of features, nothing has changed just because we're on the edge; that's nice. Oh, no, you have everything, and as you mentioned before, that's a very good point: maybe you could use a namespace bucket with Data Foundation, where you read from a central location that has all the data and only write to your local storage. You can do all kinds of combinations. So sit down, look at your use case, and I'm sure you're going to find that the multi-cloud gateway, with some part of all the features it has, is going to fit really nicely. You know, maybe that could be a good show, just putting that out there: one where we actually go through the different scenarios and how we use the buckets in different ways, since you have this edge cluster set up so nicely. Yeah, yeah, something we could do. All right, awesome. Okay, we're getting close to time, so is there anything else you wanted to get in here, any more thoughts you want to leave the users with? I think we covered a lot, actually. Yeah, I will just say that if somebody wants to try out and test the logical volume operator, go ahead, knowing that it is tech preview; new bug reports are welcome, as are requests for new features. Everything is welcome, and all the feedback we can get is really helpful in getting this to GA, you know, in the next release. Fantastic. Well, as always, wonderful show. Thank you so much. I really appreciate it, Daniel.