This demo is going to demonstrate how to install and configure the Koku metrics operator in a disconnected network. I'm also going to show how to download the reports from a persistent volume claim and upload those to cloud.redhat.com. So what we're looking at here is some Operator Lifecycle Manager documentation on restricted networks. This gives a good overview of how to install an operator on a disconnected cluster. This first section gives you an understanding of the different operator catalogs, and the Koku metrics operator is found within the community operators catalog, so when working through this document, you'll want to use the community operator index. After disabling the default OperatorHub sources, your OperatorHub should look pretty much like this: there should be nothing here. So if you come back to this page, we recommend pruning your index image, because there are quite a few operators in the community repo and you don't necessarily want to build your mirror catalog using all of those operators. When you're pruning, you'll want to make sure you're pointed at the correct community operator index, and for the packages to prune, you'll want to add koku-metrics-operator. The target will be your mirror registry. Once you prune your index image, you'll then want to mirror the operator catalog; here are all the different steps for doing that. And then below that, we get to creating a catalog from an index image. Here they give you a catalog source that needs to be created, and it needs to point to the mirror registry that contains the operator that you want to install. So I'm going to go ahead and create this catalog source in my cluster and make sure it was created correctly. If I refresh this page, I now see the Koku metrics operator. If I click on this, there's some documentation listed here. A good section to read is the limitations and prerequisites, especially the storage configuration section.
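As a rough sketch of the pruning and catalog source steps described above (the mirror registry host, image tag, and catalog name below are placeholder assumptions, not values from this demo):

```shell
# Prune the community operator index down to just koku-metrics-operator.
# The index tag (v4.9) and mirror host are placeholder assumptions.
opm index prune \
  -f registry.redhat.io/redhat/community-operator-index:v4.9 \
  -p koku-metrics-operator \
  -t mirror.example.com:5000/community-operator-index:v4.9

# Push the pruned index to the mirror registry
podman push mirror.example.com:5000/community-operator-index:v4.9

# CatalogSource pointing at the pruned index in the mirror registry
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: mirror.example.com:5000/community-operator-index:v4.9
  displayName: My Operator Catalog
EOF
```

Between pruning and creating the CatalogSource, you would also mirror the operator's images with `oc adm catalog mirror`, per the steps in the documentation shown here.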
And then further down below, there is a section dedicated to using the operator on a restricted network. So I'm going to go ahead and install the operator. I'm going to leave everything here the same. The operator needs to be installed within the koku-metrics-operator namespace; I don't have it created, so I'm going to let OLM do it for us. So I'm going to click install. OK, so after a few moments, it installs, and then we'll click on Installed Operators. We now have the koku-metrics-operator namespace, and you'll see the operator installed here. Next we need to configure the operator. So I'm going to go ahead and click on the operator itself, and I'm going to create an instance of the KokuMetricsConfig. I'm going to scroll down real quick to the documentation just to give a quick overview of how this needs to be configured. The first thing that we'll want to do is configure storage. The operator is capable of creating its own storage, but again, it's good to read the prerequisites section first before you let the operator do this. Essentially, the operator is going to create a persistent volume claim that is similar to this, except it'll just use the default storage class name, and the name will be different; it's listed above if you want to know which one it is. The next step is to specify the desired number of reports to keep. The default value is 30 reports, which equates to approximately one week's worth of data if all the other settings remain the same. One other key thing to change here is the upload toggle: this needs to be set to false, or else the operator will try to upload reports to cloud.redhat.com, which of course will fail if you're in a restricted network. Okay, so we're going to go ahead and create a KokuMetricsConfig, and I'm going to change this in the YAML view. I'm going to leave this max_reports_to_store at 30.
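The console install above can equivalently be done by letting OLM create the namespace and subscription from manifests. A minimal sketch, assuming the catalog source is named `my-operator-catalog` as in the earlier step, and assuming the channel name (check what the catalog actually advertises):

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: koku-metrics-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: koku-metrics-operator
  namespace: koku-metrics-operator
spec:
  targetNamespaces:
    - koku-metrics-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: koku-metrics-operator
  namespace: koku-metrics-operator
spec:
  channel: alpha            # assumed channel name; verify in the catalog
  name: koku-metrics-operator
  source: my-operator-catalog
  sourceNamespace: openshift-marketplace
EOF
```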
A good thing to do would be to remove this whole source section and just replace it with empty brackets; the little warning here is fine to ignore. And set the upload toggle to false. Okay, so after you do that, click create. Now that it's been created, I'm going to come in here, click on YAML view, and scroll down to the status section, which may take a minute to show up. Okay, so now I reload, scroll down, and see the status section, and it looks good. If we look at the packaging section, we'll see that the last successful packaging just occurred. There is currently one report in storage. This is because when the operator first spins up, it's going to collect all the metrics for the last hour and then create this package here. So this tells you the full path to the package that's in storage, and as more reports are added, you'll see all of them listed here. The Prometheus section is a good one to look at: it gives you information about when the last query was started and when it was successful. Another thing to look at is the persistent volume claim section, which tells you exactly what persistent volume claim is in use by the operator. Okay, so everything here looks good. The next thing you'll want to do is retrieve all the reports from the PVC that you want to upload. If we go back to the installed operator, we can scroll down to the restricted network usage section, and below we have the steps for downloading reports from the operator, including from the PVC. One thing that we have listed here is a pod that you can spin up; it'll just run busybox, and this will give us shell access to the PVC itself. So we're going to go ahead and create this pod. Okay, and within Workloads, Pods, we'll see that we now have this new pod spinning up called volume-shell. Now what we want to do is copy the reports from the PVC to a local location, and we can do this using this oc rsync command.
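Put together, the YAML-view edits described above amount to a KokuMetricsConfig roughly like this. The API version and snake_case field names are taken from the community operator's documentation; treat this as a sketch to check against your cluster's CRD:

```shell
cat <<EOF | oc apply -f -
apiVersion: koku-metrics-cfg.openshift.io/v1beta1
kind: KokuMetricsConfig
metadata:
  name: kokumetricscfg
  namespace: koku-metrics-operator
spec:
  packaging:
    max_reports_to_store: 30   # default; roughly one week of data
  source: {}                   # replaced with empty brackets, per the demo
  upload:
    upload_toggle: false       # required on a disconnected cluster
EOF
```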
So I'm going to copy that, and I'm just going to save it to a local reports directory. What this is going to do is copy all the files that are within the upload directory. Okay, so this will give you a warning that can be ignored, as long as you check the upload folder that you just downloaded to make sure that the report is there. So just to confirm, the report that was within the PVC is now local. Now that you have this file, you'll want to remove what is in the PVC. We can do that by creating a remote shell, and then we're just going to remove the file that's within the upload directory. The command that's written here will remove everything that's in there, but it would be good to make sure that you're only removing the files that you have stored locally. Just to make sure, we're going to look in the tmp directory and confirm it was cleaned out. Sure enough, the report is now gone. Okay, so now that we have these reports locally, we can exit the shell. One thing you can do is leave this pod here; it's not really going to do anything. Or you can just remove it, but the next time that you need to gather the reports, you'll need to spin it back up again. Okay, so with that, what we need to do is create a source in cloud.redhat.com. So real quick, log in to your account. Okay, I'm going to click on the Settings button, then Sources, Red Hat sources, and I'm going to add a source and give it a name for this demo. Next, I'm going to select OpenShift Container Platform, select Cost Management, click next, and then we'll need the cluster ID. I'll copy that from the cluster; you can get this from the overview page. Paste that in here, next, and then add. All right, so now that we've successfully created our source, we can upload our reports to cloud.redhat.com. So if we go back to the installed operators, look at the operator documentation here.
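The copy-and-clean-up sequence above might look like this in a terminal, assuming the volume-shell pod mounts the PVC at the operator's documented `/tmp/koku-metrics-operator-data` path:

```shell
# Copy the packaged reports from the PVC to a local directory
oc rsync volume-shell:/tmp/koku-metrics-operator-data/upload local-reports/

# Verify the report made it down before deleting anything
ls local-reports/upload/

# Open a remote shell in the pod and remove only what is stored locally
oc rsh volume-shell
rm /tmp/koku-metrics-operator-data/upload/*.tar.gz
ls /tmp/koku-metrics-operator-data/upload/   # confirm it's cleaned out
exit
```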
I'm going to scroll down to the very bottom, and we have this curl command listed here. What we'll want to do is change directory to where the report is stored. Okay, so I'm going to copy this curl command and replace the file name with the actual file name, including the file extension. The username and password correspond to your cloud.redhat.com credentials. Okay, so if you look at the output here, the upload completed and was accepted. So that is how you install an operator in a disconnected environment, gather the reports, and upload them to cost management.
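For reference, the upload command is along these lines. `FILE_NAME`, `USERNAME`, and `PASSWORD` are placeholders you substitute yourself, and the content type shown is the one the cost management ingress expects per the operator docs; verify against the version of the docs you're reading:

```shell
# Run from the directory containing the downloaded report
curl -vvvv \
  -F "file=@FILE_NAME.tar.gz;type=application/vnd.redhat.hccm.tar+tgz" \
  https://cloud.redhat.com/api/ingress/v1/upload \
  -u USERNAME:PASSWORD
```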