Hi, my name is Juan Manuel Parrella and this is the second demo about the ZTP factory workflow. This time we will go through an edge cluster deployment using OpenShift Pipelines.

Let's do a quick recap of what we did in the last demo. First, we checked the hub cluster, looking for any possible issues. Second, we bootstrapped the OpenShift Pipelines operator and all the artifacts necessary to perform the ZTP flow. Then we executed the pipelines on top of the hub, going through the stages that configure the hub to be capable of deploying edge clusters. And last, we checked that everything was working fine: the hub cluster, ZTP, the internal registry, and so on. So the plan for today's demo is basically to go through all the stages of the edge cluster deployment phase.

OK, what we will do right now is create some virtual machines to host the edge cluster. For now, we will use the kcli command to create four empty machines, each with four disks, 64 GB of RAM, and 24 virtual CPUs. We have a script that does this for us, inside the hub folder. This script also creates the DNS entries for the spoke cluster, which are basically the registry routes, the apps routes, the API and api-int entries, and so on. After that, we can see the virtual machines that are already created here; they are shut down and ready to be provisioned by the pipeline.

Let's take a look at the edge cluster file. It's the same one we used before, but this time it has a different shape: now we have filled in the spoke section with the node details (a minimal sketch of this file appears a bit further down). Here we can see each of the nodes that are part of the edge cluster and, for each of them: the external interface name that will be connected to the factory network, including its MAC address; the internal interface name that will be connected to the internal switch, also with its MAC address; the BMC URL, which starts with redfish, with the proper credentials, which is where the operating system will be deployed; and finally the storage disk that will be used to create the storage cluster with OCS.

Now we can trigger the pipeline execution command to deploy the edge cluster. The command is mostly the same as before, but the pipeline name is different. Once the pipeline is triggered, we can follow its PipelineRun in the browser. Inside this PipelineRun we can see the different stages. The first one validates that the hub cluster is working properly and also verifies the DNS entries, asking the DNS server for the API and the ingress of the edge cluster.

Then the edge deployment starts, initially rendering the custom resources for ACM and the assisted installer based on the spokes.yaml file you provided as a parameter. Once the edge deployment is running, you may see some log entries mentioning things like "user action required"; no worries about that, you don't need to do anything, just wait until it finishes. We will fast-forward through this part, since the edge deployment takes approximately 30 minutes and goes through some edge cluster statuses that are not relevant for this demo.

OK, it looks like the edge cluster deployment has finished and the ICSP stage has started. Let's explain what this is for. With the edge cluster already accessible, we create an ImageContentSourcePolicy and a CatalogSource pointing the image content sources to the hub registry. With those changes, the spoke cluster will be able to deploy the rest of the components because, remember, it is fully disconnected.
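To make the node details from the edge cluster file concrete, here is a minimal sketch of what the spoke section of the spokes.yaml file can look like. The key names (nic_ext_dhcp, bmc_url, storage_disk, and so on) and all the values are illustrative assumptions for this demo writeup, not the exact schema used by the pipeline.

```yaml
# Illustrative spokes.yaml fragment; key names and values are assumptions.
spokes:
  - spoke0:
      master0:
        nic_ext_dhcp: enp1s0                # external interface (factory network)
        mac_ext_dhcp: "aa:bb:cc:dd:ee:10"
        nic_int_static: enp2s0              # internal interface (internal switch)
        mac_int_static: "aa:bb:cc:dd:ee:11"
        bmc_url: "redfish-virtualmedia://192.168.150.10/redfish/v1/Systems/1"
        bmc_user: "admin"
        bmc_pass: "password"
        storage_disk:                       # disks later consumed for the storage cluster
          - /dev/vdb
```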
After that comes an important stage in the pipeline, where the MetalLB and NMState operators are deployed. This stage allows the factory to access the API and ingress of the spoke cluster through a standard interface. NMState loads a profile per master node that configures the external interface with DHCP and automatic DNS, and then MetalLB is deployed, creating two kinds of custom resources: an address pool to expose the external addresses, and a service for each of the API and the ingress so they are accessible from outside the cluster (a minimal sketch of these resources follows at the end of this demo). One of the steps that comes after the MetalLB deployment checks the availability of the API and the ingress from external addresses, so we make sure this step finishes correctly.

After that comes the deployment of the Local Storage Operator and the OCS pieces that are necessary for the Quay deployment and the post-installation steps. The Local Storage Operator consumes the disks mentioned in the spokes.yaml file and turns them into PVs using a custom resource called LocalVolume (also sketched at the end). Then OpenShift Container Storage consumes the PVs generated by the Local Storage Operator to deploy the full OCS stack, forming our storage cluster.

Perfect, now it's time to deploy Quay and perform the container image sync between the hub registry and the edge cluster, including the OpenShift and OLM images. After that, we apply the ImageContentSourcePolicies and the CatalogSource that point to the edge internal registry, making this edge cluster autonomous. Now that the Quay registry has been successfully deployed, the OpenShift and OLM container image synchronization starts. It takes more or less one hour, depending on how many operators we have in the pipeline.

OK, it looks like the OLM sync has finished, and now it's time to update the pull secret and also, as I mentioned before, make the spoke cluster autonomous. Another task running in a parallel process is the deployment of the ZTP factory workflow configuration UI. This one will help you configure the edge cluster once it has been relocated. Once we have set up the internal registry and have the ICSP and CatalogSource in place, we deploy the worker node into our edge cluster. This worker node joins the cluster as a typical OpenShift node.

Hope you enjoyed the demo, and thanks for watching.
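As referenced above, here is a minimal sketch of the MetalLB resources for the API endpoint. The names, namespaces, and addresses are assumptions for this sketch; depending on the MetalLB version, the pool CRD may be AddressPool or IPAddressPool, and the ingress would get an analogous service on ports 80 and 443.

```yaml
# Illustrative MetalLB resources; names, namespace, and addresses are assumptions.
apiVersion: metallb.io/v1beta1
kind: AddressPool
metadata:
  name: api-public-ips
  namespace: metallb
spec:
  protocol: layer2
  addresses:
    - 192.168.150.200/32              # external IP that fronts the spoke API
---
apiVersion: v1
kind: Service
metadata:
  name: metallb-api
  namespace: openshift-kube-apiserver
  annotations:
    metallb.universe.tf/address-pool: api-public-ips
spec:
  type: LoadBalancer                  # MetalLB assigns the pool address to this service
  ports:
    - name: https
      port: 6443
      targetPort: 6443
      protocol: TCP
  selector:
    app: openshift-kube-apiserver
```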
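And here is a minimal sketch of the LocalVolume custom resource the Local Storage Operator uses to turn the disks listed in spokes.yaml into PVs; the resource name, storage class, and device path are assumptions.

```yaml
# Illustrative LocalVolume; the Local Storage Operator creates one PV
# per matching device, which OCS then consumes for the storage cluster.
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: ocs-disks
  namespace: openshift-local-storage
spec:
  storageClassDevices:
    - storageClassName: localblock    # storage class that OCS will consume
      volumeMode: Block
      devicePaths:
        - /dev/vdb                    # should match the storage_disk entries
```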