Hi, my name is Juan Manuel Parrella, and I want to show you what kind of project the ZTP Factory Workflow is. The ZTP Factory Workflow basically covers the metal enclosure that a hardware factory will prepare for the final customer, with OpenShift embedded, using a ZTP pipeline. ZTP stands for Zero Touch Provisioning. This pipeline is split into two parts: the first one ensures that your hub cluster contains the proper elements to deploy some edge clusters, and the second part is the edge cluster deployment itself.

So let's get started with a bit of context. To introduce the ZTP Factory Workflow, we need to look at this logical diagram. On the factory side, we need to prepare some prerequisites: an OpenShift cluster already deployed that we will use as the hub cluster; some persistent volumes already set up on this hub cluster; DNS configuration to reach the API and the ingress of the hub cluster, plus those same two entries for every edge cluster we will deploy; enough DHCP addresses to host however many edge clusters we want to deploy; and an NTP server reachable by all of the clusters deployed in the factory.

The first thing we will do is check the status of the hub cluster and ensure all the needed components are present. We need to check the nodes, the cluster operators, the cluster version, and also whether there are enough PVs to work with the ZTP Factory Workflow (example commands for these checks are sketched below).

Another thing you need to prepare is the spokes.yaml file, which is the definition of the ZTP configuration plus the edge cluster node definitions, and where we will set the versions of OpenShift, ACM and OCS. In this part of the workflow we only care about the config section, so it is okay for the spokes part to be empty for now (a sketch of this file appears below).

To bootstrap the ZTP Factory Workflow, we will execute the bash script you see on the screen. This process will clone the ZTP Factory Workflow repository, download the necessary binaries in case you don't have them on your machine, create the proper permissions on the hub cluster to execute the commands needed to complete the pipeline flow, deploy the OpenShift Pipelines operator from the OLM catalog, and deploy the tasks and pipelines associated with the ZTP flow. This bootstrap script can be executed as many times as you want; no worries about re-running it if you see any failure. As you can see on the right side, you will have the Pipelines section in the OpenShift console, so you can control the execution via browser or CLI. Among the permissions created, we also add a new namespace called spoke-deployer, where we will store all the artifacts for OpenShift Pipelines, so make sure you are set on that project when you are looking at the OpenShift console.

Okay, in this session we will cover the hub pipeline execution, all done via CLI. As you can see on the screen, the command has multiple arguments, so let's explain each of them. The first one is tkn pipeline start, which is the action we want to execute against the hub cluster, with -n indicating the namespace where it will be executed. Then we need to pass our edge cluster file to the ZTP Factory Workflow pipelines, so we execute a cat command over it. Next, we pass the location of our kubeconfig as a parameter. Then we set our pipeline's workspace, named ztp, backed by a previously created PVC called ztp-pvc. The two last arguments are a timeout and a flag that tells the pipeline to use the default values for the remaining parameters.
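For reference, here is a minimal sketch of the hub cluster checks described above, using standard oc commands against the hub kubeconfig; the kubeconfig path is illustrative and the exact checks the pipeline performs may differ.

    # Verify the hub cluster is healthy before starting the ZTP pipeline
    export KUBECONFIG=/path/to/hub-kubeconfig   # illustrative path
    oc get nodes                                # all nodes should be Ready
    oc get clusterversion                       # the cluster version should be Available
    oc get clusteroperators                     # no operator should be Degraded
    oc get pv                                   # enough persistent volumes for the pipeline workspace and the registry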
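As a rough illustration of the spokes.yaml described above, this is what the file could contain at this stage, with only the config section filled in; the field names and values shown here are assumptions for illustration, not the project's confirmed schema, so check the repository documentation for the real one.

    # spokes.yaml - illustrative sketch, field names are assumptions
    config:
      OC_OCP_VERSION: "4.10"   # OpenShift version to deploy (assumed field name)
      OC_ACM_VERSION: "2.5"    # ACM version (assumed field name)
      OC_OCS_VERSION: "4.10"   # OCS version (assumed field name)
    spokes: []                 # edge cluster definitions; can stay empty for the hub part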
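The bootstrap invocation itself is only shown on screen, so here is a hedged sketch of what such a call typically looks like; the repository organization, name, branch and script path are placeholders, not confirmed values.

    # Placeholders: use the exact repository URL and script name shown in the demo
    export KUBECONFIG=/path/to/hub-kubeconfig
    curl -sL https://raw.githubusercontent.com/<org>/<ztp-factory-repo>/<branch>/bootstrap.sh | bash -s
    # Safe to re-run: the bootstrap is idempotent, so repeat it if any step fails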
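Putting the arguments just described together, the hub pipeline launch looks roughly like the sketch below; the pipeline name deploy-ztp-hub and the parameter names (spokes-config, kubeconfig) are assumptions reconstructed from the narration, so check the repository for the exact spelling.

    # Launch the hub pipeline in the spoke-deployer namespace.
    # Pipeline and parameter names are assumptions:
    #   - the edge cluster definition file is passed inline via cat
    #   - the workspace "ztp" is backed by the pre-created PVC "ztp-pvc"
    tkn pipeline start deploy-ztp-hub \
      -n spoke-deployer \
      -p spokes-config="$(cat /path/to/spokes.yaml)" \
      -p kubeconfig="${KUBECONFIG}" \
      -w name=ztp,claimName=ztp-pvc \
      --timeout 5h \
      --use-param-defaults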
We will follow the execution like this: on the right side we will have the console open, and on the left side we will have the log of the whole pipeline execution, so you can check the pipeline's status and also check the ongoing actions. I will talk about the different stages of this pipeline execution during this fast forward.

In the first stage, we execute some preliminary checks over the binaries and also over the hub cluster. The second one deploys an HTTP server and exposes it as a route. Then the code downloads all the Red Hat CoreOS ISOs; these ISOs will be used by the Assisted Installer during the edge cluster deployment. The next step is the most time-consuming of the pipeline: it deploys a container registry and also handles the OpenShift and OLM container image synchronization. After that, the pipeline deploys the ACM operator and the MultiClusterHub object, which provisions the whole Red Hat Advanced Cluster Management (ACM) product. Then it applies an ImageContentSourcePolicy and a CatalogSource that point to our own registry instead of the internet, in order to work in disconnected mode. The last step is the Assisted Installer pod creation, which is the component that will deploy the edge clusters.

Okay, now that the hub pipeline has finished, we will perform some checks to validate that all is fine. And to finish the demo, you can take a look at the PipelineRun using the CLI. You can check how much time the pipeline has consumed, the parameters submitted to that pipeline, or whether there were any issues with any of the tasks executed.
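As a reference for those validation checks, here is a minimal sketch using the usual ACM and marketplace locations; the namespace and resource names follow common defaults and may differ in the actual deployment.

    # Hedged post-run validation: confirm the hub components deployed by the pipeline
    oc get multiclusterhub -A                        # ACM hub should report a healthy status
    oc get pods -n open-cluster-management           # ACM pods (assumed default namespace)
    oc get imagecontentsourcepolicy                  # mirror policy pointing at the internal registry
    oc get catalogsource -n openshift-marketplace    # catalog source backed by the internal registry
    oc get pods -A | grep -i assisted                # the Assisted Installer / assisted-service pod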
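The PipelineRun inspection at the end can be done with the tkn CLI, for example (the run name is whatever tkn printed when the pipeline was started):

    # List recent runs and describe the one we just executed
    tkn pipelinerun list -n spoke-deployer
    tkn pipelinerun describe <pipelinerun-name> -n spoke-deployer   # shows duration, parameters and task statuses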