Hi, my name is Rhys Oxnum. I'm the Director of the Customer and Field Engagement Team here at Red Hat. In this short video, I'm going to give you a quick demonstration of Red Hat's OpenShift Assisted Installer and the work we've been doing to integrate with a VMware vSphere-based platform. Over the next few minutes, I'll walk through a deployment of OpenShift on top of vSphere with provider integration, orchestrated by the Assisted Installer. For those who aren't familiar with it, the Assisted Installer is a SaaS-based utility, available via console.redhat.com, that simplifies the provisioning of an OpenShift cluster from the ground up with minimal external dependencies. To keep this video as short as possible, many sections have been heavily sped up.

To get started, navigate to console.redhat.com, select OpenShift, then select the Data Center tab, since the solution is typically suited to on-premise clusters, and you'll see the Assisted Installer option at the top. I'll select Create Cluster, which takes me through to the cluster creation wizard. It first asks for a cluster name, which I'll call QE1, and then a base domain, which matches the internal environment I'll be deploying into. Next I can select the OpenShift version; a range of releases is supported, but I'll go with the latest 4.10 release. I then have the usual options to deploy single-node OpenShift or an ARM-based cluster, but I'll skip over these today as they're not relevant to the cluster I want to deploy. There are also additional options for manually updating the pull secret and enabling encryption for my nodes, but I'll leave these as well. When ready, I'll hit Next to proceed. I'm then presented with the Host Discovery screen, where I can provide further configuration. Provisioning nodes within the Assisted Installer couldn't be easier.
We need only generate an ISO based on our configuration, attach it to our target nodes, and boot them up. They'll then appear in the UI for our use. I'll select the Add Hosts option, and a new pane will show up. I can optionally configure an HTTP proxy, and I'll paste in my SSH key for troubleshooting before choosing between a minimal disk image and a full disk image. For my example, I'll use the full disk image, as I'll import it into the VMware datastore for VM attachment. Now I'll select Generate Discovery ISO, and one will be made available for download. I'll copy the link to this image and jump over to a machine that has access to the VMware datastore. I'll first download the image and then upload it to the datastore using the govc tooling. Once it's on the datastore, I need only attach the ISO to some VMs I've already defined, and when they boot up, they'll appear in the UI ready for use. I'm going to run through this section quickly for the purposes of the video; all I'm doing is attaching the ISO I just uploaded to three VMs I've already defined via the vSphere client, and then powering them on. This is the extent of the work I need to do via the vSphere client to get these machines registered with the Assisted Installer. The machines will boot into a CoreOS live image, start an agent, and allow us to remotely configure, monitor, and deploy the OpenShift cluster with the chosen options. After a few minutes, each of the hosts will report in, and we can view the discovered specifications of the systems, including CPU, memory, disks, and their networking configuration.
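The download, upload, attach, and power-on steps described above can be sketched with the govc CLI roughly as follows. The vCenter connection details, datastore name, VM names, and the `DISCOVERY_ISO_URL` variable are all placeholders standing in for my lab environment and the link copied from the UI:

```shell
# Connection details for vCenter (placeholders for your environment)
export GOVC_URL='https://vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='changeme'
export GOVC_DATASTORE='datastore1'
export GOVC_INSECURE=true   # lab only: skip TLS verification

# Download the discovery ISO using the link copied from the Assisted Installer UI
curl -L -o discovery.iso "$DISCOVERY_ISO_URL"

# Upload it to the datastore, then attach it to each pre-defined VM and power on
govc datastore.upload discovery.iso images/discovery.iso
for vm in qe1-node-0 qe1-node-1 qe1-node-2; do
  # The CD-ROM device label may differ in your environment; list devices
  # with `govc device.ls -vm <name>` to confirm
  govc device.cdrom.insert -vm "$vm" -device cdrom-3000 images/discovery.iso
  govc vm.power -on "$vm"
done
```

Once powered on, each VM boots the discovery image and its agent registers the host back to the Assisted Installer UI.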
The Assisted Installer will have detected that the platform of these machines is VMware, and a previously greyed-out option to enable vSphere integration becomes available. This is intentional: we try to hide implementation details that aren't relevant, based on the insights gathered about the nodes during discovery. Selecting this option sets the cluster up for post-installation integration with the VMware platform, enabling things like node scaling, dynamic storage provisioning, and so on, so I'll select it now. Each node needs a role assignment, and the Assisted Installer has logic to help determine which roles best align with node specifications. But given that we're deploying a three-node compact cluster, where all nodes are both masters and workers, we'll leave this at the Auto-assign option. To proceed, we'll select the Next button. On the following page, we're asked to configure networking for our cluster. The single subnet we have available is automatically detected, and the wizard defaults to DHCP-based virtual IP allocation, which has failed due to DHCP limitations in my environment. So I'll switch to fixed addresses and enter the specific virtual IPs I've already configured in my DNS. I'll skip advanced networking for now, as the defaults suit my environment. If I scroll down, you'll see the UI complaining that my cluster is insufficient, primarily due to a lack of network access to NTP. So I'll add an NTP source for my nodes, which should resolve this issue so we can proceed further. I'll click Next, and it takes me to a review page. The UI shows a couple of warnings: firstly, that NTP synchronization has failed, which will remedy itself during deployment now that we've added the clock source; and secondly, that we need to modify our platform configuration post-installation. This is key to the VMware integration.
Whilst the Assisted Installer sets the cluster up to integrate with vSphere, we need to add login credentials post-deployment, something we deliberately don't ask for during installation for security purposes. The UI provides a link to the documentation describing how to do this, and we'll refer to it once the cluster has been deployed. Other than that, the configuration all looks good to me, so I'll select Install Cluster, and the cluster will be deployed. This process takes a little while depending on environmental limitations, so this section isn't recorded in real time. As always, you'll notice that one of the nodes has been selected as the temporary bootstrap node. The deployment process brings up a temporary two-node cluster on the other two nodes, with the third acting as a standard bootstrap machine to help establish the cluster. Once the two-node cluster is up, the bootstrap machine pivots and is redeployed as the full third cluster node. A detailed cluster event log is available to see exactly what's going on, but I won't show it in this video. Okay, we're almost done with the installation at this point, and the UI has presented me with the OpenShift console URL, the kubeadmin password, and the option to download the kubeconfig. Let's quickly download the kubeconfig and ensure we have CLI access to the cluster, as we'll need it for some of the post-deployment changes required for the VMware integration. Yep, all working, and my three nodes are shown. Let's leave this for a few more minutes to finish up. So there we go, installation is complete. I'll copy the kubeadmin password it provides, select the console URL, and validate that my cluster is up and running. After accepting a couple of self-signed certificates, I can log in with the copied password and verify that things look good. Here you'll see that the platform provider is shown as vSphere, as expected, and that we can see the same three nodes we witnessed via the CLI.
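The CLI check mentioned above looks something like the following; the kubeconfig path is a placeholder for wherever you saved the download:

```shell
# Point oc at the kubeconfig downloaded from the Assisted Installer UI
export KUBECONFIG="$HOME/Downloads/kubeconfig"

# All three nodes should report Ready once the installation settles
oc get nodes

# Optionally, watch the cluster operators converge to Available
oc get clusteroperators
```

Because this is a compact cluster, each node will carry both the master and worker roles in the `oc get nodes` output.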
By default, we also configure a storage class for VMware, but as we've not yet configured authentication, this won't function yet. In addition, there are no MachineSets deployed by default, as we've deployed a compact cluster, but one can be configured post-deployment for automated node scaling should you wish. So let's move on to the post-deployment configuration changes and refer back to the documentation we were linked to earlier. The first thing we're instructed to do is take a copy of the existing vSphere credential definitions that OpenShift knows about, as we'll modify these definitions with the correct credentials and push them back in. The documentation walks you through making a copy, Base64-encoding your actual credentials, applying them to the files, and then replacing the configuration on the server. For the sake of security, and to avoid showing our actual credentials in the video, I've already crafted these files off-screen using the documentation. I'll now run the replace operation the documentation describes, which initially fails due to a version conflict, but succeeds when I retry it. Now I need to force a redeployment of the kube-controller-manager pods so they take the new credentials into consideration. Next up, just like before, I need to modify the cloud provider config, which was also done off-screen to avoid leaking our credentials. The documentation walks you through modifying the copy you initially made with the correct configuration, referencing the vSphere credentials we applied in the previous step, and then you simply apply the configuration, which I'll do now. OpenShift should now be set up to integrate with vSphere, and we can test this functionality by creating a new storage class that refers to a storage domain we know about on the VMware platform. I'll copy this definition and update the datastore parameter to use the NetApp filer in our lab.
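For reference, the off-screen credential flow described above follows the documented pattern sketched below. The `vsphere-creds` secret and `cloud-provider-config` ConfigMap names come from the OpenShift documentation; the storage class name and `netapp-ds` datastore are hypothetical stand-ins for my lab:

```shell
# Export the existing vSphere credentials secret, edit in your
# Base64-encoded vCenter username/password, then push it back
oc get secret vsphere-creds -n kube-system -o yaml > creds.yaml
# ...edit creds.yaml off-screen...
oc replace -f creds.yaml   # may hit a resourceVersion conflict; retry if so

# Force the kube-controller-manager pods to redeploy with the new credentials
oc patch kubecontrollermanager cluster --type=merge \
  -p '{"spec":{"forceRedeploymentReason":"vsphere-creds-'"$(date +%s)"'"}}'

# Likewise export, edit, and re-apply the cloud provider config
oc get cm cloud-provider-config -n openshift-config -o yaml > cloud-provider-config.yaml
# ...edit cloud-provider-config.yaml off-screen...
oc apply -f cloud-provider-config.yaml

# Create a storage class pointing at the NetApp-backed datastore
cat <<'EOF' | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-netapp
provisioner: kubernetes.io/vsphere-volume
parameters:
  datastore: netapp-ds
  diskformat: thin
EOF
```

The `kubernetes.io/vsphere-volume` provisioner is the in-tree vSphere driver; its `datastore` parameter is the one I update to point at our NetApp filer.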
Now that we have the storage class, let's create a persistent volume claim that relies on it. I'll first create the definition, apply it, and then list the available PVCs. And there we go: a volume has been dynamically provisioned and the PVC bound instantly, proving that our VMware integration is working successfully. We can also view this PVC in the UI, noting that its UUID starts with 9cc8, and we can verify that it has actually been created on the VMware datastore by navigating back through the vSphere client, going to the NetApp datastore, selecting the kubevols folder, and sorting by the modified date; here we validate that the UUID matches 9cc8 and that its current size is 0 kilobytes, as it's just a thin-provisioned volume. That concludes our demonstration today. Thank you for watching. I hope it gave you a better understanding of the Assisted Installer and the vSphere integration we've recently made generally available.
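The PVC test above can be sketched as follows, assuming the hypothetical `vsphere-netapp` storage class name used earlier; the claim name is likewise a placeholder:

```shell
# Request a small thin-provisioned volume from the vSphere storage class
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vsphere-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: vsphere-netapp
EOF

# The claim should reach the Bound state almost immediately if the
# vSphere integration is working
oc get pvc vsphere-test-claim
```

The backing VMDK then appears under the datastore's kubevols folder in the vSphere client, which is how the UUID match was verified in the demo.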