 My name is Rhys Oxnum. I'm the Director of the Field Product Management Team within the Hybrid Platforms Business Unit here at Red Hat. Over the next few minutes I'm going to be providing a demonstration of the latest version of Red Hat's OpenShift Assisted Installer, and we'll show you one of the key new features: the ability to provision an OpenShift cluster consisting of just a single node. This is useful in those situations where the entire deployment needs to occupy the absolute smallest possible footprint, just a single server. For those who aren't familiar with the Assisted Installer, it's a web-based utility that helps with the deployment of OpenShift, simplifying the provisioning of the cluster from the ground up via a web UI hosted on console.redhat.com. On the screen you can see the console.redhat.com landing page, the gateway into deploying and managing a wide variety of Red Hat-based platforms. I'll select OpenShift on the left-hand side, and it takes me to a list of clusters that have already been provisioned. Next I'll select Create Cluster, and it will give me three different options: Cloud, Data Center and Local. The default option is the Cloud section, where I can choose between utilizing Red Hat-managed services or a run-it-myself option within a chosen provider. We're going to choose Data Center, which allows me to select an on-premises option. At the bottom it shows the wide variety of options, from bare metal through to virtualization platforms, but at the top you'll see the Assisted Installer option, which is the one that we want to use today, noting that it is currently marked as Technology Preview. I'll select the Create Cluster button underneath to begin the deployment. The first thing I need to do is select a cluster name. 
I'll give it the name SNO, which stands for Single Node OpenShift, and add in my base domain, as it will then match the pre-configured DNS entries that I have inside my environment, making it easy to route through to my cluster once it's up and running. It's worth noting that you do not have to pre-configure DNS to start using Single Node OpenShift; I've just added it for convenience. Next is the new option to select that I want to install Single Node OpenShift, as the default is typically to deploy a multi-node cluster. The UI will then give me some warnings that Single Node OpenShift is Technology Preview, and that it has a few availability, scalability and lifecycle management limitations at present. I'll agree here before moving on. Single Node OpenShift is only available on 4.8 and above, and hence the only OpenShift version it allows me to pick is 4.8.2, the latest release currently available. As always, it gives me the option to override my pull secret, which I don't need to do, as it is conveniently able to pull it from my Red Hat account automatically. I'll select Next to proceed. The Assisted Installer process makes provisioning OpenShift clusters really easy. You need only attach an automatically generated ISO to your target systems, and the Assisted Installer drives the cluster deployment without any further touching of the target system; all of the configuration and control is driven via the user interface. The next step for us is going to be generating that customised ISO, a Red Hat Enterprise Linux CoreOS boot image containing all of the necessary credentials, tools and API endpoint information for the nodes to be consumable by this new cluster, and they'll eventually appear below when ready. I'll select Generate Discovery ISO. We now have the option to choose between two designated boot options. 
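As a point of reference, the DNS entries I mentioned pre-configuring follow the standard OpenShift naming pattern, and for a single-node cluster both the API and the application wildcard simply resolve to the one node. Here's a sketch of a BIND-style zone fragment, assuming a cluster named sno, a base domain of example.com and a node IP of 192.168.1.10, all of which are hypothetical values for illustration:

```
; For Single Node OpenShift, api and *.apps all point at the single node
api.sno.example.com.      IN A 192.168.1.10
api-int.sno.example.com.  IN A 192.168.1.10
*.apps.sno.example.com.   IN A 192.168.1.10
```

The wildcard record is what lets routes such as the console and any deployed applications resolve without further DNS changes.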
The first is a USB drive option, which contains a full ISO image; it's typically a larger disk image and has everything we need to start the process. The second is a virtual media option, which builds a much smaller ISO that speeds up the initial deployment when utilising virtual media on physical machines. When the small ISO boots up, it dynamically pulls the rest of the content that it needs over the internet. We'll select the virtual media option here, as that's exactly what we're going to be using. We then need to provide an SSH public key for troubleshooting purposes, which will be injected into the ISO for us, so I'll copy this in now. I can then configure a proxy server if I need to, but in my environment I don't need one. Finally, we'll select Generate Discovery ISO, and after a few seconds this image will be available for me to download. I now need to attach this to my target physical system, but before I do, I'll copy the link to that ISO. For access to the target system, I'm using a jump host that I'm accessing over VNC, primarily because I need to attach this discovery ISO to my bare metal host over a virtual media interface, and given the geographical distance between myself and this system, it makes more sense to attach it from another system residing in the same data centre. From within the VNC session, you can see that I've got access to the management platform for this Dell FC430 blade, and that it is currently powered off. I'll open up a terminal and download the discovery ISO using the URL I previously copied. Once downloaded, I'll attach it to my physical machine via the out-of-band media interface: first, I'll set the machine to automatically boot via the virtual CD-ROM, then I'll attach the ISO via virtual media, and then I'll power the system up. 
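If you don't already have an SSH key to paste in at that step, generating a dedicated one only takes a second. A minimal sketch, assuming you're happy with a passwordless ed25519 key, and noting that the sno_key filename and comment are just my own illustrative choices:

```shell
# Generate a passwordless ed25519 keypair for the discovery ISO
ssh-keygen -t ed25519 -N "" -f ./sno_key -C "sno-discovery"

# The *public* half is what gets pasted into the Assisted Installer UI
cat ./sno_key.pub
```

The matching private key (./sno_key) is what you'd later use to SSH into the node as the core user if troubleshooting were needed.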
This section is sped up considerably to keep the video short, but you'll briefly be able to see that CoreOS boots up, starts the Assisted Installer agent service that reports the host into the UI, and downloads the rest of the disk image it needs over the internet. I'll now wait a few moments for it to appear in the UI. There we go, it's reporting in. Behind the scenes, the Assisted Installer agent service will have conducted a discovery process and will report information about the system: CPU count, memory and disk availability, and any identified network configuration. This will temporarily show a warning about NTP configuration, but it should automatically remedy itself in the next few seconds. The machine is now showing as ready to move on to the next section, although before we do, I'm going to select the Install OpenShift Virtualization option, which will configure the installation process to automatically deploy OpenShift Virtualization as part of the rollout, allowing us to leverage the bare metal nature of the single system to maximize workload options for both containerized and virtualized workloads within the smallest possible footprint. We'll select Next to move on. At this stage we're asked to specify some networking options. We first have to select the subnet we want our OpenShift cluster to run on, noting that in my environment I only have a single network active. I can then optionally override the networking configuration for the cluster network and the service network to avoid any clashes, but the defaults work just fine for me here, so I'll stick with the basic configuration. At the bottom of this page it shows our node and its current network configuration, again highlighting via the validations framework, a set of pre-flight checks designed to minimize deployment failures, that there was a problem configuring NTP. 
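For context, the cluster network and service network values the UI offers correspond to the stock OpenShift install-config defaults. A sketch of the equivalent configuration fragment, where the machineNetwork CIDR is a hypothetical value standing in for my lab subnet:

```yaml
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14    # default pod network
    hostPrefix: 23         # a /23 of pod IPs carved out per node
  serviceNetwork:
  - 172.30.0.0/16          # default service network
  machineNetwork:
  - cidr: 192.168.1.0/24   # hypothetical: the subnet the node sits on
```

The override in the UI exists for exactly the clash the narration mentions: if either default range overlaps a network already routed in your environment, you'd substitute a non-overlapping CIDR here.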
However, this isn't fatal to the cluster deployment and, as I said, it's going to be remedied automatically in the next step. We'll then select Next and we're taken to a review page where we can just make sure that we're ready to go and that everything aligns with our expectations. Everything looks good to me, so we'll select Install Cluster. This will now start the deployment of a single node OpenShift cluster on our single physical machine. Here you can see that it takes us to the next pane, where we can see it moving from a ready state to a preparing-for-installation phase. As we've only got a single node, this machine will act as both our bootstrap node and the resulting single cluster node. The high-level workflow is that the cluster will first be bootstrapped whilst the machine is still running from the ISO; the cluster configuration and CoreOS image will then be written to the disk, and the machine will be rebooted so that it can bring the cluster up properly from the disk itself. We'll watch these phases over the next few minutes. As always, the cluster events tab shows us exactly where we are if we need further insights, again highlighting that the NTP problem has been resolved for us. Bootstrapping the cluster can take a good few minutes, so I'll speed up the video considerably at this point. Now that the cluster has been bootstrapped, the CoreOS disk image has been written to disk, as witnessed again in the cluster events log. The machine will now be rebooted, and we can validate this by looking at the virtual console. We'll see the machine reboot and start up from the disk, no longer reliant on that discovery ISO. The machine will sit in the rebooting phase for a good few minutes while the cluster is established: the cluster will be initialised with the OpenShift core operators, including the OpenShift console, the cluster pods will start up, and we'll then be able to fully interact with the cluster. 
Whilst that's happening, I'll show you how easy it is to grab the log files for the installation, which is useful when troubleshooting problems during deployment. We can simply download a tarball right from the UI, containing the Assisted Installer agent logs, the bootstrap logs, the OpenShift installer logs, and insights into what the storage configuration looks like for our nodes. After a few minutes, the UI will have updated to show that the host has been fully installed. At this stage, we move into a finalisation phase, where we wait for the cluster to settle and deploy the additional operators we requested; in our case, that's OpenShift Virtualization and the Local Storage Operator, which is deployed alongside OpenShift Virtualization to provide for basic storage needs. In the meantime, we can see that the UI has updated to show the URL for the console; it provides us with the username and password for the cluster, and offers the kubeconfig for download. After another few minutes, we'll see that the cluster deployment was successful, and that our single node deployment is ready to go. Let's validate that we're able to interact with the cluster by accessing the OpenShift console. We'll first copy the kubeadmin password from the UI, and click the link for the OpenShift console. After a few self-signed certificate warnings, I'm presented with the familiar OpenShift console home page, and we can log in with that password we copied previously. We can easily confirm that the cluster is fully functional, and that only a single node is available when we look up the nodes. As this is a bare metal environment, and we requested that OpenShift Virtualization be deployed as part of the deployment process, we can see that virtualisation is available as an option under the Workloads tab. Let's deploy a virtual machine and make sure that works as expected. 
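The kind of VM we're about to create follows the standard KubeVirt VirtualMachine shape. A minimal sketch along those lines, where the name, sizing and containerDisk image reference are my own illustrative values rather than the exact defaults the UI pre-populates:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel8-demo           # hypothetical name
spec:
  running: true              # start the VM as soon as it's created
  template:
    spec:
      domain:
        cpu:
          cores: 1
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
      volumes:
      - name: rootdisk
        containerDisk:
          image: registry.example.com/rhel8/rhel-guest-image  # illustrative image reference
```

A manifest like this can be applied with oc create -f, after which the running instance shows up as a VirtualMachineInstance alongside ordinary pods.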
It's possible to select a template that will walk you through the deployment via the UI, but I'll create a really simple VM via the YAML input, which is pre-populated for me. This will just deploy a really basic RHEL 8 virtual machine. I'll instruct it to start, and it will pull in the necessary resources to start it up. We can now see the virtual machine booting via the console, and we can log in using the automatically generated password injected into our virtual machine. You'll see that it has networking connectivity just like all other containerised workloads would have, as virtual machines on OpenShift utilise the same Kubernetes-native storage and networking concepts. Last but not least, we'll confirm that CLI access is working too. Via the Assisted Installer UI, I can download the kubeconfig for the kubeadmin user. Here you can see it confirms that we have a single node running version 4.8.2, and that our virtual machine instance is running. So that concludes the demonstration. I hope that it was useful for you and clearly demonstrated the single node OpenShift deployment concept within the Assisted Installer. As a last step, we can go ahead and delete the cluster within the UI, but you can always keep it around as a reference for configuration, passwords and the kubeconfig if required. Thank you for watching.