Okay, great. Well, thank you very much, folks, for joining this overall community gathering and for joining the sessions later. For folks just tuning in, there will be breakout sessions for the different types of installations that you can do. What I'm going to provide right now is a quick walkthrough of an installation on vSphere with user-provisioned infrastructure. And what does that mean? User-provisioned infrastructure means that instead of the installer configuring a load balancer within vSphere, or configuring the IP addresses, or any of that, this is all done with infrastructure that the user provides on the outside, before they run the installer. The prerequisites for that are basically handling DNS, DHCP, a load balancer, and optionally a proxy. Joseph is going to get more into the details of doing these specific things. But in short, you're going to need a DNS entry for the bootstrap machine. You're going to need three entries for your master nodes; OpenShift clusters right now support three master nodes in the control plane, as it's called. Then you're going to need an entry for each of the desired workers, plus entries for your API endpoint and the API-internal endpoint that the nodes use to connect with each other. And then a wildcard DNS entry, so that once you've deployed apps, by default they resolve as the app name, dot apps, dot cluster name, at your domain. To give you an example of that: for user-provisioned infrastructure on my end, I'm utilizing the DNS provided at the University of Michigan, which is a BlueCat system running Proteus. It's a way of very easily configuring DNS and DHCP. And so you can see, for my demonstration cluster that you'll see more of in the session I'm doing, basically you can set up your DNS.
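To make that record list concrete, here's a minimal sketch of what the entries might look like in a BIND-style zone file. The cluster name (`demo`), base domain (`example.com`), and all addresses are hypothetical; any DNS system, BlueCat included, just needs to produce equivalent records.

```text
; Hypothetical records for a cluster named "demo" under example.com
api.demo.example.com.       IN A  192.168.1.5    ; API endpoint (load balancer)
api-int.demo.example.com.   IN A  192.168.1.5    ; API-internal endpoint
*.apps.demo.example.com.    IN A  192.168.1.6    ; wildcard for deployed apps
bootstrap.demo.example.com. IN A  192.168.1.10
master-0.demo.example.com.  IN A  192.168.1.11
master-1.demo.example.com.  IN A  192.168.1.12
master-2.demo.example.com.  IN A  192.168.1.13
worker-0.demo.example.com.  IN A  192.168.1.21
worker-1.demo.example.com.  IN A  192.168.1.22
```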
And this is what it would look like, right? You've got your masters and your worker nodes at set IPs, with DHCP filling in the details there. This is something that you'll want to do in most cases. The way OpenShift clusters work, you can do static IPs or you can do DHCP, but you cannot do both. Let's see if I can find it here... I don't have it in front of me, but basically, once you've chosen to go one route, you can't go back to the other. If you're going to do static IPs, you can do that by setting some kernel parameters with something called Afterburn: in the configuration of your nodes, you pass a configuration string that's handed to the kernel with your static IP. Or you can rely on just the DHCP on your network and whatever address is handed out. Alternately, I took a third path, which I'll get into in my session: using reserved DHCP and setting the MAC addresses on the nodes. There are some advantages to that for UPI that I'll talk about. You're also going to need a load balancer on the outside, so that incoming requests to the API and the ingress get passed to the respective machines. In terms of load balancing, we've got a load-balancing proxy called a BIG-IP, from F5 Networks. In my configuration, I use a BIG-IP, which allows you to define pools of machines. Here you can see the API pool and the worker pool, and it will load-balance requests to their respective pools. One thing you don't see here, but that I'll be showing in more detail, is that you can also do some checks. For those of you familiar with the internals of Kubernetes, you know there are /healthz and /readyz REST calls you can make to get the status of the nodes in your cluster. In the F5, you can define those types of checks as well, so you'd be performing those checks externally.
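For folks without a BIG-IP, roughly the same pool layout plus the external /readyz check can be sketched in an HAProxy config. This is a hedged illustration, not the speaker's F5 setup: the names, addresses, and subnet are hypothetical, and a real cluster also needs the machine-config port (22623) and an ingress frontend on port 80 alongside 443.

```text
# Hypothetical HAProxy equivalent of the BIG-IP pools described above.
frontend api
    bind *:6443
    mode tcp
    default_backend api-pool

backend api-pool
    mode tcp
    # Health-check the kube-apiserver /readyz endpoint over TLS,
    # analogous to the external checks configured on the F5.
    option httpchk GET /readyz HTTP/1.0
    server bootstrap 192.168.1.10:6443 check check-ssl verify none
    server master-0  192.168.1.11:6443 check check-ssl verify none
    server master-1  192.168.1.12:6443 check check-ssl verify none
    server master-2  192.168.1.13:6443 check check-ssl verify none

frontend ingress-https
    bind *:443
    mode tcp
    default_backend worker-pool

backend worker-pool
    mode tcp
    server worker-0 192.168.1.21:443 check
    server worker-1 192.168.1.22:443 check
```

The bootstrap server is only needed in the API pool during installation and can be removed once the cluster is up.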
An advantage of this is that if your entire cluster goes down and the internal notifications aren't working, you have an external source of notification and monitoring to see that. I'll get into more details of that in the other session. Another thing that you'll need is a proxy, if you're going to be on a private network. This is something that OpenShift has been growing into; back in version 3, there wasn't as much focus or support for private networks, and that's been increasing. But if you're going to be doing a private network, you will need a proxy for the calls out of your containers once you have your cluster up, and also for the installation process itself, for pulling down the containers that are part of the installation process. In terms of a proxy, you can use Squid. Squid is a freely available proxy that is very easy to set up and has a simple configuration file, and I'll be providing some examples of that in my session. If you look at the documentation on the OKD website, there is a section on installing, with subsections: here is the section on installing on vSphere, and under that, another subsection on installing on vSphere with user-provisioned infrastructure. That is what I've been working with, and it has a lot of great information. I would encourage folks, whether they're using the standard install or the user-provisioned install, either one, to check out the UPI documentation for the platform that they're using. The reason I suggest that is the UPI documentation shows you some of the things that are needed, and some of the underlying details of an OpenShift install, and it can be really helpful for understanding overall how the process works.
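To give a flavor of how simple that Squid configuration can be, here's a minimal, hypothetical squid.conf that lets a private cluster subnet out; the subnet and port are examples, not taken from the talk.

```text
# Minimal, hypothetical squid.conf for a private cluster subnet.
http_port 3128
acl cluster_net src 192.168.1.0/24
http_access allow cluster_net
http_access deny all
```

On the OpenShift side, the proxy is then declared to the installer through the `proxy:` section of install-config.yaml (`httpProxy`, `httpsProxy`, and `noProxy`), so both the installation pulls and later container egress go through it.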
And it's sectioned quite well: it shows you what you'll need in terms of your nodes, creating the user-provisioned infrastructure, the ports that you'll need, and whatnot. So definitely check this documentation out. One of the things they've done is break it out into several sections, according to how much detail you want to control in your install. For example, there's a section on installing a cluster on vSphere with user-provisioned infrastructure and network customizations, and that one will provide you details about setting static IPs, disk partitioning, and some of the other higher-resolution manipulation of the install process. The install usually takes about 30 to 40 minutes, and in my session I'll be talking about how you can automate that process, literally to be able to just run a script that generates the necessary install files, loads those into newly created VMs, and then kicks off the OpenShift installer, so that you get a very-near-to-non-UPI installation experience, and actually some extras. Let me bounce over here to provide an overview of some of the files that are involved in a UPI installation. After you've generated what are called Ignition config files, you'll see a bootstrap Ignition config. Ignition is basically the metadata that you put into the metadata of the node to tell it to connect to the bootstrap server, or, in the case of workers, to connect to the control plane, to download the necessary components to join the cluster. So you'll see multiple Ignition configs: for the bootstrap, for the master, for the worker. After you've run the installation, there are some hidden files: an install log, and a state file that records the state of the cluster.
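As a rough sketch of that file layout: the Ignition configs come from `openshift-install create ignition-configs`, run against a directory containing your install-config.yaml. The directory name here is hypothetical; the filenames in the comments are the ones the installer produces.

```shell
# Generate the Ignition configs from a prepared install-config.yaml
# (the directory name "demo-cluster" is just an example).
openshift-install create ignition-configs --dir=demo-cluster

# Afterwards the directory contains, roughly:
#   bootstrap.ign  master.ign  worker.ign      <- Ignition configs per role
#   metadata.json                              <- cluster metadata for later cleanup
#   auth/kubeconfig, auth/kubeadmin-password   <- cluster credentials
#   .openshift_install.log                     <- hidden install log
#   .openshift_install_state.json              <- hidden state file
```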
Now, there's one thing I want to point out for UPI installations that is true across the board, and it's something that sometimes surprises folks: the openshift-install binary actually ingests and deletes your install config. You'll have a general install config where you configure the parameters for your cluster, and the documentation covers what you need to have in it. One thing that happens, though, is that when you run the installer, it eats that file up. So you'll always want to make a backup of it, so that you can duplicate the process again without having to do a lot of work. The tool that I'll be demonstrating, which I wrote, actually lets you have a template; it duplicates that template and goes from there, so you don't have to do anything by hand. And that is the overall process of installing with vSphere: basically, you deploy your infrastructure, you generate your files, you create the nodes with the metadata from those Ignition config files, and then you run the installer. So that's a general overview, and again, if you want more specifics, and you want to see an automated example, please check out the session that I'll be hosting with Joseph. And with that, I'm going to stop sharing myself, and we'll move on.
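That backup step is easy to script yourself. Here's a minimal sketch, with hypothetical file and directory names; this is a generic suggestion, not the speaker's tool.

```shell
# Keep a pristine template; give the installer its own copy to consume.
mkdir -p demo-cluster
cat > install-config-template.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com          # hypothetical values
metadata:
  name: demo
EOF
cp install-config-template.yaml demo-cluster/install-config.yaml
# openshift-install deletes demo-cluster/install-config.yaml as it runs;
# install-config-template.yaml survives for the next cluster.
```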