Hello, my name is Nir and I'm here together with Freddy; we are both software engineers in the Red Hat OpenShift KNI Edge Group. Today, we'd like to share with you a new web-based solution called Assisted Installer. Assisted Installer simplifies and reduces the prerequisites needed to set up an OpenShift cluster, making it even easier to deploy on your bare metal servers and virtual machines.

But first, let's lay the groundwork with the basics. What is OpenShift? OpenShift is a platform powered by Kubernetes that allows users to run containerized applications and workloads. It offers additional features such as more advanced networking, security, a UI, and more. OpenShift is derived from an open-source project named OKD, or Origin Kubernetes Distribution. It comes in several flavors and may be installed on various platforms.

So what are our options for installing OpenShift? First we have the cloud: you can install OpenShift on AWS, Azure, IBM Cloud, or Google Cloud. That solution is called Hosted OpenShift. You can also choose OpenShift Dedicated, where Red Hat engineers and support manage the cluster for you. You have the option to install in your own data center, either on top of an existing virtualization infrastructure such as RHV, OpenStack, or VMware, or on top of bare metal machines; the Assisted Installer falls under this category. You can also run OpenShift locally on your laptop with the help of CRC. By the way, there was a CRC talk back at DevConf 2020; check it out.

The bare metal installation page gives us three options, including the new Assisted Installer. With user-provisioned infrastructure, UPI, the administrator is expected to set up an environment with DNS entries and a load balancer for the cluster APIs, and to set up network connectivity between the nodes. In OpenShift 4.6, we introduced installer-provisioned infrastructure, or IPI, which uses Metal³ (Metal Kubed) to deploy OpenShift on bare metal infrastructure. This end-to-end automation requires simple prerequisites, such as each node's BMC address and credentials, and an optional provisioning network for node deployment. Now we're introducing Assisted Installer, currently in developer preview, which aims to achieve fully automated bare metal cluster deployments at scale, with a very low barrier to entry. Assisted Installer simplifies the deployment experience for OpenShift clusters with the very minimum of external requirements.

So why do we need yet another installer? The main goal of this project is to make bare metal OpenShift installation simple and achievable for a lot more users. The requirements are as minimal as possible. You'll need at least three bare metal or virtual machines. Those machines will form a three-master setup, and you can add more workers if you wish. Assisted Installer will validate that the minimal hardware requirements are met. You need to be able to boot those machines from an ISO. The hosts will need to communicate with cloud.redhat.com, either directly or via an HTTP proxy; both options are supported. Each host should be able to obtain an IP address from DHCP, or be configured manually, and should also be able to connect to an NTP server; Assisted Installer has smart defaults for the NTP settings in case you don't have one. You don't need to set up any bootstrap machine or provide any BMC credentials. Once those requirements are met, you'll be able to easily install OpenShift in your own backyard with the help of Assisted Installer.
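To give a feel for what validating the minimal hardware requirements looks like, here is a minimal sketch in Python of this kind of pre-flight check. The role names mirror what you'll see in the demo, but the threshold numbers and field names are illustrative assumptions, not the exact values the service enforces.

```python
# Minimal sketch of a pre-flight hardware validation, similar in spirit to
# what Assisted Installer runs against each discovered host. The thresholds
# below are illustrative assumptions, not the official minimum requirements.

from dataclasses import dataclass

@dataclass
class HostInventory:
    hostname: str
    cpu_cores: int
    ram_gib: float
    disk_gib: float

# Hypothetical per-role minimums, for illustration only.
MINIMUMS = {
    "master": {"cpu_cores": 4, "ram_gib": 16, "disk_gib": 120},
    "worker": {"cpu_cores": 2, "ram_gib": 8, "disk_gib": 120},
}

def validate_host(host: HostInventory, role: str) -> list:
    """Return a list of human-readable validation failures (empty if OK)."""
    required = MINIMUMS[role]
    failures = []
    if host.cpu_cores < required["cpu_cores"]:
        failures.append(f"{host.hostname}: needs >= {required['cpu_cores']} CPU cores")
    if host.ram_gib < required["ram_gib"]:
        failures.append(f"{host.hostname}: needs >= {required['ram_gib']} GiB of RAM")
    if host.disk_gib < required["disk_gib"]:
        failures.append(f"{host.hostname}: needs a disk of >= {required['disk_gib']} GiB")
    return failures

if __name__ == "__main__":
    host = HostInventory("master-0", cpu_cores=4, ram_gib=16, disk_gib=200)
    print(validate_host(host, "master") or f"{host.hostname} is ready")
```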
Now, let's look at the high-level flow; we will meet all of these steps in the following demo.

First, the user customizes the discovery ISO and downloads it. That ISO is generated dynamically per cluster; it cannot be used for different clusters. Then, each node that the user wants to be part of the OpenShift cluster needs to boot from that ISO. The agent, which manages the installation locally for a given host, uses a call-home mechanism and starts reporting the environment inventory, such as RAM, disks, and network interfaces. Once the nodes are discovered, pre-flight validations are made, including, for example, network connectivity between the nodes, hardware requirements, and so on. Roles are auto-assigned to the hosts based on their capabilities; note that the user can change the roles, and we'll see that in the demo. During the installation, progress is monitored and reported.

Let's install a cluster using the Assisted Installer web interface. Select Cluster Manager, then Create Cluster, then Red Hat OpenShift Container Platform. Select Run on Bare Metal, followed by Assisted Bare Metal Installer. Now that we are in the Assisted Installer web interface, let's go ahead and fill in the required details to deploy a cluster. The cluster name is required and will later be a part of the DNS domain name. Your pull secret gets filled in automatically based on your user, but you may choose to use another one.

Next, we'll generate a discovery image, which we'll need to provision the hosts. Your public key gets baked into the image; this is done to enable SSH connectivity in case you'll need it. You may also configure an optional HTTP proxy. Click Generate Discovery ISO, and you'll get a temporary S3 bucket URL to download it from. To provision your hosts with the discovery image, you may use PXE, a virtual media interface, or a USB drive. For the purpose of this demo, I'll be doing that in the background with three hosts while speeding up the recording.

We can manually assign a role to each host or let the application logic determine which host is most suitable for each role; for this demonstration, I'll choose automatic. You may also see that the inventory includes detailed hardware specs, including disks, network interfaces, and more. This may assist you in making an informed role-assignment decision. Fill in a base DNS domain, which together with the cluster name will form the cluster's domain; note that you cannot change this once the cluster is operational. Available subnets are automatically discovered, yet some users might require custom networking configurations, and we support that: you may override the default networks on which the API and Ingress services will listen. Additionally, you have the option to specify an NTP server, but if you don't have one at your disposal, Assisted Installer has smart defaults and will use a public one for you.

The hosts have finished discovery and have now entered an insufficient state; that will change as soon as the network connectivity checks are done. I'll speed up the recording once more as we wait for the status to become ready to install. As soon as we get that indication, we can validate our configuration and start the installation process. The status has now changed to installing. For the last time, I'll speed up the recording. You may notice that one of the nodes gets selected as a bootstrap node. This is one of the key elements of Assisted Installer: we do not require users to provide a separate bootstrap machine to run the installation. Instead, Assisted Installer initially deploys a two-node cluster while the bootstrap machine manages the deployment. The nodes get provisioned with Red Hat Enterprise Linux CoreOS written to their disks, then reboot and continue the installation process. When the two nodes are up and running, as indicated here, the third machine pivots and gets deployed as the third cluster node, and with that we get quorum for the cluster.

Okay, all done. We now get a direct URL to our cluster's console, as well as a kubeconfig file for us to use with CLI tools. You also have the option to download and inspect the cluster installation logs. Click View Cluster Events during or after an installation to view important events and filter them by severity, host, or free text. That concludes the demo. Now that you've seen how easy it is to deploy a cluster, I hope that you'll try Assisted Installer.
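Before we move on: everything the UI did in this demo goes through the service's REST API, so the same flow can be scripted end to end. Here is a minimal sketch in Python; the endpoint paths, payload fields, and token handling are assumptions based on the service's v1 API at the time of the talk, so verify them against the current API reference before relying on them.

```python
# Sketch of driving the demo flow through the Assisted Installer REST API.
# Endpoint paths, payload fields, and token handling are assumptions based on
# the v1 API at the time of the talk; check the current API reference.

import requests

API = "https://api.openshift.com/api/assisted-install/v1"
HEADERS = {"Authorization": "Bearer <token-from-cloud.redhat.com>"}  # placeholder

# 1. Register a cluster, as in the UI form: name, version, pull secret.
cluster = requests.post(
    f"{API}/clusters",
    headers=HEADERS,
    json={
        "name": "demo-cluster",
        "openshift_version": "4.6",
        "pull_secret": "<pull-secret-json>",  # placeholder
    },
).json()
cluster_id = cluster["id"]

# 2. Generate the per-cluster discovery ISO; the public key gets baked in.
requests.post(
    f"{API}/clusters/{cluster_id}/downloads/image",
    headers=HEADERS,
    json={"ssh_public_key": "ssh-rsa AAAA... user@example"},  # placeholder
)

# 3. Boot the hosts from the ISO, then poll until discovery and the pre-flight
#    validations complete, and start the installation.
status = requests.get(f"{API}/clusters/{cluster_id}", headers=HEADERS).json()["status"]
if status == "ready":
    requests.post(f"{API}/clusters/{cluster_id}/actions/install", headers=HEADERS)
```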
Freddy will now take us through an in-depth look at the architecture.

Thank you, Nir. That was a very cool demo. Let's talk about the architecture of the system. We have three different topologies available: cloud, self-hosted, and on-prem. Let's take a look at the cloud architecture first. We have the web application, the assisted-service, which is used to orchestrate the installation process; it is hosted on cloud.redhat.com. The assisted-service provides a REST API that is used by the UI but is also available directly. The UI code is integrated with the UI of OCM, which is the OpenShift Cluster Manager. In addition to the service, we have a database for storing data, a PostgreSQL database, also in AWS. We also need an AWS S3 bucket for storing ISOs and rendered files. For authentication and authorization, we integrate with AMS, the Account Management Service, which is part of OCM. On the live ISO that runs on the bare metal hosts, there is a process called the agent that communicates with the service via the REST API. And eventually, on the newly created OpenShift cluster, there is a job called the assisted controller that approves nodes and reports status and progress to the assisted-service. It is needed because, once a host boots from disk after the installation, the agent is not running anymore.

Since the Assisted Installer runs as a SaaS, we get a few advantages. First, we are able to release features more often and fix bugs faster. We also collect metrics about usage and analyze installation failures for future improvements. Here are screenshots of some of the Grafana dashboards that we use.

With the self-hosted flavor, we are not running on the cloud but on a local, existing OpenShift cluster. The same basic components are used, with the following changes: the UI is standalone and not integrated into OCM, files are stored on the file system rather than in an S3 bucket, and we use a local database. Note that no authentication and authorization are performed in this kind of setup. We also have the ability to run in a standalone way, where the needed components run together inside a single Linux container, again using the file system for storage, a standalone UI, and no authentication or authorization.

A few words about the agent-based installation method. Using this pattern gives us a few advantages that are important in the space of distributed-system installation. First, the call-home mechanism: it makes it easier to communicate with the SaaS. Getting out to the internet from the bare metal hosts, even through a proxy, is much easier than trying to access the machines from the web.
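To make the call-home direction concrete, here is a rough sketch of an agent-style inventory push; the endpoint and payload shape are simplified assumptions for illustration, not the agent's actual wire format.

```python
# Simplified sketch of the call-home pattern: the agent on each host collects
# a local inventory and pushes it out to the service over HTTPS. Everything is
# outbound-only, so an HTTP proxy (via the standard HTTPS_PROXY environment
# variable honored by requests) works transparently. The endpoint and payload
# shape here are illustrative assumptions, not the agent's actual wire format.

import os
import requests

SERVICE_URL = "https://api.openshift.com/api/assisted-install/v1"  # assumed base
CLUSTER_ID = "<cluster-id>"  # placeholder
HOST_ID = "<host-id>"        # placeholder

def collect_inventory() -> dict:
    """Gather a tiny subset of what the real agent reports (CPU, RAM, ...)."""
    pages = os.sysconf("SC_PHYS_PAGES")
    page_size = os.sysconf("SC_PAGE_SIZE")
    return {
        "cpu_count": os.cpu_count(),
        "memory_bytes": pages * page_size,
    }

# The host reaches out to the service; the service never needs inbound access
# to the machine.
requests.patch(
    f"{SERVICE_URL}/clusters/{CLUSTER_ID}/hosts/{HOST_ID}",
    json={"inventory": collect_inventory()},
    timeout=30,
)
```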
Having the bare metal hosts push information into the SaaS also makes it more scalable, as the information is pushed only when needed, versus the SaaS side having to poll lots of machines. The agent running on the machine allows us to make validations that otherwise would not be possible: for example, how else could we know whether all the hosts in the cluster are able to communicate with each other? Also, on the inventory side, having the hosts' capabilities reported allows us to decide whether a host is sufficient for installation, and even what role it can take. This pattern is already used in other projects at Red Hat, like Telemetry, where OCP clusters report statistics to the cloud, and Insights, where RHEL installations report the state of the installed packages and subscription information to cloud.redhat.com.

There is a lot more customization available than what we saw in the demo. Some of it is not exposed in the UI, as we do not want to add complexity when most users won't be using these options. These customizations are available via direct calls to the REST API. For example, we can make changes to the Ignition files. Ignition is a utility used by CoreOS to manipulate disks before the first boot; this includes writing files, configuring users, and more. We have two types of Ignition files: the discovery ignition for the ISO, which is common to all the nodes in a specific cluster, and the pointer ignition, which can be customized for specific hosts. The pointer ignition reads its configuration from the assisted-service and applies it before pulling additional configuration from the machine config server that runs on the bootstrap node. So, what can the user do with this? Well, actually, write anything to the file system: for example, adding special certificates needed for pulling container images, or configuring a console password to debug the nodes.

The install-config and the installer parameters relate to the configuration of the OpenShift installer. Through these parameters, advanced customizations are also available. For example, you can disable hyperthreading in the control plane using the install-config, or pass kernel configuration through the installer parameters, for example for static IP configuration. You can also provide Kubernetes manifest YAMLs that will be applied when the cluster is starting: for example, a MachineConfig to load a specific kernel module, or advanced network configuration. We are actually using this mechanism to install additional operators, for example. With the help of these configuration options, we can also achieve the ability to run the installer in a disconnected environment that is not connected to the Internet at all.
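As an illustration of these API-driven customizations, here is a sketch of two of them in Python: pushing a discovery-ignition override and an install-config override. The endpoint paths and request shapes are assumptions based on the service's API, and the ignition snippet follows the standard Ignition v3 config format.

```python
# Sketch of two customizations done via direct REST calls rather than the UI:
# a discovery-ignition override that writes an extra file into the live ISO,
# and an install-config override that disables hyperthreading on the control
# plane. Endpoint paths and request shapes are assumptions; verify against
# the API reference.

import json
import requests

API = "https://api.openshift.com/api/assisted-install/v1"
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder
CLUSTER_ID = "<cluster-id>"                    # placeholder

# Drop an extra CA certificate onto every node booted from this cluster's
# discovery ISO, e.g. for pulling container images from a private registry.
ignition_override = {
    "ignition": {"version": "3.1.0"},
    "storage": {
        "files": [
            {
                "path": "/etc/pki/ca-trust/source/anchors/registry-ca.pem",
                "mode": 420,  # octal 0644
                "contents": {"source": "data:,<url-encoded-pem>"},  # placeholder
            }
        ]
    },
}

requests.patch(
    f"{API}/clusters/{CLUSTER_ID}/discovery-ignition",
    headers=HEADERS,
    json={"config": json.dumps(ignition_override)},
)

# Disable hyperthreading for control-plane nodes. The override is assumed to
# be sent as a JSON-encoded string, which passing a str to `json=` produces.
install_config_patch = {"controlPlane": {"hyperthreading": "Disabled"}}

requests.patch(
    f"{API}/clusters/{CLUSTER_ID}/install-config",
    headers=HEADERS,
    json=json.dumps(install_config_patch),
)
```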
As a day-2 operation, we also have the ability to add additional workers to an existing cluster. This option is available for any cluster on cloud.redhat.com.

Here are a few of the future features that the team is working on. On the networking side, we will support IPv6 and static IP addresses. We are planning to add predefined profiles that will check the hardware requirements accordingly. We are also adding support for installing different OpenShift versions: currently we support 4.6, and very soon we will also be able to support 4.7. In addition, we want to give users the option to install additional operators: for example, OCS, which is OpenShift Container Storage and will give you a Ceph cluster on top of OpenShift; the Local Storage Operator, to provide persistent volumes using the local disks; and OpenShift Virtualization, to be able to run VMs in your OpenShift cluster. In combination, these operators will allow the user to get a full hyperconverged setup on their bare metal servers. Soon, we will also be able to install a single-node OpenShift cluster, giving the possibility of having a functional OpenShift on a single bare metal machine. In addition to the REST API, a Kubernetes API is being developed, exposing CRDs that will allow more integrations in the future, for example with ACM, Advanced Cluster Management, which is a solution for multi-cluster management. And of course, we are always trying to improve the user experience by analyzing installation failures and finding ways to prevent them before they happen. As you can see, we have a lot on our plate, and yes, we are hiring.

Here are some useful links. Of course, the link to the Assisted Installer is cloud.redhat.com; just go and try it and give us feedback. Then, the GitHub repositories: everything is open source, of course, so you have the service, the installer, and the agent, and also a test-infra repository that can help you bring up a full setup with the Assisted Installer and VMs that will actually play the role of the bare metal machines. This session is pre-recorded, but we will be available for questions in the chat area right after it. Thank you for listening; you are welcome to reach out and share feedback, and we hope that you will try the Assisted Installer soon by yourself. Thank you, and successful installations for everybody.