Thank you for joining me for this demo. This is a demo of OpenShift on OpenStack auto-scaling. To do this, we deployed an OpenShift 4.3 environment using full-stack automation as our method of deployment; you might know this as IPI, or installer-provisioned infrastructure. This is great because it gives us the six nodes, three masters and three workers, very easily. These nodes are all deployed onto OpenStack in a dedicated tenant. This is really cool because the OpenShift installer creates the OpenStack instances directly for us. We can see them here in Horizon, but it does more: it also builds the networking and everything else you need to get a working OpenShift install.

So let's see how OpenShift sees this. Here we've got our nodes: the three masters and the three workers. These nodes are represented as machines. It's the usual install. We also get a single machine set for the workers, and that's important because that's what we need in order to scale our cluster.

Let's start by creating a cluster autoscaler. You create only one of these, and as it says, it governs the whole cluster. It's easy to edit; you can do it right in the console, and it controls the global settings for the entire cluster. Then we need a machine autoscaler to scale the machine set for the workers. This is the same machine set that was created at installation. You can see the worker machine set there, and we can set limits on it. We set a maximum of eight and a minimum of one plus a replica, so a minimum of two workers, which is actually less than the number of workers we deployed. That's really it. It's time to give it a try and scale OpenShift on OpenStack. Of course, to scale, we need load. We're going to do this in a dedicated project. We created an auto-scale example project for our testing, and we'll use that project to run a special job that generates high load by creating a lot of busy pods.
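The two resources described above can be sketched in YAML roughly like this. The limits match the demo's narration; the MachineAutoscaler name and the target machine set name are illustrative, since in a real IPI cluster the worker machine set name includes the cluster's generated infrastructure ID:

```yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default                      # only one per cluster; it must be named "default"
spec:
  resourceLimits:
    maxNodesTotal: 11                # illustrative cluster-wide ceiling
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler            # illustrative name
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 8
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: demo-cluster-worker        # hypothetical; use your own worker machine set name
```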
In this case, we're going to bring up pods that each use a gig of memory. We're going to run 84 of them. They'll run in parallel, and then they'll die off after five minutes, which means that once the job completes, the pods will terminate and let us see a scale-down as well. As you can see, we don't have any pods running. We're in our dedicated project, with no pods running, so we need to generate this load. Let's run the scale job. We run the job you just saw, off it goes, and then we can look at the pods again. Look at that: we've got 84 load-generating pods being created.

Let's see how that's affecting OpenStack. Almost immediately, the machine autoscaler is requesting more nodes. Looking at the OpenStack view, we're seeing OpenStack instances get created; workers are being built in the underlying infrastructure in reaction to the load. We can see this in OpenShift as well. Have a look at the machines, and we see more of them. They're building, they're provisioning, and they're growing. As the load continues to increase, the machine set keeps growing the machines up to the limits we set. Here we go: more OpenStack instances are being created. Our 84 load-generating pods are running flat out, covering everything. Back in OpenShift, the process continues. As you can see here, they're provisioning, they're building, and the naming matches; the naming is consistent across platforms. There's no difference. You can see that OpenShift is talking to OpenStack, and these are the same instances. The infrastructure is scaling, and our usage is going up. This is Horizon from OpenStack showing VCPU usage climbing. We've got more workers. We're scaling. We have many, many instances.

Of course, all good things come to an end, and the scaling job finishes. The autoscaler begins to remove nodes from the machine set. It disables them and takes them out of circulation. Remember, our jobs are terminating. There they go, disappearing. The nodes are going away.
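A job along the lines described, 84 parallel pods that each request about a gigabyte of memory and exit after five minutes, might look something like this. This is a sketch, not the demo's actual manifest: the job name, project name, and container image are all assumptions.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: scale-up                     # hypothetical job name
  namespace: autoscale-example       # hypothetical project name
spec:
  completions: 84                    # 84 pods in total
  parallelism: 84                    # all running at once
  template:
    spec:
      containers:
      - name: busy
        image: registry.access.redhat.com/ubi8/ubi-minimal  # any small image works
        command: ["sleep", "300"]    # terminate after five minutes
        resources:
          requests:
            memory: 1Gi              # each pod asks for a gig of memory
      restartPolicy: Never
```

The memory request is what drives the scale-up: once the pending pods can no longer be scheduled on the existing workers, the autoscaler asks the machine set for more nodes.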
The machines are going away. The same thing is happening in OpenStack: the instances, the very same instances, are also being removed. The OpenStack administrator sees it; the OpenShift administrator sees it. We can track the process across both platforms. Again, we see a worker instance in OpenStack, and then it's gone. The system then returns to the state the autoscaler wants it to be in. We deployed with three worker nodes; we now have two, matching the minimum replica request we made, for a total of five nodes. Eventually everything settles, and we're done. That's it. That's OpenShift on OpenStack auto-scaling. Thanks for watching.