All right, let's get started. This talk is about value-driven clouds powered by OpenStack and Ceph. It's a journey through how we've seen the value of clouds like that in the current economic market. It's not a very technical talk, but I hope it gives you some ideas, and the courage to try out OpenStack and Ceph-based clouds.

My name is Yuri. I'm an infrastructure and platform automation engineer, one of the original engineers for OpenMetal, and I help people understand the value of our platform. OpenMetal is the company I'm with. Our mission is basically to automate highly complex open source systems, make them available on demand as a turnkey solution, and increase accessibility for smaller teams. Fun fact: we're a silver member and an infrastructure donor to the Open Infra Foundation, and one of our clouds is actually used by Zuul for the CI/CD that develops OpenStack itself.

So what makes a cloud a cloud? In 2023, there are bare minimums. It's got to be highly available and failure tolerant. You need distributed storage so you can mount your data from any location. You need to be able to create and destroy assets like networks, VMs, or whole Kubernetes clusters at a whim. You need CLI tooling, and the UI component is important for visibility. Obviously, you also want to support all the infrastructure-as-code automation, so you need Terraform, Ansible, Heat, and all of those components. But fundamentally, you need to be able to run VMs and containers, and support live migrations, load balancers, firewall as a service, network file systems, a variety of storage, user management, and complex networks. That is basically the bare minimum.

So we want OpenStack and Ceph to be adopted by organizations of all sizes. How do we do that? Well, we listen to what those organizations are saying. They want to know what they're signing up for up front. They want to be able to use these clouds for dev, CI/CD, staging, and production.
They're looking at alternatives to the public offerings. They're very security aware. They want performance; they want to be able to monitor things like CPU frequencies if they need to. And they want uncomplicated billing mechanics.

There are quite a few ways to learn OpenStack. In the keynote yesterday, or on Tuesday, they said about 50% of the people here are new, so if you haven't checked out OpenStack, please do. There are definitely self-driven, DIY-style solutions like DevStack and MicroStack. There are also official training sources; if your company uses OpenStack, you can find those in the OpenStack Marketplace. But people want learning environments that match the production experience. They want to be able to test live migration, disaster recovery, development, all those things.

One interesting thing we've found is that some concepts are tough for cloud-native users to understand, because they've never had access to what happens under the hood. Things like virtual limits versus actual real usage are not well understood. The same goes for the ability to provision really big VMs and support burst stability without actually consuming those resources. Instance sizing: everybody shops by public cloud instance sizes, but when you own your own cloud, you can create your own instance sizes. And over-allocated resources, the ability to run 400 vCPUs on a 100-CPU box, something like that, is also tricky to explain.

Some of the challenges we've seen with landing an OpenStack deployment: if you're building it out yourself, it has a pretty high upfront cost, and you'll probably take a month or two to deploy it properly, so there are time-to-market considerations. Sometimes you need consultation and engineering experience. But you need a safe path to land, figure it out, scale, and grow. And you want access to, and ownership of, your own data.
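The over-allocation idea mentioned above (400 vCPUs on a 100-CPU box) comes down to simple arithmetic: the scheduler applies a CPU allocation ratio and hands out more vCPUs than there are physical cores, on the bet that VMs rarely peak at the same time. Here's a minimal sketch of that math; the function names are mine, and the 4:1 ratio simply mirrors the 400-on-100 example from the talk (Nova's actual `cpu_allocation_ratio` is configurable per deployment):

```python
# Sketch of CPU overcommit capacity on a single hypervisor.
# An allocation ratio > 1.0 lets the scheduler place more vCPUs
# than there are physical cores.

def schedulable_vcpus(physical_cores: int, allocation_ratio: float) -> int:
    """Total vCPUs the scheduler may place on this host."""
    return int(physical_cores * allocation_ratio)

def remaining_vcpus(physical_cores: int, allocation_ratio: float,
                    vcpus_in_use: int) -> int:
    """vCPUs still available for new instances on this host."""
    return schedulable_vcpus(physical_cores, allocation_ratio) - vcpus_in_use

# The example from the talk: a 100-core box with a 4:1 ratio
total = schedulable_vcpus(100, 4.0)    # 400 schedulable vCPUs
left = remaining_vcpus(100, 4.0, 250)  # 150 vCPUs still free
```

The point for cloud-native users is that "400 vCPUs" is a scheduling budget, not physical hardware; real contention only appears when guests actually burn CPU simultaneously.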
So we've created a solution called the soft landing: a private cloud core, a default landing footprint that people can use right out of the box. The goal is faster OpenStack and Ceph adoption and propelling interest in this community. We wanted a turnkey solution accessible to mid-size and smaller teams that don't have a big upfront budget.

This is the private cloud core. It consists of three hyperconverged nodes. They run all the control plane services for both OpenStack and Ceph, as well as some of the storage. We deploy the control plane services, and they consume some resources, but the control plane usage is fairly static to begin with. The quantity of the remaining resources depends on the hardware size you selected: if you have three servers and they each have a terabyte of RAM, you're going to have a lot left over to play with. We also set fair reserved limits for the control plane so that it operates well. But all of the remaining resources, shown in white here, can be used, and they can be used immediately. The green represents workloads you can run on that default landing footprint, whether that's VMs, Kubernetes clusters, complex networks, or load balancers. The core is full of the features that make a cloud a cloud.

Once you reach a point where you need more, you just add additional servers and grow your cloud on demand, whenever you're ready. Those nodes are also hyperconverged, so they can expand storage as well. Then you start running workloads on the converged compute nodes, and you can slowly move some of those initial workloads off your control plane, because it is a critical component of your cloud. As your cloud grows, you'll want to move those out, and eventually you reach a mature cloud where most of the compute workloads have moved off of the cloud core's control plane servers.
At that point you're running most of your workloads on the added compute nodes. And the nice thing about the extra resources still available on the cloud core is that they can serve as a failure tolerance mechanic. There's quite a bit of value in that.

So if you have experience with OpenStack and Ceph, we strongly recommend exploring them. You can get most of the features you probably need from a cloud like that, and there is no cost to try new things on your own cloud, which reduces the barrier to innovate. You can try new Kubernetes cluster versions, new software, and so on. There's a flat, predictable billing model based on bare metal servers, since that is basically what we're deploying to; whether this is your on-prem deployment or you're utilizing a company like us, the billing is radically different. The other thing is that it's becoming fairly easy to deploy these private clouds. So the question is: do you still need the one big cloud monolith, or can you have many smaller clouds, VIP clouds, project clouds, department clouds, and so on? The goal is that you land on OpenStack and Ceph safely and then expand.

It looks like I have some time, so I'm going to show you how we do it. This is basically our control panel, where we can spin up new clouds. I'm going to spin up a little trial: select a location and select one of our cloud cores. Again, very different sizes are available. And we'll do that. I've tried this out before as a trial, and our OpenStack cloud is ready. We can check out the hardware; we used the smallest unit possible, which is a pretty good environment for learning, so they're fairly small servers. We can go ahead and open Horizon. Now, we do deploy a self-signed certificate; obviously, everything's customizable and configurable with OpenStack. Let's go ahead and get our password. All right, so we're in our OpenStack deployment, backed by Ceph. We actually preload all our clouds with some images, flavors, and the like.
A fresh new deployment would otherwise look fairly empty in Horizon, which is why we preload some images, so the feeling is very natural to most cloud users. And we can explore our hypervisors. Everything's ready to go in a highly available, production-ready configuration. That's it. That's all I've got for today.