Okay, cool. So I want to start. Welcome, everyone. Everyone doing okay? Good? So I'm here to talk to you about how we can provision OpenStack to run on Kubernetes, all the way from network-booting your bare metal. The funny thing is that a couple of months ago, when I applied for this talk, I realized that it takes a little longer than two minutes to get OpenStack on Kubernetes from PXE-booting your bare metal devices. But still, 15 minutes is not too bad, right? Still pretty cool. Before I start, I want to spoil this a little bit for you. Apart from this presentation, I'm going to give a demo. Not just any demo: a live demo. I'm actually going to demo getting OpenStack running on Kubernetes, all the way from bare metal that hasn't been provisioned or booted. But I'm going to do this a little differently. Normally you have your presentation first, and then after the presentation you have your demo. But because, as I said, it takes a while to get OpenStack on Kubernetes running from bare metal, I'm going to do the presentation and the demo simultaneously, and I hope I don't confuse you too much. The reason I'm doing this is that I want you to see the whole flow completed by the end of the talk, because I think it's really cool. So without further ado, I'm going to start the demo first. I'm going to be switching back and forth between my slides and my terminal. Let's see, can you guys see this? No? The font is actually pretty big on my computer here, so let me know when to stop. Is it good? All right, cool. So I'm connected to one of our data centers via VPN, and I have three bare metal hosts assigned to me, and they're powered off. Just in case you don't believe me: if I ping castle10.root.com, I get no response, right?
Same thing with castle11.root.com: no response. And finally castle12.root.com: nothing. So they're powered off. What I'm going to do right now is use IPMI to turn them on. So I'm going to power on castle10 first. Power on. Castle11. Power on. And castle12. And I'm going to tail this log, which will tell me what happens when the machines boot up and how they fetch their images and all that. While that's running, I'll switch back to my presentation. Just to make it a little clearer, here's a summary of what the demo is going to do. It powers on the machines, which I just did. Once a machine is powered on and gets an IP from DHCP, it requests an image, and it gets the correct image and configuration for it; I'll tell you how that works later in the presentation. Once the machines are bootstrapped with the OS image, they install Kubernetes, a three-node Kubernetes cluster. And once the Kubernetes cluster is up and running, it installs OpenStack on top of it. Hopefully you'll enjoy that. Before I move on, let's get the boring stuff out of the way first: let's talk about me. My name is Steve Leon. I'm a software engineer. I work for Castle, which is a research division under our parent company, Quantum. At Castle, I work on Rook. If you don't know what Rook is, it's a pretty cool project we started: an open source project that provides software-defined storage for cloud-native applications. It's deeply and natively integrated with Kubernetes, and it's based on Ceph. It's pretty cool because you can take all the complexity of Ceph, package it up, and just use kubectl to deploy a Ceph cluster on your Kubernetes. That will deploy your OSDs, your monitors, your RGW, for Kubernetes to consume.
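The power-on step above can be sketched with ipmitool. The BMC hostnames and credentials here are stand-ins for whatever your environment uses; the function echoes the command so you can dry-run it first.

```shell
#!/bin/sh
# Hypothetical BMC credentials -- substitute your own.
IPMI_USER=admin
IPMI_PASS=secret

# Build (and echo) the ipmitool power-on command for one BMC host.
# Drop the leading "echo" to actually execute it.
power_on() {
  echo ipmitool -I lanplus -H "$1" -U "$IPMI_USER" -P "$IPMI_PASS" chassis power on
}

# BMC hostnames are assumed; the demo only shows the node names.
for bmc in castle10-ipmi castle11-ipmi castle12-ipmi; do
  power_on "$bmc"
done
```

Once powered on, each machine's firmware falls through to PXE, which is where the DHCP/TFTP environment described later takes over.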
So please do check it out if you want to know more about it. I've also been involved with the OpenStack community since 2012-13; I've contributed to projects like Trove, Horizon, load balancing with Octavia, Tempest, and others. Before I continue, just out of curiosity, show of hands: how many of you have deployed OpenStack? When I say OpenStack, I don't mean DevStack, okay? I mean a production-grade, multi-host OpenStack. All right, cool, a lot of you. I think most of you will agree with me when I say that OpenStack is really hard to deploy. It has so many services, and each of those services has many different components, and each of those components can be configured and tweaked in many different ways. You'll know what I'm talking about if you've ever opened up, say, the nova-compute config or the cinder-api config: so many things you can configure. So it's really hard, and to deploy OpenStack you pretty much need to be an expert on OpenStack. I know you probably have your secret sauce to make this more manageable: your Puppet manifests, your Chef cookbooks, your Ansible playbooks, your Salt scripts. OpenStack even created TripleO to make this easier and more manageable. But still, getting all these tools right and working consistently and reliably is really hard. Not just anyone can make all that happen. Not only do you have to be an expert; you have to go to IRC and ask a bunch of questions, and scour the Internet for documentation, manuals, blogs, scripts, hacks. The bottom line is that installing and deploying OpenStack is really, really hard. Let me just go back to my terminal and see what happened to my servers. Let's see. Oops, okay.
Okay, so it looks like my three servers that I turned on are up and running. So if I ping them now again... Oh, not yet. Okay, let's just wait a little longer. Yeah, so deploying OpenStack is really hard, right? But let's say you managed it: you deployed a production-grade OpenStack, and you're feeling proud and happy. Life is good, right? You do your nova boot, your VM becomes active, you go to your boss: hey, I'm done, I did it. Not so fast. The challenges of OpenStack don't stop after you deploy it; that's just the beginning. You still have to manage it. You have to install your monitoring and alerting systems to make sure the state of the OpenStack cluster is what you expect it to be. You need to install a logging system to get analytics and debugging information when something goes wrong. You need to deploy a load balancer to load-balance your APIs. You may also need to deploy billing services if you're providing any public services. What I'm trying to say is that you still need to add more stuff on top, and adding more stuff means more scripts and tooling, which makes everything a little more complicated. Then there's scaling: what if I want to add a new Nova API? What if I want to grow my Galera cluster? Like I said, this is not trivial; you need more scripts, more tooling, more Ansible playbooks to do all of that. And what happens when things go south? Your server crashes, your VM crashes, your node goes down. You get a phone call or a page, you run to your laptop, you SSH into your environment, and you fix it. How do you fix it?
You try to restart your service and hope that fixes it. Sometimes things are not recoverable, in which case you have to delete the whole VM, run your playbooks and cookbooks to recreate a new VM, and redeploy your service. That's hard. So you thought deploying OpenStack was hard? Try updating it. It's a nightmare. Not only do you have to upgrade your packages; you have to make sure your configs still work, make sure new configs get applied, and pray that your database migrations actually succeed. And God forbid you didn't create a backup of your database before you started the upgrade. Also, how do you patch vulnerabilities? What happens when the next Heartbleed or Dirty COW comes along? What do you do? You have to patch your hosts, then patch all your user-workload VMs, and then go to Glance and upload new images with the fixes. Yeah, OpenStack is difficult. Hold that thought, and let's go back to the demo and see what it's doing. Okay, it looks like ping is working now. If I ping castle10... I get a response. Castle11: I get a response. And castle12: I get a response. Okay, so what I'm going to do is copy all the Kubernetes config to each of those machines so they get deployed correctly. I just have a script here. All I'm doing is scp-ing the certificates for the API, putting the configs in place, and starting my Kubernetes deployment. So if I do a kubectl get nodes... I need to point the client at the right API. If I do that, it's still not running. So while Kubernetes is coming up, I'm going to watch it. After a minute or so, we should see the cluster coming up.
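That copy-the-certs-and-wait step can be sketched like this. The asset paths and SSH user are assumptions (the demo only shows a script running); the commands are echoed so the sketch dry-runs safely.

```shell
#!/bin/sh
# Dry-run sketch: echo each command instead of executing it.
run() { echo "$@"; }

# Push the generated TLS assets to each host (paths and user are guesses).
for h in castle10 castle11 castle12; do
  run scp -r ./assets/tls "core@$h.root.com:/etc/kubernetes/"
done

# Point kubectl at the new cluster's kubeconfig and watch the nodes register.
export KUBECONFIG=./assets/auth/kubeconfig
run kubectl get nodes
run watch -n 5 kubectl get nodes
```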
While that's happening, let's go back to the presentation. I hope this flipping back and forth isn't confusing, and you can still follow. So where was I? Oh, yeah: OpenStack is really hard. But I wonder if there's something out there that can help me with all this. Something that will make my deployment of OpenStack easier, or of any application for that matter. Something that can help me manage it. Something with a self-healing mechanism. Something that can scale easily. Something that lets me upgrade with confidence. Is there something out there that can do all that? I think there is. I know you saw that coming, but I wanted to make it a little more dramatic. Yes: Kubernetes. I'm not going to spend too much time on the basics of Kubernetes, because this is the third day of the summit, and I'm sure you've been to a lot of Kubernetes talks and you're probably tired of hearing what it is and how it works. So I'm going to assume you're familiar with at least the basics of Kubernetes. That said, Kubernetes is essentially a platform that manages and orchestrates containerized applications, and the good thing is that it addresses all the issues I just mentioned. It lets you deploy applications a lot more easily. It lets you scale them easily. It has self-healing capabilities, so when something goes down, you don't have to worry about it: Kubernetes will automatically bring it back up. And upgrades: you can do rolling upgrades easily as well, without being scared of them. So yeah, let's go with Kubernetes. Life is good again, right? Let's go back again... Okay, it looks like it's getting there; at least it detected the nodes. Okay, one more to go.
Should we wait or go back to the presentation? Let's wait ten seconds and see. Okay, I think the cluster is up; two nodes is enough for me to start deploying things. So I'm going to start deploying OpenStack right now, using my cheat sheet, and I'll explain later what I'm doing here. I'm actually using Kolla-Kubernetes, OpenStack Kolla on Kubernetes, to deploy OpenStack; you're probably familiar with it. Okay, I just ran this to set up the RBAC so that Kolla can start creating Kubernetes resources. It uses Helm, so I'm going to run helm init. And I'm going to run all of this together in one shot and explain what it's doing. So here, what I'm doing is creating a new namespace. I have a three-node cluster, and I'm going to assign the first node to be the OpenStack controller; that's where all the OpenStack control plane is going to live: the Nova API, the Cinder API, the Keystone API, all that stuff. Castle11 and castle12 are going to be the compute nodes. I'm also using tooling from Kolla to generate default passwords for the MySQL schemas, and it creates Kubernetes ConfigMaps to store those passwords. At the end, I'm using Helm to deploy my MariaDB database. So if I wait a little bit... Yep. Okay, while that's running, let's go back to the presentation. I hope I still haven't lost you with the back and forth, but bear with me: I promise it's going to be a pretty cool ending. All right, so Kubernetes. So far we like it: it can make OpenStack easy to deploy, scale, self-heal, and update. But Kubernetes is also known to be not for the faint-hearted. There are components within Kubernetes that you have to deploy and get right.
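The bootstrap sequence just described looks roughly like this. The script and chart names are assumptions based on how kolla-kubernetes worked around that era and vary by release; everything is echoed as a dry run.

```shell
#!/bin/sh
# Dry-run sketch of the Kolla-Kubernetes bootstrap; echo instead of execute.
run() { echo "$@"; }

run kubectl create namespace kolla
# Pin the OpenStack control plane to castle10, computes to castle11/12.
run kubectl label node castle10 kolla_controller=true
run kubectl label node castle11 kolla_compute=true
run kubectl label node castle12 kolla_compute=true
# Generate default service passwords and store them for the charts to consume
# (tool names are illustrative, not necessarily Kolla's exact scripts).
run ./tools/generate_passwords.py
run ./tools/create_config_maps.sh
# Install Helm's server side, then bring MariaDB up first.
run helm init
run helm install kolla/mariadb --namespace kolla
```

MariaDB goes first because nearly every other OpenStack service needs its database schema before it can start.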
It's not as complex as deploying OpenStack, in my opinion, but it's still not trivial. The good thing is that there are a lot of options out there that make this easier. Perhaps too many options. There's kubeadm, Bootkube, kops, kube-deploy, Kubernetes Anywhere, Kargo, kube-ansible. If you're in the cloud, you can use GKE or Azure. And for development, you can use things like Minikube. There's a CoreOS Vagrant script that works really well, too, and the local-up-cluster script that brings up Kubernetes from your own build of the code, which is good if you're developing against Kubernetes itself. Okay, let's go back to the terminal and see where I am now. If I do kubectl -n kolla get pods... Okay, my MariaDB is still initializing, so I'm just going to watch it. It should be running... oh, it's running already. Cool, that was fast. All right. Now that my DB is up and running, I can go back to my cheat sheet and deploy the rest of the services. I'm going to copy and paste all of this here. I know I'm doing a few manual steps here, for the sake of the demo, but you could easily bootstrap this and make it more automated, either by baking it into the image or creating a systemd script. Why can't I copy and paste here? Okay, copy and paste. I'm used to using a mouse, which I don't have here. So, hmm. Okay, I need to press the trackpad. All right. Highlight. Nope. Nope. All right. Cool. Woo! Copy and paste worked for me. So I'm using Helm to deploy the rest of the services: this is deploying your Keystone, your Nova API, your Glance, your Neutron. While that's running, we'll go back to the presentation. Hopefully I haven't driven you crazy yet with all this flip-flopping. Cool. So where was I? Oh, yeah.
So there are a lot of ways to set up Kubernetes; like I said, probably too many. But you know what? I want to take it a step further. I want something I can operate simply, consistently, reliably. Something more plug-and-play. Something turnkey, where you can go from bare metal all the way to running OpenStack with the press of a button. That's what I want. Yes, OpenStack is hard, and Kubernetes makes it better, but I want more: I want to be able to rack my server, press a button, get a cup of coffee, and when I come back, OpenStack is running for me. That's what I'm demonstrating here in this live demo: PXE-booting bare metal to OpenStack running on Kubernetes. To create this environment, I had to set up a PXE boot environment. That's a machine that packages all the services you'd expect for PXE booting: your DHCP server, your DNS, TFTP, an Apache server. Nothing secret there. I'm also using Matchbox, which is a really cool project from CoreOS. Basically, Matchbox matches a bare metal host or server to something it calls a profile, and it matches based on labels like the MAC address. A profile pretty much tells a particular machine how and what needs to be configured and provisioned: which OS image to provision, how to configure it, the networkd units, the systemd units, the configuration files. I'm using Matchbox to provision my Kubernetes cluster. I didn't show you this, but I set it up so that castle10, one of my machines, is the master, and castle11 and castle12 are Kubernetes nodes. Castle10 is a Kubernetes node as well.
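The MAC-to-profile matching just described looks roughly like the pair of JSON files below. Matchbox groups select machines (here by MAC) and point them at a profile that names the kernel, initrd, and boot arguments. The MAC address, asset paths, and IDs are made up for illustration.

```shell
#!/bin/sh
# Sketch of a Matchbox group/profile pair; values are illustrative only.
mkdir -p /tmp/matchbox/groups /tmp/matchbox/profiles

# Group: match the machine with this MAC to the "kube-worker" profile.
cat > /tmp/matchbox/groups/castle11.json <<'EOF'
{
  "id": "castle11",
  "profile": "kube-worker",
  "selector": { "mac": "52:54:00:aa:bb:cc" }
}
EOF

# Profile: which kernel/initrd the matched machine PXE-boots, and with
# what arguments (Container Linux asset paths are assumptions).
cat > /tmp/matchbox/profiles/kube-worker.json <<'EOF'
{
  "id": "kube-worker",
  "ignition_id": "kube-worker.yaml",
  "boot": {
    "kernel": "/assets/coreos/current/coreos_production_pxe.vmlinuz",
    "initrd": ["/assets/coreos/current/coreos_production_pxe_image.cpio.gz"],
    "args": ["coreos.first_boot=yes", "console=tty0"]
  }
}
EOF
```

With files like these in place, a machine that DHCPs with the matching MAC is handed exactly the image and configuration its role calls for, which is the "correct image and configuration" step from the demo summary.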
It can schedule pods too. For the Kubernetes deployment itself, I'm using Bootkube, which is a really cool way to install Kubernetes. It's an incubated project, also initiated by CoreOS, and it gives you self-hosted Kubernetes. Do you know what self-hosted Kubernetes is? Self-hosting is a pretty cool idea: it means you're running all the components that make up Kubernetes on Kubernetes itself. TripleO for Kubernetes, if you want to think of it that way. So I have my Kubernetes API server, my controller manager, my scheduler, my kubelet, all running as containers on Kubernetes. This is pretty powerful, because you can apply all the benefits Kubernetes provides to Kubernetes itself. You can use kubectl to scale your Kubernetes API. If your scheduler or your controller manager crashes or goes down, Kubernetes will bring it back up. I think that's pretty powerful, so I'm using that. And like I said, to install OpenStack, I'm using OpenStack Kolla. What OpenStack Kolla gives you is all the OpenStack services you know about, the Nova API, Keystone, RabbitMQ, the database, all containerized and ready to be used on Kubernetes. One thing I want to say about this: there's no secret sauce. There's no internal tooling behind any of this; it's all open source libraries and open source projects. So if I can do this, there's nothing stopping you from doing the same thing, because everything is open source. Okay, let's go back to the demo. All right. Cool. So if I do kubectl... Kolla installed everything in the kolla namespace. Can you see that? Maybe it's too big. But you can see that everything is off to the races.
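The "Kubernetes managing Kubernetes" idea above can be made concrete with a couple of kubectl commands. The label selector and the assumption that the API server runs as a scalable workload follow Bootkube conventions of that era and may differ in your deployment, so this is a dry-run sketch.

```shell
#!/bin/sh
# Dry-run sketch of operating a self-hosted control plane; echo, don't execute.
run() { echo "$@"; }

# In a self-hosted cluster, the control plane shows up as ordinary pods:
run kubectl -n kube-system get pods -l tier=control-plane
# ...so ordinary Kubernetes operations apply to it, e.g. scaling the API server
# (assumes it is deployed as a Deployment named kube-apiserver):
run kubectl -n kube-system scale deployment kube-apiserver --replicas=2
```

If the scheduler or controller manager pod dies, the same reconciliation loop that restarts any crashed pod restarts it, which is exactly the self-healing claim from the talk.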
So right now... you can see OpenStack is being initialized. If you can see here, my database... okay, probably not; let me scroll up. You can see that my database is running. After the database is running, it runs RabbitMQ, Keystone, and Neutron; you can see at the end that RabbitMQ is already running. Once all those services are running, it starts the next set of services: the Nova API, the Cinder API, Glance, and all that. And after those are running, your nova-compute pods get initialized. This process takes around 10 to 15 minutes, so in the interest of time, I have another three-node cluster that I set up before the presentation, just to show you the end result. That's what I'm going to do now: point this at my other cluster. Okay, so if I do kubectl again, I now have castle13, 14, and 15. I brought this environment up the same way I showed you in the demo: everything is PXE-booted. And if I do kubectl -n kolla get pods... oh, sorry. Okay, let me just... this is what I did before. Is that good? Okay. So this is a different cluster that I started before the presentation, and you can see it's castle13, 14, 15. It's a three-node Kubernetes cluster, and I brought it up the same way, PXE-booting, just by turning it on, and I also deployed OpenStack. You can see it's all running right now. Well, it wasn't started just before the presentation; it was two days ago. You can see here. If I source my stackrc for this OpenStack... I think I have a Nova instance running somewhere. If I do a nova show demo1, you can see it's running on castle14. Can you see it? Yeah. Okay, so let me show you that this is actually working right now. I'm going to create a VM here. Let's see. What's my image? Cirros, okay. What else do I need? A flavor. Okay. I need my key. Is that how you do it? openstack keypair list. And what else do I need to create a Nova server?
Oh, networking. All right, cool. So I think I'm ready. openstack server create, with that image, cirros, with the key... is the key name like that? Yes. Okay. My key, flavor m1.tiny, and I'm going to name it, I don't know, demo22. So if I do a nova list now, you'll see my demo22 is building. Let's just watch it. It should become active, hopefully, in a little while. Come on. There we go. Come on, I think that was pretty cool. You guys are a tough crowd. From bare metal, I deployed Kubernetes and OpenStack in containers. Yeah. So, to summarize: OpenStack is really hard, and you probably agree with me. It's really hard to deploy, and once it's deployed, it's really hard to manage and really hard to scale; let's not even talk about upgrading it. Kubernetes makes it easier, but Kubernetes is not for the faint-hearted: you still have to know how to deploy it, though the good thing is that there are a lot of tools out there to help you with that. But I don't think that's enough. I want to take it a step further. I want to be able to bootstrap it with the push of a button: turn my servers on, like I said, get a cup of coffee, come back 15 minutes later, and have a production-grade OpenStack running. That's what I want. And with that, that's the end of my presentation. Let me know if you have any questions. I just want to go back and check on the other cluster, so I'm going to switch back to my other Kubernetes cluster and see what it's doing. Can you see that? Maybe not. You can see some services have started running. It's getting there; I promise it's going to get there. But yeah, any questions, concerns, critiques, suggestions? No? Going once, twice. All right. Thank you, guys. We have one question, I think. The question: how do you add new nodes to this cluster that you just created? You showed the control plane part, right?
How do you do the bare metal provisioning that will be used by this cluster? Yeah. So he asked: how do you add a new node? The way you add a new node is, say you call HP, IBM, or Dell and order a new server. All you have to do is push the server into your rack, connect it to your network, and power it on. That's it. The follow-up: you showed a workflow where you were able to create a cluster running the OpenStack control plane; using that control plane, now I need to manage a whole bunch of bare metal. Is there anything in your demo that shows that? I don't have anything in my demo to show that, but like I said, to bootstrap your new node, you turn it on, and hopefully you already have a profile that sets it up as a Kubernetes worker. After that, it depends on how Kolla works. I don't know offhand how the nova-compute pod is run, but if it's a DaemonSet, it will detect that the new host is part of Kubernetes and will install nova-compute on that host. Does that answer your question? Yeah, we can chat offline. Any more? Awesome. Thank you, guys.
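If nova-compute is indeed run as a DaemonSet, the auto-join behavior described in that answer looks roughly like the manifest below. This is illustrative only, not Kolla's actual manifest; the namespace, labels, and image tag are assumptions.

```shell
#!/bin/sh
# Illustrative nova-compute DaemonSet -- not Kolla's real manifest.
cat > /tmp/nova-compute-ds.yaml <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nova-compute
  namespace: kolla
spec:
  selector:
    matchLabels:
      app: nova-compute
  template:
    metadata:
      labels:
        app: nova-compute
    spec:
      # Only nodes labeled as compute nodes get a nova-compute pod.
      # Label a freshly PXE-booted worker and the pod appears automatically,
      # which is the "new host joins, nova-compute follows" behavior.
      nodeSelector:
        kolla_compute: "true"
      containers:
      - name: nova-compute
        image: kolla/centos-binary-nova-compute:4.0.0
EOF
echo kubectl apply -f /tmp/nova-compute-ds.yaml
```

The point of the DaemonSet is that adding compute capacity reduces to racking a machine, letting Matchbox PXE-boot it into a labeled worker, and letting the DaemonSet controller do the rest.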