All right, can you all hear me OK? Cool. So we're going to talk about OpenStack, and especially going from zero to OpenStack in 45 seconds. Thank you all for coming. You might have seen this talk on Tuesday; this is the extended cut if you were there.

First, a little bit about who we are. OpenMetal is a new startup in the OpenStack space. We spun off from our parent company, InMotion Hosting, and we've been selling OpenStack clouds for about 18 months now. We are a silver member and infrastructure donor of the Open Infrastructure Foundation, and we're really excited to be part of that mission. We did have a booth here, just outside by the stairs, so I hope some of you had a chance to come by, see us, and get a sticker. And I'm happy to talk with you after the presentation.

So we're going to talk about OpenStack, and especially deploying OpenStack. Because if you're like me and you've tried to deploy OpenStack yourself, you know it's not the easiest experience. There's a lot of work involved. You've got to get a lot of resources lined up, you've got to get your time lined up, and you've got to do a whole lot before you can even get started with whatever work you actually have to do. It's everything from getting the hardware to configuring your storage to deploying all of the software that you need. And then, finally, you get to do your work. I don't know about you, but this was easily days or weeks of work for me to get it all right, and I feel like that's an experience a lot of OpenStack people have had. Don't get me wrong, I love OpenStack, but I don't love deploying OpenStack.

I want to highlight this part, because as an OpenStack engineer and a user of OpenStack, I know there are a lot of things you want to do, that you want to test, that you want to do experiments and research for, but you can't, because it's really expensive to provision OpenStack. In a lot of those cases you just can't do that research, you can't run that test, you can't do that development, and you have to let it go. So what can we do to help? What can we do to solve this problem?

For us, we tried to make OpenStack as easy as possible to deploy. As you can see right there, a single click, that's our target. We want all of the resources to be available right there when you need them, so that as an engineer or a developer you can get started right away. Because at the end of the day, sure, your job may be to deploy OpenStack, but in most cases people deploy OpenStack to do something. They're not deploying OpenStack for fun. I mean, some people do, and I respect that. But at the end of the day you want to run some software, you want to deploy a workload, you want to get on with your life. And spending a week trying to spin up a cloud is not getting on with your life.

So I'm going to show you real quick what our deployment journey looks like. Let me just switch over to this other tab. This is our platform: we built a hosted OpenStack private cloud platform at OpenMetal. We have some different hardware types; that part's not super important. I just want to show you what it looks like when you can deploy OpenStack in 45 seconds. So we'll pick this hardware right here, and we'll give this cloud a name. Normally we'd put in an SSH public key here; it gets put onto the hardware for logins. Since we're just doing a demonstration, though... Oh, no.
It needs a valid key. Give me just one second, sorry about that. I remember when we didn't have that check, and it was a big hassle when people put in things that weren't public SSH keys, which happens way more often than you would think.

All right, so we have a free trial system, so I don't have to put in a credit card as a regular user. And when I click that button, we're going to have a cloud. It starts our deployment process, picks a cloud out of our inventory, and associates and configures it with my user. Fairly shortly we should get a redirect to a new page that has some information about the cloud, including some login information. You can see that nice green box there; it means the cloud is running. Here's our Horizon dashboard, which some of you folks may be familiar with if you use OpenStack every day. So that's the 45-second deployment. I know, real quick, kind of simple. But I like it, because then I can get on with whatever work I need to do.

So what's actually happening in that process, and what do you get at the end? Well, in our deployment system you get three boxes, deployed in a hyperconverged deployment. Each box runs an instance of the OpenStack control plane services, an instance of storage running on Ceph, and an instance of compute, the Nova hypervisor. And this cloud is production ready. You could go right now and run Terraform, or Ansible, or whatever kind of automation you have, and put your workloads on it immediately. One of the features I like most as an OpenStack operator is that I have direct access to the boxes. That SSH key I put in earlier means I can log into those physical machines and adjust the configuration and tweak it however I need.

So we have two main ways this works, so you all know what the secrets are. The first one is the 45-second journey I just showed you, and then there's a more detailed, in-depth journey that takes about 45 minutes to get a cloud.

For the 45 seconds, it's very simple: we keep a portion of our inventory as fully deployed clouds. All they need is a little bit of user information and a little bit of final setup. We usually have five or six clouds in that three-box configuration on hand, ready for anyone to use. All of the software is ready to go, OpenStack has been deployed, it's powered on, just waiting for somebody to say, give me a box. And when somebody does, our system goes in, makes the final changes, and hands the cloud over to them.

The longer 45-minute journey is the real magic. This is what really sets our platform apart and makes it all work. You can see the bullet points; these are the main critical steps. We find hardware that matches what the user has requested. We power the hardware on using OpenStack Bifrost to manage the hardware. We install the operating system; that's the host cloud node operating system. We configure networking: we set up the switch ports on demand, and when a cloud is provisioned we set up VLANs dynamically to isolate that cloud's traffic. Then we deploy Ceph. We use Ceph Ansible, which takes care of getting the Ceph monitors, the Ceph managers, the OSDs, and all that configuration ready to go. We deploy OpenStack next, using Kolla Ansible. If you're not familiar with Kolla Ansible, it uses Ansible playbooks and some tooling the Kolla Ansible project has put together to deploy the OpenStack services in Docker containers.
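To give a rough feel for that step, here is a minimal sketch of how a provisioning pipeline might drive Kolla Ansible. The inventory path is a placeholder, not our actual tooling, and the subcommand sequence is just the standard Kolla Ansible workflow.

```python
import subprocess

# Placeholder: whatever per-cloud inventory a real pipeline would generate.
INVENTORY = "/etc/kolla/inventory/multinode"

def run_kolla(subcommand: str) -> None:
    """Run a standard kolla-ansible subcommand against our inventory."""
    subprocess.run(
        ["kolla-ansible", "-i", INVENTORY, subcommand],
        check=True,  # stop the pipeline if any step fails
    )

# The usual Kolla Ansible sequence: prepare the hosts, sanity-check them,
# deploy the containerized services, then generate the admin credentials.
for step in ("bootstrap-servers", "prechecks", "deploy", "post-deploy"):
    run_kolla(step)
```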
The Docker containers are provided by the OpenStack Kolla project. It's really great having those services containerized, and I can definitely recommend Kolla Ansible as a deployment method for OpenStack, and for configuring OpenStack after the fact. At that point we have a fully ready OpenStack cloud, and then we just do some final steps, like setting up flavors and images for users to use. A lot of our users aren't necessarily ready to pull an Ubuntu 20.04 image off the shelf, upload it to their cloud, and get going right away; they don't know how to build that kind of stuff. So we give them some pre-configured defaults to get them going from the beginning (I'll show a rough sketch of what that looks like in a moment). And then we run tests to make sure that it is a ready OpenStack cloud.

And it is just a pure, plain instance of the OpenStack software. We don't modify or customize it or have any special flavor. It's RefStack-compliant; you can go to the Open Infrastructure Foundation's website and look up our certification as OpenStack Powered. If you're not familiar with RefStack, it's a set of tests the OpenStack community has put together that says, yes, this is what we would consider a pure, compliant, interoperable OpenStack cloud.

So that's where the magic happens. And these two processes work together, the 45-second process and the 45-minute process. When I pulled that cloud out of our pool earlier, it triggered a process that starts this whole sequence of steps to build another almost-ready cloud and put it back in our pool for somebody to check out. That's really one of the key things that makes this whole platform and this whole proposition work.

So that's how we do it. Now I'm going to share some of our experiences, how we've helped our users, and some of the things that we, and OpenStack users generally, can do when you have this kind of deployment flexibility available to you.

Like I said, you can get started working right away. You can test new services and deployments; that's one of the things I've used our platform for quite frequently. Say a customer comes in and says, I want network file shares. Well, they need OpenStack Manila. That's not part of our core configuration right now, but thanks to our platform I can spin up a cloud, use Kolla Ansible to deploy Manila very straightforwardly, test it and evaluate it in a safe, isolated environment, do all the work I need to do, and then just throw the cloud away and have the hardware go back into our pool. All without having to get a sign-off or wait for somebody to put hands on a machine and get it ready for me.

One of the other big uses is testing new releases, because I'm sure some of you folks run big OpenStack clouds. Upgrades can make you nervous: you want to make sure the release works and that there aren't any crazy bugs you're going to have to deal with. When you can deploy multiple clouds easily and simply, you can give three or four engineers their own clouds and say, hey, go test this out, make sure it works, and have that work done. And I know that's tough for a lot of teams that don't have big hardware pools or resources lying around to spin up extra OpenStack clouds.
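Here's that final-setup sketch I mentioned: a minimal example of seeding a default flavor and image with the openstacksdk cloud layer. The cloud name, flavor sizes, and image file are illustrative placeholders, not our actual defaults.

```python
import openstack

# Connect using a clouds.yaml entry; "mycloud" is a placeholder name.
conn = openstack.connect(cloud="mycloud")

# Seed a small general-purpose flavor so users have something to boot with.
conn.create_flavor(
    name="m1.small",  # illustrative name, not our actual default set
    ram=2048,         # MB
    vcpus=1,
    disk=20,          # GB
)

# Upload a base image; the local file path is a placeholder.
conn.create_image(
    "ubuntu-20.04",
    filename="focal-server-cloudimg-amd64.img",
    disk_format="qcow2",
    container_format="bare",
    wait=True,
)
```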
One of the other things that we've done: I know one of the big promises of the cloud is that you have less risk, because you can use automation and reusable infrastructure and disposable VMs. But what I think people don't always understand about the cloud is that the cloud itself is now your failure domain. One of the big ideas of cloud is cattle, not pets; your infrastructure should be reusable. That's great at the server level, but if you're running your own hosted infrastructure, your cloud infrastructure is now your pet. So it helps, for me, when you can break that up into multiple clouds and have something like availability zones, but at a smaller, faster scale.

And of course, you can also customize your cloud configurations. I think every customer we currently serve has a different configuration. Some people are running Manila, like I said. Some people are running OpenStack Kuryr. Some people are using OpenStack Magnum. And we can support every single one of these use cases because every single cloud is isolated: it has its own dedicated control plane, it's running its own instance of the OpenStack software, so the clouds have no dependencies on one another. And then, and this is what's really helped me out personally, and I'll talk about it a little later, if you have a problem, you can get a new cloud right away, migrate the data to that new cloud, and get right back in business very quickly, which is a huge help when you're in a bad spot.

So I've got a few stories I can tell, since I've got more time; we're about 13 minutes in, it looks like. I'm going to talk about some real-life use cases we've been able to handle thanks to our platform, things we couldn't have done if we had a single big public cloud or one homogeneous configuration across all of our customers' clouds.

We've done a lot of customization, like I mentioned. One deployment I manage, which is an interesting one, is a 3,500 Docker container deployment. The trick with that one is that they all needed publicly routable IPv4 addresses. Thankfully, the OpenStack Kuryr project gives you a way to bridge the Docker networking space with the Neutron networking space, so you can pretty much attach your Docker containers to Neutron ports and set up floating IPs to give them that public access. That's something we could do thanks to OpenStack, and thanks to being able to customize the cloud configuration, that this particular customer couldn't find anywhere else. There wasn't another provider, like a public cloud, who could give them the density they needed, which is 3,500 containers on nine physical machines across three clouds. They couldn't find that from a public cloud or other OpenStack providers, and they certainly didn't have the skills or the capabilities to build it themselves. But that's the thing we were able to build. We just made a cloud, did some research, tried out OpenStack Zun and OpenStack Kuryr, and called it a day.
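To illustrate the idea (this is not the customer's actual tooling): once Kuryr has given a container a Neutron port, giving it public access is ordinary Neutron floating IP work. A rough sketch with openstacksdk, where the cloud name, port name, and network name are all assumptions:

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

# Kuryr maps each Docker container endpoint to a Neutron port.
# Assume we can look that port up by name; the name here is made up.
port = conn.network.find_port("kuryr-container-0001")

# The external network that holds the publicly routable IPv4 space.
external = conn.network.find_network("external")

# Allocate a floating IP from the external network and bind it to the
# container's Neutron port, giving the container a public IPv4 address.
fip = conn.network.create_ip(
    floating_network_id=external.id,
    port_id=port.id,
)
print(fip.floating_ip_address)
```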
One other thing I'm excited about, that we're working on now, is configuring for the best CPU performance. If you haven't had to do this, there are a ton of things you can do to squeeze every bit of frequency out of a processor: from tuning settings in the BIOS, to dynamically setting processor settings in the operating system, to customizing your CPU topology and telling Nova, hey, pin VMs this way, to these particular cores on these particular boxes (I'll show a small sketch of the flavor side of that in a bit). This is something one of my coworkers has been working on recently for a customer, and it's really exciting seeing him be able to do that. Because if we had a single big cloud, or a less flexible system, we couldn't do this kind of stuff; people would just have to take whatever we have or leave it. But since we have this isolated cloud model, and we can break up the clouds so they're not dependent on each other, we can give somebody hardware that's optimized to squeeze out all of the CPU, so their workload runs as fast as possible. I know it's a very specialized workload, and in most cases you don't have to do this. But for this unfortunate person who has to run unfortunate, old software that isn't hyperthreading- or parallel-processing-friendly, they need big CPU frequency.

Finally, the last thing I like, and this is more about OpenStack itself: OpenStack is highly customizable. You can make OpenStack fit your use case any way you want by mixing and matching the services that exist in the OpenStack ecosystem. Like I said, every single cloud we've deployed is unique. It's got some kind of difference, from different CPU topologies, to different storage topologies, to different OpenStack services we've deployed: Manila, Magnum, Octavia, Zun, which is for containers. So that's one of the things I really like that I can do as an OpenStack engineer: I can try all of these different services and meet all of these different needs, and I'm not bound by the existing configuration or the existing code of the platform.

So now, on to having the resources. If you can go in and get a new cloud right away, like I showed you, it really, really helps. That Docker deployment I manage: I blew up one of the clouds. Absolutely hosed it, and it was a bad time. I was trying to fix one Docker port that wouldn't delete out of the Docker network, and from there, trying to fix that one thing, I ruined the entire deployment of about 1,200 Docker containers. But I was able to get a new cloud in a minute and start migrating stuff right away, which helped get that deployment back up and running in a couple of hours instead of the days it would have taken to set OpenStack up again. It's funny to say, I've not done that again, I haven't ruined another cloud yet, but it was really nice not having to get those resources lined up, and I was able to help this customer. Just take it from me: don't ever go digging in Docker's BoltDB, because you are definitely going to have a bad time unless you know what you're doing, which I did not.

I mentioned waiting for approval earlier. I know on a lot of teams, to get hardware resources, to get thousands of dollars of silicon in your hands, you usually have some kind of sign-off or checkout process. When you think about it, that's not a really cloudy way of doing things; the cloud is all about getting resources right then and right there. So I like that I, and the people on my team, can just go get hardware and check it out right now. It's really valuable as an OpenStack engineer.
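Here's that small sketch of the flavor side of CPU pinning. It's a minimal example using the openstacksdk cloud layer; the flavor name and sizes are made up, and the extra specs shown are the standard Nova scheduling hints for dedicated cores, not our exact tuning.

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

# A flavor intended for the frequency-hungry workload; sizes are illustrative.
flavor = conn.create_flavor(
    name="c1.pinned.large",
    ram=16384,   # MB
    vcpus=8,
    disk=40,     # GB
)

# Standard Nova extra specs: give each vCPU a dedicated host core, and keep
# those cores isolated from hyperthread siblings.
conn.set_flavor_specs(flavor.id, {
    "hw:cpu_policy": "dedicated",
    "hw:cpu_thread_policy": "isolate",
})
```

The host side still has to match (pinned CPU sets, BIOS and governor settings); the flavor only tells Nova how to place the VM.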
And of course, the configuration is a real environment, the same as production, so you don't have to worry about translating things over. You don't have to worry about whether it's going to fit or perform right; it's the same as what you can expect. Since January 2021 we have deployed over 700 clouds on our system. That's almost a cloud a day, maybe more, over the roughly two years since then. So it's really nice. It really accelerates our work as OpenStack engineers to have that and to be able to get to work right away.

One of the things I consistently hear from customers is that they love OpenStack and they want to use OpenStack, but they're scared, or they've had a bad time deploying it in the past, got turned off by it, and then had to go with somebody else who wasn't a good fit. Because honestly, a lot of these other solutions and platforms aren't the best fit for these people. It's a one-size-fits-all, one-size-fits-none kind of situation. OpenStack could be the solution if it were just easier to deploy, simpler to manage, or if you could get comfortable with it as a user without having to be an architect first. Because if you try to set up OpenStack yourself, you really have to be a master and an architect and know how to do OpenStack from the ground up before you can even get started running your workloads and find out whether OpenStack is going to be a good fit.

So for me, this is the thing I'm most excited about with our platform: we're making OpenStack more accessible, bringing it out of the enterprise so everybody can enjoy the same capabilities and power that enterprise users can. And we're helping our own OpenStack engineers, and other OpenStack users who have their own clouds, do the kinds of things they otherwise can't do. They can run the kinds of tests and experiments and research they wish they could do, but can't, because they either don't have the hardware or they can't touch production and can't risk ruining their one production cloud.

So that's largely the end of the presentation. This is me; here's my information if you'd like to follow me or connect after the conference. I'm happy to talk about our platform and how we make it work, and happy to give people a trial if you'd like to try it out and see if it fits you the way it has our team and our engineers. We also have our site, and you can sign up for a free trial yourself if you don't want to talk to me. If you're tired of seeing my face, you can just go right to that URL. We do have a booth, but I think we've closed it up since it's kind of the end of the day. I'm happy to hang out for a few minutes afterward, of course, if people want to talk or discuss the presentation or what we do outside of this setting.

So we have some time for questions. If you'd like to ask a question, I believe they would like you to go to the microphone over there, if you're comfortable doing that, so they can get the question on the recording.

Hi, I have two questions. How do you charge your customers? Do you have fixed-price packages, or is it like GCP or AWS, where you pay for CPU and storage?

OK, the question was how do we charge our customers. We charge by the hardware box, by the hour. So when you get that three-node cloud, for example, you pay for each of those hardware boxes. It's a fixed price every hour, and that's it.
So do you use this system only to test your environment, and then go back to your own OpenStack? You can. He asked if you use it only to test your environment, or if you can use it for production. You can use it for anything; it's ready for testing or production. You can use it for an hour and turn it off, or you can use it forever. It's flexible in that way. We've tried to offer OpenStack private clouds, but on public cloud kinds of terms. I mean, after this presentation I'm going to delete that cloud and it's going to go back into our hardware pool. But yes, it is usage-based and you just pay by the box, so you can use as much CPU and as much storage as the box has, and it's the same price month to month. Thank you.

Do we have any other questions right now, or anything you'd like me to talk more about, maybe how we built the system? What's that? OK, the question is, what if the customer wants more storage? I didn't show it, but our system fully automates the process for adding new hardware nodes. Through that same portal I showed you, you can go into that cloud and click a button to add hardware. It'll show you a selection of hardware types that we have in our inventory, you click which one you want, and the automated system will get it into your cloud in 20 to 30 minutes. It'll power up the new hardware node, set up the networking, and run Ceph Ansible if necessary to add it as a Ceph node, or run Kolla Ansible to set up OpenStack on it. It all happens through that one click, and the user doesn't have to worry about it.

Yes? OK, the question is, where are our sites located? We're still a new company, still kind of small, so right now we have one data center location outside of Washington, DC in the United States. Later this year we're going to open another data center in Los Angeles, California, and then later in the year we're aiming to open one in Rotterdam in the Netherlands. Thank you.

Do we have any other questions before we conclude? I know we're getting towards the end of the time. Great. Well, thank you everyone for coming to hear me talk. I'm honored to be here with you, and if you have any questions, just go to our site or look me up by email or LinkedIn or Twitter. Happy to share more information and answer any questions you might come up with on the car ride home or plane ride home. So thank you, everyone.