All right, thank you all for joining me today. My name is Jayden. I'm with Canonical, and I'd been working in OpenStack for a few years before I joined Canonical, and I must say it's an honor to stand before an audience with such excellent taste. So thank you for being here with me today to talk about OpenStack and deploying OpenStack. I think it's great that you want to set up OpenStack, and that you want to look at deploying OpenStack differently. That's amazing. But if you're like me when I started my OpenStack journey some years ago: where do you start? How do you begin deploying OpenStack? There are a lot of different ways you can do this, and this is just a list of the open source options. You've got all of these different great projects, every different kind of software. We've got ten listed there, and if you're new to OpenStack, how do you start? Or even if you've been with OpenStack for a while, how do you know which one might be a good fit for you, or which is the best one to use? There really are a lot of great options. I would say all the options on that list are good, they're great, they're worth checking out; the people who put them together work really hard. It's a great community, and I can't say enough how good they are. But they're not always the best fit for everyone and for every use case. So we're not going to talk about which is the best one, because there isn't one best one. It's better to ask which one is the best one for you, because not every use case is the same, not every team is the same, not every infrastructure or situation is the same, and what works really well for one team might not work well for yours. So, who are you? If you're new to OpenStack, or even if you're looking to change what you're doing, ask yourself questions like these. What does your team look like? What does your skill base look like? What are you doing today, right now?
And how much are you willing to commit across these different kinds of items? Because your data center infrastructure, your software, each of these different parts of the stack, can greatly influence the type of OpenStack deployment or the type of solution you go with. And of course, if you see that list and you think, that is way too much for me, I can't do that, that's okay. There are plenty of great commercial providers you can rely on to help you with different parts of OpenStack: from getting set up, to running parts of it, to running the whole thing for you. There's a really nice commercial ecosystem for OpenStack right now, and it's exciting to see it grow and get bigger every year. That QR code there will take you to the URL you see, the OpenStack marketplace. If you haven't checked out the marketplace, it's a place on the website, openstack.org, where you can see different types of providers in the categories I've listed here, and beyond that, who can help you meet your individual needs. So if you are just too nervous about deploying OpenStack yourself, definitely talk to one of these people; check out the marketplace. These are the people who are sanctioned, you could say, or whom you can trust that when you get OpenStack from them, it's what the Open Infrastructure Foundation considers true OpenStack, compliant with all of the standards and API requirements that OpenStack defines. But if you're bold and brave and you're confident that you can be up to this challenge, and it is a challenge, then I have some questions that I want you to consider. First, what are you doing today with your infrastructure? Are you using Kubernetes, for example, or something else? What distribution are you using? Are you using a Red Hat-based distribution? Are you using Debian or Ubuntu? Are you running on Windows?
Do you use some kind of orchestration software to manage your deployment and your configuration right now? And just to be clear, it's perfectly fine if you say no to these questions or can't answer them. But your answers will shape the types of solutions that may be a good fit for you, because some OpenStack solutions are better suited to one operating system or distribution than another, and some run on Kubernetes while others don't. So really, I want you to think about: what are you doing today? What are the skills that you have? What's your infrastructure like? What can you carry forward, and what are going to be the gaps in your own infrastructure when you're trying to deploy OpenStack? Because that will be a big challenge for you, especially if you pick a solution that doesn't fit what you can do today and you're not ready to grow into that solution. Next, I would say consider your use case, because all of these different OpenStack projects out there were made for specific use cases, by specific people, in specific contexts. Some of the most common OpenStack workloads that some of these projects were made for are high performance computing or telecommunications. Some of them depend heavily on container orchestration, and some of them are just for plain virtual machines. But I would also say to consider who the people are who are going to be using your cloud from day to day. Is it just going to be you and your own internal team? Are you going to be opening it up to external customers? Are you going to be using it for internal customers? What kind of workloads are you going to be powering with OpenStack? Because again, this kind of consideration will shape what kind of OpenStack will be a good fit for you and what will help you be more successful over the long term. So, like I said, also consider other use cases similar to yours.
For example, if somebody in the HPC field makes a version of OpenStack that's tailored to HPC, that's probably one that you want to use. Or if you're a telecommunications provider, there are distributions of OpenStack, you could say, made by telecommunications providers for telecommunications providers, and they're probably going to fit your use case better than an HPC platform or a general purpose virtual machine hosting platform. If you are comfortable with Kubernetes, going back to that first question, then there is a project that may be a good fit for you: the OpenStack Helm project. And when I say comfortable with Kubernetes, I mean not just using it but also administering it. In my experience, you can be really great at using Kubernetes, but if you've never administered your own Kubernetes cluster, you're missing out on a whole area of knowledge and expertise that you would need to run OpenStack on Kubernetes, unless you're going to run it on somebody else's platform, like GCP or AWS or some paid provider, which is fine. I understand not everybody wants to master the intricacies of administering Kubernetes. But if you do feel comfortable with it, I would encourage you to check out the OpenStack Helm project. It uses Helm charts to deploy OpenStack. If you're not familiar with Helm, it is a tool for grouping and organizing your Kubernetes deployments in a reusable, composable fashion, so you can more conveniently deploy and manage them. OpenStack Helm also gives you a lot of niceties: scripts and tooling to deploy Kubernetes itself if you need to, to configure Kubernetes for the workload, and to deploy other services that OpenStack requires, like Ceph or NFS or networking. The project has all of those pieces. So if you're great with Kubernetes, definitely give them a look, because it's a good project.
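Just to make the Helm-chart idea concrete, here's a rough sketch of what deploying a single OpenStack service with Helm can look like. This is illustrative only: the chart repository URL, chart name, and namespace below are assumptions for the sketch, so check the OpenStack Helm documentation for the real chart locations and the value overrides each service needs.

```shell
# Illustrative sketch only: general shape of deploying one OpenStack
# service (Keystone) with Helm. Repo URL and chart name are assumptions;
# consult the openstack-helm docs for the real charts and overrides.
helm repo add openstack-helm https://tarballs.opendev.org/openstack/openstack-helm
helm repo update

# Install the Keystone chart into an "openstack" namespace.
helm install keystone openstack-helm/keystone \
  --namespace openstack --create-namespace

# Watch the pods come up.
kubectl get pods -n openstack
```

The point is less the exact commands and more the workflow: each OpenStack service becomes a chart you install, upgrade, and roll back with the same Helm tooling you already use for other Kubernetes workloads.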
It's been around for a long time, and it's a good fit for people who are comfortable with container orchestration and who want to live the bold life of running OpenStack on Kubernetes, in containers and pods. Probably not as bold as it seems, but I think it's a little bit out there on the edge. If you use Ansible, there are some good projects that can be a fit for you and that can really line up with your existing practices: the OpenStack Ansible and Kolla Ansible projects. They both use Ansible to orchestrate the deployment of OpenStack services, to configure OpenStack, and to manage the services after they've been deployed. They do use slightly different approaches: the OpenStack Ansible project uses LXC containers to deploy OpenStack, and the Kolla Ansible project uses Docker containers. So if you don't want to have to deal with Docker, which I understand can be a little finicky, a little troublesome sometimes, maybe go for OpenStack Ansible. But if you're okay with Docker and you're comfortable using it and you like it, then Kolla Ansible is a good fit, and those Kolla Docker containers are good. I like these projects too because they are more flexible than some of the other ones. They're not as strict in what you can do. They do provide a lot of defaults and help to get you started, but you can do a lot more to configure them, since they're not as opinionated as the projects that say you've got to run on Kubernetes or you've got to do it this exact way. Also, remember that question about which distribution you run on: these projects have wider support for different Linux distributions than some of the other projects we're going to talk about. If you use Juju, then I would say Charmed OpenStack is probably a good fit. Charmed OpenStack is the OpenStack deployment that uses Juju to orchestrate and manage the deployment of OpenStack and to provide some day two operations.
Those are the things you would do with OpenStack after deployment. Charmed OpenStack is more opinionated than the other solutions, like I said. If you're going to use it, you have to manage it through Juju and run it through Juju, which is fine. Some people like that: they like having the guardrails and the safeties and an opinionated solution, so they don't have to worry about forming their own opinion on the solution. This one also integrates well with MAAS, Metal as a Service, and other software that Canonical works on. But by all means, if you're looking for something a little more flexible, then maybe look at one of the other projects; if you're looking for guardrails and opinions, then this is going to be a good fit for you. And of course, if you use none of those things, don't worry, there are other options. There are some solutions built on tools like Chef and Puppet and Salt. They have seen less development in recent years than some of the others, so definitely take a look at how much community support they have and what their user population is, so you don't do a five-year deployment on a tool that's going away next year because nobody's using it anymore. There are also other tools out there that I haven't mentioned, because we don't have time to cover more than this handful. The ones I have mentioned are all officially listed on the openstack.org site as well, if you want to refer to that list. And of course, you can always pick what works for you from these different tools. I know a previous company I worked at tried out a bunch of different solutions and settled on Kolla Ansible, but then they took the pieces that they liked and extended them with some custom tooling, and really shaped Kolla and Kolla Ansible to meet their own use case.
So it was still built on top of those pieces, and we could still benefit from and contribute to the community with those pieces, but we could make a solution that fit us, that fit the unique needs we had at that time, more closely. So if none of these solutions seems like a good fit, just find the parts that work for you, pull them out, and figure out how you can fill the gaps and how they can help you get your workloads done and deliver value for your users. Now, some of you folks who have been working with OpenStack may be wondering why I haven't mentioned TripleO yet. I did want to make a small note about TripleO. TripleO was a deployment tool that used OpenStack to deploy OpenStack; that's what the triple O is: OpenStack On OpenStack. What you had to do was deploy an OpenStack cloud that you then used to deploy other OpenStack clouds, using OpenStack services like Ironic for bare metal management, Heat for orchestration, and Mistral for workflow management. If you don't know those services, that's completely fine. It also had some Bash scripts and Puppet scripts and this and that mixed in for fun. But it was a good, long-tenured project. It did a lot of good work and helped a lot of people. Earlier this year, though, the developers working on TripleO announced that Wallaby is going to be the last release that they support, and as of now, that's the last news I have on it. That QR code will take you to the post if you want to see the full discussion. So that's why: TripleO is a good solution, but since it's not really getting a lot of development on new releases, I can't really recommend it. Maybe somebody will pick it up, and if they do, then definitely give it a look and see if it can meet your needs. But again, if you're ready to deploy OpenStack, you're ready to get started on this journey.
All of these different projects offer evaluation tools that you can use to try OpenStack and get started easily. The main official one, you could say, is DevStack. Now, I will caution you: DevStack is the deployment of OpenStack that the community uses to test OpenStack. When they push changes to the OpenStack code, they use DevStack to make sure those changes don't break OpenStack. So it sets up really easily and very quickly. It is very straightforward, I would say, to get to an OpenStack login for Horizon, the OpenStack dashboard, or to get to the APIs. But it's not meant as a start-here-and-then-go-to-production path. It's just: is OpenStack working? Here's what OpenStack looks like. And if you're fine with that, then check it out. It's pretty easy to install, and you can install it on a single machine. Definitely use it in a VM or on a machine that you don't mind getting wrecked, because it installs a bunch of packages and makes a bunch of changes; it's meant to be used in disposable environments. But it is a good solution. If you want the URL, I'll pause for a second; that QR code will take you to the documentation for DevStack. And DevStack's not the only tool. There are other tools out there in the ecosystem that have all-in-one installers or quick installers. I know OpenStack Helm gives you a lot of tooling to install Kubernetes and the other resources that it needs. It doesn't really have an all-in-one installer that's meant for a small footprint, but it can take care of a lot of the work of getting you to running OpenStack Helm on Kubernetes with their defaults and their configuration, which is nice. It saves you a lot of time and a lot of effort in having to learn how to run Kubernetes and Ceph and all of these things that you could spend a long time mastering on their own to make it work.
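To give a feel for how small the DevStack quick start really is, here's roughly what it looks like, following the shape of the DevStack documentation. The passwords are placeholders, and as mentioned above, run it only on a disposable VM, since `stack.sh` makes system-wide changes.

```shell
# DevStack quick-start sketch; run as a regular (non-root) user on a
# throwaway VM. Passwords below are placeholders.
git clone https://opendev.org/openstack/devstack
cd devstack

# local.conf is DevStack's configuration file; this is close to the
# smallest useful one.
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
EOF

# Installs packages, pulls the OpenStack services from git, and starts
# everything; typically takes on the order of 15-30 minutes.
./stack.sh
```

When it finishes, it prints the Horizon URL and credentials, and you can log in and poke around at a working, if disposable, OpenStack.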
The other two projects, Kolla Ansible and OpenStack Ansible, have really nice installers for evaluation, trials, and all-in-one looks at OpenStack. The first is "A Universe From Nothing", a tool that builds on Kolla Ansible and some extra pieces to give you an OpenStack cloud on your laptop or a single machine, relatively quickly; I'd say maybe within an hour or two, depending on the computer you have. I know people think, that's quick, an hour or two for an installation? But for OpenStack, that's pretty fast, especially if you don't have to spend six months configuring it and figuring out how all the pieces fit together. No disrespect to OpenStack; it's a complex system with a lot of parts and a lot of things you have to worry about to get it to work. But A Universe From Nothing, I like that one, that's really good. Another one: OpenStack Ansible, like I've got listed there, has an all-in-one installer that'll use a VM and put everything together in a single unit instead of distributing it over multiple systems. Here is a QR code for A Universe From Nothing, and there's the URL if you'd like to try that out. These installers are things that you should be able to run in an afternoon, or within a single work day, to get to an OpenStack cloud. And then here is the installer for OpenStack Ansible. I like these installers too because I feel like, especially with A Universe From Nothing, they give you a good starting point to learn how they configure OpenStack and how OpenStack is configured in general, which you can then grow from to create a production configuration or a production environment, more so than DevStack. They're meant to give you the seeds and the beginnings of your own OpenStack configuration that you can run in production. So again, there are so many great options. The community is big, it's growing, and there are all of these different choices. There's no way to say which one is the best.
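For a sense of what the Kolla Ansible all-in-one path involves, here's a condensed sketch of the usual steps. The exact paths, commands, and settings vary by release, and the IP address and interface names in the comments are assumptions for this sketch, so follow the Kolla Ansible quick start for your release rather than copying this verbatim.

```shell
# Rough shape of a Kolla Ansible all-in-one evaluation install.
# Paths and settings are illustrative; see the kolla-ansible quick
# start for your release for the authoritative steps.
python3 -m venv ~/kolla-venv && source ~/kolla-venv/bin/activate
pip install kolla-ansible

# Copy the example configuration and the all-in-one inventory.
sudo mkdir -p /etc/kolla
sudo cp -r ~/kolla-venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla/
cp ~/kolla-venv/share/kolla-ansible/ansible/inventory/all-in-one .

# Generate random passwords for all the OpenStack services.
kolla-genpwd

# Key settings to edit in /etc/kolla/globals.yml for a single node
# (the values here are placeholders for your environment):
#   kolla_internal_vip_address: "10.0.0.10"   # an unused IP on your network
#   network_interface: "eth0"
#   neutron_external_interface: "eth1"

# Prepare the host, sanity-check the config, then deploy.
kolla-ansible -i all-in-one bootstrap-servers
kolla-ansible -i all-in-one prechecks
kolla-ansible -i all-in-one deploy
```

What's nice, as mentioned above, is that the files you end up editing, the inventory and `globals.yml`, are the same ones you'd grow into a multi-node production configuration, so the evaluation work isn't thrown away.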
So really try to find the one that is best for you, the one that best meets your needs and the context of your team and what you're trying to do. Because I think if you can figure out who you are, you can find the solution that'll fit your needs, and you'll have a much better journey with OpenStack than if you just try to brute force it or try it blindly. And of course, there are plenty of changes and developments coming in the deployment of OpenStack. It's still a very active field. There are new commercial providers coming out every year with different approaches, and new projects and changes to existing projects. So if you aren't entirely satisfied with the spread we've looked at, just wait and see what the community's got, because a lot of great things are coming. I have a little shameless plug for a project I got to work on. At Canonical, we've been working on a new deployment called Sunbeam. It's a very new approach to deploying OpenStack. It's meant to be so easy that a person who knows nothing about OpenStack and very little about technology can deploy it, and you can have a production-ready cluster in maybe an hour or two and start learning OpenStack and really getting into it. We have a fun competition, a fun game we're doing: if you try it and give it an installation, you can get a code that you can redeem at our booth for swag. And on Thursday we're going to have a larger workshop where we'll have people on hand to take you through the installation, talk about it, and demonstrate it further. If you have any questions, of course, we're the people in these bright orange, visible-from-space t-shirts and polos. We have a booth in the marketplace if you want to come ask us about it and see what it's about. So yeah, it is an exciting time to be deploying OpenStack.
It's definitely gotten a lot better than when I first started deploying OpenStack four or five years ago, and I'm sure even then it was a lot better than deploying OpenStack ten or more years ago. So it's a great time to try OpenStack and to get out there and really put your own cloud together. This is me. Nope, that's me. That's my contact information, if anyone has questions or wants to follow up afterwards, or just wants to say hi or send me funny cat pictures. And all the slides and recordings, of course, will be on the internet later, which I may regret once all this information is out there. The sacrifices we make. So thank you for coming. I appreciate you all listening to me talk about this, and we can do some questions if you have them. Yes, so, I know I had Airship and StarlingX and others on the big list of open source projects. I didn't mention them because, as I understand it, those projects are relatively focused on their use cases. I wouldn't point somebody new to OpenStack who's just looking for general purpose virtual machines toward something like Airship or StarlingX, just because I understand them to be focused on other use cases than that particular one. But if I'm wrong, I would love to be educated; tell me I'm wrong to my face so I can learn better and learn more. So yeah, the question, in case folks didn't hear, is: how do these different solutions scale? It depends. They all have their different breaking points. I know some of the solutions will only scale to a few hundred nodes, because you start overloading the database that OpenStack relies on, or you start overloading the network layer. That is a question that depends a lot on your individual architecture and how you have things set up. I would say that OpenStack Ansible and Kolla Ansible should probably scale nicely, since they are simpler.
I know when you add Docker in, it gets challenging, and some of the clustering services they deploy don't scale nicely beyond 50 or 100 nodes. I will say, though, I have seen in my experience that there are not as many 1,000-node clusters as you might think, or 10,000-node or 100,000-node clusters. When somebody has 100,000 nodes, they've got 200 clouds that are running 200 nodes here, 500 nodes there. So it's not quite as big a scale as you might expect, but these tools should get you to a relatively large footprint, I'd say at least a few hundred nodes. Thank you. Yes. Thanks for the wonderful presentation. So I have a question: what percentage of customers are using which deployment? Which is the most popular, and which is the least preferred? That is a good question. I don't know the answer or have numbers, but the Open Infrastructure Foundation, or the OpenStack community, does a user survey that I think breaks that down in some detail. Because sometimes management just asks, what's the popular one? Why should I go with something that is least preferred? Once I select a deployment, later the management comes and says, why did you select the one that none of the customers are using? So knowing which is popular makes it easier for management to put the money on it. That was the reason for the question. Yeah, that makes sense. I would say, I do know that Kolla and Kolla Ansible and that group of projects have a relatively wide user base; it's not just one company driving that project. There are a bunch of different groups and people from different industries working on it. So I would say, in my experience at least, that's been the most diverse or widest community out of the others.
Because I think OpenStack Ansible has a good community, but I think it has one large company sponsoring it. And then I know Charmed OpenStack is Canonical's, and TripleO was sponsored by Red Hat in large part. And OpenStack Helm, I think, is good, but being a Kubernetes deployment limits its user base to people who know Kubernetes, which is a big user base, but not as big as everyone else's. So if you're worried about that, I would say Kolla Ansible is probably the most broadly adopted or used tool of the ones we talked about. Yeah, you're welcome. Yes. So the question was, what do you do when you have components from different releases? I would say: don't, if you can avoid it. I know that's not always an option, and sometimes you can't manage that. The way I've seen that problem solved before is backporting patches and maintaining your own forks of the upstream for your release. At my previous employer, we had some issues with Octavia that were fixed in a later release, and we backported the patches into the release we were using so that we could have the fixes without having to force an upgrade of the whole thing. But I know that's not a great solution, because then you have to maintain all this extra code and these patches. So I don't think there's a good answer. Generally, OpenStack doesn't like it; I wouldn't say it's necessarily supported. But in my experience, Kolla Ansible is relatively flexible in how you can deploy the different services. Since it deploys the services as Docker containers, you can select which containers you want to use. I've used ones where some were built from source and some were built from binaries. I've mixed and matched the deployment to get it to deploy services that there weren't Docker images for, or that we had custom-built Docker images for that we used instead.
So I would say Kolla Ansible is probably the most flexible one I've had experience with, and you could probably make that work. But I wish you the best of luck, and I hope you can get out of that situation as soon as possible. Yes. Yeah, so the question is, what does the use case look like for running OpenStack on Kubernetes? I would say it depends on what you're trying to do. Take that Sunbeam deployment I mentioned: the way it works, it deploys the control plane services on Kubernetes, which I think is a reasonable thing to do, because you can make them more recoverable, you can bring the clusters together more easily, and you can manage those services and maintain high availability more easily than you probably could by deploying them in isolation on bare metal, with the nodes clustered but unaware of what the others are doing or of their health. So I like that approach. But I know some other new approaches and tools for running OpenStack on Kubernetes are running the hypervisor on Kubernetes. That's an unsettled area; there's a lot of development and cutting-edge work going on in that part. Sometimes you have to make trade-offs to get OpenStack to work: you have to give up some of Kubernetes, either some of its security, some of its container orchestration, or some of its isolation, or you just have to break it or reshape it, because OpenStack was made to run on bare metal, and so it has a certain set of expectations for how its environment looks, and Kubernetes by itself may not fit that. And I know some of the projects out there have looked at changing Kubernetes or writing new virtualization drivers that you can run through Kubernetes. So I would say, for the control plane, if you can put the services in containers and run them that way, that's great, do that for days.
But for running the hypervisor and compute services, maybe wait a little bit and see how that settles down and how the technology evolves, because it's still being worked on and developed right now. So this will be the last question, and then we've got to go. On upgrades? Yes. Good luck. No, no, no, it depends. I will say, when you put the services in containers, that helps the upgrade process a lot. It makes it a lot easier to do rolling upgrades, in my experience. I've used Kolla Ansible and Kolla extensively out of all the different projects, and the upgrade journey was nice. It was relatively smooth, because you just pull new Docker images and restart the containers with the new images. All the configuration is mapped in from the host directories, and all the databases are still there because they're mapped to volume mounts on the host. So you just roll over the containers, and there you go, now you've got a new release. Still, definitely be cautious and take care and double check and be safe so you don't ruin your cloud, because I've done that, and that is not fun. So that was the last question; I know we're at the end of the time. Thank you, everyone, for coming. Feel free to email or send questions, or come find me at the Canonical booth, whichever. But thank you all.