We hope you really enjoy your time here this week; we're all really happy to have you here. There's a lot going on, and a lot of OpenStack news happening over the next few days, so we wanted to talk a little bit about what's going on in the Rocky release. OpenStack is becoming increasingly robust and stable, and increasingly trusted in production environments, and we're seeing a lot of interesting use cases. Just this morning, we saw all sorts of interesting companies that are using OpenStack to run their businesses. So the question I wanted to ask is: how much really runs on OpenStack? It's a really big number: 10 million cores. When you think about that number, that is a large amount of servers serving all sorts of businesses. But it goes beyond just the deployments; we also look at the number of changes that happen. Earlier, we talked about how many changes have happened over the year, but over the Rocky cycle alone we've seen just over 30,000 changes, which is an immense amount of change inside OpenStack. I wanted to dive in a little to where those changes are actually coming from. Here we have a graph of all the different countries that are contributing to OpenStack, and we see that it's really not just the US; it's all over the world. Even Germany is one of the top contributing countries to OpenStack, and this is really interesting because it means that OpenStack really has a global footprint. The other thing is that operators are really important. They're really important in helping stabilize the OpenStack services, because they are the ones who run into the day-to-day issues and have to deal with them.
And so we've actually seen an increased alignment of operators with the developers, thanks to things like the public cloud working group and other working groups inside the OpenStack community. They've really helped bridge the gap between operators and developers. And that has resulted in close to 25% to 30% of contributions in OpenStack over the past few cycles being done by operators, which means we're really helping the stability of the product. Now, to look at another side of it, we aren't just shipping bug fixes; we're shipping a lot of new features, too. There was a big focus over the Rocky cycle on bare metal, and who better to talk about that than the PTL of the OpenStack bare metal project, Ironic: Julia. OpenStack has done a great job of making virtual machines easy. We have an open API to help drive consistency across vendors and platforms, and Ironic does the same thing for OpenStack bare metal clouds so that they are easy to deploy. With more and more operators contributing to OpenStack, as a community we're able to deliver the features they need to make their lives easier. In the Rocky cycle, we added a BIOS management feature that allows an operator to turn on features such as virtualization or hyperthreading. We also enabled some functionality called conductor grouping, which allows an operator to represent the physical realities of bare metal in their data centers more accurately, so that they can have environments that work the way they need them to. And that brings us to all these new, emerging hardware architectures, which we spoke a bit about earlier today. The first one I wanted to talk about is GPUs and FPGAs. Those are getting more common for use cases like artificial intelligence and machine learning.
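The two Ironic features mentioned above are driven through the regular OpenStack CLI. A minimal sketch is below; the node name, group label, and BIOS setting name are illustrative assumptions (BIOS setting names are vendor-specific), and the commands assume an authenticated client environment against a Rocky-era cloud.

```shell
# Pin a bare metal node to a conductor group that mirrors its physical
# location, so only conductors configured for that group manage it.
openstack baremetal node set node-01 --conductor-group rack-a

# Each conductor opts into a group on its side, in ironic.conf:
#   [conductor]
#   conductor_group = rack-a

# Inspect the BIOS settings Ironic has cached for the node (requires a
# hardware type whose driver implements the BIOS interface).
openstack baremetal node bios setting list node-01

# Apply a BIOS setting, e.g. enabling virtualization, via manual cleaning.
openstack baremetal node clean node-01 --clean-steps '[{
  "interface": "bios",
  "step": "apply_configuration",
  "args": {"settings": [{"name": "VirtualizationTechnology",
                         "value": "Enabled"}]}
}]'
```

The conductor group set on the node must match the group configured on at least one conductor, otherwise no conductor will take ownership of the node.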
And what's actually really interesting is that OpenStack has had support for these things for a very long time, through features like PCI pass-through. What's changing is that it's becoming easier and easier to deliver these services as part of OpenStack, so you don't have to dig into a deep understanding of how to get PCI Express pass-through and so on to work; you can just get those services right off the bat. The other architectural shift we talked about is moving up to things like CPUs. It's well known that x86 is probably the most common server architecture, but ARM has invested a lot of resources in making OpenStack available and running on ARM. So now you can run ARM for your OpenStack deployments, both for the control plane and for your data plane virtual machines. And what's really interesting is that all of the above is available in public clouds and can easily be deployed in private clouds using any of the OpenStack deployment tools; a lot of the upstream open source ones support these architectures as well. This has led to a lot of interesting use cases and a lot more deployments because of how simple it has become to deploy OpenStack. We know that smaller deployments are more common. We know from the user survey that 60% of deployments are between 100 and 10,000 nodes, and that 31% of deployments are under 10 nodes. We know that deployment tooling is making it even easier to deploy systems at that scale. OpenStack is no longer just software for solving problems at huge scale; it's being pushed all the way out with edge deployments. There's also a session after the keynote from the Central Florida Community Cloud, where they'll talk about their experiences deploying a small bare metal OpenStack cloud. And so I wanted to take a step back.
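As a rough sketch of what the "dig into PCI pass-through" path looks like, here is a hypothetical Nova configuration fragment for exposing a GPU to guests. The vendor and product IDs are placeholders (the values shown happen to be NVIDIA-style IDs) and must match your actual hardware; option names reflect the Rocky-era `[pci]` section.

```ini
[pci]
# On the compute node: which host PCI devices may be passed through.
passthrough_whitelist = {"vendor_id": "10de", "product_id": "1db4"}
# On the controller: a friendly alias for that device class, so flavors
# can request it by name instead of by raw IDs.
alias = {"vendor_id": "10de", "product_id": "1db4", "device_type": "type-PCI", "name": "gpu"}
```

A flavor then requests the device with an extra spec such as `openstack flavor set gpu-flavor --property "pci_passthrough:alias"="gpu:1"`, and any instance booted from that flavor is scheduled to a host with a matching free device.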
And I think OpenStack has historically been the go-to way to build a cloud to support your compute infrastructure. But really, OpenStack has gone beyond that; it's more than just VMs or bare metal machines. To talk a bit about that, I want to bring up another project: Magnum. Magnum is a project that allows you to deploy Kubernetes clusters on top of OpenStack. Over the Rocky cycle, Magnum became a fully certified Kubernetes installer with the CNCF, which means it deploys a fully conformant cluster that is fully integrated with the OpenStack cloud it's running on: full integration with the Octavia load balancer service, and full integration with the Cinder block storage service, for the Kubernetes cluster being deployed. And to take another step back, we want to imagine a world where OpenStack is running at the heart of every data center. You start with the plumbing of every data center, which is the networking. Networking can be managed entirely by Neutron, whether you're talking about the virtual switches running inside your compute nodes or the physical top-of-rack switches, which can also be managed by Neutron. Then you can move to the compute side and look at how Nova, combined with Ironic, can deliver virtual machines and bare metal machines for your computing needs, combined with something like Cinder, which is our block storage service. Take an example: you might have an open source storage cluster based on Ceph, and you might have some proprietary storage solutions, but you can have all of those running within Cinder, all managed by one unique, open API. And then, for your container workloads, you have Magnum on top, deploying onto all of these infrastructure services you just built out.
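The Magnum workflow described above boils down to two CLI calls: define a cluster template, then create clusters from it. A sketch follows; the template name, image, flavors, and network are assumptions that depend on what your cloud actually provides.

```shell
# Define a cluster template with Kubernetes as the container engine (COE).
openstack coe cluster template create k8s-template \
  --coe kubernetes \
  --image fedora-atomic \
  --external-network public \
  --master-flavor m1.small \
  --flavor m1.small \
  --docker-volume-size 10

# Create a cluster from the template. Magnum wires the cluster into the
# host cloud: Octavia backs Kubernetes LoadBalancer services, and Cinder
# backs persistent volumes.
openstack coe cluster create my-cluster \
  --cluster-template k8s-template \
  --master-count 1 \
  --node-count 3

# Fetch credentials and talk to the conformant cluster with kubectl.
openstack coe cluster config my-cluster
```

Because the template is reusable, operators can offer a catalog of Kubernetes shapes (different flavors, node counts, volume sizes) while users only ever issue the `cluster create` call.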
And there are all sorts of other really unique scenarios. If you have a couple of HSMs and you want to store keys, we have a service called Barbican that allows you to store secrets, so if you have SSL certificates or anything like that, you can keep them in a secure environment. It really opens up avenues to make your entire infrastructure API-driven, across your entire stack. And so if this all sounds really exciting to you, you might want to get involved and contribute some of your experiences, because we do want to hear more from users, whether you're using OpenStack or not, and how it can fit your needs. So maybe you want to know how to get involved. This week, attend the forum sessions and ask questions. All the experts are here this week: talk to our developers, talk to the operators. We have over 29 onboarding sessions running this week; feel free to join in and ask questions. Contributions come in all forms. If you use OpenStack, if you care about OpenStack, or if you develop OpenStack, join the discussion and help influence the future. And just to close out: we've talked a lot about code, architecture, and infrastructure, but we want to talk a bit about the community. A lot of us who have been part of this community for a while can tell you that it's like one big family, and I wanted to share some of those moments to show that it goes beyond code; we all work together as a community. Amy Nie, one of our longtime contributors, is always organizing events and getting people out doing all sorts of physical activities. Or take our project team gathering in Dublin, where it snowed for the first time in years, and we got to witness some OpenStack community members who had never seen snow before making their first ever snow angels. And Jonathan is actually a pretty good pianist.
So he actually comes up and plays songs while the rest of the community sings along. It really goes beyond just code; it's a whole community, and I think that's important to look back on. So, in closing, thank you, everyone. I want to thank all the contributors, the deployers, the operators, the companies, and the foundation staff that make all of this happen. Without everyone involved, we wouldn't be where we are today; we wouldn't have this great, awesome community. So let's have a big round of applause for everyone who makes OpenStack happen. Thank you, everyone.