Good afternoon, everyone. Today we're going to talk about STX OpenStack and how it can help with the migration from virtual machines to containers. First, a little bit about myself. My name is Douglas. I'm a software development manager. I've been working with StarlingX-related projects for the last three years, since 2020. I've been working for Encora since before the acquisition of Daitan Group, and I've been with the team for the last 15 years. I'm based in Brazil, in case you're wondering. I graduated as an electrical engineer and I have an MBA in project management from FGV. And you might think that this picture is me running away from a mad wife and a crying kid, but that's not what I meant. I'm just trying to say that I really like to run and I have a beautiful wife and two boys. Here is my contact information; feel free to reach out to me. My colleagues and I will be at the event for the next couple of days.

All right, today I'm going to tell you how STX OpenStack can ease the migration from virtual machines to containerized workloads. Before I do that, I will illustrate one of the problems we are trying to solve. I will show you the evolution in the world of telecommunications: the transition from distributed radio access networks (D-RAN) into centralized radio access networks (C-RAN), then virtualized radio access networks (vRAN), and ultimately Open RAN, which is not exactly an evolution but an alternative to vRAN. This shift brought both opportunities and challenges for telecommunication companies as they migrate from virtual network functions (VNFs) into containerized network functions (CNFs). Traditionally, wireless networks were built on a distributed architecture where each base station had its own radio, baseband processing and network functions. This approach, though effective, had limitations in terms of scalability, resource utilization and operational efficiency. To address these challenges, the industry introduced C-RAN, which had its issues as well.
With the advent of virtualization technologies, vRAN emerged as the next step in this evolution. vRAN leverages virtualization to further optimize network infrastructure. In vRAN, the baseband processing functions are virtualized and run on standard servers in data centers or in cloud environments. This virtualization allows for dynamic resource allocation, scalability and flexibility. vRAN enables operators to efficiently deploy and manage network functions, leading to faster service deployment, reduced hardware costs and enhanced network agility.

Now we come to O-RAN, which represents a significant shift in the telecommunications landscape. O-RAN is an industry-wide initiative that promotes open and interoperable RAN architectures and interfaces. It aims to decouple hardware and software, enabling multi-vendor interoperability as well as innovation and competition. However, this evolution from D-RAN to C-RAN, then vRAN and O-RAN, brings a challenge for telecommunication companies: the migration of VNFs into the containerized environment required by O-RAN. VNFs are software-based network functions that were designed to run in virtualized environments, whereas CNFs are containerized network functions that leverage containerization technologies, just like StarlingX does. Telecommunication companies need to adapt their existing VNFs to run as CNFs, which involves re-architecting these network functions, containerizing them and orchestrating them. Migrating VNFs to CNFs requires careful planning, as it involves performance optimization, resource allocation and networking aspects. But there's a way to make that transition smooth: by incorporating the STX OpenStack application into your StarlingX cluster, you gain a powerful tool set that allows users to efficiently manage both containerized workloads and traditional virtual machine workloads.
The integration of STX OpenStack enables seamless orchestration, deployment and scaling of containerized applications, utilizing the benefits of containerization we all know, such as resource efficiency, rapid deployment and portability. Additionally, STX OpenStack allows for the management of VM workloads, providing a familiar and reliable environment for running legacy applications and supporting these diverse infrastructure requirements. This unified approach to workload management within StarlingX clusters simplifies operations, enhances flexibility and enables users to make the most of both containerization and virtualization in a cohesive manner.

STX OpenStack is highly scalable, just like Warren was saying about StarlingX earlier today. STX OpenStack is highly scalable, secure, reliable, and it is a complete cloud computing platform. It is a containerized version of OpenStack Ussuri designed to support the migration of virtual machines to containers. As part of the StarlingX 9 roadmap, there are plans to migrate the STX OpenStack components to Antelope, ensuring that the platform remains synchronized with the latest stable version of OpenStack. We're currently using Ussuri, right? This commitment from the community guarantees ongoing updates and enhancements, aligning STX OpenStack with the most recent features in OpenStack technology. It's not as simple as other solutions that you've seen that just clone OpenStack or use it directly in their products; it requires some modifications to the Helm charts and updates to them. That's why we have a team working on that transition and those updates.

STX OpenStack is also designed with security in mind. It offers features like RBAC (role-based access control), encryption, and network isolation, considering it runs on top of StarlingX. Some key features of STX OpenStack include the support for both virtual machines and containers, as we are describing now, allowing easy migration between the two.
High availability and fault tolerance ensure that those applications are always up and running, and there's easy deployment and management through a web-based interface, just like OpenStack. We have the flexibility to configure STX OpenStack to utilize either the OVS-DPDK vSwitch, which is integrated into the StarlingX platform, or OVS, which is our own containerized vSwitch solution. By leveraging STX OpenStack, organizations can easily migrate their workloads from virtual machines to containers, while also benefiting from a range of powerful cloud computing features and capabilities.

This is a simulation; I'm not that lucky with live demos, like the guys from yesterday. Here, what we are trying to see is how easy it is to launch a VM. We select a source image, a flavor, and apply the network configuration, and that's it: the instance will be created. Next, we are going to look at the console of the instance that we just launched. We're navigating through it, and then we're opening the console. If you're familiar with OpenStack, this is not new for you, because it is OpenStack in a containerized manner.

As you can see, STX OpenStack provides a familiar and reliable environment for running traditional workloads and legacy applications. It ensures that our existing VM-based workloads integrate into our existing StarlingX cluster. This capability allows us to leverage the benefits of our highly efficient, resilient, and scalable infrastructure without the need for significant changes or reconfiguration. Just like OpenStack, we can dynamically allocate CPU, memory, storage, and network resources to our virtual machines, ensuring optimal utilization of our hardware. This translates into cost savings and improved network efficiency. By efficiently allocating resources based on demand, we achieve better overall performance and we ensure that our VMs scale to meet the required workload.
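The same launch flow shown in the demo through Horizon can also be driven from the standard OpenStack CLI, since STX OpenStack exposes the usual OpenStack APIs. This is a minimal sketch; the image file and the names used here (cirros.img, demo-image, m1.small, mgmt-net, demo-vm) are illustrative placeholders, not names from the demo.

```shell
# Upload a source image (a qcow2 disk image is assumed here).
openstack image create --file cirros.img --disk-format qcow2 demo-image

# Launch the instance: select the image, a flavor, and the network, as in the demo.
openstack server create \
  --image demo-image \
  --flavor m1.small \
  --network mgmt-net \
  demo-vm

# Check that the instance came up, then fetch the console URL, as we did next.
openstack server show demo-vm -c status
openstack console url show demo-vm
```

Because this is plain OpenStack tooling, any existing automation built around the OpenStack CLI or SDKs should carry over to an STX OpenStack deployment unchanged.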
An additional advantage lies in the interoperability and compatibility of STX OpenStack. It is built on open standards, allowing for integration with a wide range of applications and supporting interoperability across different OpenStack-based environments. This means that we can leverage our existing infrastructure investments and easily integrate our VM workloads within our StarlingX cluster. STX OpenStack provides the OpenStack components we are all familiar with.

In conclusion, the ability to launch VMs using STX OpenStack within our StarlingX cluster is a game-changing capability with a multitude of benefits for your organization. By leveraging this powerful tool set, we gain management of traditional VM workloads, we optimize resource utilization, and we solidify security measures while promoting interoperability across our infrastructure. With STX OpenStack, we have the means to meet the ever-evolving demands of our business in a cost-effective, scalable, and adaptable manner, ensuring that our organization remains at the forefront of innovation and operational excellence. As we have seen, with STX OpenStack we can run both VNFs and CNFs in the same cluster, solving the problem that was mentioned earlier.

I would like to thank the STX OpenStack team, and in particular Thales and Gabriel. Thales is our tech lead for the STX OpenStack project. He's a core reviewer on multiple StarlingX repositories, and he runs the StarlingX distro.openstack team, which is the name of this team. He runs the community call every Tuesday, 11 a.m. Pacific time. Gabriel is one of the key engineers working on this project and has made many contributions to StarlingX automation and sanity analysis. Both of them helped me build this presentation. And if you want a copy of this slide deck, you can reach out to Mario Zimmer; he can give you a copy.
And if you have any questions or special needs that we can help with regarding StarlingX OpenStack, or STX OpenStack, feel free to reach out to me or Mario Zimmer. We'll be glad to help you with that. I left part of the presentation time for questions, so if you have any specific question about this solution or how we can help you with it, feel free to ask.

Sure, if you have your StarlingX cluster configured properly, it's just a couple of simple commands: you upload the application tarball with system application-upload, run system application-apply, and you are all set. The secret, not exactly a secret since it's all well documented in the community wiki pages, lies in the configuration. You have to provide all the configuration for the application to work properly. And this is something that Encora has experience with and can help with; although it's well documented, as with a lot of other OpenStack projects, people have trouble getting it right. So we have the means and the expertise to help you set up your cluster and run STX OpenStack. I don't have the exact steps here, but I can send you the link with the detailed steps. It's pretty straightforward to get through that process, but if you need any help, we'll be glad to help.

The same server? That's a good question. Within one deployment, a given host will only be able to run either container workloads or VM workloads. You can have servers dedicated to one type of workload, with different workload types in your cluster at the same time. So if you configure a particular server to run VM workloads, all of its resources will be allocated to running VMs, but you can have both in your overall solution. So yes, it is possible. It was introduced, I think, in StarlingX 7, if I'm not mistaken. One of the features that we contributed to the community.
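As a rough sketch of the answers above, assuming a properly configured StarlingX cluster: the application is uploaded and applied with the system CLI, and a host is dedicated to VM workloads by assigning the OpenStack host labels documented by the community. The tarball and host names here are placeholders for your release and inventory.

```shell
# Upload the STX OpenStack application tarball, then apply it.
system application-upload stx-openstack-1.0-0.tgz
system application-apply stx-openstack

# Monitor the apply until it reports "applied".
system application-show stx-openstack

# Dedicate a host to VM workloads: lock it, assign the OpenStack
# compute labels, and unlock it so it reboots into the new role.
system host-lock compute-0
system host-label-assign compute-0 openstack-compute-node=enabled
system host-label-assign compute-0 openvswitch=enabled
system host-unlock compute-0
```

Hosts without these labels keep serving containerized workloads, which is how one cluster ends up running VNFs and CNFs side by side on dedicated servers.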
That came along with RBAC, role-based access control, and other contributions that we're making to this project. Yeah, anything else? If you have a pertinent and smart question, I'll give you one of Encora's t-shirts. Perhaps that's compelling. Okay, I think we can wrap up then. That's pretty much it, thank you very much.