My name is Brendan Shephard and I'm a software engineer working for Red Hat on the TripleO project. I'm excited to give you an update on what we've been working on over the last 12 months and what we have in the pipeline moving forward.

TripleO is a deployment tool used to orchestrate production-ready deployments of OpenStack. It leverages a combination of OpenStack projects and Ansible to define your infrastructure as code and orchestrate the deployment, and it is the deployment tool behind Red Hat OpenStack Platform.

Our work over the last 12 months has focused on refining the simplification changes we've made over the last few cycles. We've chosen to focus on specific releases rather than releasing a new version of TripleO for each release of OpenStack; our currently supported releases are Wallaby and Zed. This lets us concentrate on the areas that matter most to our users and increases our CI coverage of key features.

We've worked to optimize the overall deployment process, with the highlights landing in three main areas: ephemeral Heat, a simplified undercloud, and a more modular deployment process.

Ephemeral Heat lets us keep leveraging Heat and the templates our users are familiar with while reducing the complexity and overhead of managing such a large and complex Heat stack. With this feature, we start the Heat service on demand, process the user templates, and write artifacts to the local file system in the form of Ansible playbooks and group variables. These artifacts are stored in a local Git repository where each deployment creates a commit that can be tracked over time. The playbooks are then used by Ansible to orchestrate the deployment of your OpenStack cloud.
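To make the Git-backed artifact tracking concrete, here is a minimal, self-contained sketch. The temp repository, file name, and NTP parameter below are stand-ins invented purely for illustration; in a real deployment TripleO manages the artifact directory itself, and each deploy run creates the commit for you.

```shell
set -e

# Stand-in for the artifact repository that ephemeral Heat writes out.
# The real path is managed by TripleO; this temp repo and the variable
# file below are purely illustrative.
repo=$(mktemp -d)
git -C "$repo" init -q

# First "deployment": Ansible group variables rendered from the Heat templates.
echo "ntp_server: pool.ntp.org" > "$repo/group_vars_controller.yaml"
git -C "$repo" add -A
git -C "$repo" -c user.name=demo -c user.email=demo@example.com commit -q -m "deployment 1"

# Second "deployment" with one changed parameter: another commit on top.
echo "ntp_server: clock.corp.example.com" > "$repo/group_vars_controller.yaml"
git -C "$repo" add -A
git -C "$repo" -c user.name=demo -c user.email=demo@example.com commit -q -m "deployment 2"

# Comparing deployments is now ordinary git usage:
git -C "$repo" log --oneline      # one commit per deployment
git -C "$repo" diff HEAD~1 HEAD   # exactly what changed between them
```

Because the artifacts are plain files under version control, any git tooling you already use works for auditing what a deployment changed.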
The benefit is that we no longer need to work with Heat stack updates, which reduces the complexity of untangling problems should a mistake be made during a stack update. Storing the artifacts in a Git repository makes it easy to compare what was previously deployed with what has changed in the latest deployment, simplifying the process of validating your changes between deployments.

A simplified undercloud: leveraging OpenStack projects to deploy OpenStack has served us well, but it has proven to be overkill at times, leading to higher resource utilization and increased complexity for our users. With the latest release of TripleO, we've worked hard to simplify this and only use services that make sense for our use case. This has reduced both the resource utilization and the complexity of our undercloud director node. Services we have removed include Mistral, Zaqar, Nova, Swift, and Glance, while we continue to leverage Heat, Ironic, Neutron, and Keystone.

A new, more modular deployment process: TripleO has traditionally relied on a fairly monolithic approach to deploying the cloud, whereby we would create the Heat stack and do everything from provisioning the nodes and networks to rolling out services and configuration. The new modular deployment process breaks that up into steps that are easier to manage and troubleshoot. For example, the two places a new deployment is most likely to fail are node provisioning and network provisioning. During a monolithic deployment, it could take 10 to 15 minutes for such a failure to surface. Now users can run each of these steps independently and fail faster while they tweak these crucial configurations. This means our users can be more confident that the overcloud deployment will succeed, since the fundamentals are already in place before they start.
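The step-wise flow described above can be sketched as a small fail-fast script. The per-step TripleO CLI commands shown in the comments do exist, but the file names used with them are illustrative assumptions, and the placeholder stages keep the sketch self-contained rather than a verbatim deployment recipe.

```shell
set -e

# Sketch of the modular flow: each stage can be run and re-run on its own,
# so a bad provisioning config fails in that stage rather than minutes into
# a monolithic stack update.

run_step() {
  # Run one deployment stage and stop the whole flow on its first failure.
  echo ">>> $1"
  shift
  "$@" || { echo "step failed; fix the config and re-run just this step" >&2; exit 1; }
}

# In a real deployment these stages would be commands such as:
#   openstack overcloud node provision --output deployed_nodes.yaml baremetal_deployment.yaml
#   openstack overcloud network provision --output deployed_networks.yaml network_data.yaml
#   openstack overcloud deploy --templates -e deployed_nodes.yaml -e deployed_networks.yaml
# (file names illustrative). Here each stage is a placeholder so the sketch runs anywhere.
run_step "provision nodes"    true
run_step "provision networks" true
run_step "deploy services"    true
echo "all stages complete"
```

The point of the structure is that a failure in "provision networks" surfaces immediately and can be retried alone, without re-running node provisioning or the service rollout.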
For this release, we've focused on honing these simplification efforts to ensure the best possible user experience: making things more efficient, and easier to troubleshoot when they do go wrong. As part of this effort, we've worked closely with our CI tools and teams to ensure these changes are adequately tested and that multiple scenarios are executed for every patch. We've worked with the teams behind TripleO-Quickstart and the InfraRed project to extend these tools and features. We've also moved our deployments from CentOS Stream 8 to CentOS Stream 9 as part of our latest release.

Looking ahead, in the coming releases we aim to continue these efforts and push further toward modularity rather than a monolithic design. This includes working with multiple versions of the operating system, moving bespoke playbooks out of tripleo-heat-templates and into targeted Ansible roles, and optimizing our update and upgrade process in support of these goals. We want to make our framework easily consumable by other projects that may wish to leverage the work we've done. An example is the OSP Director Operator, which is used to deploy OpenStack from a container-native pod running in Kubernetes. We will continue improving the update and upgrade process in line with our efforts to simplify TripleO and ensure software stability across versions. This includes our approach to modularity: working to ensure that upgrades can be done in stages while still providing a reliable experience for tenant workloads.

Finally, we'd really like your help. If you're a TripleO user, I encourage you to engage with our community by reviewing our upcoming features and specifications on our specs page. You can review our changes on review.opendev.org, and you can chat with us in #tripleo on the OFTC network. Our documentation can be found in the links provided on this slide.
And if you'd like to see more about any of the features I've talked about, I've made a few videos that cover each of them in a lot more detail, so feel free to check out my YouTube channel. Thanks for attending and listening to the update on the TripleO project.