Hi, folks. Welcome to this discussion of Ceph integration with Red Hat OpenStack Platform, specifically the upstream TripleO project. The original presenter, Manoj, couldn't make it, so I'm one of two people filling in. I'm Alan Bishop, a developer at Red Hat. And I'm Goutham Pacha Ravi, also a developer at Red Hat; I work with Alan and with Manoj. We had to scramble these slides together, so the agenda is going to be fairly quick. We don't have a lot of time here, and we're going to cover the material at a high level. Any questions, please feel free to catch us in the hallway afterwards.

We're going to cover, at a very high level, Ceph and how it's integrated into OpenStack. We'll discuss the transition the TripleO project made from using ceph-ansible in earlier TripleO releases, compatible with earlier Ceph releases, to the new cephadm tooling, and how we can upgrade clusters consumed by OpenStack from Ceph Nautilus to Quincy. We'll briefly cover the TripleO concepts of externally deployed Ceph and what we now call "deployed Ceph" in the most recent TripleO release. And then, lastly, how the upgrade process works to take your Ceph cluster from Nautilus to Quincy.

What's a TripleO release? Sorry, what was that? What's a TripleO release? Okay, excellent question. For those who aren't familiar with it, TripleO is the deployment tooling used by Red Hat for deploying OpenStack. Interesting story: we're in a bit of a state of flux here, because Red Hat as a company is migrating away from TripleO to a next-generation tooling. So specifically, what we're going to cover is how the upstream TripleO project, which is roughly equivalent to Red Hat OpenStack downstream, works as of the Wallaby release.
After the Wallaby release, Red Hat is migrating to a different tooling, but the general term for our tooling upstream is the TripleO project. TripleO stands for "OpenStack on OpenStack." I could go deeper, but that would consume far too much time; happy to talk at length afterwards.

A quick overview of the integration of Ceph with OpenStack, and why. OpenStack provides the three major storage services, if you will: block, file, and object. Block storage is the upstream Cinder project, file is the Manila project, and object storage is often associated with Swift. Not coincidentally, Ceph also provides block, file, and object. When it comes to object, Ceph supports the RADOS Gateway (RGW), which can emulate Swift's API, so you can use Ceph with RGW in lieu of Swift for your OpenStack object storage.

I got a bit ahead of myself here. One of the major benefits, of course, is that all of this is open source: all of OpenStack is open source, all of Ceph is open source. And, the old Wikipedia "citation needed": RBD is consistently ranked as the most used back end for Cinder, the block storage service, and CephFS is likewise ranked highly as a back end for the Manila file service. So there's a huge synergy between all of these.

There are a couple more points at the bottom that I can't speak to. Goutham? Yeah, one of the biggest things is that, from the get-go, Ceph supported the OpenStack multi-tenancy model, so it was easier to integrate with Keystone multi-tenancy. Especially with RGW as a stand-in for Swift: Swift does multi-tenancy with Keystone very well, and RGW has an implementation of it, so it looks and feels a lot like the Swift interface and covers the multi-tenancy aspects. Awesome.
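As an aside, the RGW-as-Swift integration described above is configured through RGW's Keystone options. A minimal sketch, as it might appear in `ceph.conf`; the section name, endpoint, and credentials here are illustrative placeholders, not values from the talk:

```ini
# Illustrative RGW/Keystone configuration; all values are placeholders.
[client.rgw.controller-0]
rgw_keystone_url = http://192.168.24.2:5000
rgw_keystone_api_version = 3
rgw_keystone_admin_user = swift
rgw_keystone_admin_password = secret
rgw_keystone_admin_project = service
rgw_keystone_admin_domain = Default
rgw_keystone_accepted_roles = admin, member
# Make RGW URLs look like Swift's account-scoped URLs
rgw_swift_account_in_url = true
```

With options along these lines, RGW validates tokens against Keystone and maps Keystone projects to object storage tenants, which is what makes it a drop-in stand-in for Swift's multi-tenancy.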
And we have flexible deployment topologies with Red Hat OpenStack. One of them is, of course, using Red Hat OpenStack Platform to deploy Ceph for you, alongside the OpenStack cloud you're actually deploying. In that model you can even run hyper-converged, on the same infrastructure, for much smaller form factors, maybe at edge sites and so on. And there's the other use case, where Ceph is deployed elsewhere and you deploy an OpenStack overcloud that connects to it and serves Cinder, Manila, and RGW for you.

This covers the major steps TripleO implements in order to, I'll phrase it this way, do its thing with Ceph. In earlier releases, the deployment of Ceph was woven together with the OpenStack deployment. In the new model, there are very discrete steps, each isolating a chunk of the functionality to prepare for the next step. First, there's a hardware provisioning step; nothing happens until the hardware is provisioned. Then there's a step for configuring the network, which again is a completely isolated step. And then Ceph. The key point, for folks familiar with how TripleO worked in the past, is that Ceph is now deployed before any of the OpenStack services. It's a way to decouple things, so we can cleanly focus on the Ceph deployment before moving on to the OpenStack phase, which can rely on the fact that Ceph has already been deployed, verified, and is up and running.

TripleO can deploy all of this for you at once, although it breaks it down into those steps. But it's also capable of integrating with an existing Ceph cluster; in fact, it will integrate with multiple Ceph clusters if you have them. So you can configure TripleO to do the OpenStack deployment and tell it, "I've got that Ceph cluster over there."
Here's its FSID and Ceph keys, et cetera; and I have another Ceph cluster over here. TripleO will then configure, from OpenStack's perspective, all of the clients so they can access these external clusters. The key thing is that TripleO can do your Ceph deployment for you, but can also consume, or grant access to, an existing Ceph cluster.

Read the slides to see if I'll cover everything. Which one of us was going to cover this? Yeah, I guess so. Sorry, was that a question? No, okay. So in the past, when we were working with releases like Luminous and everything older, we needed ceph-ansible, so TripleO had Ansible playbooks that would then invoke ceph-ansible. It provided the UX that OpenStack deployers were familiar with: deploying Ceph alongside OpenStack was the same thing as deploying OpenStack, just a few more configuration options, so it all looked similar. We were trying to preserve that UX while the installer behind it was changing. The Ceph community came up with cephadm and was hardening it, and as we were feeding back, we wanted to preserve the UX for the deployment piece. But once the deployment is done, TripleO steps away, because it's no longer the best tool for managing your Ceph cluster once it's deployed. There are now native Ceph tools mature enough that you can do your own upgrades, independent of managing your OpenStack deployment. So we no longer need one giant UX: we'll still give you the UX to deploy, and then you can use the native Ceph tools to manage the day-two operations and so on. But all of this raises the question of which Ceph version lines up with which OpenStack version, and that's the next part.
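For the external-cluster case just described, the pointer to an existing cluster, its FSID, client keys, and monitor addresses, is expressed as Heat parameters in a TripleO environment file. A sketch along the lines of the Wallaby-era tripleo-heat-templates parameters; every FSID, key, and IP below is a placeholder:

```yaml
# Illustrative environment file for consuming external Ceph clusters.
parameter_defaults:
  # Primary external cluster
  CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
  CephClientKey: 'AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw=='
  CephExternalMonHost: '172.16.1.7,172.16.1.8,172.16.1.9'
  # Additional external clusters (e.g. for edge sites)
  CephExternalMultiConfig:
    - cluster: 'ceph2'
      fsid: 'af25554b-42f6-4d2b-9b9b-d08a1132d3e8'
      external_cluster_mon_ips: '172.18.1.5,172.18.1.6,172.18.1.7'
      keys:
        - name: 'client.openstack'
          caps:
            mgr: 'allow *'
            mon: 'profile rbd'
            osd: 'profile rbd pool=volumes, profile rbd pool=vms'
          key: 'AQCwmeRcAAAAABAA6SQU/bGqFjlfLro5KxrB1Q=='
          mode: '0600'
      dashboard_enabled: false
```

No Ceph services are deployed in this mode; the parameters are only used to generate the client configuration (ceph.conf and keyrings) on the overcloud nodes so Cinder, Nova, Glance, and Manila can reach the external clusters.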
If you are using the deployed-Ceph approach with TripleO, there's an opinionated stance from Red Hat and the upstream OpenStack community: if you're deploying for the first time with Wallaby, you start with the Ceph Quincy release. If you're upgrading from an older version of OpenStack, we expect you to be on at least Ceph Pacific if it's an external Ceph cluster; and if it's an internal Ceph cluster, we want you to use TripleO to upgrade your Ceph cluster before you start upgrading your OpenStack cluster. That's how we line up the versions. In terms of product, you'll find these in the slides: the Red Hat version numbers, the upstream Ceph version numbers, and so on and so forth. Trust me, we try to keep all of that away from deployers as much as possible by making some automated and intelligent choices.

How we do that in the background, which is what this slide is talking about, is how we bootstrap ourselves. We're using a very vanilla way of running cephadm; we do the same thing as upstream has designed it for. There's no special sauce in TripleO; it's just made easy for OpenStack deployers to do it this way. That's it. One important caveat: the one service that we're still going to run with the help of TripleO is Ceph NFS. And if you come back in a couple of hours, we're going to talk about how we're also going to try to do that with cephadm next. And yes, I think we can run through this: very little Ansible wrapping cephadm is what we're trying to do with TripleO. And certainly we'll give you a reference to these slides if you want to get into the details of how this is done as far as the UX goes. If I may, let me very briefly back up.
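Concretely, the Wallaby-era flow sketched above maps onto a sequence of TripleO CLI commands, with `openstack overcloud ceph deploy` as the thin wrapper around the vanilla cephadm bootstrap. A rough sketch; the file names and flags here are illustrative, and real deployments carry more options:

```shell
# 1-2. Provision hardware and networks as isolated steps
openstack overcloud node provision \
    --output deployed_metal.yaml baremetal_deployment.yaml
openstack overcloud network provision \
    --output deployed_network.yaml network_data.yaml

# 3. Deploy Ceph before any OpenStack services
openstack overcloud ceph deploy deployed_metal.yaml \
    --output deployed_ceph.yaml

# Under the covers, step 3 boils down to the upstream cephadm flow:
#   cephadm bootstrap --mon-ip <first-mon-ip> ...
#   ceph orch apply -i <generated service spec>

# 4. Deploy OpenStack on top of the running, verified Ceph cluster
openstack overcloud deploy --templates \
    -e deployed_metal.yaml -e deployed_network.yaml -e deployed_ceph.yaml
```

The point of the output files is the decoupling mentioned earlier: each step hands a small, declarative artifact to the next one, and the final overcloud deploy only needs to know how to talk to a Ceph cluster that already exists.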
For those who aren't that familiar with TripleO: TripleO uses Heat templates as the UX, but when you actually execute commands, it generates and then executes Ansible under the covers. Those Ansible playbooks will either drive ceph-ansible playbooks, for upgrading earlier releases, or will directly run cephadm commands. So there are TripleO Heat templates at the top, but we use an awful lot of Ansible to then drive the actual commands at the bottom, which in this instance means ceph-ansible and cephadm. That's the extra layer of Ansible in the middle.

And here we see it for the upgrades. Say you're starting with an older release, perhaps back on Train, where ceph-ansible was used to configure your Ceph cluster and you're running an older Nautilus release of Ceph. There's a process by which we use ceph-ansible to upgrade your Ceph cluster to Pacific. ceph-ansible supports this notion of "cephadm adopt," and that's where we step out of the ceph-ansible world and into the modern cephadm world. From there, TripleO will manage everything to get up to Quincy, and do all the management, using cephadm. So that's the upgrade process from older TripleO and older Ceph, and how we sequence the steps to migrate both of those subsystems up to what we consider the modern approach.

I think we're done. Yes. So, any questions? We'll hop off the stage, so come speak to us if you need to. Thank you.
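The Nautilus-to-Quincy sequence above is driven for you by TripleO's upgrade tasks, but the underlying moving parts are roughly these; the inventory path and container image tag are illustrative:

```shell
# 1. ceph-ansible rolling upgrade: Nautilus -> Pacific
ansible-playbook -i inventory infrastructure-playbooks/rolling_update.yml

# 2. Hand day-to-day control of the cluster over to cephadm
ansible-playbook -i inventory infrastructure-playbooks/cephadm-adopt.yml

# 3. From here, cephadm's orchestrator drives the upgrade to Quincy
ceph orch upgrade start --image quay.io/ceph/ceph:v17
ceph orch upgrade status   # monitor progress
```

After step 2, ceph-ansible is out of the picture entirely: the daemons run as cephadm-managed containers, and all subsequent lifecycle operations go through `ceph orch`.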