So good afternoon, everyone. Thanks for being here. What I would like to talk about is some of the lessons that we learned in the TripleO project. I'll get a little bit into the details of what TripleO is, but what we're going to try to present today are some of the lessons we've learned while trying to containerize OpenStack as part of the TripleO deployment tool that some of us actually work on.

Before we get there: this is me, a younger and more naive me. Now I'm just naive. Thank you all again for being here. I work at Red Hat. That is my Twitter handle. I love feedback, so if you like the presentation, I know the app has some ways to provide feedback, but if you prefer to tweet me or email me, that is my Twitter handle and that is my email. If you don't like the presentation, that is not my Twitter handle, that is definitely not my email, and you can forget I was here. But jokes apart, really, if there's anything you would like to share — even if it's missing content, because you did this as well and it turns out you did it better than we did — I want to know that. So by all means, feel free to contact me. I look forward to it.

A couple of disclaimers. I tend to speak really fast, so I'm sorry in advance. I'll try not to drop any F-bombs during the presentation, but that might happen. And the last disclaimer is that this presentation is, again, a whole bunch of things that we learned that I kind of dumped into slides as really short statements. I'll try to explain them to you, but you don't really have to follow them all. They worked for us, but they might not work for you. So again, if there's anything missing here, by all means, let us know.

So, TripleO. You probably saw me adding this slide in the last 30 seconds before we started the presentation, because I realized I had a whole bunch of slides explaining what we learned and no slide actually explaining what TripleO is, and some of you might not know what TripleO is. TripleO stands for OpenStack On OpenStack. It is a deployment tool that uses OpenStack services wherever possible to deploy OpenStack itself. It is an upstream project, an official OpenStack project and an official deployment tool that you can use if you want. And that is, I think, as much as I want to say about TripleO, because I would like this presentation to be as agnostic about TripleO as possible, and more about the things you might want to consider when putting OpenStack in containers.

So first and foremost, the why. I'll go through three or four motivations, the things that actually pushed us to start doing this work. First, deploying OpenStack is relatively easy, really. The real hard work comes with day-two operations, when you have to keep OpenStack running: whenever you have to upgrade OpenStack, whenever you have to make changes, or just have your users actually using OpenStack, you've got to maintain the lifecycle of OpenStack. That is the hard part. This is one of the things we kept in mind when diving into this topic and figuring out whether this was something we really wanted to do or not. So one thing we would like to achieve by doing this is service isolation.
We want OpenStack services, and any other services that OpenStack depends on, to be as isolated as possible from the rest of the cluster, so that we have more granular control over the services, the way we deploy them and the way we run them. So service isolation, and also dependency isolation: each of the services we install, whether OpenStack services or third-party services, comes with a set of dependencies that we also wanted to isolate, so that whenever there's an upgrade in progress we won't be updating dependencies that other services might still depend on and probably breaking our environment. These are all things you might have heard many times when it comes to pitching containers, but, again, this is actually what we would like to achieve in our deployment, and we believe it is beneficial for OpenStack deployments.

Ease of upgrades. Again, something we've heard before when people pitch containers, but it does make upgrades somewhat easier. It does not solve all the upgrade problems — there are many upgrade problems that you will face whenever you try to upgrade OpenStack — but it solves some of them when it comes to getting the whole package, getting the isolated service, running it and moving from one version to another. I haven't talked yet about whether you're using Docker or some COE or what container runtime you're depending on, but depending on what container runtime you're using, and whether you're using a COE or not, it might also make your upgrades more dynamic and easier to scale by using the orchestration capabilities of that COE.

It also provides some deployment flexibility. Right now we have very specific nodes that are used for very specific services, and those are pretty tied together. If you use a COE, more specifically something like Kubernetes, you won't be tied down like that: you get some extra flexibility because you can let Kubernetes orchestrate these services for you based on the way you label nodes and the way you create your cluster. And I understand that some of what I'm saying here might be oversimplifying the whole process, but just bear in mind the high-level features that we actually want to gain from doing this.

And there is scalability. Again, if you run this on Kubernetes, or you run OpenStack on some other container runtime and you have the tools for scaling your nodes, it'll be easier for you to just tell the COE to go ahead, scale your services and add more containers to your infrastructure. It is not that simple to scale a compute node, and we know that, but for the rest of the API services and some of the controller services it does make it simpler to add more nodes to your cluster.

So these are probably the main reasons — hopefully I'm not forgetting any obvious one — that we started diving more into this, besides all the feedback from users and people who are interested in running OpenStack on containers because they believe it runs better, based on their own experiments and deployments.
Once you've answered why you want to do this and why you want to go down this road, the other thing you need to answer — and this is a question we faced ourselves — is what exactly you want to containerize. OpenStack is not just a set of APIs. There's a whole bunch of other pieces that come into play whenever you deploy an OpenStack cluster: all the OpenStack services, all the libraries and dependencies those services depend on, and all the third-party services that OpenStack depends on, like databases, message queues, et cetera. Even libvirt, or Open vSwitch — all these tools and third-party software that OpenStack actually depends on to work properly.

There are probably three obvious ways to answer this. First, you can containerize some OpenStack services and leave outside all the third-party services plus the OpenStack services you don't want to containerize. You might containerize only the API services, like Nova API, Glance API, Cinder API, et cetera, because those are probably the ones that will give you the least trouble whenever you want to upgrade them, scale your environment or generate configuration files for them. Those are the ones that are probably easiest to maintain. If you go down this road, it means you're going to leave the more complex services, like Nova Compute and Cinder Volume, running on the host, because you know how to run them already — this is assuming you already have an OpenStack cluster running. And you will leave all the third-party services running on the host as well, like MariaDB or RabbitMQ, because containerizing services that need to be highly available, and that you need to provide other guarantees for, is harder when you try to move them into Docker — even Kubernetes is only now getting the proper semantics to support stateful applications. So this is one way. It gives you a very hybrid approach where you just containerize the API services that you might want to scale more and leave the rest on the host, so you have more control and they are closer to the metal, I guess.

Second, you can containerize all the OpenStack services and leave only the third-party services outside containers. So you go ahead and containerize all the API services, schedulers, et cetera, and you also containerize the Nova Compute service and the Cinder Volume service, and you leave on the host services like libvirt, Open vSwitch, MariaDB, RabbitMQ and so on. What this gives you is more consistency, because you know that every process that starts with "openstack" — that is, built by the OpenStack community — is in a container, which gives you extra consistency in your deployment, and you know that all the third-party services are running on the host. And this is not possible with every third-party service, by the way.
If you're doing this, it assumes you can talk to these third-party services over the network, because if your services are containerized, then — unless you're running things in Docker and reaching into the host directly, which you don't really want to do from a container, and again I'm trying to keep this as technology-agnostic as possible — the network is how you get to them.

And the third option is containerizing everything. You just put OpenStack and all the third-party services in containers. This is probably the hardest option, because it means you now have to figure out how to containerize MariaDB, RabbitMQ and all the third-party services like libvirt and Open vSwitch. That is a solved problem for some of them, because people have done it already in other deployments and with other tools, but for other services it's actually not a solved problem, and it's something the OpenStack community has also worked on and tried to figure out in the best way possible.

So we like hard challenges, I guess, and this is the option we actually went with. One of the motivations behind this is mostly consistency. We want to eventually have a fully containerized OpenStack deployment, including third-party services, so that we have a very consistent deployment that we can ship. Eventually, all of this could even run on something like an Atomic host, something that only supports containers. That is the mindset — you could even have your own installer run from a container doing some magic stuff, but anyway, that is way off in the future. What I want to say is that this is the option we went with, mostly, A, striving for consistency and trying to unify the tools as much as possible, and B, to have some overlap with what other tools in the community are doing. TripleO is not the only tool doing this in the OpenStack community, and one thing we actually wanted was to be able to collaborate with those other communities as much as possible. Also, I'm overcaffeinated and on a sugar rush, so I might talk more than usual.

How are we going to do this? This is probably the fun part of the entire presentation, I hope, because this is where the actual lessons learned are. I don't think this list is exhaustive or anything, but these are the ones I remember being problems or causing discussions among the team and upstream on the mailing list. So let's forget the tools for a second. Let's forget that this is TripleO and that there are other tools doing the same thing, and let's review a little bit the architecture and how you want to do this.

One of the things we learned — though we didn't really fail here, because we were already doing it as part of the bare metal deployment we use in TripleO — is that you want to label your nodes. This is something almost all COEs have: you can put names and labels on your nodes, and have sets of nodes that you consider compute nodes, and sets of nodes that you consider volume or storage or whatever. Do it. If you're running this on Kubernetes, for instance, don't just run your Kubernetes cluster, throw OpenStack services at it and say, hey, just go and be smart about it, because Kubernetes is not going to be smart about it.
You know OpenStack. You know where you want to run your services, and you know that Nova Compute should run on the node where you want to have your virtual machines. That is probably the recommended way to do it — it doesn't strictly have to be that way, but it's what you probably want to do. Because if you don't do that and Nova Compute goes down, you end up with a very inconsistent deployment where there are virtual machines running but Nova Compute is not responding, and the state is basically broken. In my mind, when I think about a compute node, I think about Nova Compute and everything Nova Compute depends on. That's how I like to see it, and it just makes it simpler to architect the entire deployment.

So label your nodes, and make sure you use enough labels so that you can also tell your COE where to run things — assuming you're using a COE; this doesn't really work with plain Docker unless you have another tool that does it for you. If you're using a COE like Kubernetes, you want to tell Kubernetes how to run the services, but you also want to use enough labels so that you can explicitly tell Kubernetes where to run a service instead of telling it where not to run a service. What does that mean? It means that if you only label your compute nodes, then to run your API services you will probably have to tell Kubernetes, run these services on all the nodes that are not compute. You want to give the COE positive filters or selectors when you're running a service: go and run this service on the API-labelled nodes, instead of go and run this service on the nodes that are not compute. So just be smart about the labels and try not to overdo them. I think I even have a slide about overdoing labels.

One service per container. We have one service per container, and when I say one service I don't mean Nova, Glance, Cinder. What I mean is Nova API is one container, the Nova scheduler is one container, Nova Compute is another container, libvirt is another container. We have every service in its own container because we want them to be isolated from each other, so that we have more control over each individual service whenever we have to manage the entire cluster. This also makes config generation simpler, I believe, because you know you only need to generate the configuration files for that one specific service in that container — you don't need to generate configuration files for more than one service, because you don't have several services bundled in the same container. And it makes updating the image easier, because if you have two services in the same container, say libvirt running with Nova Compute in the same container, and there's a new version of libvirt, you'll be forced to also restart the Nova Compute service whenever you want to upgrade libvirt. The same thing happens if you want to upgrade Nova Compute and you don't really want to touch libvirt. So have a single service per container, because it gives you more control. It's like having a package per service in a bare metal deployment, I guess.
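As a rough sketch of what those two points might look like on Kubernetes — the label names, registry and image name here are hypothetical, not from the talk:

```sh
# Hypothetical role labels; pick whatever taxonomy matches your deployment.
kubectl label nodes node-1 node-2 openstack-role=controller
kubectl label nodes node-3 openstack-role=compute

# One service per container: this deployment runs only nova-api, and the
# nodeSelector is a positive selector pinning it to the controller nodes.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nova-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nova-api
  template:
    metadata:
      labels:
        app: nova-api
    spec:
      nodeSelector:
        openstack-role: controller
      containers:
      - name: nova-api
        image: example-registry/nova-api:latest
EOF
```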
Avoid placing containers yourself as much as possible. I know I just said label your nodes and tell your COE where to put your containers and where to run them, but if you're using a COE like Kubernetes, one of the reasons for using it is so that it'll schedule services for you, because the Kubernetes scheduler is supposed to be smarter than you. That doesn't always happen, but if you rely on it and just tell it to go run some of the services without telling it exactly where to run them, it'll balance the load, it'll try to run the services on nodes that are not overloaded with work, it'll help you scale them horizontally, and it'll give you the flexibility we talked about a little bit earlier. So placing services manually is not really bad, but it's also not good to overdo it, and it would be better to let the COE make some decisions by itself, using the data it has: the status of the cluster, the health checks, the state of the entire deployment. Try not to be smarter than the COE, I guess, in some of these operations.

Separate the bootstrap tasks. This is not really specific to OpenStack services: most of the services you run — databases, OpenStack services — have some bootstrap operations that you need to go through the first time you run them. If you want to run Nova, you're probably going to install Nova first, then you'll have to create a database for Nova, eventually you'll have to synchronize the database, and then and only then will you be able to run Nova — and I'm pretty sure I'm missing some steps in between, but there is this list of operations you have to go through before you're able to run the service. This is what I consider the bootstrap tasks for these services, and most of them, if not all of them, have them, at least for synchronizing the database.

So you've got to run them at some point. If your service is containerized and not installed on your bare metal host, you'll have to figure out a way to run these bootstrap tasks from the container itself — when you install Nova, it installs a tool that helps you synchronize the database, right? If you don't have it on bare metal, that tool is inside a container. One solution that some people have adopted is: when you run the container, it checks whether this task has been run already, and if it hasn't, it runs it and then eventually just starts the service, all in a single operation. You don't want that. You don't want that because it will be really, extremely hard to guarantee the consistency of your cluster if you do that, and because sometimes there might be race conditions: if you run the containers at the same time, they might try to synchronize the database at the same time. So you want to have control over these bootstrap operations. You want to run them at very specific times. They are bootstrap operations, right? You want to run them once when you install the service, and probably once when you're doing upgrades. Other than that, I just want to run the frigging service.

So keep them as separate tasks. The way we are doing it in TripleO right now is that we reuse the same container, because it's the one that has the tools: we run the container with the bootstrap operations by basically overriding the entry point, and once those tasks have been run, we then run the actual service. Sometimes we are able to run them in parallel, because if you run Glance API it's going to be fine — I mean, unless you try to query the API, it's going to be fine to run Glance API before you run the db sync. But again, we try to keep an order so that we follow the same consistent architecture for all the services: first run all the bootstrap tasks and eventually run the service container, and you run these tasks explicitly using your tool, whatever tool it is that you're using. Don't put everything in the same entry point for the container, because that's not going to end well.
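A minimal sketch of that separation with plain Docker — the image name, container name and the exact nova-manage invocation are assumptions for illustration:

```sh
# Run the bootstrap task as its own one-off container, overriding the entry
# point, so you control exactly when the database gets synchronized.
docker run --rm \
  --entrypoint /usr/bin/nova-manage \
  example-registry/nova-api:latest \
  db sync

# Only once the bootstrap task has finished do you start the long-running
# service from the same image, with its normal entry point.
docker run -d --name nova_api example-registry/nova-api:latest
```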
Structure your images. For this we're actually using Kolla images, and the Kolla images have a great structure. When I say structure, I actually mean layers. You can have different layers in your container images, and this is something you really want: you can install common packages in the base layers of your images and install only the service-specific packages in the topmost layer of the image. You want to do this so that you avoid recreating all the images every time. The problem with this structure, though, is that if you need to upgrade, I don't know, one of the base packages that sits in the first or second layer, you'll end up having to rebuild all your images. And right now — I don't remember the number exactly — I think we're over 120 images for a base OpenStack deployment. That's a zillion images, really. If you run kolla-build, it'll take two to probably three hours to build them all from scratch. So if you update one of the packages at the base layer, you'll end up running builds for all the images, and that's going to take some time. But you do want to have some structure in your images, so that you can control what goes into each layer, you know what you're going to deploy at a specific time, and you can have different versions. It's very interesting how the layers work for container images.
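Just to make the layering idea concrete — a hypothetical sketch, not the actual Kolla build, with made-up image names and a deliberately tiny package list:

```sh
# Base layer: dependencies shared by every OpenStack image live here, so they
# are built (and cached) once.
cat > Dockerfile.base <<'EOF'
FROM centos:7
RUN yum -y install python python-pip mariadb-libs && yum clean all
EOF
docker build -t example-registry/openstack-base -f Dockerfile.base .

# Service layer: only the nova-api specific bits go on top, so changing this
# service does not force a rebuild of every other image -- but changing the
# base layer above invalidates everything built on it.
cat > Dockerfile.nova-api <<'EOF'
FROM example-registry/openstack-base
RUN yum -y install openstack-nova-api && yum clean all
EOF
docker build -t example-registry/nova-api -f Dockerfile.nova-api .
```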
You don't want to abuse mounts, which we're actually kind of doing. Once you have the containers running, it's very easy to solve certain problems with a mount. Say you're migrating from a bare metal deployment to a containerized one, you already have your database running on bare metal, and now you want to run a MariaDB container: there's a question of how to migrate the data from the bare metal node into whatever storage the containerized MariaDB should use. That question needs to be answered, and one easy solution is to just bind-mount /var/lib/mysql into the new container and let it access the data, right? It's very easy. It's probably dirty, but it works. And since it's easy to do that, at least when you're using Docker, it's easy to do the same thing for many other things the container needs access to. What happened to us is that we ended up adding more and more and more bind mounts to the containers. Now we have, I believe, many of them, for different reasons whose details I really don't want to get into right now. But if you happen to be doing this, try to avoid abusing the mounts and bind mounts you can put on a container. One reason is that if you eventually decide to move from whatever container runtime you're using — Docker, for instance — to a COE like Kubernetes, you'll have a huge problem, because you're not supposed to be accessing host paths from the container. If you do that, and Kubernetes decides to schedule your MariaDB container somewhere else, onto a node that doesn't have the data, you'll be trying to access /var/lib/mysql in a container that doesn't have your data. So there's a huge problem that is not going to be solved by just accessing data from the host. If you're never planning to move to a dynamically scheduling service like Kubernetes, then I think you'll be fine, but again, try not to abuse them, because eventually, unless your deployment tool is extremely smart, you'll want to move to something else and let the COE make some decisions for you and give you that flexibility we talked about a little bit earlier.

You want to set the hostname in the container. This was not extremely obvious to us, I think, or it just slipped our minds during the first implementation, and we ran into issues when using Puppet to do some things in TripleO. I don't remember exactly whether it was a Puppet issue or some other issue, but it happened that we were not setting the hostname, and when we ran the synchronization tasks and tried to access the database — actually, when we tried to access the database to create the database for each service, and this was done by Puppet, now I remember — the connection was failing, because it was trying to use the hostname, which was not set in the container. So remember to set the hostnames if you need them. If you can use DNS, even better, but if you're not using DNS and you just want to rely on your hostnames, that's perfectly fine — just remember to set them. It's important for your networking and your container life.
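A small sketch pulling those last two points together — the image name and paths are placeholders, and the bind mount is exactly the kind of shortcut being warned about:

```sh
# Set the hostname explicitly so anything inside the container that relies on
# it (config tooling, database grants, etc.) sees the same name as the host.
# The /var/lib/mysql bind mount is the easy-but-dirty data migration trick:
# it works on a single Docker host, but it ties the container to this node,
# so a COE that reschedules the container elsewhere would leave the data behind.
docker run -d --name mariadb \
  --hostname "$(hostname -f)" \
  -v /var/lib/mysql:/var/lib/mysql \
  example-registry/mariadb:latest
```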
Generate config files from the container. I guess this is the only slide I have that is very specific to TripleO and the way we do things, but I think it's valuable to mention the solution we adopted here. One of the problems when working on deployment tools is generating the config files that the services will then use to run properly, and, like I said, we're using Puppet for that in TripleO. Puppet needs the software to be installed, in the container or on the host, to actually be able to generate the config file, so one of the problems we had is that if you try to generate the config files for a service that is not installed, it's not going to be able to do it.

One solution we adopted at the very beginning was to have a single container, which we called the agent, where we installed all the OpenStack services and kept all the config files, so that whenever we ran Puppet in that container it would be able to generate the config files, and then we would copy those config files into the right containers. It's kind of confusing, but there was this master — agent, actually — container that had all the config files. It worked well for a single service; it just didn't scale very well. One of the reasons is that if you ever need to upgrade your OpenStack cluster and you want to update the Nova API service, you also have to update your agent container, because you need to update and regenerate the config files in the agent container and then copy those into the new Nova container you're running.

So what we did instead — and credit to the person who suggested it — was ditch that agent container and install Puppet in every container, at the very base layer of the container image. Whenever we need to generate a config file, we use the same container image that we use to run the service, but we just call Puppet inside the container. It generates the config file for that service, for that specific version, it does it once, and then we reuse that config file in that same container. It solves a lot of the problems we had with the whole agent approach. And again, we want to simplify upgrades, and using the agent container was just going to make them more complex, which is something we really don't want. This worked very well for us. I'm pretty sure that if you're using different technologies you won't need a solution like this, but if you ever run into a similar problem, this is something you could do, and it works pretty well.
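A hedged sketch of that pattern with plain Docker — the Puppet manifest path, volume name and image are all hypothetical stand-ins for whatever config tooling is baked into your image:

```sh
# One-off run of the service image, overriding the entry point to run the
# config generation step; the generated files land in a named volume.
docker run --rm \
  -v nova_config:/etc/nova \
  --entrypoint puppet \
  example-registry/nova-api:latest \
  apply /etc/puppet/manifests/nova-api.pp

# The actual service then runs from the same image and version, reusing the
# config that was generated by that image's own tooling.
docker run -d --name nova_api \
  -v nova_config:/etc/nova:ro \
  example-registry/nova-api:latest
```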
Logging. Oh my God, logging. When we started running the containers, we got to the point where all the containers were running, and it was like, okay, containers are running, now let's see if the services are actually running and responding the way they should be. And very few of them were. Then the problem is: how do we figure out what's going on in the container, what we fucked up, I guess. One of the things we found out when we got into the containers is that we didn't have good logging, because the config files we were generating were based on our old bare metal deployment, which means they were all configured to log to /var/log. When you run a service in a container, the standard way of doing it, the recommended way, is to have your process log to standard output, and then you can access that log using the Docker tools and so on. That was not the case for our services, because they were configured to just log to /var/log, which meant we had to get into the container every time and go read the log files — if they were there, because sometimes they weren't even there, since the user running the service inside the container didn't have permission to create the file. So it was a mess.

I guess what I want to say is that there isn't a standard or consistent way to do logging in an OpenStack deployment, not that I'm aware of. Most of the services just log to /var/log. Most of the deployments I know of just log to /var/log. There are other deployments that use logging services like Fluentd, or an entire ELK stack, to store the logs, which is perfectly fine. When you're moving things to containers — this is not what we're doing, but I truly believe it — you want a hybrid solution: log to standard output and also log to a file, so that you don't entirely change the way your deployment works right now. This is one of the other cases where we abused the mounts: we basically mounted /var/log inside the container, so that we would still log to /var/log on the host and be able to reuse all the tools we already had to analyze the OpenStack deployment. I'm not saying it's going to stay like that, but it's another solution you have. The recommended way, if you can, is to log to standard output and also have your log files; that is probably the best way to do it. You can have a data container where all your log files go, and also have the services log to standard output. And this is great because then you can have tools like Fluentd, like I mentioned before, going through the containers, collecting all the logs for you and centralizing them in a single place if you really want. Or you could eventually even have the OpenStack service log straight to Fluentd — I think there's a plugin you can put in the container runtime that will do it automatically for you, but that's a bit of a digression. Anyway, just bear in mind that logging to standard output is great, but I would also keep logging to files, because there are many tools that just parse files and analyze them in a proper way, which is at least what the tools we were using do.
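A sketch of that hybrid setup — the paths and image are placeholders; for OpenStack services the file-versus-stderr behaviour itself is controlled by oslo.log options such as log_file, log_dir and use_stderr in the service's config:

```sh
# Bind-mount the host's log directory so the service keeps writing the files
# that existing log-parsing tools expect...
docker run -d --name nova_api \
  -v /var/log/nova:/var/log/nova \
  example-registry/nova-api:latest

# ...while anything the service writes to stdout/stderr is still captured by
# the runtime and available the "container way":
docker logs nova_api
```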
And I guess one of the biggest questions is whether to COE or not to COE. COE stands for container orchestration engine, a.k.a. Kubernetes and services alike. You have your container, you tell this COE to please run that container, and it dynamically schedules the container for you. So one of the big questions is whether to go with a tool like that or to just go with a plain container runtime like Docker or rkt. This really depends on your taste and on what you want to do. Just bear in mind that if you decide to use a container runtime and not depend on any COE, you'll lose some of the flexibility we talked about before, unless your deployment tool is smart enough to provide that flexibility for you. You cannot tell Docker — unless you're using Docker Swarm — to please go and dynamically schedule a container for you. You can tell a Docker daemon to run a container for you on that specific node, but you cannot give a Docker daemon a set of nodes and tell it, please run the container on some of these nodes, you decide the best way to do it. That is something you can do with Kubernetes or Docker Swarm or some other COEs. Our decision was that we want to eventually run everything on a COE, on Kubernetes, but before we get there we're going to start with Docker, start migrating bare metal deployments onto a container runtime, and eventually, once we've figured out some of the basic difficulties and challenges of running OpenStack on containers, start the migration to a specific COE.

Things that we still haven't figured out quite yet — I think there's just one, actually; the only one that came to mind is choosing the right storage driver. Like I said, in this first phase we're mostly using Docker, and one thing we have not figured out yet is what the best storage driver is for this. There is good documentation, and we roughly know what the storage drivers are. You can have different storage drivers for Docker, and devicemapper is the default one, which basically writes into a loopback device, which is extremely slow and not good for production. But you have other options: there's one that runs on Btrfs, there's another that uses overlay2, and there are different drivers with different performance and different recommendations for different environments — production, development, testing, et cetera. Some are faster but less reliable; others are more reliable but a little slower. So which one is the best? Probably overlay2 is the answer, but I don't believe we've done enough testing in the upstream CI environments to have proper numbers to back that statement.
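For reference, checking and pinning the driver looks roughly like this — the daemon.json contents are just an illustrative assumption, not a recommendation from the talk:

```sh
# See which storage driver the Docker daemon is currently using.
docker info | grep -i 'storage driver'

# Hypothetical /etc/docker/daemon.json explicitly selecting overlay2 instead
# of the loopback-backed devicemapper default; the daemon needs a restart
# after changing this, and existing images/containers may need rebuilding.
cat /etc/docker/daemon.json
# {
#   "storage-driver": "overlay2"
# }
```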
And with that, I'd say these are all the things I believe were relevant to mention here, the things we learned in the last, what, six months, seven months? Not really long, but yeah. So, do you have any questions? There's one.

One more question: could you summarize the current state in terms of what's working, what's not working, and what's targeted for which release? So the overcloud deployment is actually working — we're able to deploy the overcloud on containers. The CI is broken right now, I think; as of last week it broke, but we've been able to run containerized overclouds already. The target is to release all of that during — what release are we in? Pike? Yeah — in Pike, by the end of this development cycle. The key thing is migrating all the bare metal deployments to containerized ones so that we don't have to support two, and of course not breaking existing deployments or backwards compatibility, which is a key thing we want to preserve in TripleO.

Liz? All right, I think there's another question. Are you going to use the COE to replace high availability services like Pacemaker? That's a very good question, and I forgot to mention Pacemaker, actually. For now, for Pike at least, the aim is to containerize Pacemaker, or use Pacemaker Remote, for controlling some of the HA-dependent services like MariaDB, RabbitMQ, et cetera. And hopefully, when we move to the COE, yes, the goal is to replace Pacemaker entirely and just rely on the HA capabilities of the COE.

One more question: are you using load balancing on services like RabbitMQ when you containerize them? Right, so we're keeping the same architecture for the Pike release, at least — the same architecture we have for bare metal — so yes is the answer. And whenever we move to the COE, then again, we would rely on whatever is available in the COE itself, like Kubernetes. You're very welcome.

There's one more question: are you working with the Atomic project yet? I saw it in the slides for a second, for the containerized operating system. Yeah — eventually, once we manage to containerize everything, you would be able to run OpenStack on Atomic, in theory. I think we've actually done it. Well, it won't work anymore, because now we're depending on some stuff that runs on the host, but eventually, yes, you'll be able to do that. You should be able to do that. Not for Pike, though, I just want to clarify that. Any more questions? All right, well, thank you.