But we're supposed to put everything in containers now. We were just waiting a bit for the mic, but it's probably fine, so let's start. Thanks everyone, welcome, and thanks for having us today. Today's presentation is about containerizing everything: containerizing infrastructure, containerizing storage. And yeah, Sean.

Hi, everyone. I'm Sean Cohen, I'm a product manager for OpenStack. This summit was almost a container summit as well for OpenStack, in terms of the number of tracks. But one of the themes I've been missing is: great, we can run OpenStack on containers, we can run containers on top of OpenStack, but one thing we forgot in the middle is, what about storage? Ceph is by far the number one storage adopted in OpenStack, and as we move forward with enabling containers on top of OpenStack, and actually running OpenStack itself in containers, we need to make sure that everything else comes along with it. And what is more important to bring along than our Ceph? So that's pretty much the theme of today. And I want to introduce my colleagues as well.

Hello, I'm Federico Lucifredi. I work on Ceph in the Red Hat storage business.

All right, so I'm Sebastian, mainly working on Ceph, a really Ceph-centric person, working on several topics around Ceph: Ceph in containers, Ceph in OpenStack, Ceph in configuration management systems like Ansible. So I'm always rotating within the ecosystem around Ceph.

So OK, let's get into it. A little bit of background before we start: the era of containers. Why containers? Why are we doing this? These are the major drivers that we have at the moment, why we are all moving toward containers and why we all want to containerize every single piece of our infrastructure. Mainly because containers bring really nice and nifty features, such as a packaging format and a runtime: you bring your own application and all of its dependencies as part of a container image, and then you can run it everywhere without having any dependency on the host or anything else. Which means that we can also provide upgrade and downgrade capabilities. We can easily bootstrap a new version of your application next to the first one, so you can just start a new container with the new version, and downgrade if something goes wrong. Then there's the flexibility of deployment: it's fairly easy to run a container, and it's fairly easy to tear it down. Scalable, of course; everyone wants to scale, we all want to put in as many resources as possible and take the best advantage of all the hardware, so that's something really good as well. And one of the really nice capabilities of containers is the ability to constrain the resources: what you dedicate to a specific process is only for that process, and you can avoid effects such as noisy neighbors and things like this.

So what we're looking at at the moment is that we know we're going to go for containers, and then the next question is how to use them efficiently. What tools can we use? What is the ecosystem available around containers, so we can take the best out of it? And this is where Kubernetes enters the picture with OpenStack. This is just a set of features that are really connected with what we want for OpenStack. Of course, Kubernetes has many features, many advantages, but these are the ones we are really focusing on when we use it with OpenStack.
So we want things like self-healing: if something goes wrong, or one of the nodes goes down, we want to be able to recover from it quickly, so we really want this self-healing capability from Kubernetes. Load-balancing functionality: when you use OpenStack, you have multiple APIs and multiple entry points, which means we need a way to load-balance all of the APIs. This is what we need, and we have it out of the box with Kubernetes. Automated rollout and rollback, so we can perform upgrades sequentially, service by service and node by node, which is also really nice because everything is fully automated. And then what's really vital for us is the really pluggable nature of Kubernetes, where everything is pluggable and nothing is really monolithic. There is a core, of course, that brings all the nice functionality, like load balancing and so on, but they don't really want to reinvent the wheel or pull everything into a single core. That's why they have this really nice pluggable infrastructure, with storage first: you can easily develop your own storage driver for Kubernetes and it will be integrated into the core, so they don't have to repeat everything. They have the same for networking with CNI, the Container Network Interface. They also support multiple container engines; well, it's still kind of a blueprint for now, but they started with Docker and they want to go with rkt and others. And of course we have scheduling: there is a native scheduling function, but we can potentially plug in other schedulers as well, such as Mesos; for now, we just stick with the basics and stick with the default.

So now I'm going to hand it over to Sean, who will explain how we are going to put all the things together and how we are going to containerize the OpenStack services.

Thank you, Sebastian. So if you think about it, OpenStack and containers have a lot of similarities. As Sebastian mentioned... all right, this clicker has its own life; let's do this. All right, I got it. So we have a lot of similarities. Kubernetes deals with microservices, and if we look at the way OpenStack services were born, at the core services like Nova, Neutron, Cinder, they were actually treated as microservices on their own. Yes, we have RabbitMQ and other layers that are not microservices in OpenStack, but the core services have the same approach to deployment. And if you look at what Kubernetes does for applications in containers, we're pretty much doing the same thing. As you saw, the scheduler has pluggable interfaces, very similar to the OpenStack mindset. So when we put these two together, there are a lot of similar lines. And it's more than that: we can actually turn OpenStack itself into a microservice-oriented architecture with our services, and basically let Kubernetes do what it does best, which is managing the applications. But then, if you look at Kubernetes workloads, Sebastian touched on scale up, scale down, the changes that a container lifecycle goes through, sometimes in a matter of minutes, sometimes in hours and days; it actually needs a scalable infrastructure to grow on. And in that regard, this is where Kubernetes is somewhat limited.
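To make the self-healing and load-balancing points from a moment ago concrete, here is a minimal sketch of how an OpenStack API could sit behind Kubernetes primitives. The image name, port, and probe path are placeholders, not the actual Kolla or kolla-kubernetes artifacts.

```bash
# Sketch only: one API pod with an HTTP liveness probe (self-healing) and a
# Service in front of it (load balancing across replicas of that label).
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: keystone-api
  labels:
    app: keystone-api
spec:
  containers:
  - name: keystone-api
    image: example.com/keystone-api:latest   # hypothetical image
    ports:
    - containerPort: 5000
    livenessProbe:                 # the kubelet restarts the container if this fails
      httpGet:
        path: /v3
        port: 5000
      initialDelaySeconds: 30
      periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: keystone-api
spec:
  selector:
    app: keystone-api
  ports:
  - port: 5000
    targetPort: 5000
EOF
```

The Service gives a single load-balanced entry point for the API, and the liveness probe is what lets the kubelet restart the container when it stops answering.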
Think about storage and all the plugins you would need to write directly into Kubernetes today. But if you look at OpenStack services, we have already solved that problem. How many drivers do we have just in Cinder? Seventy. Manila, about thirty. Not to mention Neutron. So we have hundreds and hundreds of plugins already integrated into OpenStack. So we looked at how we can basically take the power of the cloud and bring it to Kubernetes, and this is what we're going to do. We basically leverage the day-one deployment we already have in OpenStack, and the scale, and the power of containers, all together. That's the main benefit we get out of the box. Then, in terms of day two, we would basically like to share the scheduling functions between Nova and containers. So it's not like we're replacing Nova with the Kubernetes scheduler; we're actually leveraging both.

So how do we do this? What's the secret? How can we deploy OpenStack services fast and easily, as Sebastian indicated? Let's start with a quick history lesson. We at Red Hat started trying to tackle this problem a couple of years ago, back in 2014, with the start of the Kolla project. The work basically began around the Paris summit, and I'm very happy to stand here in Barcelona and declare that a lot of the work has already been done. Some of the stuff I'm going to showcase right now is available: if you go to the Kolla OpenStack website, you have guides for everything I'm going to show, and you can actually start doing this today. And to be clear, Kolla is not about running containers on OpenStack; that part is already solved, you can run container workloads on top of OpenStack cleanly. What we're doing here is using the power of containers for deploying OpenStack itself, and that's what it's all about. So Kolla basically solved that problem and enables us to take most of the services we already have in a microservice fashion in OpenStack and deliver them for our deployment. We're taking the benefits of containers for OpenStack's benefit, and that's what's radical here. It's not just the ability to serve application workloads on top of our cloud infrastructure; we're actually turning that upside down and using the same technology to simplify our life with the cloud infrastructure itself. So it addresses both the manageability of the container infrastructure on top and actually leverages it for the cloud infrastructure underneath.

And we're basically moving to an image-based management approach where we leverage the current OpenStack projects: we use Heat templates and YAML files to define the services and pods. You can have a Cinder API pod, you can have a Keystone API pod. So we're basically translating all the services into a container approach. And what does this translate into? How many of you here have had to go through an old-fashioned OpenStack upgrade? Just raise your hand. All right, and how long did that process take you? Two long weeks, right? Here we're talking about seconds. This is changing the rules of the game for us. And some of the small new projects that help us do it are a good example. Kuryr is a project that basically takes all of the network drivers from Neutron and enables them for use with containers. Similarly, we have Fuxi, which is another small but very key project: it basically allows us to take all of the seventy plugins that Federico mentioned and just connect them to our cloud. So if I now have all these choices of back-ends in my data center, I want to be able to use them for OpenStack, right?
Instead of going to sit with my vendor and saying, write me a plugin for Kubernetes, I just have Cinder: it's already integrated, it's there, I can just use it. That's what's radical here. We're really unleashing the power of OpenStack with containers. And with that, let's zoom in on what specifically we've done around storage. Federico?

Yeah, thank you. Good afternoon, everyone, how is the day going? I'll switch back to English before I create a panic with my River Plate Spanish among those of you who actually do speak Spanish. (Wrong direction on the clicker... I'm not touching it. Thank God we have keyboards.)

So, Kolla coverage for storage. We can manage containers; we cannot, apparently, manage a presentation. Kolla coverage for storage: all the services are covered, and there are a number of options around Ceph. You can do the obvious minimum deployment requiring three nodes. There is a developer-friendly deployment that goes on a single node; obviously it has no resilience and it's not something you could support, but it can be convenient for development. And it also supports the use of external clusters, so you could configure a Kolla deployment against an external Ceph cluster if you wanted.

Now, Kolla itself has a funny reputation: when you talk to users and ask, are you using Kolla, the answer usually is no. But if they are using containers and you ask them where the code comes from, the answer is Kolla. So it's a project that is borrowed from very heavily. Just this morning a customer told me, "I'm not using Kolla, but I'm stealing liberally from it." It's a place that apparently is very popular for borrowing code to create your own container deployment, so it's more popular than the straight answers would make you think. There is one more thing I would point out in terms of storage, and particularly the Ceph integration with Kolla, and it is that currently Kolla maintains its own playbooks to deploy Ceph instead of using the upstream ones. So perhaps that's an opportunity for better integration between Kolla and the Ceph community.

The upstream for Ceph in containers is the ceph-docker project, which has been surprisingly successful for an infrastructure project: 500,000-plus pulls on Docker Hub. This is actually Sebastian's project, but since he is modest, I'll do the bragging for him. It has been extremely successful, it's been out for almost two years, so it has seen quite a bit of action so far. It supports most major OSs. It is what you get when you do docker pull ceph; that is where the images are coming from. And if you're a Red Hat customer, we have a tech preview of a container using our supported bits that is also built this way. So this is really the root of all Ceph containers today.

So how does it work? It is a single Docker image that you can provision any Ceph daemon from. You docker pull the single image and then you docker run it with a number of environment parameters saying I want an OSD, I want a MON, I want an MDS or an RGW. Those are the usual Ceph daemons, so I'm not going to explain them, but there are a couple of more esoteric daemons up there that not all of you may know about. There is an NFS gateway to RGW that is coming into Ceph, and deploying that in a container is also supported, as well as RBD mirroring, which is the asynchronous disaster-recovery replication between Ceph clusters that was introduced with the 2.0 release. There's also an iSCSI daemon that is coming; that one is not supported by the image yet, but it will be shortly.
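As a rough illustration of the single-image idea, this is roughly what provisioning a monitor and an OSD from the same image looks like; the environment variable names follow the ceph/daemon conventions of the time and may differ between releases, and the addresses and devices are placeholders.

```bash
# Same image, different daemon selected at run time.
docker pull ceph/daemon

# A monitor, using the host's network, with a MON IP and public network we choose.
docker run -d --net=host \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.0.10 -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon mon

# An OSD on a raw device; privileged because it needs direct block-device access.
docker run -d --net=host --privileged \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -v /dev:/dev \
  -e OSD_DEVICE=/dev/sdb \
  ceph/daemon osd
```

The same pattern applies for the MDS, RGW, and the other daemons just listed.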
So besides choosing what type of daemon you want, you also pass configuration, saying where the monitors are, for example, and you can choose between many different options for building your OSD: you can co-locate the journals, you can have dedicated journals, you can configure dm-crypt to encrypt the underlying storage that the OSD is using, you can deploy BlueStore for testing purposes, or, if you're building the system another way, you can just point to a directory and assume that the setup has been done independently. Currently the primary deployment method is Ansible, and in the Ansible configuration, conveniently, the running daemons are managed by systemd. So systemd does the watchdog activity: it sees if a daemon crashes, does the respawning, and so on. And there is experimental support for Kubernetes. So this is, at a glance, where we are in terms of what is available today for containerized Ceph. Now we've seen OpenStack and we've seen Ceph, so let me have Sebastian put it all together.

Going back, all right. So now that we know we can containerize OpenStack and we know we can containerize Ceph, it's time to put all the things together and see how we can deploy all of them. I have identified three methods to do this.

The first one, the one you might be familiar with already because it's an OpenStack project, is TripleO. At its core, TripleO uses Heat to orchestrate the deployment, and Heat has a really nifty hook mechanism: by default it uses Puppet, but then you have hooks for Ansible and you also have hooks for Docker. So one of the options is simply to use TripleO and deploy your containerized infrastructure by relying on this Docker hook, reusing the exact same tool you're using to deploy your non-containerized environment.

Then we have Ansible, and we all love Ansible; I really like Ansible. For us it's really kind of a de facto standard because it's easy to learn, it's user-friendly, it has tons of integrations, it's Python-friendly; basically everyone loves it. The only thing is, if you want to use Ansible, it's kind of a flat deployment: you describe all your hosts, you say this host is going to run a monitor, this one is going to run an OSD, a RADOS gateway, an MDS, and then you just deploy it. Once it's deployed, it's up and running, but there is no lifecycle management on top of it. If something goes wrong, if something breaks, then only systemd can help you, because we are treating containers as services: we use systemd unit files to run the containers so we can treat them as services. But apart from the watchdog process that Federico already mentioned, there is nothing more it can do. So basically it's really flat, it's not dynamic, but it's easy. So if you're not ready yet for Kubernetes, but you want to get your hands on containers to see how they work, how they interact with each other, how you debug them when something goes wrong, and things like this, it's a really good way to start. Ultimately, what we want to achieve, because Kubernetes currently has some limitations that we're going to see in a minute, is to use Ansible to deploy the infrastructure, so to deploy Kubernetes, and then to use Kubernetes to deploy and maintain everything.
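To illustrate the "containers as systemd services" idea from the Ansible method above, here is a rough sketch in the spirit of what ceph-ansible generates; the real unit files and variable names differ in detail.

```bash
# Sketch: wrap a containerized Ceph monitor in a systemd unit so systemd acts
# as the watchdog and respawns it if it dies.
cat > /etc/systemd/system/ceph-mon@.service <<'EOF'
[Unit]
Description=Ceph monitor %i running in a container
After=docker.service
Requires=docker.service

[Service]
# Remove any leftover container, then run the monitor in the foreground so
# systemd can supervise it.
ExecStartPre=-/usr/bin/docker rm ceph-mon-%i
ExecStart=/usr/bin/docker run --rm --name ceph-mon-%i --net=host \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.0.10 -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon mon
ExecStop=/usr/bin/docker stop ceph-mon-%i
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable ceph-mon@$(hostname -s)
systemctl start ceph-mon@$(hostname -s)
```

That is the entire extent of the lifecycle management in this model: systemd restarts a crashed daemon, and nothing more.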
So as I said, Kubernetes is not fully there yet; it doesn't really comply with all the requirements we have when we deploy OpenStack clouds. And the big question is: are you really ready for containers, and is Kubernetes ready to support and integrate with the way we deploy OpenStack clouds? This is a gap analysis that we ran internally, where we listed all of the things we want to see by default when we deploy an OpenStack cloud.

The first one, and probably the most important one, is network isolation. In OpenStack, when we deploy it today, we have networks for the APIs, for internal tenant communication, for storage, for storage replication, and you can add as many interfaces as you want to bring in this isolation. The default networking model of Kubernetes, relying on Flannel, doesn't really allow this kind of complex network isolation, because by default you get a single flat network and a single interface per pod. What we really want to achieve here is to have multiple interfaces within the container, so we can say this interface is for the APIs, this one is for accessing the storage, this one is for storage replication. We will see later in the presentation that there are options to do this and that we're currently working on it, but to me it's currently one of the main pain points.

Then, moving on, there is the ability to disable the overlay network by simply using host networking, which basically means we are not running any network namespace within the container, but exposing the network functionality of the host to the containers. So when you run your containers with host networking, all of the daemons will be listening on the IPs of the host, of the physical machine. This is something we can do, and it's one of the workarounds we're going to use in the beginning, because we don't have this proper network isolation and segmentation yet.

We also want to be able to use IPv6. OpenStack services already have the ability to listen on IPv6, and we really want this, as IPv6 is becoming really popular now. Within Kubernetes, the current implementation is really rough and is more considered a work in progress; I think it just came out with the 1.3 release, but it's not really robust and complete yet.

Since we are doing load balancing and we have public API endpoints, we really want the possibility to do SSL termination. This is something we can do with the load-balancing service and the Ingress routing from Kubernetes, so we can do this and it's nice.

We need data persistency, so bringing persistent storage to containers. As explained earlier, Kubernetes is really pluggable and has several interfaces, so at its core we can plug in storage technologies like Ceph or iSCSI, and we already have drivers for those, but we can also rely on Fuxi to connect Kubernetes to the Cinder environment of OpenStack, so we can consume all of the drivers available there. The idea here is that you have an OpenStack service up and running and you want to save and store its configuration, its config file, so if something goes wrong you just move the container to another host and easily reconnect the storage. In the case of something like Galera or RabbitMQ, for example, we can just re-bootstrap.
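Putting two of those workarounds together, host networking plus a Ceph RBD volume for persistent data, a pod definition could look roughly like this; the image, monitor address, pool, and secret names are placeholders, not the real kolla-kubernetes templates.

```bash
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: mariadb
spec:
  hostNetwork: true                     # listen directly on the host's interfaces
  containers:
  - name: mariadb
    image: example.com/mariadb:latest   # hypothetical image
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    rbd:                                # the kubelet maps the RBD image on the node
      monitors:
      - 192.168.0.10:6789
      pool: rbd
      image: mariadb-data
      user: admin
      secretRef:
        name: ceph-secret               # assumed to exist, holding the Ceph key
      fsType: xfs
EOF
```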
Well, we like to use Ceph, so what we do here is just map a block device, and then if something crashes, you restart the container somewhere else, you link the storage, you attach it, and you're up and running again.

We also want some nice ordering when we do the cluster bootstrapping, because we know we're not going to deploy nova-compute before the database; we really want to start with Ceph, the database, RabbitMQ, then Keystone, and all of the other components. So we really want this ordering capability, and we have it within Kubernetes. Then pod replication, of course, because we want a specific replica count. Let's say we're going to do the controllers' replication from Kubernetes: we just say, I want this pod, the API pod, to be replicated three times across the hosts that are labeled as controllers, for example, and Kubernetes will do that for us (there's a minimal sketch of this a bit further down).

Pod monitoring is a function that is built into Kubernetes, but it has its own limitations; it doesn't really cover native TCP, for example, it's mostly HTTP. There are projects for that, so in case we want to do this at the TCP level, we might have to build another container, like a monitoring pod. There are always ways to work around it, so this is one of the things we can do. Load balancing is really out of the box, something we already discussed. Pod fencing is really critical for us, because Kubernetes can be, let's say, not smart enough: if I asked for three replicas in my replication controller, Kubernetes will do its best to give me the desired state I asked for, and if that keeps failing, Kubernetes will keep trying. We don't really want that to go on forever, so ideally we would like some kind of fencing mechanism: after, let's say, five attempts, we just want to kill and fence the node so we can start investigating what's wrong. We don't need to keep restarting it forever; that doesn't really make any sense.

So, going back for a second to this networking problem we're having: to me, once again, it's really vital that we have the ability to segment and isolate all the different networks within OpenStack. There are several ways to do this, and thanks again to the really pluggable nature of Kubernetes, we can have CNIs, Container Network Interfaces, and to me that's probably the best way to solve this issue. With that, we can use different CNI plugins that connect to a specific SDN, and that SDN will be in charge of providing the network, so we won't be using Flannel anymore, we'll be relying on whatever SDN you're currently using. With that, we would have the possibility of having multiple interfaces within the container and finally achieve what we want: several interfaces, listening on several networks, exposed and configured because they point to a specific physical interface, with a dedicated network, a dedicated VLAN, whatever you want to do.
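Going back to the replication point from a moment ago, here is the minimal sketch that was promised: label the controller hosts, then ask for three replicas of an API pod pinned to them. The node names, image, and port are placeholders, not the actual kolla-kubernetes templates.

```bash
# Mark which hosts are controllers.
kubectl label nodes controller-0 controller-1 controller-2 role=controller

# Ask Kubernetes to keep three replicas of the API pod on those hosts.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: glance-api
spec:
  replicas: 3
  selector:
    app: glance-api
  template:
    metadata:
      labels:
        app: glance-api
    spec:
      nodeSelector:
        role: controller              # only schedule on the labeled controllers
      containers:
      - name: glance-api
        image: example.com/glance-api:latest   # hypothetical image
        ports:
        - containerPort: 9292
EOF
```

If a labeled node goes down, Kubernetes reschedules the missing replica on another controller to get back to the desired state.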
But it's a little bit of a chicken-and-egg problem. Let's say for a second that Kubernetes is ready and fully compliant with everything we want for deploying OpenStack clouds within containers: we still have a sequencing problem, because when you start deploying OpenStack, Neutron is not up and running yet, so how should you configure Kubernetes to use the CNI and bring up all of the networks? One of the things to do is to bootstrap and configure Neutron by yourself and then connect Kubernetes, but then you're using two different methods to configure everything, so it's not really ideal. Another option could be to bootstrap some kind of fake, ephemeral Neutron before deploying the OpenStack environment, so everything is set up on the machines: you have containers up and running that provide the network, and everything is ready to be consumed once you want to deploy OpenStack and leverage the CNI functionality with Kubernetes. There are several plugins at the moment, and we don't know yet which direction, which one we're going to use, but it's likely the best way to solve that problem.

So now that we're all really amazed by this and you all want to go with containers: you're already running non-containerized environments, and you might be wondering, how am I supposed to migrate from a non-containerized platform to a containerized platform? We have several solutions for this. There are basically two potential ways to address the migration: either you use Kubernetes or you don't. It's not really a problem, because we all want a really smooth transition path from non-containerized to containerized.

Let's say for a minute that you don't want to use Kubernetes yet, but you want to keep using Ansible, or maybe TripleO, or any other configuration management system. One of the things you could do is simply start stopping services and then start your own containers so they can rejoin. If we take the example of a cloud controller: because we have three of them, we have a quorum, and if one goes down we're still up and running. So you can easily stop one service and then run the proper automation. If you use Ansible, you can say: okay, Ansible, stop the service, then generate the systemd unit file where your container is declared, and run it. So stop the service, disable the service in systemd, then simply run your container, bind-mount all the proper directories, make the proper connections, and start your service. Eventually it will just rejoin the quorum with the rest of the controllers. (There's a rough sketch of this path just after this part.) That's one idea.

If you're ready for Kubernetes, and hopefully Kubernetes is ready for all of us as well, one approach is to fence a node: you basically kill one node, you start writing your Kubernetes templates with all of your applications, you declare everything for a cloud controller, you do the necessary bits, maybe you just reinstall everything so you have some kind of container-ready OS, like CoreOS or Atomic for example, where everything is already there for you, like the kubelet and all the Kubernetes processes. Once you've done that, you can start bootstrapping your first containerized node, and then you keep going: you shut down the next node, you run with, well, just two replicas if it's a controller, for example, and then Kubernetes will do the rest for you. These are potential approaches, and really general ones of course; we're just scratching the surface here.
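Here is the rough sketch of that first, non-Kubernetes path mentioned above: stop the packaged service on one controller, then run the same service from a container, bind-mounting its existing configuration. The service and image names are only illustrative.

```bash
# Stop and disable the packaged service on this controller.
systemctl stop openstack-glance-api
systemctl disable openstack-glance-api

# Run the same service from a container, reusing the existing configuration
# and logs, on the host network so it keeps its usual endpoints.
docker run -d --name glance-api --net=host \
  -v /etc/glance:/etc/glance:ro \
  -v /var/log/glance:/var/log/glance \
  example.com/glance-api:latest          # hypothetical image
```

The containerized service rejoins the other, still non-containerized, controllers behind the existing load balancer, and you repeat this host by host and service by service.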
Obviously we're also going to have issues with virtual machines, for example: how should I migrate my workloads for this? One option could be to just say, I'm so cloud-native, I'm so cloud-ready, that I can kill any of my hypervisors; but that's not going to be the case, of course. So another option could be to live-migrate, or just do an evacuation of the hypervisor. So yeah, that's just a brain dump.

All right, so from there I'm going to describe some architectural examples with containers. Hopefully this is clear for everyone; yes, it is, I guess. This is more or less what the ideal deployment with Kubernetes will look like. At the very top you have your Kubernetes masters, the brain of Kubernetes: the Kubernetes core with etcd and all the Kubernetes processes responsible for replication, health checking, scheduling, providing the API, and so forth. What's really important to note is that the red boxes are OpenStack components, the blue boxes are Kubernetes components, and the green boxes are Ceph-related components.

Then the only thing that really changes here is on the OpenStack controllers, I guess. Okay, no pointer. If you look at the OpenStack controllers, we have the Docker engine, because by default Kubernetes relies on Docker, but at some point more runtimes will be added, like runC or systemd-nspawn or rkt or anything else. For now, let's assume we keep using Docker. So you have Docker, you have the kubelet, the Kubernetes minion, responsible for interacting with the pods: running the pods, watching the pods, doing the self-healing of the pods, monitoring all the pods. And we bring in Kuryr and Fuxi, because we really want to bring all the features and expose all the OpenStack features to this container world. Then all the boxes at the top, the APIs, databases, queues, LBs... actually LB is wrong, one mistake: we're not going to use LBs here, because we have the load balancers from Kubernetes. I'll make sure to update the slides before we share them. Other than that, all the components are containerized, all the APIs, everything. The same goes for Ceph; we still need Kuryr because we want to expose and connect the Ceph containers to this OpenStack environment. We want two interfaces, one for public communication and one for replication. So the OSDs, the object storage daemons, are running in containers as well. On the OpenStack compute node, all the OpenStack-related components, such as libvirt, nova-compute, Neutron, Open vSwitch, are running within containers. These are more like privileged containers, because the VMs are also running inside containers. And yeah, that's more or less the general picture of a containerized OpenStack cloud deployment.

Now, this one is more of an enhanced version of something we used to do already: hyper-convergence, where you co-locate compute and storage resources on the same node. Basically your hypervisors now become hypervisor plus storage, so they provide CPU and memory resources plus storage as well. One of the nice things here is that we get proper resource isolation, so we're not going to have any noisy-neighbor effect: everything is really restrained to its own cgroup namespace within the container, so the resources allocated to a container are the only resources it gets.
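To make that resource-restraint point concrete for a hyper-converged node, you can cap what the containerized OSD may consume so it cannot starve the co-located nova-compute; a minimal sketch with arbitrary values, again assuming the ceph/daemon image conventions:

```bash
# Limit the OSD container's memory and relative CPU weight so the noisy
# neighbor effect on the co-located hypervisor is bounded.
docker run -d --net=host --privileged \
  --memory=4g --cpu-shares=512 \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -v /dev:/dev \
  -e OSD_DEVICE=/dev/sdb \
  ceph/daemon osd
```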
So this is more of an enhancement of something we already did, this co-location, but now that we have containers it's easier to deploy, and it's of course easier to do upgrades on each side: you can upgrade your Ceph cluster without doing anything to your OpenStack environment, for example.

So, what's next? Just to give you a little bit of the container roadmap for Ceph, because Kolla is a project of its own and it's moving quite well. I think they just discussed this, because Kolla is really opinionated about the way they do the deployment. As far as I remember, they started with Kubernetes and then abandoned it, then they tried using Swarm, they tried to use Mesos, and finally they came up with something that works, which is Ansible. But now, I think, they're discussing splitting Kolla up: Kolla is just going to be Kolla, providing the images to containerize all the OpenStack components; then they want to have kolla-ansible, just the deployment piece with Ansible; and they already have kolla-kubernetes, providing all the templates to deploy a containerized OpenStack platform with Kubernetes. So there is not much more to say about that.

For Ceph, however, we have plenty of things to do at the moment. As explained already, it's working really well now; it's really robust from a pure containerization perspective. But what we really want to focus on in the next few months is strengthening and doing a lot of QA on the Kubernetes prototype that we currently have: running it for several months, experiencing failures, and seeing what's wrong and how to fix it. We also really want to improve the way we do CI, because if you bring new code and you don't bring CI, then it's not going to work really well. So we really want to improve our CI testing and our test framework. We're currently running some privileged containers for Ceph, basically because we need direct access to block devices. This is something we want to change; it requires a little bit of work, but it should be done in the next few months.

All right, so, takeaways. Ansible plus Kolla are both really good candidates to start smoothly if you want to go with containers and start deploying a containerized environment for OpenStack and Ceph. Support for Ceph is here, and needless to say, if you look at all the previous surveys from the OpenStack Foundation, Ceph keeps growing in terms of adoption. I think they run the survey every six months, and every six months we get a bigger percentage, more traction and more usage, from POCs to dev environments to production. Ceph has really become the de facto storage backend when it comes to backing all the OpenStack components. This is why we are heavily investing in containerizing Ceph as well: we're working on the assumption that if you're deploying OpenStack clouds, then you're also deploying Ceph, and if OpenStack is containerized, then Ceph has to be containerized as well. As mentioned several times, Kubernetes is the right solution when it comes to containerizing everything and managing your container platform. It's just not ready yet, but Google is investing a lot, and as far as I remember, Red Hat is one of the top contributors to the project as well. So we have been heavily investing in contributing to Kubernetes; this is happening, and it will keep happening over the coming years.
And we are really investing in fixing the networking issue I already mentioned, this proper network isolation. If you're interested in learning more about the subjects we discussed, if you're interested in ceph-docker, ceph-ansible, Kolla, kolla-kubernetes and all these things, we've gathered several links and resources that you can access, with videos available as well, for deploying a containerized Ceph with Kubernetes and with Ansible, for example. With that, I'm not sure how much time we have left, but we would all like to thank you for your kind attention, and we'll be really happy to take questions now. And yeah, no need to take pictures of every single slide; the slides will be shared. We should have told you that at the beginning, I guess. All right, any questions? I guess we nailed it then. We'll be available here if you want to talk to us one-on-one. Thank you very much. Thank you. Thank you.