OK. Hi, everyone. So my name is Damian Dąbrowski, and I'm a cloud engineer at Cleura. Cleura is a European cloud provider that focuses on security and compliance and builds clouds mainly for heavily regulated industries. I'm also part of the OpenStack-Ansible team, where we try to make the OpenStack deployment process as easy and flexible as possible. Because of the industry we work in, some of our customers are, let's say, really sensitive when it comes to security, and we get demands to encrypt all the internal traffic in our clouds. So today I'm going to tell you about our journey to achieve that. Basic OpenStack-Ansible knowledge is recommended, but if you don't have it, don't worry, I will briefly go through the OpenStack-Ansible deployment process. We only have 15 minutes, so today I'm going to focus only on encrypting the control plane traffic. OK, so we started with a list of things that can already be encrypted pretty easily with OpenStack-Ansible. The first thing worth mentioning is the traffic between API clients and HAProxy, so technically the OpenStack endpoints. That can be done fairly easily. Then we have the communication between OpenStack services and infrastructure services like the database and RabbitMQ. What can also be encrypted are live migrations, in case you're using libvirt: when you live-migrate an instance from one compute node to another, you can encrypt that traffic with TLS. Then we have the traffic between the nova console proxy and the compute nodes, in case you're using the VNC console, which can be encrypted using VeNCrypt. And for those of you who use OVN, you can also encrypt the traffic between neutron-server and the OVN databases. So these things are already done. But the question is, what else do we need to encrypt to fully encrypt our control plane? To understand that, let's look at this picture. This is a typical traffic flow through HAProxy.
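As a rough illustration of the "already done" list, enabling these existing knobs in OpenStack-Ansible is mostly a matter of overrides in `user_variables.yml`. The variable names below are from memory and partly assumed, so treat this as a sketch and check the OpenStack-Ansible documentation for your release:

```yaml
# /etc/openstack_deploy/user_variables.yml -- illustrative sketch only;
# variable names may differ between releases.
galera_use_ssl: true                    # TLS between services and the database
rabbitmq_use_ssl: true                  # TLS to RabbitMQ (name assumed)
nova_libvirt_live_migration_tls: true   # TLS for live migrations (name assumed)
```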
I used HAProxy as an example because it's the most common software for providing load-balancing functionality in the cloud, and the only one that OpenStack-Ansible can configure fully automatically. I also took glance as an example; it could be any other OpenStack service, but I chose glance. So at the beginning, you have this green area: the communication between the API client and HAProxy. This is basically done, it can be encrypted; we even provide the possibility to obtain Let's Encrypt certificates using the ACME protocol. This is done. But then we have the yellow area: the communication between HAProxy and the service backends. It was not possible to encrypt that traffic in OpenStack-Ansible until the Antelope release, and we are going to focus on this yellow area. Going to the next slide, we started with an initial plan to achieve this. The first step is to use the ansible-role-pki role to generate certificates. We already have that; we use it for other services, so the work is basically done. It generates certificates, signs them with a custom certificate authority, and distributes them across the environment. Done. We also had to implement a feature in the role responsible for uWSGI configuration so that it can be configured to support TLS, because most of the services sit behind uWSGI. So it sounds pretty simple, and in fact it is pretty simple, but only for new environments. When you already have an environment and want to enable TLS backend traffic for it, it becomes way more complex. So this is actually our first challenge: how to enable TLS backends on already existing environments. Let's go through the OpenStack-Ansible deployment process very quickly. It consists of three main stages. The first is to prepare hosts for deployment: setup-hosts. Then we have setup-infrastructure, where infrastructure services like the database are configured.
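To make the uWSGI part concrete: enabling TLS there mostly means switching the service socket to an HTTPS one. A minimal sketch, with made-up paths and addresses, and not necessarily the exact config OpenStack-Ansible generates:

```ini
; hypothetical uWSGI fragment for glance-api with TLS enabled;
; https-socket takes "address:port,certificate,key" (on some uWSGI
; versions the equivalent option is spelled "https")
[uwsgi]
https-socket = 172.29.236.10:9292,/etc/ssl/glance.crt,/etc/ssl/glance.key
processes = 4
```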
And finally, you can deploy the OpenStack services. Setup-hosts is not relevant for this talk, so let's focus on setup-infrastructure and setup-openstack. The most important point here is where HAProxy gets configured: it is during the setup-infrastructure stage. That's where all the HAProxy services used to be configured up front. And again, I chose glance as an example, to show what happens if you want to enable TLS backends for glance on an environment that is already running, a production environment that does not have TLS backends enabled. First of all, you define the variable responsible for this, and you proceed to running the playbooks. You go to the setup-infrastructure stage and start running playbooks. The first playbook you run is haproxy-install, and that's when you instruct HAProxy to communicate with glance over TLS. And congratulations, you just broke glance. Glance is not available at this moment, because the OpenStack service is not yet ready to accept TLS traffic. So you basically have downtime. Then you go through the other deployment steps: in setup-openstack, you configure keystone, this is the typical, default playbook order in OpenStack-Ansible, and you run these playbooks until you reach the glance playbook. That's when your glance service gets fixed. Of course, you're probably thinking that you can change the playbook order. That's pretty obvious, and it will help you. But in many cases, users may want to enable TLS backends for all of these services during an OpenStack upgrade, for example, and in that case, changing the playbook order won't really help you. So you need to come up with some other solution. Another issue is variable scope. I don't want to dig too deep into the details; this is a fragment of an HAProxy service definition, and you don't really need to understand all of it.
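The run order described above, and where glance breaks and recovers, can be sketched roughly like this (top-level playbook names as used by OpenStack-Ansible):

```shell
# Default deployment order (illustrative, pre-Antelope behavior):
openstack-ansible setup-hosts.yml           # prepare hosts
openstack-ansible setup-infrastructure.yml  # runs haproxy-install.yml:
                                            #   HAProxy now expects TLS,
                                            #   glance backend breaks here
openstack-ansible setup-openstack.yml       # os-keystone-install.yml, ...
                                            #   ... os-glance-install.yml:
                                            #   glance finally fixed here
```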
But what is really important here is that, to configure the HAProxy service for glance, we need access to glance variables. Because we run this playbook on HAProxy hosts, and HAProxy is not in the glance group, HAProxy technically cannot access these variables. As long as the defaults we define here match the real values, or the user has overridden them globally, it's not a problem at all. But a user may also override these variables only for the glance group, and that may lead to unexpected problems. We have a solution for that since the Antelope release, and we named it the HAProxy separated service config. Basically, what we did was move the HAProxy service configuration out of the HAProxy playbooks and into the OpenStack playbooks. Let's visualize it with another comparison: what was going on before the Antelope release versus after it. This is the first difference. Previously, at this point, we were configuring all HAProxy services up front. Now we only do that for the base service, and also for any custom services that you can define in your configuration which are not really related to OpenStack; that may be Prometheus or something else. That's the first difference. Then everything looks exactly the same: you go through the setup-infrastructure stage and you reach the setup-openstack stage. And that is the other crucial difference, because before, we configured only keystone at this point, the OpenStack service itself. Now we configure keystone and its HAProxy service, and we do that for each OpenStack service. That helped us solve both of the previous challenges. We minimized the downtime during the TLS backend transition, and because we configure HAProxy services in a different place, we no longer have a problem with variable scope. And here we have another challenge.
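The variable-scope problem can be sketched like this. Suppose a deployer overrides a glance variable only for the glance group (the file path follows OpenStack-Ansible conventions; the variable name is hypothetical):

```yaml
# /etc/openstack_deploy/group_vars/glance_all.yml
# This override is only visible to hosts in the glance group.
# A playbook running on HAProxy hosts, which are NOT in that group,
# still resolves the role default instead -- so HAProxy would point
# at the wrong backend port.
glance_service_port: 9393   # hypothetical override
```

Running the HAProxy service configuration from the glance playbook, on glance group hosts, makes this override visible again, which is exactly what the separated service config does.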
I mentioned that the TLS frontend could be encrypted for a long time in OpenStack-Ansible. Let me just remind you that the TLS frontend is the traffic between the API client and HAProxy. So what was problematic about actually starting to encrypt that traffic? Technically, it's about changing the protocol of the OpenStack endpoints. I picked keystone as the most obvious example. Let's try to understand how OpenStack services communicate with keystone. Basically, all of them have the keystone endpoint defined in their configuration file. And that's fine, but what happens if that endpoint changes, in terms of URL or even protocol? You would need to reconfigure all these services, but you can't do that for all of them at once, or at least it's super hard. So you need to find another solution. Our solution is some magic in HAProxy that allows users to temporarily accept both encrypted and unencrypted traffic on the frontend, the green area here. Temporarily, during the transition; afterwards, they can turn this feature off. That way they can do the transition without any downtime. So yeah, the solution for the TLS backend transition was the HAProxy separated service config, and the solution for the TLS frontend transition was to allow users to temporarily accept both unencrypted and encrypted traffic on the same port on an OpenStack endpoint. There is one more major change we implemented in OpenStack-Ansible during Antelope, available thanks to Jonathan Rosser from the BBC. Basically, HAProxy maps are just files storing key-value pairs. They can be used for several different use cases, from rate limiting to blue-green deployments. They can also be used to, let's say, pin a client IP to a specific backend. But we are using them to link a URL with an HAProxy backend.
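One classic HAProxy pattern for accepting TLS and plaintext on the same port is to sniff for a TLS ClientHello in TCP mode and route accordingly. This is only a sketch of the general technique, not necessarily OpenStack-Ansible's exact implementation, and all names and addresses are made up:

```haproxy
# Illustrative only: one frontend, two internal backends.
frontend keystone_front
    bind 203.0.113.10:5000
    mode tcp
    tcp-request inspect-delay 5s
    # req.ssl_hello_type is 1 when the first bytes look like a ClientHello
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend keystone_tls   if { req.ssl_hello_type 1 }
    default_backend keystone_plain
```

Plaintext clients fall through to the default backend once the inspect window shows no TLS handshake, so both kinds of clients keep working during the endpoint transition.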
And this is the difference: previously, all traffic on ports 80 and 443 was forwarded directly to Horizon. That caused an issue, because some environments may not even have Horizon deployed, but we still need to handle traffic on port 80 somehow, for example to obtain a Let's Encrypt certificate using the HTTP-01 challenge, right? So we decided to change this behavior. Instead of blindly routing everything to Horizon, we put a base service in front that routes traffic to the appropriate backend based on the URL, using regular expressions. Let me explain the default configuration we have right now for the base service. First of all, all ACME challenge requests are routed directly to Let's Encrypt; that is later used by certbot to obtain certificates. Then security.txt: security.txt is a file that allows websites to define their security policies. All URLs matching these regular expressions are routed to a backend that serves this file as just a static file. It's much simpler these days, because previously the security.txt file was hosted by keystone and made available through Horizon; it was so complicated. Now it's just a static file in HAProxy, served directly by HAProxy. All other traffic is routed to Horizon, if it exists, obviously. And there is one more common case that is not enabled by default but can be used by users: you can define your own rules here. You can do this, for example, to expose all the OpenStack endpoints on the same port. So you can have nova, keystone, placement, everything available on port 443, so to say, and routed to the appropriate backends based on the URLs you define. OK, I know it was a lot of information, but let's try to summarize it. The first point is that HAProxy frontends can temporarily accept both encrypted and unencrypted traffic on the same port during the TLS frontend transition.
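The URL-based routing just described can be sketched with an HAProxy map. The backend names and paths here are invented for illustration; only the `map_reg` mechanism itself (regex lookup with a default backend) is standard HAProxy:

```haproxy
# Illustrative base-service frontend: pick the backend by matching the
# request path against a regex map, falling back to Horizon.
frontend base
    bind :443 ssl crt /etc/haproxy/haproxy.pem
    use_backend %[path,map_reg(/etc/haproxy/base.map,horizon-back)]

# /etc/haproxy/base.map -- "regex  backend" pairs, e.g.:
#   ^/.well-known/acme-challenge/   letsencrypt-back
#   ^/.well-known/security.txt$     security-txt-back
#   ^/identity/                     keystone-back
```

Because the map is just a file of key-value pairs, adding a rule (say, exposing placement on the same port) means appending one line rather than restructuring the frontend.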
Traffic to HAProxy backends can finally be encrypted, since the Antelope release of OpenStack-Ansible. HAProxy services are now configured individually, so the HAProxy playbooks are no longer responsible for configuring all of them up front. And HAProxy maps can be used to route your traffic based on the URL, but the usage of HAProxy maps is not limited to this use case; you can, for example, define your own rules to apply rate limiting on your endpoints. Yeah, that's what has changed recently in OpenStack-Ansible. I'm not sure, maybe you have any questions? Yeah, let's do it. No, no, everything I mentioned here is already merged into OpenStack-Ansible, so you can find it there. Any other questions? No. OK, so thanks a lot for your attention. Have a good day. You can contact me anytime via email, IRC, or even here. Thank you.