Hello everyone, thanks for joining us and welcome to OpenInfra Live. OpenInfra Live is an interactive show sharing production case studies, open source discussions, industry conversations, and the latest updates from the global open infrastructure community. This show is made possible through the support of our valued members, so thanks again to them. My name is Kristen Barrientos, and I will be your host for today's show. We're streaming live on YouTube and LinkedIn, and we'll be answering your questions throughout the show, so feel free to drop your questions into the comment section and we'll answer as many as we can. Some of the most popular episodes of OpenInfra Live are the large-scale OpenStack shows, where operators of large-scale OpenStack deployments come and discuss operational challenges and solutions. Today the large-scale OpenStack show is back for an OpenStack deep dive with Société Générale, one of France's largest banks. Joining today's discussion, we have Belmiro Moreira from CERN, Thierry Carrez from the OpenInfra Foundation, and Arnaud Morin from OVHcloud, and they are joined by Guillaume Allard from Société Générale. To get it started, I'll hand it to you, Belmiro.

So hello everyone, I'm really excited about our guest today, Guillaume from Société Générale. Welcome. Thank you, Belmiro. Yeah, and with the help of Thierry and Arnaud. So I think we can start. Guillaume, tell us a little bit about you before we go through your use case at Société Générale. So hello everybody, I'm Guillaume Allard. I work for Société Générale, one of the largest French banks. I work in the IT department in charge of the infrastructure of the whole group around the world, and I work as a principal engineer for the private cloud. All right, I'm really excited about this episode because it's the first time that we have a financial institution on our show, and we'll talk about how they are using OpenStack. But before we go there, Guillaume, tell us, is it true that banks still run Fortran and COBOL code, and now we are trying to put it on top of OpenStack? I think it's true, part of that code should still be running on that kind of hardware. More seriously, I believe that most of the core banking systems of the world are still running on mainframes. The strategy, most of the time, is to build microservices around the mainframe to put part of the service outside of the mainframe, and if possible in the cloud, with modern architecture, CI/CD, and so on. All right, and it is great that you are doing this transition to a private cloud infrastructure based on OpenStack. My first question is really that I would like to understand the journey that Société Générale is going through. As a financial institution, I believe that you are extremely risk-averse, so you always choose your software solutions very carefully and you are very conservative on this. I'm just guessing, let me know if it's true or not. So that's why I'm curious about this jump now to OpenStack, to open source, to an open infrastructure product. So yeah, I'm very curious to know more about this journey. So you're right, a bank is managing risk, definitely. And also, banks are evolving in a business where regulation is key, so we have some constraints that we must respect. However, IT is really key for banks and there is a lot of IT. So for sure, not 100% of the IT services have to be on the mainframe, thankfully. And banks have to modernize their IT and execute their digital transformation.
And this is why Société Générale started to deliver private cloud services back in 2014. We started to build a private cloud based on VMware, from scratch, with a little team of people from Société Générale's IT. And we saw this private cloud grow year after year. And in this cloud journey, there is also an initiative to consume public cloud services when possible, depending on regulation, but also on the technical feasibility to go outside of the bank's systems. We can't hear you, Belmiro. Let me ask a question then. You started with, I imagine, a very small subset of IT software on OpenStack. Did you build new software directly on OpenStack? Or is it software that you were using on another infrastructure before and that you moved to OpenStack? Or is it new stuff that you are building on top of OpenStack? So the OpenStack usage in the private cloud started in 2018 at Société Générale, because the private cloud was exploding. Between 2014 and 2018, we reached 30,000 VMs, and the software cost related to the private cloud was exploding. So we proposed to build a new offer based on OpenStack. So it was a cost factor. That click moment was a cost factor to move to a different product? Well, it was not only a cost factor, but definitely, back at that time, cost had a big place. It was a key factor, I would say, not the only one, but a key factor. All right. Because we could see that the volume was exploding, and when we made projections of the cost, it was necessary to introduce a new offer that was less costly. And the goal was not to remove VMware, it was to propose something else for our users. So we decided to open a new compute service, initially based on OpenStack, in the private cloud. And we targeted, basically, cloud-native services, so the applications that were able to consume cloud with, you know, this cattle behavior and no more pets. And we decided to go with a very basic offer with a low SLA, in order to be very clear for our users that the risk is higher in this part of the private cloud than it would be on VMware. So that was the way we handled the positioning of OpenStack, saying, okay, maybe it will not be as robust as the VMware we already know and we already master. Because also, back at that time, we had very few people who knew how to run OpenStack in production. We had to learn too. Yeah, but that is a huge transformational cultural change in your organization, right? You go from something that people believe is extremely reliable, it will never fail or it can never fail, so I have all these pets that no one can touch, they are mine, please go away. And then you change this completely. So I imagine that you got a lot of pushback when these ideas came, or was this very well accepted and everyone was very happy to move to this cattle model? I think we had the chance that during four years we had the first version of the private cloud that was there, and part of the users had already started their behavior change. And they reached the point where this private cloud, which delivered managed services, also had difficulties answering the real cloud-native usage, because when you deliver managed services and you need to register the instances in many systems, maybe to have backup, to have a lot of things, the VM spawning takes a pretty long time. So it's not exactly the same service that you are able to deliver. And so I think it was a good time to introduce this offer.
And as we kept the previous offer online as well, there was no pushback, because people who liked the previous one could still consume the previous offer. So if I summarize, you originally did it for cost reasons, as a way to support new cloud-native workloads, and at the same time, you kept your original infrastructure, which was much more traditional, VMware, pet-based. And are you seeing now, a few years in, I would say, movement from the old way to the new way? Is it helping with driving cultural transformation inside the organization? Or is it still the case that there are workloads that stay on the traditional side and workloads that run on the cloud-native, more programmable infrastructure environment? Are you seeing those as two completely separate things, or are you seeing more and more transformation of the basic thinking into more of a cloud-native transformation within the organization? Yeah, my feeling is that things have really changed since we introduced OpenStack at Société Générale thanks to this new offer, because it was like an enabler, really an enabler for infra-as-code deployment. Yeah, and basically, when people start to play around with APIs for infrastructure, they start to imagine what they can do with that, and the way they consume infrastructure is changing. And year after year, the consumption and the culture have changed. But we still have more traditional and more managed services to be able to answer the different needs of different people. So since 2018, we now have almost 30,000 VMs running on OpenStack. So we reached the size where VMware was at the time we started. And the VMware footprint has stabilized for the private cloud, around 40,000 VMs. So we can see that the usage is moving more and more to OpenStack. Maybe one of the enablers to move the usage to OpenStack is that we deliver the IT services for the people who propose Kubernetes services for applications on top of OpenStack. So the more applications are consuming Kubernetes, the more they are consuming OpenStack, even if they don't consume OpenStack directly. All right, so OpenStack is your Kubernetes enabler for all these cloud-native applications. Yes, and also the opposite, Kubernetes is a way to accelerate the transition to OpenStack. So did you have to do anything to make those, I would say, raw OpenStack APIs more consumable by the bank users' audience? Did you give them access directly to the APIs, or did you build other types of services to help with that? So what happened is that in 2017, when the private cloud was growing, we also started a transformation of how we build these cloud services. Before 2017, there was only one team doing all the cloud services, and that team became too big and like a bottleneck. So we started to go in an agile way, and now we are running something like 40 feature teams, each of them in charge of different services like PostgreSQL databases, RabbitMQ clustering, and every service has to respect a kind of standard to expose its APIs so that the consumers have a consistent user experience when they go from one API to another. And one of these standards is the usage of the group IAM for authentication. So on our side, on top of the OpenStack services, we build and we maintain our own API endpoint, which acts as a proxy, and this endpoint is in charge of authentication against the group IAM.
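To make the pattern Guillaume describes more concrete, here is a minimal, hedged sketch of such an authenticating proxy in Python (Flask plus requests). It is only an illustration of the idea, not Société Générale's actual code; the IAM introspection URL, the internal Nova URL and the token contract are hypothetical placeholders.

    # Minimal sketch of an API proxy that delegates authentication to a corporate
    # IAM before forwarding requests to an internal OpenStack endpoint.
    # IAM_INTROSPECT_URL and NOVA_INTERNAL_URL are hypothetical placeholders.
    import logging

    import requests
    from flask import Flask, Response, request

    IAM_INTROSPECT_URL = "https://iam.example.internal/introspect"  # hypothetical
    NOVA_INTERNAL_URL = "https://nova.example.internal:8774"        # hypothetical

    app = Flask(__name__)
    log = logging.getLogger("api-proxy")

    def token_is_valid(token: str) -> bool:
        # Ask the group IAM whether the bearer token is active (hypothetical contract).
        resp = requests.post(IAM_INTROSPECT_URL, data={"token": token}, timeout=5)
        return resp.ok and resp.json().get("active", False)

    @app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
    def proxy(path: str) -> Response:
        token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
        if not token_is_valid(token):
            return Response("authentication failed", status=401)
        # Emit a structured access log; a shipper would forward it to the data lake.
        log.info("method=%s path=%s", request.method, path)
        upstream = requests.request(
            request.method,
            f"{NOVA_INTERNAL_URL}/{path}",
            headers={k: v for k, v in request.headers if k.lower() != "host"},
            data=request.get_data(),
            timeout=30,
        )
        return Response(upstream.content, status=upstream.status_code)

How the proxy then authenticates to Keystone on behalf of the caller is not covered in the discussion, so it is omitted here.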
This endpoint is also in charge of pushing logs to the data lake and of respecting a kind of common format, so that we all expose one Swagger with all the routes, et cetera. So that enables a very consistent user experience for the consumer across the different APIs. Some of them are homemade, others are based on OpenStack, and we still have VMware, which also has that kind of customer endpoint. So that's a very specific way of exposing APIs at Société Générale. And so your team internally is acting as an infrastructure provider for the rest of the group. How isolated are you from, I would say, the way the infrastructure is consumed? Is it completely separate teams, or is it much more mixed teams between the infrastructure providers and the people who deploy applications on top of that infrastructure? It's a dedicated team here. So the teams are really separated, but we are trying to have those teams talk a lot together and to synchronize the agility between the teams in order to reach the goal for the business, you know, the goal is to deliver good services together. And there are a lot of interdependencies between the different feature teams. Yeah. It looks like you have a huge infrastructure. More than 40,000 virtual machines on VMware, more than 30,000 virtual machines on OpenStack. It's huge. So how are you managing all of this? My understanding is that you have the teams within Société Générale to do this, but in terms of tools to help you deploy all of this infrastructure, all this architecture that you need to build from scratch, how are you managing all of this? So I can talk about the OpenStack deployment here, because this is where I work all day. We deploy OpenStack with Kolla-Ansible. And basically, we are big Ansible fans in the Société Générale OpenStack team. So we use Ansible a lot for deployment, for configuration management. And of course, we put the Ansible playbooks in Git, so the workflow is doing PRs in Git to evolve the playbooks and the roles, et cetera. And for operations, we like to build containers. So we like to generate a little admin container with these Ansible playbooks and roles, so that operators can run the container and have the latest version of the roles to do their work. We do that for Kolla-Ansible, and also for ceph-ansible, and also for our own bare metal Ansible deployment tool, et cetera. So you're rebuilding every container, even the Kolla ones? You mean all Kolla containers, you are rebuilding them internally? So the Kolla containers, yes, it happens that we rebuild them, because we actually have some downstream code. But for the tooling, we build another container which contains Kolla-Ansible and also all the inventory for production or for a given region, so that it's easy to operate a given cluster in production. Do you have a lot of downstream changes, or do you consider yourselves close to what upstream is doing? So we accept having downstream changes, and actually this is something I like in OpenStack, being able to fix ourselves something that is broken, or to evolve OpenStack as well. I believe that there is a bit of Société Générale in OpenStack now after four years, and during the last year, we did a great job again. I think in the last release, Antelope, we were able to push a lot of code into OpenStack and to share that with the community. So our strategy around that is, usually we have a need and we try to do it downstream inside Société Générale, and as soon as we are ready, we propose it to the community so that we get feedback on this change.
Sometimes, you know, you're doing a change but there is another, easier way to do it, and this feedback is also a good thing to have when you do infrastructure, so you avoid doing maybe unnecessary changes. And when we feel that the change will be accepted by the community and we are okay to deploy it in our production, we will take the time to work with the community for the code to reach the maturity where it will be merged upstream. And the goal for us is to avoid maintaining this patch internally forever, so that the next version will have the feature or the fix upstream. Hi, sorry to interrupt. We do have a question from the audience. How do you migrate VMs between the OpenStack and VMware environments? So that's a very good question, and actually we don't. The strategy is to push infra as code as much as possible, and we are used to telling our users that if they are not able to rebuild their infrastructure on our OpenStack cloud, it means that maybe they are at risk if at some point there is an issue on the infrastructure. The resiliency will not be provided by OpenStack or by the infrastructure, so they have to be ready to react to an incident, and the worst case will be to rebuild their services in another region or in another availability zone. So they need to be able to redeploy by themselves. So we do not migrate VMs. Most of the time it's either a new need that comes to OpenStack, or people who have infra as code and are able to redeploy their infrastructure on OpenStack. That is very interesting, what you are mentioning, because you come from a culture that is very pet-based, from my understanding, and now you are moving to more cloud-native applications, where the user basically has the responsibility to take care of the reliability of their own application. So let us know more about your architecture, the availability zones that you make available, the regions that you have, for users to be able to do this by themselves. Yes, we have four regions, two in France, so one region in Paris, one region in the north of France. Then we have one region in New York and another one in Hong Kong. Most of the regions today have two availability zones, so it's one OpenStack with two AZs, except for the north, where there is only one AZ because there is only one data center there, and Paris, where we are progressively deploying a third AZ. And actually, that's a challenge, to transform a two-AZ deployment into a three-AZ deployment without impacting running workloads. That's part of our challenge. All right. So we talked a little bit about the history, the story that made you move to OpenStack, a little bit about the architecture and about the software stack that you are using. You are using Ansible to deploy OpenStack, but then what are the OpenStack services that you are offering to your users? So we are offering basically compute, network, and storage. So the services that are exposed to users are Nova, Neutron, Cinder, Glance, and I believe that's it. And we rely on Keystone, of course, to do that. Octavia load balancers, maybe? Not for now. For load balancers, we are using AVI Networks as of today, and we are studying the introduction of Octavia. So it's completely out of the OpenStack API scope, right? You have to teach your users to use another external API for load balancers. Yeah, and actually it's a pure homemade one. We don't expose the AVI APIs directly. So it's not my feature team, it's a feature team that relies on OpenStack and delivers that service, but I know them pretty well. And they expose their own API, and their back end is AVI Networks.
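For readers less familiar with how such an offer is consumed, here is a small, hedged example of talking to the services listed above (Nova, Neutron, Cinder, Glance, with Keystone handling authentication) through the official openstacksdk. The clouds.yaml entry name is a hypothetical placeholder, and in practice Société Générale's users go through the homemade API endpoint described earlier rather than hitting OpenStack directly.

    # Hedged illustration: listing resources from the exposed OpenStack services
    # with openstacksdk. The cloud name "sg-private-cloud" is hypothetical.
    import openstack

    conn = openstack.connect(cloud="sg-private-cloud")  # Keystone auth from clouds.yaml

    print("servers:", [s.name for s in conn.compute.servers()])        # Nova
    print("networks:", [n.name for n in conn.network.networks()])      # Neutron
    print("volumes:", [v.name for v in conn.block_storage.volumes()])  # Cinder
    print("images:", [i.name for i in conn.image.images()])            # Glance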
As for those load balancers, they are running inside OpenStack VMs. So that's how it's done today, and they are studying Octavia for the future. Okay. And about Kubernetes, what are you using to orchestrate the creation of clusters? So I don't have a lot of details here, because there is a dedicated team doing that for the group. But they are consuming the OpenStack APIs to create their VMs, so basically their workers and controllers and so on are running on top of OpenStack. For me, it's more like one of my biggest users, but I don't really know exactly how they're doing that. All right. So you have been moving to OpenStack since 2018. So 2018, which was the OpenStack version that we had in 2018? Initially we deployed Queens. Oh, Queens. Okay. So you are not on Queens anymore? No, we are now on Ussuri. All right. Okay. We have been on Ussuri since 2021. So we were able to upgrade from Queens to Rocky, then to Stein, then to Ussuri. And, well, the previous upgrades were okay, pretty okay. And initially we had only two regions. So I think the Hong Kong region has been upgraded from Stein to Ussuri, but the New York one was deployed directly on Ussuri two years ago. And every upgrade has been a new story for us, due to the size of the clusters growing more and more. So we do have a question. Oh, sorry. No, no, sorry. Yeah, we do have a question from the audience. What are the best practices or processes that you follow for upgrading the OpenStack cluster? Maybe I can let Arnaud or Belmiro share about the best practices, because I'm really... Yeah, but it's not about us, it's about you. But we know that it's always kind of difficult, and every upgrade can be different. You said it previously. Maybe you have some stories about previous upgrades you did. You said each upgrade was a new story. Do you have some pain points that you can share? Or is there any process that you replicated across clusters, but maybe it doesn't work every time? Or are you shutting down every API? Are you letting your users know that you are going to upgrade? I imagine so. Or how do you handle the fact that maybe you will lose some network during the upgrades, or something like that? So in the past, as the deployment was smaller, we just communicated a lot. So basically, people were aware that the upgrade would occur. We do it region by region. And you know, with Kolla-Ansible, you are able to run the playbook and do it online. So that's what we have done in the past for all of our upgrades. The last one, the upgrade from Stein to Ussuri in the biggest region, in Paris, was a bit more complicated for us, because Open vSwitch was impacted, so there was a little network cut. And it went pretty well in the other regions, which were smaller, but for Paris the service interruption was a bit longer. And I can remember that part of the RabbitMQ cluster was in split brain, so it had a little impact for users that was not expected. Yes, we learned a lot doing that. Every operator goes through that, yeah, the network splits and then the RabbitMQ. Yes. So now the Paris region is really much bigger than it was two years ago. We have now almost 500 compute nodes in Paris, so it's pretty sure that we cannot do it the same way we did two years ago. So we are now building a new procedure. And we wonder if we can focus on the control plane first, upgrading all the APIs, the database, and so on, and then maybe do a rolling upgrade of the compute nodes, evacuating the VMs, with an automation that can do that for us, running for many days. Yeah. All right.
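As a rough idea of what such an automation could look like (only a hedged sketch under stated assumptions, not Société Générale's actual tooling), the following openstacksdk snippet drains one hypervisor before its upgrade: it disables nova-compute on the node so the scheduler stops placing instances there, then live-migrates every instance away. The cloud name and host name are hypothetical, admin credentials are assumed, and with Ceph-backed shared storage no block migration is needed.

    # Hedged sketch: drain one compute node before a rolling upgrade.
    import time

    import openstack

    conn = openstack.connect(cloud="sg-private-cloud")  # hypothetical cloud name
    host = "compute-042.example.internal"               # hypothetical hypervisor

    # Stop the scheduler from placing new instances on this node.
    for svc in conn.compute.services(host=host, binary="nova-compute"):
        conn.compute.disable_service(
            svc, host=host, binary="nova-compute", disabled_reason="rolling upgrade"
        )

    # Live-migrate every instance off the node; the scheduler picks the targets.
    for server in conn.compute.servers(all_projects=True, host=host):
        conn.compute.live_migrate_server(server, host=None, block_migration=False)
        # Poll until the instance has actually landed on another hypervisor.
        while conn.compute.get_server(server.id).compute_host == host:
            time.sleep(10)

A production version would of course need error handling, concurrency limits and a way to resume after failures, which is exactly why running it for many days calls for real automation.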
Are you used to doing evacuations, or to using live migration for your VMs? Is it something you're using? Yeah, we are using live migration a lot. We rely on Ceph for storage, so we have a Ceph cluster per AZ, and the VMs can go to all the compute nodes in the same aggregate pretty quickly. So you don't use any local storage at all? No. No instance is using local storage, okay. That should ease your live migrations a lot. Yeah, exactly. I would like to go back to the upgrades, because I think we still have a lot to dig into in that area. All right. So you are already on Ussuri, that's great. We talked a little bit about the upgrade process, but what was not clear for me was, you are running a lot of OpenStack services. Do you try to upgrade them all at the same time? Or do you start with Keystone, so this week it's Keystone, next week it's Nova? How is your process? How do you see the upgrade? Until now, we keep all the services at the same version. So we are in this Kolla-Ansible framework where we can launch the upgrade and it will manage the order of the services. You do it through Kolla as well, the upgrade, and it manages the ordering. Okay. I think Cinder is in another Kolla repository. So we split Cinder from the rest, because there is a dedicated storage feature team that manages Cinder and the back-end storage, and usually they are doing their upgrade after us. So we do Keystone, Nova, Neutron, Glance, et cetera, and when everything is stabilized, then the other team does their upgrade for Cinder. All right. Well, upgrades are very interesting. I would like to remind everyone that we had a full episode of OpenInfra Live with the large-scale SIG only about upgrades. So yeah, go back in the library and watch that episode. Yeah, but it's great to hear your experience with upgrades, thank you for that. And so you've grown really, really fast. Like you said, your deployment has grown extremely fast, and I obviously have a lot of questions around scaling. But I was wondering about the size of the team that you have to handle that very large infrastructure. Is it something you can tell us about, like how big of a team do you have for running those 30,000 VMs? Yes. That's a good question also, because initially we started the OpenStack services in the same team as the VMware services. So this was the compute team, and I think when we started, the team was between 12 and 15 people, managing the VMware deployment for the private cloud and starting the new deployment with OpenStack. And we finally split this team into one for VMware and one for OpenStack in 2021, I think. So we popped out a little team that was in charge only of OpenStack. And the initial team was five people, so me and four people. And at that time, we had something like less than 10,000 VMs, but the growth was already really fast. And it was really hard to hire people in France; I have to admit that experts in OpenStack are difficult to find. But now the team is, I think we are 12 or 13 now. So we managed to grow the team finally. We are still growing, but it's getting better now. Okay. So on the subject of scaling, I suspect that as you grew your OpenStack deployment from one region to multiple regions, and then to a lot of compute nodes in a single region, you encountered some issues. So did you face any, do you have any scaling story to tell us? Something you realized by just adding more and more servers, and suddenly something fails? What was the first issue you got into as you scaled up and scaled out your deployment?
So I think we had many, but maybe I can share the ones I have in mind that were more significant. Our initial deployment, and that is still true for the smallest regions, runs all the services on the same physical nodes. So we have everything on the same physical nodes: usually we run three, four, or five physical nodes sharing this hardware for MariaDB, RabbitMQ, and all the OpenStack services. And one of the scaling issues we had was the contention we reached on these nodes, and that impacted specifically RabbitMQ, the internal RabbitMQ of OpenStack, when there was a lack of resources. The usual suspect is there. Well, it's a very sensitive thing. Whenever anyone mentions RabbitMQ in the large-scale SIG, we always have these kinds of faces like, hmm, again. So finally, we managed to move RabbitMQ to three dedicated big servers, really, some of the biggest we could get, and now it's running very fine. So it means you have one RabbitMQ cluster for all your services, right? Yeah, all of them. Okay. Yes. But how big is your region? Because that option, for me, yeah, it's a way to deploy, everything running in the same cluster, but it also creates the risk that if RabbitMQ fails, all your services are affected. But that depends on how you balance this, like the size of your region or cell and the services that you have there. How do you do these calculations? What is, for you, the right balance, considering the number of nodes, and when are you ready to create a new region because you're exceeding, say, a thousand nodes and you know you need a different region? How do you manage this? The biggest region, in Paris, is a bit less than 500 compute nodes. So I believe there is still room to grow before spawning a new OpenStack region there. And we are trying to keep only one region, mainly to facilitate the user experience, so they don't have to choose one cluster or another. So in terms of scalability, you always create regions. You don't go with the cells approach, if I'm understanding correctly. Exactly. We don't use cells for now. But maybe at some point we should study cells, because I feel like it would be very interesting to put at least one cell per AZ, which would bring a kind of better isolation between AZs. So that is something we would like to study in the future. I'm asking this because one of the common questions that we get in the large-scale SIG is always, how big should my region be? How many nodes will OpenStack support in a region, in a cell? And there is no right answer for this, it will depend on your use case. It's interesting to know that Société Générale is using 500 compute nodes and is still thinking about growing this number in one region. It feels like the usual number is somewhere between 100 and 1,000, like the cutoff number, and depending on who you ask and how much volume of change there is, like API calls that churn VMs, it can be anywhere from 100 to 1,000. So 500 to me is a bit in the mid-range of where you start having conversations about splitting and organizing into more AZs and more regions. You can still scale up. So I think we had a question. Yeah. Yes, I'm back. From your point of view, what will the next large-scale issue be that you'll face, and how do you plan to handle it? That's a good question. So we try to anticipate as much as possible. Back in the past, we were more reactive; now we are trying to be more proactive on that. In our biggest region, we believe that Neutron server is starting to be kind of a bottleneck.
So our option now to get rid of that is maybe to put more hardware for Neutron, and that will also come with a bit of configuration, configuring the number of workers. And we also monitor, we have a look at the slow queries that arrive on MariaDB, and we have a few queries that are getting slow when we add more compute nodes. So for that kind of issue, we are trying to work upstream, trying to fix the queries in Neutron. But I have to admit that one of the difficulties we have today is to simulate the workload in our lab environment, because of course we don't have 500 compute nodes there. So we also started to work on OSProfiler. We had a few contributions in the last months to re-enable and to improve OSProfiler in OpenStack, because we believe that it could really help us find the bottlenecks. You said you were using Open vSwitch as the Neutron driver, right? It's not OVN, it's Open vSwitch. Open vSwitch, yes. It's consuming a lot. Yeah, on Open vSwitch, maybe it's not really a scaling issue, but more something related to our usage. We are pushing a lot of security groups with a lot of rules on the VMs, and part of our users are really dynamic, creating like 300 VMs in a few minutes. And these VMs will run calculations for one or two hours, and then they will delete everything. And the Open vSwitch agent is really slow at implementing the rules in Open vSwitch, and even slower at cleaning the rules when you delete the VMs. And sometimes it becomes a bottleneck. So when you have to create, I don't know, 10 or 20 Neutron ports at the same time on one compute node, the latest port will take maybe five minutes to be ready, which times out for Neutron. For this issue, today, the only solution we have is to reduce the number of rules and to try to dispatch the VM creation as much as possible. To spread the creation of instances on different computes instead of stacking them? Yes. I can imagine that it's consuming a lot of CPU and a lot of Rabbit messages as well. Everything related to security groups, we still have issues as well on our infra about this. And especially with the remote group mechanism: when you are using remote groups, it's spreading a lot of messages around the infrastructure, and it can be very painful, first to debug and second to manage at scale. Maybe we were going to talk about something like this as well. No, not like this, but I think that we should interview you about this, because it looks like a very interesting topic and it looks like you are suffering from this issue. What was the reason for OVS over OVN? At large scale, do you find problems with RabbitMQ? Well, the main reason is that we started with OVS in 2018, and OVN was not really ready at that time. So now it feels like there is a kind of a move to OVN. I also saw that approximately one year ago, OVN released their first long-term support version, which is kind of a signal that maybe it's ready for large-scale deployments. But the complexity for us now will be to find the migration path without impacting our users, because we try to avoid deploying a new OpenStack, so we need to migrate the current ones. I know it's possible, some people in the upstream community did the migration from OVS to OVN, but I don't know if it's doable at scale, to be honest, and that's still a challenge. Have you tried OVN in a lab, or is it still not yet in a lab? No, still not yet in a lab. We are doing studies.
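To illustrate the earlier point about spreading bursty instance creation across hypervisors instead of stacking it, here is one hedged way to express that intent with openstacksdk, using an anti-affinity server group. It is not necessarily what Société Générale does, and the cloud, image, flavor and network names are hypothetical placeholders.

    # Hedged sketch: spread a batch of short-lived calculation VMs across
    # distinct compute nodes using an anti-affinity server group.
    import openstack

    conn = openstack.connect(cloud="sg-private-cloud")  # hypothetical cloud name

    # Members of an anti-affinity group are scheduled on different hosts
    # (recent Nova also offers a best-effort "soft-anti-affinity" policy).
    group = conn.create_server_group("batch-calc", policies=["anti-affinity"])

    image = conn.get_image("ubuntu-22.04")    # hypothetical image name
    flavor = conn.get_flavor("m1.large")      # hypothetical flavor name

    for i in range(10):
        conn.create_server(
            name=f"calc-{i:03d}",
            image=image,
            flavor=flavor,
            network="routed-provider-net",  # hypothetical network name
            group=group.id,                 # scheduler spreads group members
            wait=False,
        )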
Moving to OVN will also come with a different usage of the network, because as of today, we are not giving our users the possibility to create their own networks, and the VMs are deployed over routed provider networks only. No VXLAN. So you're not using any routers either? No, no routers. Everything is managed outside by the network fabric. So you don't have any L3 agent running on your... Exactly. Lucky you. Yeah, which may make it easier to migrate to OVN. So as of today, we are studying OVN and also the introduction of overlays, to be able to let users create networks. It will take time for us, because it's a lot of interaction with the network teams. That will be a study. So your network needs to be provisioned first on your backbone infrastructure; it's a prerequisite your users need before coming to your OpenStack infrastructure, right? Yes, which sounds like it's not possible for a public cloud, of course. But for us, we have kind of network areas per entity. This is a prerequisite we have to build before a given entity can consume the service, but it's shared between different projects and different applications. This is why OpenStack is a great project, because it allows different deployments to be done like this. You can have a different networking stack and still run OpenStack on top of it. So it's nice. So I wanted to ask you, after all the experience you have had with OpenStack, what's the biggest pain for you? Is it something technical? Is it more something around the lack of people, the difficulty to hire? What's your biggest concern right now with OpenStack? I will say that this specific usage we have with networks is maybe our biggest pain today, because it generates a lot of provider networks, and we use RBAC to share these networks with the different projects. So we are a very heavy consumer of RBAC in Neutron, and we reach some limits in terms of slowness. So the Neutron API is impacted by slowness due to RBAC. And also we are using segments with the routed provider networks, and we have some slowness due to the combination of a high number of segments and a high number of compute nodes, because we have made the choice to propagate all the segments over all the compute nodes, so that there is no dedicated compute node for a dedicated usage. Everything is very generic. And I believe we are reaching limits here, but we are also trying to work upstream to fix that. In Antelope, we pushed code to be able to deploy multiple subnets of a given network on the same compute node. Before, it was limited to one subnet per network per compute node, and we contributed upstream to go above this limitation of OpenStack. So I believe we will be able to continue to fix these issues, with the help of the community, I hope, of course. I think we have time for one more question. Yes. So, do you have any scaling issues with the Neutron agents, L3 and DHCP? How many networks and routers do you have? So we don't have any L3 agents, as we said before. We have, I think, in the biggest region, something like 110 networks, but they contain at least two subnets, one for each AZ, maybe sometimes more. So it's around two to three hundred subnets. And DHCP, yes, we have issues with DHCP sometimes, because the way the Neutron DHCP agent is done as of today, maybe it's difficult to scale for a large number of subnets, I think.
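Since RBAC-shared provider networks come up several times above, here is a minimal, hedged sketch of that mechanism with openstacksdk: instead of marking a provider network globally shared, a Neutron RBAC policy grants one specific project access to it. The network name and target project ID are hypothetical placeholders.

    # Hedged sketch: share a routed provider network with one project via
    # Neutron RBAC instead of making it globally shared.
    import openstack

    conn = openstack.connect(cloud="sg-private-cloud")  # hypothetical cloud name

    network = conn.network.find_network("routed-provider-net")  # hypothetical name
    consumer_project_id = "CONSUMER_PROJECT_ID"                 # hypothetical ID

    conn.network.create_rbac_policy(
        object_type="network",
        object_id=network.id,
        action="access_as_shared",
        target_project_id=consumer_project_id,
    )

With hundreds of provider networks and many consuming projects, the number of such RBAC entries grows quickly, which is consistent with the Neutron API slowness Guillaume mentions.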
To come back to DHCP: at least in our deployment, when we restart the DHCP agent, it takes a lot of time for it to be scheduled and to create the network namespaces and so on. It looks like it could be improved. When you have a big infrastructure, things can look very slow, and for sure there are a lot of things that could be improved in OpenStack. So, Guillaume, we went through a lot of topics. This has been great, talking with you about the Société Générale infrastructure. So I'm really interested now to understand how you see this infrastructure evolving. You moved from a very conservative infrastructure to OpenStack, you are running Ussuri, with lots of OpenStack services on top. How do you see this evolving over the next years? That's a good question. I don't have a crystal ball, unfortunately. Maybe something we should try in the future is Ironic. Today, we don't use Ironic at all, and maybe that's a move we could try at Société Générale: having a bit less virtualization and providing bare metal for some of the use cases, some of the workloads, like Kubernetes workers, maybe. And also for our own needs to deploy compute nodes, that would be, I suppose, something to try. And in a more generic way, what I can see is that the demand for compute is still growing very fast. So I believe that in the future we'll have to deploy more and more compute, and the scale will be even bigger, probably. So we'll have to automate everything as much as possible, including what we talked about previously: upgrades, patching. When you have a large number of compute nodes to manage, you have to upgrade the firmware, the operating system, and I believe automation is really key in delivering these services. That's my conviction. All right. Guillaume, it was a pleasure to have you on the OpenInfra Live large-scale deep dive. Thank you so much for talking about the Société Générale infrastructure and all the great work that you are doing with OpenStack. So thank you so much. Thank you. Thanks. Thanks. Yes.

So we are out of time, and I wanted to thank all of our awesome speakers today, and I appreciate you all for joining us. And thanks to our audience for being very interactive and asking great questions during the show. Please don't forget to join us June 13 through 15 for the OpenInfra Summit in Vancouver. Registration is live, and prices will go up on May 5th. We have a lot of excellent content in store. Just a reminder, the CFP for the Forum is still open, so if you want to participate, please get your Forum submissions in before the deadline, which is April 21st. And one last big, big thank you again to the OpenInfra Foundation members for making the show possible. Also, if you have an idea for a future episode, we want to hear from you, so submit your ideas for OpenInfra Live, and maybe we will see you on a future show. Thanks again to today's guest, and we'll see you all on the next OpenInfra Live. Thanks.