Hi everyone, I hope all of you are doing well in this particular context of the summit. I'm Victor Couturier and I work as a Cloud Engineer at Société Générale, one of the oldest and biggest banks in France. I am here in our new campus in Paris to share with you our journey in OpenStack contribution, and how contributing to open source truly increased the delivery of our team.

To begin with a little bit of context: I am part of the Cloud team of Société Générale, which is composed of about 10 DevOps engineers, and the purpose of the team is to expose cloud services to customers. These customers are mainly developers of the bank who want to host their own applications, so it's all about private cloud. Six years ago, this team originally used the VMware stack to provide our services, so it's all about ESXi and vSphere, with about 40,000 VMs running today in this environment. But in the past three years, with the growing need for infrastructure to host cloud-native applications, we launched a new OpenStack-based offer, which powers around 5,000 VMs today, and it is growing.

Now that the context is done, let's begin the presentation with a little story. Go back to 2019, when our OpenStack deployment contained only one availability zone in one data center in the Paris region. It is based on Ceph, so the storage is a Ceph cluster for both images and VMs, and we had just received our new servers to open a second availability zone in the Paris region, in a second data center. All our users were very excited by this new availability zone, but there was one last thing to fix before opening it: to ensure good storage performance for customer VMs, we really need to have a local Ceph cluster per data center, because nobody wants their VM to run half on the local data center and half on another data center.
So we need two Ceph clusters, and in order to guarantee blazing-fast provisioning with OpenStack and Ceph, thanks to copy-on-write cloning, we also need the Glance images to be present on the local Ceph cluster. So in our case, the Glance images should be deployed on both storage clusters. At that time, the first version of Glance supporting multi-store had just landed, and multi-store in Glance was at its very beginning: you were able to configure multiple stores, but you had to choose only one to send your image to. You weren't able to choose multiple stores for your image.

So we had an issue on our hands, and in order to fix it and finally deliver this second availability zone, we thought about three solutions. The first solution is to build custom code around the project — here, OpenStack Glance. This is actually the standard way of doing it with proprietary software, because you don't have access to the code, so it is mandatory to develop code around it; you don't have the choice, in fact. The second solution is to modify the code of the project, but without releasing it and submitting it to the community. It is kind of working. And the third solution is to do the same, but instead of keeping it internal, submit the code to the community and try to get it merged.

The first solution, I think, is the most acceptable way to implement your business-specific logic — for example, in our case, inserting OpenStack VMs into our CMDB, or enforcing infrastructure security rules against our security groups. That's very convenient, but it will require the biggest amount of work, because you will need to develop everything. It is your code, you will need to maintain it forever, and it will take forever to do it.
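To give an idea of what the multi-store setup looks like, here is a minimal sketch of a glance-api.conf with one RBD backend per availability zone. The backend names, Ceph config paths, and pool names are illustrative, not our actual values:

```ini
# glance-api.conf — hypothetical multi-store sketch,
# one RBD (Ceph) backend per availability zone
[DEFAULT]
enabled_backends = ceph_az1:rbd, ceph_az2:rbd

[glance_store]
default_backend = ceph_az1

[ceph_az1]
rbd_store_ceph_conf = /etc/ceph/az1.conf
rbd_store_pool = images

[ceph_az2]
rbd_store_ceph_conf = /etc/ceph/az2.conf
rbd_store_pool = images
```

At the time of this story, an image import could target only one of these backends; since Ussuri, the interoperable image import call can take a list of stores (or all stores at once), which is exactly the feature discussed in this talk.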
The second solution is always the worst case, actually, because even if it is easier to develop custom code inside the project, it will be a real pain to upgrade — so painful that each upgrade will cost ten times the initial investment. So it is a very bad idea to do that. And the third solution — I think you will love it because it is the last one, and it is the one we implemented at Société Générale — is to actually modify the project, modify OpenStack in our case, and submit it to the community. In the long term, it is the solution that requires the least workload, because the code you develop will be supported by the community forever, and you will be able to leverage all the code existing in the project. That's one of the best solutions we found, and it will also develop a lot of soft skills: for example, a better appropriation of your everyday tools; a lot of improvement in your development skills, because you will develop alongside experienced developers; and obviously you will desacralize the open source contribution process.

To emphasize this, let's take a very short example about one of the first contributions we made, which was in Heat. Heat is the infrastructure-as-code tool of OpenStack for deploying cloud resources, and internally we are using it basically only for deploying the base blocks of the cloud: flavors, domains, external networks, and all that kind of stuff. In fact, it is very convenient because it is just plain YAML: you just have to store your YAML in Git, then regularly apply your Git configuration to ensure everything is properly set up in all your environments and all your regions. So it's very convenient to use it for that. For example, here is a Heat template for networks. It is plain YAML, and it is quite simple.
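A network template of this kind might look roughly like the following sketch. Resource names, VLAN IDs, and tag values are illustrative, and note that the `tags` property was only accepted by Heat once the patch described in this talk was merged upstream:

```yaml
heat_template_version: 2018-08-31

description: Base networking blocks of the cloud (illustrative sketch)

resources:
  external_net:
    type: OS::Neutron::ProviderNet
    properties:
      name: ext-net-az1
      network_type: vlan
      physical_network: physnet1
      segmentation_id: 100
      shared: false
      # 'tags' was not a valid ProviderNet property before the patch
      tags:
        - shared-to:team-a
```

Applying such templates regularly from Git is what keeps every environment and region converged on the same base configuration.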
You just describe all your resources in this list, and each resource is described by a list of properties. So it's kind of simple to store and to write. But the problem is that we rely on the tags attribute of the Neutron network in order to know which external network to share with which project. So we put tags on the project and tags on the network, then we share the network with the right project. It is very convenient to do that, but in fact the tags attribute, despite being available in Neutron, is not exposed through Heat. It means you can't use the tags attribute as a property in your YAML — it's not allowed — mainly because the Heat resource wasn't updated when the Neutron network implemented the tags framework.

So our solution was to look on GitHub, into the Heat source code, to see if we could understand it and maybe fix it. And here is what we found: the file we modified is the ProviderNet class in the provider_net.py file. As you can see, there is simply a list of properties — all the allowed properties in the YAML, mapped to the related attributes in the Neutron API. It's quite simple. Later in the file you will find a detailed description of the properties. And as you can see, there is no sign of the tags attribute, so we can't specify tags. So we tried to add tags to that list and see if it works. Here is the commit we made — not many lines of code; as you can see, it is only about adding tags to the list, then describing it in the detailed description of the attributes. And we tried that. And guess what? It worked. So in only a couple of lines of code, we saved ourselves a huge amount of custom development to handle tags outside of our Heat stacks. It is only about 10 lines of code and it's done: we can just use our tags in our Heat stacks. And we have plenty of examples like this.
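The actual patch edits Heat's ProviderNet resource plugin, which uses Heat's own `properties.Schema` classes. The following is a simplified, self-contained sketch with plain dicts, only to show the shape of the change — how a resource declares its allowed properties, and why adding one entry is enough to unlock a Neutron attribute:

```python
# Simplified sketch of Heat's resource property schema pattern.
# The real ProviderNet plugin (provider_net.py) uses
# heat.engine.properties.Schema objects; plain dicts here mimic
# the structure for illustration only.

# Property keys the resource exposes (subset, illustrative)
NAME, NETWORK_TYPE, PHYSICAL_NETWORK, SEGMENTATION_ID, SHARED, TAGS = (
    'name', 'network_type', 'physical_network', 'segmentation_id',
    'shared', 'tags',
)

properties_schema = {
    NAME: {'type': 'string', 'update_allowed': True},
    NETWORK_TYPE: {'type': 'string', 'update_allowed': True},
    PHYSICAL_NETWORK: {'type': 'string', 'update_allowed': True},
    SEGMENTATION_ID: {'type': 'string', 'update_allowed': True},
    SHARED: {'type': 'boolean', 'update_allowed': True},
    # The upstream fix essentially boiled down to adding an entry
    # like this one, plus passing the value to the Neutron API call:
    TAGS: {'type': 'list', 'update_allowed': True,
           'description': 'The tags to be added to the provider network.'},
}

def validate_template_properties(props):
    """Reject any property not declared in the schema, as Heat does."""
    unknown = set(props) - set(properties_schema)
    if unknown:
        raise ValueError('Unknown properties: %s' % sorted(unknown))
    return True

# Before the patch, a template using 'tags' failed validation;
# with TAGS in the schema, it passes.
validate_template_properties({'name': 'ext-net', 'tags': ['team-a']})
```

Before the patch, the equivalent of `validate_template_properties({'tags': [...]})` raised a validation error; after it, tags flow through to Neutron like any other property.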
Each time, we choose to enhance the product a bit more rather than having to deal with the problem and spend a lot of time finding a workaround. And with this particular commit, in less than one afternoon, the code was pushed upstream and deployed in production in our environment — because you can still push your modification to your production while it is under review by the community; it's not a problem. So in the same afternoon, we developed the code, pushed it to production, declared and fixed the bug in StoryBoard, the issue tracker of Heat, and onboarded the people of the team on the contribution process. And guess what? Only one week later, it was merged and ready to ship in the next OpenStack version. That's a very short example of a quick contribution which saved us a lot of time. And, for example, here I have the screenshot of a commit I made while preparing this presentation for the summit. It shows how simple these things are: they are clearly bugs we just found and fixed, like any other bug you can find in your own internal repository, actually.

Now let's talk about how we implemented that in the cloud team. Because as I said at the beginning, we don't want to add extra workload to the team — we already have so many things to do, especially the DevOps of the team. So we don't want to add extra workload for contribution. We thought about a simple workflow to truly integrate the contribution process into our everyday work. To begin with, we are working in agile with the Scrum methodology. So, like always in agile, everything begins with grooming and sprint planning. Here, our product owner — hello, product owner — will submit a new task to us. We will look at the task and ask ourselves a first question: does it exist in the product? If the answer is yes, okay, go ahead: we will just use the feature in the product and perform the task.
If the answer is no, we ask ourselves a second question: does it make sense to add it? In other words, will OpenStack be better if we add this feature we need? If the answer is yes, we transform the task into a contribution task. In terms of agile, it is just a task, so it changes nothing. But the point is that we will not develop the code twice — not once for Société Générale and then transform it into an upstream contribution. We develop the code only once, and at the end of the task we submit it to the community and use it internally in our production. It doesn't matter that it is still under review by the community. So we develop the code only once. And if the answer to this question is no — it does not make sense to add it to the product — we ask ourselves: why do we need it? It is a good way to step back and think about why we need this feature that doesn't make the product better. Maybe it's a good time to rework it and think about how we can fit more easily into the product. With that workflow, we truly integrate the contribution process without extra workload, and we are able to industrialize the process of contributing to open source in the everyday work of the team.

Now let's go to what has been achieved so far with that workflow. First, we have two features developed in Glance. The first one, the multi-store import — I talked about it earlier in the presentation — was released in Ussuri. So from Ussuri, you are able to import an image to multiple stores. The second feature, which will be released in Victoria, is the thin provisioning capacity of Glance. It will significantly reduce the upload time of images, and the space they use in your backend. Those are the two main features we developed for Glance.
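The grooming workflow described earlier can be sketched as a small decision function. This is purely illustrative — in practice these are questions asked by humans during sprint planning, not code:

```python
def triage(exists_in_product: bool, makes_sense_upstream: bool) -> str:
    """Grooming decision for a new task, as described in this talk."""
    if exists_in_product:
        # Question 1: the feature already exists — just use it.
        return 'use the existing feature and perform the task'
    if makes_sense_upstream:
        # Question 2: OpenStack would be better with it — contribute.
        return 'contribution task: develop once, submit upstream, run in prod'
    # Neither: step back and question the need itself.
    return 'rework the need to fit the product more easily'
```

The key property of this flow is that the "contribution task" branch costs no extra development: the same code serves production and the upstream review.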
We also developed a new API microversion for Nova — it is just about listing stuff, but it is always cool to say that you have developed an API microversion in Nova. Then, in addition to that, we have 26 commits already merged in the community across various projects like Ceilometer, Kolla, and Kolla Ansible — I think there are some in Keystone too. So, various projects, in addition to Glance and Nova, obviously. These 26 commits bring us to nearly 4,000 lines of code contributed by Société Générale to the OpenStack community, and that puts us in second place among financial companies contributing to OpenStack — just behind China UnionPay, and I hear those guys have a lot of quite big OpenStack deployments, so congratulations, guys. But yes, we are proud of these 4,000 lines of code contributed to the community. And you can consult these KPIs in real time: everything here is extracted from Stackalytics.

Now let's go to some takeaways of this presentation, about what we learned contributing to an open source project — not only OpenStack; it's applicable to everything. I think we also contributed to Grafana and Prometheus. The first takeaway: do not try to change the world. You don't have to seek out the big thing to contribute, or aim to do something important. The point is, we are talking about collaborative development, so every little thing is important. If you notice something to improve, it is time saved for the next person who will contribute something else. That is collaborative development: every commit matters. The second takeaway is to respect the product goals and its roadmap. You need to look at it with a neutral eye: take a step back, and do not think specific, but standardize your way of thinking. When you are working in the OpenStack community to publish some code, you are not working for your company anymore — you are working to enhance OpenStack.
So you need to take a step back and think not in a company-specific way but in a standard way. And that is very important when you build a system that needs to go to scale. So that's a good point, actually. And the third one — my favorite by far — is that you will gain a lot of kindness toward other people's code. As you appropriate it, you are no longer in a customer-vendor relationship where you blame the product for not working properly; you will waste less time correcting the problem directly than complaining that it isn't working. And I think that's a very good point, because you will gain a lot of kindness and you will greatly enhance your development skills along the way. Okay everyone, I hope you found this feedback interesting. If you have any questions, let's answer them live. Bye.