So, welcome everyone. My name is Thierry Carrez, I work for the OpenStack Foundation, and I want to introduce the leboncoin team here. So, I'm Sonia Oustard, this is Guillaume Chenuet, and this is Benoit Bezac, and we work at leboncoin in the Engineering Productivity team. Basically we take care of the CI, the CD, and any tools that can ease the work of the developers. So to start things off, I wanted to give you a quick perspective on the history of the tooling we've been using for OpenStack. I had the chance to be there at the very start of the project, and it evolved a lot, and I think it gives perspective on the type of tooling we've been building, and seeing it adopted beyond OpenStack is really awesome. So once upon a time, we started from day zero with gating in OpenStack, which is the idea that no human should ultimately merge the changes onto the code repository; you need to have a machine do it and check a number of things for you. From day zero, we were using tooling that was built on Ubuntu tooling: Launchpad was used for code review, and we used a piece of software called Tarmac, which some people in the room might still remember fondly, that was doing very, very basic serial gating. By that I mean it would check that you provided a commit message, I think, and would run very, very basic checks on your approved change. So once a human approved the patch, it would go through Tarmac for these tests, and then Tarmac would also merge the branch, ultimately, into the Bazaar repository. Bazaar, which you might also remember, was the distributed VCS that Ubuntu and Canonical were supporting. So that worked well in 2010, but early 2011 we wanted to have more tests, and there weren't that many good ways to integrate tests into Tarmac. So we started introducing Jenkins here to provide more complete serial gating, so we'd be able to run a body of tests on every approved change to make sure that everything was all right there.
And Tarmac would still be used, called from Jenkins, to do the merging of the branch. Another side benefit of using Jenkins here is that it would watch the Bazaar repository for changes, so we would have the ability to also run jobs once the change merged; all the post-merge jobs would be defined starting from that early 2011 period. During 2011, we migrated from using Launchpad and Bazaar to using Gerrit and Git. And that changes the perspective slightly, because we could rely on a stronger integration between Gerrit and Jenkins through the Gerrit plugin for Jenkins. That let us add the check tests, tests that would run on every proposed change. That was a really game-changing experience for the OpenStack developers. They wouldn't run the tests locally anymore to be able to detect issues; they would just throw the patch at the system, and the system would report issues with it, which, given the complexity that OpenStack started to reach around that time, was really the only way forward. And obviously, Jenkins would also be connected to Gerrit in terms of post-merge activities, so we would have post-merge jobs based on Gerrit events as well. On day three, so by the end of 2011, the problem is that OpenStack was growing really, really fast, and we were hitting the limits of serial gating. Our tests were running for about an hour, and that meant that we could not land more than around 24 patches on the main pipeline every day. You may have seen in the keynotes that we're currently running around 282 changes per day, so you can have an idea of how much of a problem we were having by being limited to 24. So Zuul was introduced, a completely new pattern to solve that problem, because by the end of 2011 we were basically stuck. The gating system was blocking the project's velocity, so some said to just dump it, stop doing tests.
This is the only way to move forward, and that's where Jim and Monty and others came up with the concept of speculative gating, which let us run a larger number of changes. We don't just do parallel gating, we don't just throw parallel tests: we check that the changes can actually land on top of one another, which avoids introducing regressions. So that was really key to enabling the velocity that OpenStack still has today. And from that day, it was like: automate all the things. We started to apply that pattern to everything we did. Documentation is built the same way. We do release requests now through a Git repository, with jobs that are run by Zuul to actually do the tagging and the publication of the releases. We also automate release notes through Reno, and we'll also cover that during the talk. We have release highlights now, the top three things on top of your mind on a project, to try to shape the release messaging. This is all driven from changes that are made to Git repositories and that are collected through this system. But I won't steal the spotlight any longer, and I'll let the leboncoin team explain how they took this tooling that we built for OpenStack and used it elsewhere, which is awesome. Okay. Thank you. So let's start with a quick introduction of what leboncoin is, because leboncoin is not really famous outside of France. So in France we are really attached to flea markets, to classified ads markets. The idea was to connect people to allow them to exchange second-hand items. So leboncoin was created 11 years ago with this idea. By the way, if you wonder about the meaning of "le bon coin" in French, it could be translated to something like "a good opportunity around the corner". So today leboncoin is still evolving, still growing, and still trying to give its users the best experience possible. So let's have a quick look at some numbers for leboncoin, to see how important this website is for us. leboncoin is 20 million unique users per month.
That means it's almost half of the total French population: one in two French people goes to leboncoin each month. There is currently an average of 20 million ads on the website, and 800,000 new ads are created per day. It means that, for example, since the beginning of this presentation, almost 500 ads have been created on the website. It's also the fifth most visited website in France, behind Google, Facebook, YouTube and Wikipedia, which makes it the most visited French website. So it's kind of a big responsibility to take care of all these users and give them the best experience possible, and we needed to build a very strong CI accordingly. So let's start with an overview of the CI. Most of our 150 developers are working on Gerrit, and they create around 15,000 patch sets per month for almost 5,000 reviews, which means almost three patch sets per review. These 5,000 reviews lead to the build of 16,000 packages. Some of these packages are built for testing purposes, but most of them will be deployed to our different environments, QA, staging, production, and that leads to the deployment of 20,000 packages. We recently switched to microservices, so that's why we have a lot of packages to deploy. To take care of all this CI, we are currently a team of seven members, the Engineering Productivity team. So now I will let Sonia tell you a little more about how this CI has been built. Yes, so let's talk about the leboncoin odyssey, because we will talk about the monsters we encountered for our CI and how we sharpened our tools thanks to OpenStack. During the four years of the team, we always tried to keep it simple, to be coherent, and to be more maintainable every year. So we will talk about the CI evolution since the beginning of the team. You may wonder why we present only four years, since leboncoin is 11 years old; it's because the team didn't exist before that.
So we will present our journey year by year. So it's 2015, it's the beginning, and we only have two things. The Engineering Productivity team has just been created, thanks to a huge demand from the backend team: they wanted to improve their quality. So we introduced Gerrit, so that they can improve the quality of their code thanks to code reviews, and also Jenkins, so they can improve the quality of their features thanks to more testing before integration. But as you saw, we were still missing some stuff: we were still using the UI to configure Jenkins jobs, and for documentation, obviously, there was nothing. So in 2016 we introduced tools for the documentation, like Reno and Sphinx, plus git-review to ease the use of Gerrit, and Jenkins Job Builder for the job descriptions. Reno, to explain more, is a release notes tool where you write YAML files listing the new features you want to add in your code, as well as upgrade notes, etc. It was a quick win for us, because it was a good way for the developers, for example, to communicate the PostgreSQL tasks they wanted to run to the sysadmin team, which was doing the production deployment afterwards. Sphinx is RST, so it's text-based, and it's easy for developers to put new documentation along with their code, as they commit it along the way, and it's versioned, and it can be commented. So it was also very simple for us to present it to the developers. And Jenkins Job Builder is also YAML, and we love YAML at leboncoin. It's a way to describe your Jenkins jobs with different steps, and it's human-readable, which is nice, better than XML files, and it can be reviewed by other team members, so you get high maintainability, and you can also recreate everything very easily in case your Jenkins instance just dies. So we will continue with Guillaume, who will present the 2017 year. Let's speak with the microphone.
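To make the Reno workflow concrete, here is a minimal sketch of a release note file. The filename and contents are hypothetical examples, but the section keys (`features`, `upgrade`) are the standard Reno ones:

```yaml
# releasenotes/notes/add-search-index-1a2b3c4d.yaml (hypothetical example)
features:
  - |
    Ads can now be searched by postal code.
upgrade:
  - |
    A new PostgreSQL index must be created before deploying;
    the sysadmin team should run the migration script first.
```

Reno collects these files from the Git history, so the notes for a given release are built from exactly the commits that release contains.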
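And as a rough illustration of the Jenkins Job Builder format, here is a minimal sketch. The job name and shell steps are made up, but the structure follows the standard JJB `job` definition:

```yaml
# jobs/myservice.yaml (hypothetical example)
- job:
    name: myservice-unit-tests
    description: Run the unit test suite on every approved change.
    builders:
      - shell: |
          make deps
          make test
```

Running `jenkins-jobs update jobs/` then regenerates the XML and pushes it to the Jenkins master, which is what makes rebuilding a dead instance so easy.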
As you may know, 2016 was about formal tooling, but it was not enough as the company was still growing, so we needed to find other tools to scale our CI. So let's start. Our first idea was to add more Jenkins slave workers. At the beginning it was working, because our master instance was strong, but after some time the Jenkins master was not performant anymore and the scheduler started lagging. So we tried another idea, which was adding more master instances. It's a good idea, but it doesn't work, because Gerrit is not able to manage the same label with different Jenkins masters. So it was a good start, but not enough. The solution was Zuul. Zuul was already used by OpenStack and the Wikimedia Foundation, so it was working and in production; that was a good point for us. Zuul, as you may know, is a project gating system, which means you can run jobs in pipelines; I will speak about pipelines later. And it's performant, so it's cool. With Zuul, you can do speculative gating, run jobs in parallel, vote on different labels, a lot of features. Let's talk about pipelines. If you already know the OpenStack CI, these are pretty much the same pipelines. We have three big types of pipelines. First, the check pipelines: when you have a review on Gerrit with a new patch set, three pipelines are triggered. The first one is about build, so if your application is in Go, you build your Go binary. The integration one is about testing the code with unit tests, and the last one is about quality, so around code linters and everything else. Once your change is merged, we have the post-merge pipeline; it's a simple pipeline, only there to build your application, upload the artifact, and also build and publish your documentation.
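In Zuul v2, which is what we were running in 2017, pipelines are declared in a layout file. Here is a minimal sketch of what a check-style pipeline could look like; the description is hypothetical, but the trigger and reporter structure follows the Zuul v2 layout format:

```yaml
# layout.yaml (Zuul v2, hypothetical sketch)
pipelines:
  - name: check
    description: Run build, integration and quality jobs on new patch sets.
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        - event: patchset-created   # fires on every new patch set
    success:
      gerrit:
        verified: 1                 # vote back on the Gerrit label
    failure:
      gerrit:
        verified: -1
```

A gate pipeline would use `DependentPipelineManager` instead, which is what enables the speculative gating behaviour.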
And the last one is about tags: if you want to create a tag directly on your Git repository, you can do it, and Zuul, same as in post-merge, will create and build and upload your artifact and documentation. So 2018 was the year of the next level up with Zuul, by switching to version 3. You may wonder why we migrated to Zuul v3; there are several reasons. The first one is because it's scalable and distributed, which is good because it's easy to add new components to follow the constant growth of the CI needs. The second one is because jobs are now Ansible playbooks, and at leboncoin we were already using Ansible, so there was no learning curve, and we were already using YAML again. There is no more Jenkins, which was also good for us, because the Jenkins instances were a bit of a pain point at this point; when we saw that you can reach the same result without having this brick in the stack, then why not? Also, there is now a GitHub integration. You may wonder why we talk about GitHub when previously we only talked about Gerrit, but some of our teams at leboncoin are using GitHub, so it can be a good opportunity to reunite everyone under the same CI tool. Also, Zuul v3 is more up to date, because OpenStack was obviously moving from v2 to v3, so why not follow them and stay agile. And Zuul v3 is community driven, so it's way easier to communicate with people, find them on IRC, and get any information you want. So I will hand over to Benoit. Yes, so let's talk a little about Zuul v3. I know that some people in the room may already know how it works, but let's have a very quick look. First of all, what changed between version 2 and version 3? First, Jenkins slaves are now replaced by an OpenStack cloud platform, so jobs previously run on slave nodes are now run on virtual machines on OpenStack. As Sonia said, jobs are now executed as Ansible playbooks, which is nice for us.
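To give an idea of what the Ansible-based job definitions look like in Zuul v3, here is a minimal sketch. The job, label and playbook names are hypothetical, but the `job` and `project` stanzas follow the Zuul v3 in-repo configuration format:

```yaml
# .zuul.yaml in the application's own repository (hypothetical sketch)
- job:
    name: myservice-build
    description: Build the service binary on a fresh VM.
    run: playbooks/build.yaml     # an Ansible playbook in the same repo
    nodeset:
      nodes:
        - name: builder
          label: ubuntu-ci        # a Nodepool label

- project:
    check:
      jobs:
        - myservice-build
```

Because this file lives next to the code, developers can propose changes to their own jobs through the same Gerrit review workflow as any other change.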
And on top of the OpenStack cloud platform, there are other components. Nodepool is in charge of the virtual machines' lifecycle: it keeps a pool of virtual machines ready and is able to provide them to Zuul. Zuul is the component which listens to Gerrit events, reads the pipelines, and launches jobs in consequence. Another nice thing is that jobs can now be embedded directly in the code repo, so the developers are now able to write their own jobs, their own tests, directly. So let's have a little more detail about the Zuul components. There are four main components in Zuul: the Zuul web, which is in charge of displaying the graphical user interface; the Zuul merger, which is in charge of Git merging operations; the Zuul executor, which is in charge of executing Ansible playbooks on virtual machines; and the Zuul scheduler, which is the core component, listening to Gerrit and triggering jobs. Under that, Nodepool, the component in charge of managing the virtual machines pool on the OpenStack cloud platform, has two components: the builder, which builds images, so we specify what we want in an image and it builds it; and the launcher, which takes the image build and starts a virtual machine on the OpenStack cloud platform. So, how did we set this up at leboncoin? We have two on-premises data centers, so we naturally decided to split, or rather duplicate, all the components. We have two web components, two schedulers, and more and more mergers and executors, because it's an easy way for us to scale and follow the company's growth: for example, if the CI is struggling because there are too many jobs to execute, it's really easy for us to add a new merger or a new executor to handle the overload. And we also wanted to be able to keep the CI fully working in case of a data center loss. Even if we lose a data center, we still have all the components in the other one, so it will still be working.
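As a sketch of how Nodepool ties the builder and launcher together, here is a minimal configuration. The cloud, label and flavor names are hypothetical, but the `labels`, `providers` and `diskimages` sections follow the Nodepool v3 format:

```yaml
# nodepool.yaml (hypothetical sketch)
labels:
  - name: ubuntu-ci
    min-ready: 4            # keep 4 VMs ready for Zuul at all times

providers:
  - name: dc1-openstack     # for example, one provider per data center
    cloud: dc1
    pools:
      - name: main
        max-servers: 20
        labels:
          - name: ubuntu-ci
            diskimage: ubuntu-ci
            flavor-name: ci-standard

diskimages:
  - name: ubuntu-ci         # built by nodepool-builder with diskimage-builder
    elements:
      - ubuntu-minimal
      - vm
```

The builder produces and uploads the `ubuntu-ci` image, and the launcher boots instances from it until `min-ready` idle nodes are available for Zuul.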
Now, I'll let Sonia tell you why we chose to migrate. Okay. So now you've seen the big changes we made. Here is an overview, from the very beginning, where we had only two tools, two services, not a lot, to 2018 with all this tooling. It's quite obvious that this also follows the increase in the number of developers at leboncoin, because at the beginning we were like 50 developers and then 150, so we had to adapt. And along the way, we increased developer satisfaction, because now everything is more flexible, more sustainable. We also increased our own satisfaction, because the system is now more robust, more coherent, and more maintainable than before. And we learned a lot of things, through a lot of trial and error, and it's always nice to know that you improved your knowledge. So now, Guillaume will talk about the tips and tricks to achieve this. Thank you. So with this long journey, we have learned some tips and tricks. The first one, and on my side the most important: read the documentation and the code repositories. OpenStack has a lot of repositories with all the information about how to set up Zuul and use it, so feel free to read them. Another tip would be an explicit naming convention: name your jobs with the same prefix as your pipeline, for example. It's easier to find them. About monitoring and graphs, it's very important to have them; with these tools, you can find problems or improve your CI. Benchmarking is also important. When we started the Zuul v3 migration, we started with a proof of concept, so it was important for us to benchmark how much CPU, RAM and other resources we needed. As Zuul v3 is now a real open source project, feel free to keep in touch with the team on IRC; it's very important and it also saves you a lot of time. And as each company has legacy and other boring stuff, fine-tuning of the tooling is allowed: you can run your legacy stuff with Zuul. It works.
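For the naming-convention tip, the idea can be sketched like this. The job names are made-up examples, but prefixing each job with the pipeline it runs in makes the status page and the logs much easier to scan:

```yaml
# hypothetical project stanza illustrating the naming convention
- project:
    check:
      jobs:
        - check-myservice-lint
        - check-myservice-unit
    post:
      jobs:
        - post-myservice-package
```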
So that's it for the tips and tricks, and thank you. If you have some questions, feel free to ask. Thank you. No questions? So regarding OpenStack, what projects are you using? And when you build or when you test, do you create a full environment of your website, including the network? About the OpenStack components, we are using the classic ones, only compute, networks and the classic platform. And about leboncoin, it's different. We have our legacy: it builds the whole site with all the components, like a monolith, so it's not very interesting. But now, as we are working with microservices, we are just building and testing these microservices and their dependencies. Is there another question? No? Thank you.