So, I'm not the first French accent you are going to hear today, and not the last either. I'm French; my name is Sylvestre. I'm in charge of the release management team at Mozilla, so basically we are in charge of making sure that Firefox ships with great quality. Before this job I had some time off, and I was doing this kind of stuff: scuba diving (the diver in the middle of the photo is in the room, by the way), and also mountaineering. I climbed Mont Blanc a few years ago, and the mountain you see here is called Mont Blanc du Tacul. It's an 800-meter wall that you climb starting at around 2 in the morning. It's super dark, you only have your headlamp, there are avalanches, and people die up there every year. The point I want to make is that shipping Firefox is basically that. We work with a boatload of developers and we do a lot of things in parallel: Jean-Yves showed you what we did with Quantum, and Florian, the next speaker, is going to tell you how we did it for performance. My team takes all of it and makes sure that we ship Firefox with great quality. It feels like climbing that mountain every six weeks, and it is a lot of fun.

First, I'm going to tell you what Firefox is in terms of code and scale, and what we do. We make a release every six to eight weeks. In the past it was every six weeks, which meant working through Christmas and so on, so we decided to move to a flexible schedule: now we ship roughly every two months. It's a crazy rhythm. In the last year we shipped seven major releases, including the one Jean-Yves just presented, plus 26 minor releases. It's probably one of the biggest pieces of software ever created by mankind, and it carries some legacy and some technical debt: about 20 years ago, Netscape announced that they were going to open-source their product.
We are still based on that product. Since the start, or at least since we have the VCS history, we have made almost 400,000 commits by more than 5,000 developers, which adds up to 18 million lines of code; last year alone we made 60,000 commits, by more than 1,000 people. The scale is huge, and we have a lot of languages. The first one is obviously C++. We have JavaScript, and we use HTML in the product too. And the last line you see, the spike, this bar here: does anyone know what it is? Yeah, exactly. We are starting to ship Rust, and it's amazing. You should not use C++; you should use Rust, it's way better. We also have some assembly, some Java, some Windows batch files, bash scripts, this kind of stuff. So we have a lot of languages in our product.

To give you a sense of scale, this is the number of patches that landed in each nightly cycle, again six to eight weeks, for the last 10 releases: five to eight thousand patches per cycle. Remember, that's less than two months each, so the rhythm is crazy: a lot of languages, a lot of technology, new features, fixing bugs, introducing new bugs, fixing performance regressions, introducing new performance regressions. We do that all the time. And we have a few tests. That is the number; you are among the first to see it, and I think most of my colleagues don't know it, because it was extracted a few hours ago. Every time someone pushes a new commit to Firefox, we spend about 1,500 hours of machine time on it. From my perspective it takes about eight hours to get the final result and know that I didn't break anything, but we do that for pretty much every commit. And in November (we used November because a lot of people are on holiday later in the winter, so it's the latest full month we have) we used 300 years of machine time. So, imagine the scale.
So, imagine: the browser you are using costs roughly that much machine time every month, and it keeps increasing, every year and every month. We use a lot of resources, and even with all of that, we still sometimes ship with bugs. So how do we do it? We have basically three kinds of QA: we catch issues during the development phase, we have CI with automated tests and test suites, and we have humans.

I'm going to start with the nice part, the humans, and pre-release testing. The web is a crazy platform. I'm sure you are all aware of that, but it's always nice to remember it. In HTML you can make plenty of mistakes and still end up with a webpage. If you do that in C++, your compiler is going to insult you. If you do that in Rust, you are going to be happy when the first line you write finally compiles. CSS is the same; it's hard to understand, and I sometimes have to ask a colleague at Mozilla who knows everything about CSS to help me with some websites. JavaScript: we all know the advantages and drawbacks of JavaScript, and I won't complain about this language too much. We also handle a lot of image formats and video formats, the network layer is crazy, and we maintain four operating systems on different architectures. So it's the full matrix of combinations of every case we support, and it's impossible to test it all automatically. You need humans at the end to test and give us feedback.

In my team, we manage the train model. I won't go into details; I gave a keynote on it three or four years ago, and I could talk about the topic for four hours if you wanted, but I won't. The short version: code lands in Nightly, every six weeks we create a beta and then move it to release, and we uplift patches from central to beta and to release. It's very typical.
So, we rely a lot on the users of the pre-release channels. We have Nightly, and on Nightly we also run experiments: we ship a new feature to part of the population with it enabled and keep it disabled for the rest. That is what we did with Stylo, which Jean-Yves presented before. We enabled it for 10% of the Nightly population at the beginning, then we fixed some bugs, moved to 50%, then to 100%, and we did the same on beta and then on release. It really helps us catch a lot of issues. We now build two Nightlies per day; in the past we did only one, but now we have so many commits, and we have a lot of people in Europe, so we prefer to also have a Nightly for Europeans in the morning rather than making them wait until three in the afternoon every day. And we have a lot of users on Nightly; a lot of companies would kill just to have this user base. On beta it's even bigger, in the millions, and on release it's bigger still. The scale and the number of users are crazy.

Here again, I could talk for an hour about how we do manual QA. We have people testing the features, I know how they work, and the kind of issues they can find is very impressive: if you click there and there and there and there, you are going to hit this bug. They write amazing steps to reproduce. We use those results to decide whether a feature is ready to move to the next train, and we do that all the time. Last year at FOSDEM, Pascal presented his proposal to reboot the Nightly community, and I'm very happy to say it was a success: we doubled the Nightly population, users reported almost 1,200 bugs, and the Twitter account, which is a good metric of how people react to what you are doing, jumped to a huge number. It was very, very helpful in making sure we shipped 57 with great quality.
We also have a bunch of people helping us through SUMO, our support site, to gather feedback from users. I'm going to show you some of the tooling we have to discover new issues, like new crashes, but in some cases, like a flaky graphics card or an antivirus problem, we see things like Firefox hanging or blank pages, and we have no telemetry to identify them and no automation. So we rely on user feedback, and the people on the support team are able to collect it, bring it to us, and ask: okay, what should we do? In most cases they can pinpoint it exactly: these users have Kaspersky version X, and it is causing the issue. I mention that one because we had exactly this in 58, and one more time it was because of an antivirus.

We also have WebCompat. Here again we rely on the community to report bugs like "Firefox on Android is broken on Twitter", or "when I press this key on this page, I get the same letter twice; there is a bug here". We then reach out to the websites and web developers to get the issues fixed, whether the problem is in JavaScript, CSS, or HTML. As I keep saying, it's all very complex, and we have a team doing that.

Now I'm going to move to one of my hobbies: I'm also involved in Clang, so I'm a compiler guy. I was complaining about C++, but I work on a C++ compiler myself. C++ and C are very hard languages. At Mozilla we have some amazing engineers, and sometimes they forget to free memory or forget a null check, and this kind of thing should not happen; that's why we are developing Rust, by the way. If you don't know Rust, you should try it; we are using it to replace C++. To mitigate these problems, we have developed and deployed tools to identify issues. This one is an actual bug that I fixed in NSS, which is used by Chrome, Java, Firefox, Red Hat, and a bunch of other companies.
So, they introduced a new argument, and someone simply forgot the comma here. That is totally valid C: the compiler is just going to concatenate the two adjacent string literals. A human can see it's a mistake, but for the computer it makes sense. This is the kind of issue we find in the code; we fixed most of them, and we have automation to catch them, because these languages let you do very stupid things. We also catch issues in our APIs. Our code base is huge, so we have tools that check you are not shooting yourself in the foot. And we try to limit the legacy: C++11 and C++14 introduced cool new features that simplify the code, so we warn developers, saying, you should write it this way instead of that way.

In terms of tools, we use the Clang Static Analyzer, which as far as I know was initially developed by Apple. We have our own checkers, also based on Clang (on libclang), which look for security issues, bad API usage, and best-practice violations. And we use clang-tidy. If you write C++, I recommend you make that tool part of your CI; clang-tidy is amazing. It tells you about best practices, coding style, and performance issues, and it can automatically upgrade your code from old versions of C++ to new ones. We contribute to this project. And at the end, when the code lands, we also have Coverity. It's very expensive, but they have a free tier for free and open source software. So that is basically where we are. About this graph, I know someone is going to ask, so I'll answer first: the big spikes are because of a header file that was included everywhere in the code base, and Coverity treated each inclusion as a new instance of the same issue, duplicating it. But we have been slowly working through the backlog. Some of the findings were security issues.
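As an aside (my own illustration, not Mozilla's actual tooling): the missing-comma pitfall above is not specific to C. Python also silently concatenates adjacent string literals, and the static-analysis idea can be sketched in a few lines; the `CIPHER_NAMES` snippet below is hypothetical.

```python
import ast

# Hypothetical snippet with the NSS-style bug: the missing comma after
# "aes-256-gcm" silently merges two list entries into one string.
SRC = '''
CIPHER_NAMES = [
    "aes-128-gcm",
    "aes-256-gcm"
    "chacha20",
]
'''

def find_implicit_concat(source):
    """Toy checker: flag string constants inside list literals that span
    more than one source line. In a hand-written list that almost always
    means a forgotten comma, not intentional concatenation."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.List):
            continue
        for elt in node.elts:
            if (isinstance(elt, ast.Constant)
                    and isinstance(elt.value, str)
                    and elt.end_lineno > elt.lineno):
                hits.append((elt.lineno, elt.value))
    return hits

print(find_implicit_concat(SRC))  # flags the merged "aes-256-gcmchacha20"
```

Running a check like this on every commit is exactly the pattern described here: catch the stupid mistake before a human reviewer ever sees it.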
Some of them were performance issues, memory issues, and so on, and it's work we spend a lot of time on. Fortunately, Rust eliminates a whole class of these problems, like uninitialized variables and threading issues. As I said before, we have a lot of languages in our product, so we now have linters, or static analyzers, whatever you want to call them, for JavaScript, for Python, for Java, for bash, and even for typos, and we introduced them as part of the review process at Mozilla. Here is a custom ESLint rule that recommends using one class or object instead of another. We do this for every commit: C++, JavaScript, Python (not yet Java), bash, and typos. It takes about 12 minutes to run everything, and we run it on every commit.

Now I'm going to talk about something most of you are already familiar with: crash analysis. When you hit a crash in Firefox, you get a dialog; in the past the crash took down your whole browser, now it only takes down the tab. If you click on "send", please do: it's very helpful for us. We have a lot of automation that uses that data to identify what developers should work on first. The report is sent to our crash platform, where we do some voodoo magic: basically, crashes are clustered by signature so we know which reports are the same underlying issue. On top of that, for the last two years we have been working on a tool called Clouseau. The French people in the room know that Clouseau is a detective who tries to find clues for his investigations. What we did sounds like a stupid idea, and as far as I know not many big projects do it: we look at every new crash identified in Socorro, our crash platform, and we extract the backtrace; we have that from our tooling.
And then we look at the recent VCS history, the Mercurial log: for every patch that just landed, which files and which lines it touched. We match the two, and we can see that this patch, which landed yesterday, touches exactly the file mentioned in the new crash. With that, we have been able to report more than 200 bugs. I don't know if you realize it, but that is huge, because usually this kind of bug only gets fixed on beta or release, once it has already impacted a lot of users; now we can fix them within 24 hours. If you have a big project with crash analysis, I recommend you spend time doing this. And everything is open source, so you can reuse it; please contribute.

Same for the next thing; as far as I know, it is the first time anyone has done it at the scale of Firefox: code coverage. I know it sounds pretty easy, but at the scale of Firefox, it's huge. If you wanted JavaScript code coverage, it used to be pretty hard; you had to use some unmaintained Java application, and so on. So we introduced support for it in SpiderMonkey about two years ago, and we announced it a few weeks ago. We also had to add code coverage support to the Rust compiler, and we had to patch GCC, LLVM, Clang, and compiler-rt. And we had to develop a new tool to replace lcov, which was a Perl script that took 24 hours to run; since we want to do this every day, that obviously doesn't scale. So a guy on my team, who is in the room, rewrote it in Rust, and it takes five minutes. Once again, it's open source and not specific to Firefox, so you can reuse it. In terms of results, we found that our C++ code is at about 55% coverage, which means 45% of the C++ code in Firefox is not automatically tested.
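Going back to the crash triage for a moment: the heart of the Clouseau idea, intersecting the files in a new crash's backtrace with the files touched by recent pushes, is simple to sketch. This is a toy version with made-up revisions and paths, not the real open source implementation:

```python
def suspect_patches(crash_files, recent_commits):
    """Rank recent commits by how many files they share with the set of
    source files extracted from a new crash signature's backtrace."""
    scored = []
    for commit in recent_commits:
        overlap = crash_files & set(commit["files"])
        if overlap:
            scored.append((len(overlap), commit["rev"], sorted(overlap)))
    # Most-overlapping commit first: the best suspect to look at.
    return sorted(scored, reverse=True)

# Hypothetical inputs: files seen in a crash backtrace, and the files
# touched by the last few pushes (e.g. extracted from the Mercurial log).
crash_files = {"dom/media/MediaDecoder.cpp", "xpcom/threads/nsThread.cpp"}
recent_commits = [
    {"rev": "a1b2c3", "files": ["layout/style/Rule.cpp"]},
    {"rev": "d4e5f6", "files": ["dom/media/MediaDecoder.cpp",
                                "dom/media/MediaDecoder.h"]},
]
print(suspect_patches(crash_files, recent_commits))
# -> [(1, 'd4e5f6', ['dom/media/MediaDecoder.cpp'])]
```

The real pipeline works at line granularity too, but even file-level overlap is enough to turn "a new crash appeared" into "this patch from yesterday is the likely cause".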
Now, we do have third-party code that we compile and ship but never exercise, like some old codecs or libraries, but it's still a pretty bad percentage. In JavaScript, the percentage is much higher. For comparison, LLVM, Clang, and LLDB are at about 80% code coverage, but it's way easier to test a compiler than a web browser. As a side effect, we also built a tool that identifies files with zero code coverage. Either way, that's a bug: either it's dead code, or it has no tests, and you should do something about it. We started this not long ago and have already removed 65 files that turned out to be useless, but as you can see on the right-hand side, in the graphics library for example, we still have a lot of untested files. Many of them are third-party code that we embed without using, so we should at least investigate removing them from the build. Everything I've described is also about limiting legacy code and technical debt: every small step improves the quality of the product in the end. It might seem trivial, but if you consider that Firefox has been alive for over 20 years, counting Netscape before it, and we hope it will live as long again in the future, you have to limit the introduction of dead code, stupid bugs, and programming mistakes.

I'll mention fuzzing quickly; I'm not a specialist, and we have a team dedicated to it. Fuzzing, for those who are not familiar with it, basically means throwing garbage inputs at the compiler, at the JavaScript engine, or at some API, and finding bugs that way. Over the last two years we discovered 600 security issues thanks to it. Again, if you are a software developer and you are not using fuzzing where you could, you should look into it.
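Since fuzzing came up, here is how simple the core loop really is. This is a minimal sketch with a toy target containing a planted bug (`parse_header` and `fuzz` are my own hypothetical names, nothing Firefox-specific): throw random bytes at a parser and record every unexpected exception.

```python
import random

def parse_header(data: bytes) -> int:
    """Toy parser with a planted bug: it trusts a length field blindly,
    standing in for the parsing code that fuzzers exercise in a browser."""
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    payload = data[1:]
    # Bug: no check that the declared length is non-zero before dividing.
    return sum(payload[:length]) // length

def fuzz(target, runs=10_000, seed=1234):
    """Minimal black-box fuzzer: generate random byte strings, call the
    target, and keep any input that triggers an unexpected exception."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except ValueError:
            pass                      # well-behaved rejection, not a bug
        except Exception as exc:      # anything else is a finding
            crashes.append((data, type(exc).__name__))
    return crashes

found = fuzz(parse_header)
print(len(found), "crashing inputs found")
```

Real fuzzers, like the coverage-guided ones Mozilla's team runs, are far smarter, but the contract is the same: anything other than a clean result or a clean rejection is a bug worth filing.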
We also have some other good practices. We compile Clang trunk every day and use it to build Firefox, because new warnings and new checks land in the compiler on a daily basis, and it finds real issues. Sometimes it's bikeshedding, but in many cases it has value. We report bugs in the compilers too; here are the six or seven bugs we reported against GCC 8, for example. So we are also helping the community around other compilers; it's a win-win situation.

And we have automation: a crazy CI. If you are a Mozilla developer working on Firefox, you are very familiar with this web interface. Every letter is a test suite or a subset of a test suite, and every row is basically a platform: optimized Android, non-optimized Android, Linux optimized, Linux PGO, Linux debug, and the same for Windows on different architectures, and so on. This is what explains the 1,500 hours; we do that all day long. You can also see, it's hard to read here, that some results are orange and some are red. Orange means an intermittent failure: we have so much multi-threading that some bugs occur only once every 1,000 runs, and we have people whose job is to flag them. Red is usually a real error, but not always; sometimes it's an intermittent and we don't know. That is the scale of the Mozilla CI. We run it on every commit, and as developers we can also trigger just a subset, say, only this test suite on Windows 32-bit optimized. We use that capability a lot, because we know all of this costs the company a lot of money.

Despite everything I've shown you, when I made this slide I thought, okay, we are still shipping with plenty of issues. But someone on my team did some stats and realized that our biggest source of issues, the red part here, is antivirus software.
So, "antivirus" here can also mean malware or other security software; we fight this all the time because we don't control it. Same for the blue part at the bottom: hardware vendor drivers, meaning graphics drivers. That is really our biggest problem. We have a huge test suite, a lot of people testing the software, a lot of tooling, but what really costs us time and money is third-party applications that try to improve our users' lives and actually don't: they crash Firefox, or slow it down, or show a blank page, like the one I showed earlier. We face this constantly, and it's the same for Chrome; Google announced a similar initiative a few months ago, and they are fighting it too. If your Firefox or your Chrome is crashing, usually on Windows, it's probably not our fault, and probably not Google's fault; it's probably your crappy antivirus. Security software does some really nasty things: it hooks into our DLLs to gather information about what you are doing in your browser, and when we change the binary layout, the security software no longer knows where to hook and crashes Firefox. From the user's perspective, it looks like: they shipped a new release, they are stupid, they broke my Firefox. It isn't our fault, but from the user's perspective, it is.

And, as I said, it's impossible to test everything in the web ecosystem, so sometimes we ship web regressions. One of them involved Stylo, the CSS rendering engine written in Rust: we broke some websites. I promise we didn't do it on purpose, and we are trying to fix it; we already have a patch. That is really the complexity of shipping a web browser. I know that was fast, but I am done. I'd like to thank the people who did the stats for me; I bothered a lot of people.
Three of them are in the room. I'd also like to use this audience to mention that we have two internship positions on the team: one to work on static analyzers, the other on code coverage. So, join us. And if you want to contribute, feel free to reach out to me.