Hello. Yeah, that's okay. He has been here since 2005. In 2011, he joined the documentation team as a tech writer. He then moved to another team and has now found his calling as the new event manager. He's going to talk about Quantum and how we made it fast. Thank you.

Thank you. Do you hear me? Yes. Okay. Today, I'm here to speak about something I'm really excited about. It was a big achievement that we made last year at Mozilla, and we are quite proud of it. It's about Firefox Quantum. So as Alex said, I'm Jean-Yves Perrier, part of Developer Outreach. I have a Twitter account too.

The story begins by looking at processors. For about 40 years, processors were doubling their speed every two or three years. That was Moore's law. Lately, high-end processors still seem to continue more or less at this same pace, but more and more, the processors that you actually own don't double their performance every three years anymore, because you want cheaper processors, or because your processor is in your phone and you don't want a high-end processor that will drain your battery in five minutes. So performance on computers is no longer driven only by having a faster CPU, but by having more cores and more concurrency.

At the same time, what we are doing with browsers has changed. If we go back to the early 2000s, what we wanted for our website was a few images, text with a nice color, maybe a flame logo somewhere, but that's more or less all we wanted. Today, that's not the case. What we want today is 60-frames-per-second video in high resolution, and we want this to run in a virtual reality environment, which means rendering for both of your eyes, with your phone in a cardboard headset. That's not the same thing to ask from a browser, and at the same time, the CPU has not simply given us the power in one go to increase what the browser is doing.
These are very old pieces of software. Firefox comes from Netscape, and the Mozilla code base was opened 20 years ago. If we look at the robot that we used for the launch of Firefox 3, it's a friendly robot, but I'm not sure, and we were not sure, that this robot is suitable for the 21st century and for virtual reality. It's a nice, friendly one, but is it capable of doing everything we want? So we decided a few years ago to think about it again and to design something entirely new.

First, I would like to come back a little bit to how a browser works. We have the rendering engine, and the rendering engine is a very complex piece of software. The browser looks simple from the outside, but inside it is something very complex. We start by having to download everything from the network — HTML, CSS, JavaScript — with all the problems that the network can have, latency and so on. That means caching and dealing with things like this. Once we have the files, in totality or partially, we have to read and understand them, so we have a parser that creates an internal tree holding all the nodes and how they should be displayed. This is the DOM tree. From this, we have to apply the CSS. For the CSS, we have to know the structure of the DOM, but also the cascade, to define which properties will be applied to each node of the tree — which is what the style engine does. Then we also download images and other external content. With this, plus what comes out of the style engine, we can put the boxes on the screen, or define where they should go on the screen, because then we have the height, width, and so on of the different elements. But that's not all: from this, we then have to paint — apply the right filters and the right colors to the different elements.
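The pipeline stages just described — parse into a DOM tree, compute styles, lay out boxes, then paint — can be sketched as a toy program. This is a deliberately minimal illustration: all the names (`parse`, `style`, `layout`, `paint`) and the numbers are mine, not Firefox internals.

```rust
// Toy sketch of the rendering pipeline: parse -> DOM -> style -> layout -> paint.

#[derive(Debug)]
struct Node { tag: String, children: Vec<Node> }

#[derive(Debug, Clone, Copy)]
struct Style { font_px: u32 }

#[derive(Debug)]
struct LayoutBox { tag: String, height: u32 }

// Parser: turn a flat list of tags into a one-level DOM tree.
fn parse(tags: &[&str]) -> Node {
    Node {
        tag: "body".into(),
        children: tags.iter()
            .map(|t| Node { tag: t.to_string(), children: vec![] })
            .collect(),
    }
}

// Style engine: decide which style applies to each node (the cascade,
// reduced here to a tag match).
fn style(node: &Node) -> Style {
    match node.tag.as_str() { "h1" => Style { font_px: 32 }, _ => Style { font_px: 16 } }
}

// Layout: compute a box (here just a height) from the computed style.
fn layout(tree: &Node) -> Vec<LayoutBox> {
    tree.children.iter()
        .map(|n| LayoutBox { tag: n.tag.clone(), height: style(n).font_px + 4 })
        .collect()
}

// Paint: produce draw commands in document order.
fn paint(boxes: &[LayoutBox]) -> Vec<String> {
    boxes.iter().map(|b| format!("fill {} ({}px)", b.tag, b.height)).collect()
}

fn main() {
    let dom = parse(&["h1", "p"]);
    for command in paint(&layout(&dom)) {
        println!("{}", command);
    }
}
```

Each stage only consumes the previous stage's output, which is what makes the real pipeline so sensitive to anything upstream being invalidated.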
And finally, we have to take all these elements that are all over the place, hide what is behind, and only display what is in front. And then we display the page. And this loop here, we have to do 60 times per second, because each time you use JavaScript, it can modify the DOM, and we have to recalculate everything. So that's a key part of what the browser and the rendering engine are doing. And doing this 60 times per second in such an environment is a hard problem.

So several years ago, Mozilla decided that to tackle this problem, and especially to tackle it with a large set of engineers, we needed a test browser where we could experiment with new algorithms. So we created Servo, an experimental browser, and there are several things to note about it. First, it's written in Rust. Rust is a new language that has been designed to have fewer problems than C++, especially when you have a large set of developers working on it — volunteers, non-volunteers, and so on. So that means we needed something robust. It's also designed to test everything around massive parallelism. As we saw, we have more cores, but we don't have more CPU power per core, so parallelism is, in our opinion, key for the future of the web. Also, it's a rendering engine, so we didn't put a significant UI on it. It's not a replacement for Firefox. And the last and most important thing: it's a test engine, so we can break the web. And this is important, because we cannot test everything in all its details — it would take too long to test an algorithm if it had to work in all cases. But by breaking the web, we can validate whether an algorithm is valuable enough to go further or not.

At the beginning of last year, we decided it was time to bring a lot of the things we had learned over the last five or six years with this project and put them into Firefox, and this was Project Quantum. We wanted to solve a stability problem — Firefox was crashing too much.
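The 60-times-per-second loop mentioned above can be sketched roughly like this: each tick, script may dirty the DOM, the pipeline reruns, and everything must fit in one frame, about 16 milliseconds at 60 fps. A hypothetical simplification, not browser code; all the names are mine.

```rust
// Minimal sketch of the refresh loop: if script dirtied the DOM this tick,
// the whole restyle/layout/paint pipeline reruns within the frame budget.
use std::time::{Duration, Instant};

const FRAME_BUDGET: Duration = Duration::from_millis(16); // ~1/60th of a second

struct Document { dirty: bool, frames_rendered: u32 }

impl Document {
    // Script can mutate the DOM at any time, invalidating the last frame.
    fn run_script(&mut self) { self.dirty = true; }
    // Stand-in for the full restyle -> layout -> paint pass.
    fn restyle_layout_paint(&mut self) { self.dirty = false; self.frames_rendered += 1; }
}

// One frame: run script, recompute if needed, report whether we hit the deadline.
fn tick(doc: &mut Document) -> bool {
    let start = Instant::now();
    doc.run_script();
    if doc.dirty {
        doc.restyle_layout_paint();
    }
    start.elapsed() <= FRAME_BUDGET
}

fn main() {
    let mut doc = Document { dirty: false, frames_rendered: 0 };
    let on_time = (0..60).filter(|_| tick(&mut doc)).count();
    println!("{} of 60 frames within budget, {} rendered", on_time, doc.frames_rendered);
}
```

The point of the sketch is the deadline: any work that cannot be bounded inside `tick` has to move off this loop entirely, which is the theme of the rest of the talk.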
We wanted a new shiny theme, and we wanted it to be extremely responsive. Project Quantum was divided into several projects, and the first of them was the compositor. The compositor is the last stage, where we put all the layers together. This is something that other programs also do, especially operating systems and games, and in fact GPUs are optimized for this kind of operation. So we decided to offload the task of compositing the page to the GPU, and we did this in 2016 already. And in fact, we noticed that a good deal of the crashes we had on Windows were caused by this, because there are bugs in graphics drivers, and that was making the browser crash. It was especially important on Windows and Linux, and not on Mac, because Mac has only a few graphics cards and better drivers. So the idea was to isolate the compositor in its own process. The compositor process may crash, but not the whole browser. It's not perfect, but it's better — and then, of course, to blacklist bad drivers.

The second thing we did was to import from Servo the new CSS style engine, which is called Stylo. A style engine basically takes a file and, for each of the declarations in the file, sorts them by specificity and calculates which one goes to which box. This is something that you have to do for every node, so theoretically it lends itself to being parallelized: one processor per node would be the perfect case. And we can do this. Servo ran several experiments with Stylo, and it's not that easy, because you have to be sure that each of the threads you are using actually has the same load. So a thread has to steal tasks from the others when its own queue is empty, and so on — because of course you have maybe three, four, five cores, and not as many as there are nodes on a page, which can number in the thousands.
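The load-balancing idea can be sketched as a few worker threads pulling node after node from a shared queue, so no thread sits idle while another is overloaded. This is a hedged toy version: Stylo actually uses a Rayon-based work-stealing tree traversal, and `compute_style` here is a placeholder, not the real cascade.

```rust
// Toy sketch: style work for many nodes shared across a few worker threads.
// A thread that runs out of work simply takes the next node from the common
// queue, which keeps all cores busy regardless of how uneven the work is.
use std::sync::{Arc, Mutex};
use std::thread;

// Placeholder for the cascade: returns (node id, computed font size).
fn compute_style(node_id: usize) -> (usize, u32) {
    (node_id, 16 + (node_id % 3) as u32)
}

fn parallel_style(node_count: usize, workers: usize) -> Vec<(usize, u32)> {
    let queue: Arc<Mutex<Vec<usize>>> = Arc::new(Mutex::new((0..node_count).collect()));
    let results = Arc::new(Mutex::new(Vec::new()));

    let handles: Vec<_> = (0..workers).map(|_| {
        let (q, r) = (Arc::clone(&queue), Arc::clone(&results));
        thread::spawn(move || loop {
            // Grab the next pending node; stop when the queue is drained.
            let node = match q.lock().unwrap().pop() { Some(n) => n, None => break };
            let styled = compute_style(node);
            r.lock().unwrap().push(styled);
        })
    }).collect();

    for h in handles { h.join().unwrap(); }
    let mut out = Arc::try_unwrap(results).unwrap().into_inner().unwrap();
    out.sort(); // threads finish in arbitrary order
    out
}

fn main() {
    let styles = parallel_style(1000, 4);
    println!("styled {} nodes on 4 workers", styles.len());
}
```

A single contended mutex like this would be a bottleneck at real scale, which is exactly why production engines use per-thread deques with stealing instead — but the scheduling idea is the same.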
I want to thank my colleague Lin Clark here, who made these amazing drawings — I really suck at drawing. She also wrote several blog posts on hacks.mozilla.org explaining all this. These are really, really good posts if you want to learn more about it.

Another thing that we added with Stylo is a style sharing cache, which we took from Chromium and WebKit and changed a little bit, because it was a bit old, in that it wasn't taking enough care about pseudo-classes, so it wasn't as efficient anymore. We changed it, made it more efficient, and were able to continue using it. We kept some things from Firefox, like the rule tree, and we put all this together and got a brand new style engine that is much more efficient.

The second set of problems revolves around the browser blocking, doing nothing, hanging — on Mac, it's the beach ball of death. You don't want the browser to freeze. And most freezes in the browser come from having too many things on the main thread. The main thread is what has to run 60 times per second. But of course, when you run this loop with JavaScript, you are not sure that the JavaScript will be short enough to finish within a frame — 60 times per second means only a few milliseconds. So you're not sure that this will work. So we have to take everything off the main thread that we can take off. The first thing, which we already spoke about, is the compositor: the compositor can be taken out. Video decoding is no longer done on the main thread. Plugins are gone, but it was the same idea. We have workers, and so on. So the idea here is how to optimize this to run in a few milliseconds maximum. The project that took care of this was called Quantum Flow. And the key point here is that we decided to approach performance in Firefox as a system. Instead of optimizing Stylo, the style engine, and the different parts separately, we have to consider the whole system so that there is no bottleneck.
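Coming back to the style sharing cache mentioned above: the idea is that siblings matching the same rules can reuse one computed style instead of rerunning the cascade, but nodes in pseudo-class states must not share. A minimal sketch under my own invented names, not the actual Firefox data structures:

```rust
// Toy style sharing cache: nodes with the same (tag, classes) reuse one
// computed style via reference counting, unless a pseudo-class state
// (:hover, :focus, ...) makes the node unshareable.
use std::collections::HashMap;
use std::rc::Rc;

#[derive(Clone, PartialEq, Eq, Hash)]
struct StyleKey { tag: String, classes: String, has_pseudo_state: bool }

struct ComputedStyle { font_px: u32 }

struct SharingCache { map: HashMap<StyleKey, Rc<ComputedStyle>>, hits: u32 }

impl SharingCache {
    fn new() -> Self { SharingCache { map: HashMap::new(), hits: 0 } }

    fn style_for(&mut self, key: StyleKey) -> Rc<ComputedStyle> {
        // Pseudo-class states are the subtlety: those nodes never share.
        if !key.has_pseudo_state {
            if let Some(shared) = self.map.get(&key) {
                self.hits += 1;
                return Rc::clone(shared); // reuse, skipping the cascade
            }
        }
        // Cache miss: run the (placeholder) cascade and remember the result.
        let computed = Rc::new(ComputedStyle {
            font_px: if key.tag == "h1" { 32 } else { 16 },
        });
        if !key.has_pseudo_state {
            self.map.insert(key, Rc::clone(&computed));
        }
        computed
    }
}

fn main() {
    let mut cache = SharingCache::new();
    for _ in 0..10 {
        cache.style_for(StyleKey {
            tag: "li".into(), classes: "item".into(), has_pseudo_state: false,
        });
    }
    println!("cache hits: {}", cache.hits); // 9 of 10 lookups shared
}
```

On real pages with long lists of near-identical elements, this kind of sharing removes most of the cascade work outright.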
We measured a lot of things, and after we made a change, we measured again to be sure that it had the impact we wanted. Ehsan's blog posts really go through all the details. There are 28 of them, covering 40 weeks more or less, explaining everything that has been done. And he coined the expression "death by a million cuts". We had bugs where we had to fix five or ten other bugs to gain the time we wanted. Each of them individually is not really noticeable, but all together, because it's a system, they had a big impact.

A few examples of what we fixed. We wanted better scrolling. When you scroll, especially on a phone, you don't want the scrolling to suddenly stop for two seconds while we load the rest of the page and then continue — that's a bad experience. What Apple did several years ago is, when there is nothing to display anymore, to show a checkerboard, so that scrolling slows down when the checkerboard appears instead of stopping completely. It's a much better user experience. We applied this on Android, but also on the desktop, and to all kinds of scrolling. So it works for the touch interface, but also when you scroll with the keyboard or the mouse. There is a limitation: it doesn't work for horizontal scrolling, but that is still pretty rare nowadays.

Another big thing is synchronous IPC. We have several threads and several processes, and communication between the processes is sometimes synchronous and sometimes asynchronous. When it is synchronous, the main thread is just waiting for the answer, not doing anything. So we have to fight this. We had the same problem in the past when accessing the hard drive — it's very slow, so we have not been touching the disk from the main thread for a long time now — but we had to do the same with inter-process communication.
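The cost of a synchronous call can be made concrete with a small sketch. Here threads and channels stand in for real processes, and the names (`slow_service`, the 50 ms delay, the cookie message) are all illustrative: with a blocking call the main thread renders nothing while it waits, while with an asynchronous message it keeps rendering frames and picks up the reply later.

```rust
// Sync vs async IPC, with threads and channels standing in for processes.
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// The "other process": takes 50 ms to answer each request.
fn slow_service(req: mpsc::Receiver<&'static str>, resp: mpsc::Sender<String>) {
    for msg in req {
        thread::sleep(Duration::from_millis(50));
        resp.send(format!("done: {}", msg)).unwrap();
    }
}

// Synchronous: send, then block until the answer arrives. No frames rendered.
fn sync_call() -> (String, u32) {
    let (req_tx, req_rx) = mpsc::channel();
    let (resp_tx, resp_rx) = mpsc::channel();
    thread::spawn(move || slow_service(req_rx, resp_tx));
    req_tx.send("write cookie").unwrap();
    let answer = resp_rx.recv().unwrap(); // main thread stalls here
    (answer, 0)
}

// Asynchronous: send, keep rendering, poll for the reply once per frame.
fn async_call() -> (String, u32) {
    let (req_tx, req_rx) = mpsc::channel();
    let (resp_tx, resp_rx) = mpsc::channel();
    thread::spawn(move || slow_service(req_rx, resp_tx));
    req_tx.send("write cookie").unwrap();
    let mut frames = 0;
    loop {
        match resp_rx.try_recv() {
            Ok(answer) => return (answer, frames),
            Err(_) => {
                frames += 1; // render a frame instead of waiting
                thread::sleep(Duration::from_millis(16));
            }
        }
    }
}

fn main() {
    let (a, f) = sync_call();
    println!("sync:  {a} — {f} frames rendered while waiting");
    let (a, f) = async_call();
    println!("async: {a} — {f} frames rendered while waiting");
}
```

The result is the same answer either way; the difference is purely how many frames the main thread got to render in the meantime.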
And for weeks and weeks, every week we looked at the worst offenders in this synchronous communication, and little by little, they went down. One of the big offenders was cookie writing. So we completely rewrote the way we write cookies to disk so that it's asynchronous. It took two engineers several months, I think five or six, until it was rewritten. And it removed a good chunk of the problem. Suddenly, especially on big sites that make heavy use of cookies — like Facebook, which writes cookies tens of times each second, I think, because they log a lot of things with cookies — this made a big difference.

We also changed some algorithms. Not necessarily the complexity of the algorithm, but the locality: the same algorithm written differently hits the cache more often and is more efficient on today's web pages. And we worked a lot on garbage collection. Garbage collection was halting the browser, especially during videos. So we made it generational and also more incremental. That means that only the most recent things are checked, and we can stop it and resume it later — there is always a budget. This makes a lot of video playback much smoother. We removed most of the timers, because they fire at bad moments, and replaced them with a callback that runs when the browser is idle. We redesigned the UI — I will not go into detail here because we have a talk about this later.

And finally, we got a new browser. Still a robot — we like robots — but it looks a little bit different nowadays. And we didn't stop there. We launched Firefox Quantum in November, and last week we launched 58, the next version, and we have already done more. We have done off-main-thread painting. We have a big project to make painting extremely efficient in the future, but that project will not work all the time.
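The budgeted, incremental garbage collection mentioned above can be sketched as follows: instead of marking the whole heap in one long pause, the collector does small slices, each stopping when its budget is spent and resuming later. A toy model with made-up numbers, not SpiderMonkey's actual GC:

```rust
// Toy incremental GC: mark the heap in bounded slices so each pause is
// short enough to fit between frames, keeping video playback smooth.
struct IncrementalGc { unmarked: usize, budget_per_slice: usize, pauses: u32 }

impl IncrementalGc {
    fn new(objects: usize, budget_per_slice: usize) -> Self {
        IncrementalGc { unmarked: objects, budget_per_slice, pauses: 0 }
    }

    // One GC slice: mark at most `budget_per_slice` objects, then yield
    // back to the page. Returns true when the whole heap is marked.
    fn slice(&mut self) -> bool {
        let work = self.unmarked.min(self.budget_per_slice);
        self.unmarked -= work;
        self.pauses += 1;
        self.unmarked == 0
    }

    // A full collection is just a sequence of short slices.
    fn collect(&mut self) -> u32 {
        while !self.slice() {}
        self.pauses
    }
}

fn main() {
    let mut gc = IncrementalGc::new(10_000, 512);
    let pauses = gc.collect();
    // Twenty short pauses instead of one long freeze.
    println!("collected heap in {} short slices", pauses);
}
```

The generational part of the real GC adds a further shortcut on top of this: most slices only need to look at recently allocated objects at all.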
So we are already removing some parts of the painting activity from the main thread now, so that even if you cannot use the GPU for it in the future, things will already be better. We also now throttle background tabs. It's a delicate activity, because you cannot throttle or pause every background tab: if you are listening to music in a background tab, you want the music to keep playing. So here we are mostly defensive — we experiment, and we try. There is a page on MDN that explains what we are throttling, and there is more to come. So we have a long tail of improvements still to make.

The Quantum Flow project is not finished; the work on synchronous inter-process communication will continue. We have an exciting project coming out, hopefully later this year, which is Quantum Render — WebRender — where the painting will be done by the GPU. This means rewriting another big part of Firefox in Rust and swapping it in, as we did with Stylo, and the prototypes are really promising. I have also seen people speaking about a dedicated process for web extensions, so that web extensions cannot have a bad impact on the browser, and about stricter JavaScript budgets, so that each tab gets two or three milliseconds for its JavaScript before we stop it and move to the next one.

So Firefox Quantum is not the end. It's not the end of a project; it's in fact the beginning of a new era where we can build on the browser again to get more features and more performance. And measuring performance is now part of our daily life. Thank you. You can help us: install Firefox Nightly and report bugs. Spread the word about using Firefox. If you are building websites, please test in Firefox — that's how we can keep our users. And also, stay informed: follow Firefox Nightly on Twitter, for example. It tweets about every big landing a few weeks ahead. Thank you.