So I work for Mozilla Research on this project called Servo. Now, this talk is going to be one of those dangerous live demos, because this is Servo running. The slides are in HTML and use just enough features that Servo supports. It supports key events, for instance, so I can change slides.

The goal of the Servo project is to create a new browser engine that achieves a next-generation leap in performance and robustness, and I'll talk a little bit about what that means. First of all, our high-level plan here is to figure out what is even possible. Just to give a little spoiler: we don't know yet. We have achieved some things, and we think there's a lot more that's achievable, but we don't know quite where the limits are yet. And, while this isn't strange for Mozilla Research, it maybe is for other research organizations: our end goal here is to build a product-ready engine. We're not building a prototype that will get thrown away so the knowledge from it can be merged into Gecko or something; we're trying to build a real engine. Because it turns out, if you want to find out whether you can make a fast parallel browser, it's not good enough if it's not faster than the things that already exist. There is a bar we must clear even to walk in the door. If it's a little slower but we can show it has parallel speedups, it's not interesting.

So why would we go through all this trouble to build a new engine? Well, there are a lot of reasons. One is that all the current engines are really, really big. I made this slide the first time I talked about Servo, and this number was 6 million lines of code. That was a year and a half ago, so we've added 4 million lines to Firefox in a year and a half. And I realized when I was going through this talk that I only included the C and C++ code here, and a lot of Firefox is written in JavaScript, so there's a lot of missing stuff not being accounted for. But the point is that you can see this both ways. It's a reason not to write a new engine, because look how much work we have to do. But it also means that if you want to refactor or experiment in this code base, it's really hard. It's just too big.

The other thing is that browsers, like all technology, suffer from a path dependence problem. Path dependence just means that the history matters. These things were designed 20 years ago; I think the first Mosaic browser was in 1993. They were built for computers that had single processors, and clock speeds were still rapidly increasing. I'm sure you've heard all this before. Importantly, we were in a utopian web back then. For one, there was not really that much JavaScript. Documents were documents, and you were mostly browsing around. You didn't have to worry; only Microsoft Word users had to worry about other people's documents hurting them. But now we all have to worry about this. We're under constant attack and surveillance and all these things, and browsers were created in that more utopian era. They've also accumulated lots of things as they've grown up. For instance, there have been lots of proposed web standards that got implemented in browsers and then later removed, and we don't think about these things anymore. Some easy examples are things like marquee, and that one doesn't even have this problem.
But lots of that stuff made browser developers make architectural decisions way back then that are no longer valid, and a lot of that code is still there. Some of this is things like Firefox still having support for Windows 95 or Windows 98 until it finally gets deprecated in five more years or whatever, but there's a lot of stuff in Firefox that exists because of that history. We don't want to have to deal with any of that.

Another reason is that C++ makes us sad. All the browsers are written in C++, and this is mostly because there isn't anything better; that is somewhat debatable. But the problem is that C++ is wildly unsafe. I mean, it is so unsafe. Most of the browser security bugs that you have ever heard of are directly related to how C++ is unsafe. Just for some context, this is David Baron up in this picture. He's one of Mozilla's, I think, three distinguished engineers, so he's one of the top engineers in the entire company. And he put this sign up next to his desk, which hopefully you can read, but if you can't, it says: you must be this tall to write parallel code. So even our best engineers don't think they're good enough to write parallel code safely in this model.

So we'd like to rewrite a browser in a safe language, but no such language really existed, so we decided to create one. Not only are we making a new web browser, 10 million lines of code, but we're also going to take on C and C++ directly as well. Rust, which hopefully you've heard of (there was a talk here last year about it by Niko), is designed to be both fast and safe. C++ gets fast right, and Java sort of gets some of safe right, but we need them both. We need memory safety, which means we need to be able to do concurrency with no data races. We need automatic memory management, but we can't have garbage collection, because we need control over when things get allocated and freed. But we don't want to be responsible ourselves for freeing memory, because we'll forget, and every time we do a use-after-free, that's a security vulnerability. There are lots of other modern features in Rust; I encourage you to take a look at it. It has traits, and you get all the nice things from functional languages like pattern matching and lambdas and all these beautiful, beautiful things. We also have algebraic data types and things like that. These are all great things to write a browser in.

So we wanted performance and robustness, and I'll talk about performance first because that's probably the thing that gets people the most excited. What does that mean? There are a lot of ways to evaluate how fast a browser is: page load time, which is how long it takes from when you type something in the URL bar until actual pixels get painted on your screen; JavaScript execution speed, which is what everyone seems obsessed with these days; responsiveness, which is how quickly it responds to user interactions; and power usage, which is the drain on your battery. The first two are the things that people have traditionally competed on. At first everyone wanted pages to load faster; now we want to be able to play triple-A games inside our browsers running in JavaScript, and people have been working on this for quite a while, and there's still some work that can be done there. Servo in particular is not rewriting the JavaScript engine, because that seems like it would be even more insane.
So we're not really working on that problem. We're really concentrating on the bottom two, responsiveness and power usage. I'll talk about power usage a little later, but just to give you some idea about responsiveness: as I said, a lot of Firefox is written in JavaScript, and it has one JavaScript runtime. That means your browser chrome and every tab you have share the same JavaScript engine, which means that if you want to scroll your page, but there's some other JavaScript taking a lot of CPU, even in a different tab, your page is not going to scroll until that JavaScript returns control. So there's a lot of work to do here. And even in Chrome, where you have tab separation and the problem isn't quite this bad, there are still plenty of cases: for instance, if you have two iframes in a page, and since most ads are served in iframes, any rogue JavaScript in those iframes will make the JavaScript in the page slow and cause all kinds of problems. So these are some of the things we care about.

Unfortunately, if we want to make things fast, there's this guy named Amdahl who gave us a bunch of bad news a long time ago. He gave us a little rule that says: if you want to make things fast by putting in multiple CPUs, there is a limit on how fast you can go, based on how much you can't parallelize. The corollary is that if you actually want to make a browser fast, there's so much stuff in there that you basically have to parallelize everything, or nothing will be fast. So not only do we try to do that, we try to parallelize at every level of the stack.

Some browsers have started doing this already. For instance, Chrome has tabs in multiple processes, so you can run different pieces in different processes and parallelize that way. You even have things like the compositor being out of process, which I think some versions of Firefox now do, and plugins have been out of process for a little while. The next thing you can do is take the major subsystems of the browser and run them on their own threads. I think all the browsers do some of this already, but mostly for media decode: decoding videos and decoding images are done on their own threads. But you can imagine extending this to running JavaScript on its own thread, doing layout on its own thread, doing all of the painting on its own thread. There's also algorithmic parallelism, where you take one of these subsystems and come up with a parallel version of the algorithm, and we're going to talk a lot about this in a minute, because Servo does layout on every node in the DOM basically at the same time, instead of running the whole DOM on one thread. You can also do things like painting and rendering on the GPU, which are also massively parallel; the other browsers are starting to look into this, but it's not quite working yet. And then there's the very lowest level, data parallelism, where you take the same operation and apply it to lots of copies of the data at the same time and get big efficiencies that way. This has always been the case for things like media decode, but we're trying to use it in other places as well. So we're going to talk a lot about this algorithmic parallelism, specifically in layout.
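As an aside, Amdahl's rule can be written down precisely. This is just the standard statement of Amdahl's law, where $p$ is the fraction of the work that can be parallelized and $n$ is the number of processors:

$$\mathrm{speedup}(n) = \frac{1}{(1 - p) + \frac{p}{n}}, \qquad \lim_{n \to \infty} \mathrm{speedup}(n) = \frac{1}{1 - p}$$

So even with infinitely many cores, if only 80% of the work can be parallelized, you never get more than a 5x speedup, which is why "parallelize everything or nothing will be fast" is the only way a parallel browser can win.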
And one of the reasons we think there's a lot of work we can do here, and a lot of benefit we can bring: I think the page on the left is Pinterest and the one on the right is a partial screen capture of TechCrunch, but all the news sites you're probably familiar with have similar structure. These pages have a lot of structure, and all of these little blocks of images and text contain basically the same kind of thing and don't interact with each other. So you can imagine that the browser could calculate where all of those things should be on the screen at the same time. And it can; this gives you some intuition for why this works.

Why this matters: here's a timeline view of a web page renderer, just looking at image decoding and layout operations on a four-core machine. You can see that we're using 100% of all four cores. We're doing this in parallel, and modern browsers already work like this, right? But the problem is that we don't get to see anything until the purple task is done. We can decode all the images ahead of time on those other cores, but until the purple task is done, nothing appears on the screen. If we can parallelize the purple task as well, using algorithmic parallelism, then we can get to something like this. It takes the same total amount of work, and we're still using 100% of all the CPUs, but you get to see the page something like 25% sooner. So the responsiveness is going to be much better and people are going to feel like their web page is faster: even though the complete page takes about the same amount of time to finish loading, you get to start seeing something much earlier.

We're also doing some experiments at the data parallelism level. This is a diagram stolen from the CSS specification, of the box model. If you're familiar with HTML you probably know about this: each element on the page has its own padding area, then a border, and then a margin area outside the border. Most of the time when you use this (and there's actually special shorthand syntax for it in CSS) the top and bottom values are the same, or the left and right values are the same. And whether they are or not, when you're calculating layout properties, a lot of the time you're interested in the entire width that a bunch of these boxes take up, so you're going to be adding in both the left and the right border at the same time, or the top and the bottom if you're calculating the height. It turns out you can do all of those calculations at the same time: if you need to add all these things up, you can add them simultaneously with SIMD instructions, either horizontally across the edges of one box or maybe across several boxes at once. So you can take advantage of data parallelism even in layout.
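Just to make that concrete, here's a rough sketch of the kind of computation being described. This is not Servo's code; it's plain Rust over small fixed-size arrays, which is exactly the shape of work that a SIMD unit (or an auto-vectorizing compiler) can do in one instruction, all lanes at once.

```rust
// Edge values for one box, as in the CSS box model.
#[derive(Clone, Copy)]
struct Edges {
    top: f32,
    right: f32,
    bottom: f32,
    left: f32,
}

// The extra horizontal space a box adds around its content:
// left + right of margin, border and padding. Laid out as little arrays so
// the additions are the same operation applied lane by lane, which is the
// shape a SIMD add wants (and one an auto-vectorizer will happily produce).
fn horizontal_extra(margin: Edges, border: Edges, padding: Edges) -> f32 {
    let lefts = [margin.left, border.left, padding.left];
    let rights = [margin.right, border.right, padding.right];
    let mut total = 0.0;
    for i in 0..3 {
        total += lefts[i] + rights[i];
    }
    total
}

fn main() {
    let margin = Edges { top: 8.0, right: 8.0, bottom: 8.0, left: 8.0 };
    let border = Edges { top: 1.0, right: 1.0, bottom: 1.0, left: 1.0 };
    let padding = Edges { top: 4.0, right: 4.0, bottom: 4.0, left: 4.0 };
    // Total horizontal space this box consumes beyond its content width.
    println!("extra width = {}", horizontal_extra(margin, border, padding)); // 26
}
```

The same idea extends vertically (top plus bottom for heights), or across a whole row of sibling boxes at once.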
So the way we do layout is quite a bit different, and interesting, I think, so I'm going to talk about it a bit. This is a little DOM tree; this is how HTML elements are represented inside the browser. We actually don't do layout on this tree: we build another intermediate data structure called the flow tree (I think Blink calls it the render tree and Gecko calls it the frame tree), but it looks pretty similar to this, so I'll just use this as the example. What current browsers do when they want to lay out this bit of the DOM is start at the document and call a virtual method called layout: they say document.layout() and magic happens.

And that magic consists of the document looking to see what its children are, calling layout on them, and so on down the tree. The important thing is that all of these elements, in order to find out where they are on the screen and how big they are, need to access data from other places in the tree. For example, for the P element to know how high it's going to be, it has to know how high the A tag inside it is, or how many lines of content there are. And for the P tag to know how wide it's supposed to be, it needs to know how wide the body is, because it's going to be about as wide as the body, minus its borders, margins, padding and so on. So what browsers do is, when the P tag needs to know how big the body or the A tag is, it just goes and asks: hey A tag, how big are you? It calls getWidth, or getHeight, or get-whatever on its children. That's not that big a deal when your tree looks like this, but when you start having floats and absolute positioning and z-indexes, the data access patterns the browser is doing become basically unpredictable and crazily complex. And that makes it very hard to parallelize.

If you want to lay this out in parallel, you have a problem. You have all of these different things calculating and mutating their information at the same time, and you need to make sure that the P tag doesn't update something, then something else updates it and overwrites it, and now your content is in crazy positions on the screen. And you can't do anything as naive as just putting locks around all the data you mutate, because this is already really fast in current browsers and you cannot make it slower. People will notice and no one will use your browser anymore. So most of the naive solutions don't work.

So we do something a little different. We don't know how to parallelize layout with data access patterns that are all kinds of crazy, but we do know how to parallelize simple tree traversals. To take top-down as an example: if you want to parallelize a tree traversal, you can do your operation, then do operations on all your children simultaneously, and as soon as they're done, do operations on their children simultaneously. That's pretty easy to understand. The trick is that the data access patterns have to match the traversal. For instance, when you're going bottom-up, like in the first traversal, you're not allowed to read or mutate any of your ancestors' data, and you obviously can't touch your siblings' data either, but you can do your own thing, modify your own data, and mess around with anything belonging to your children, and everything is fine. When you go top-down, you have exactly the opposite requirements: you can't look at your children and you can't look at your neighbors, but you can look at your ancestors' data all you want. So we factored these crazy data access patterns out into basically three traversals. It's slightly more complicated in Servo, but this is a good approximation.
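To give a feel for the shape of this, here's a minimal sketch of a parallel top-down traversal. It isn't Servo's actual code (Servo drives this with its own work-stealing task queue; I'm just using the rayon crate here for brevity), but it shows how the access rules fall out: a node reads only a value handed down from its parent, writes only its own data, and then all of its children run at the same time.

```rust
use rayon::prelude::*; // assumption: using the rayon crate for work-stealing parallelism

struct FlowNode {
    children: Vec<FlowNode>,
    inline_size: f32, // how wide this box will end up
}

// Top-down in parallel: read what your ancestor hands you, write only your own
// data, then kick off all of your children simultaneously. Because no node ever
// reaches sideways or back up the tree, there is nothing to race on.
fn assign_inline_sizes(node: &mut FlowNode, containing_size: f32) {
    node.inline_size = containing_size; // stand-in for the real width computation
    let my_size = node.inline_size;
    node.children
        .par_iter_mut()
        .for_each(|child| assign_inline_sizes(child, my_size));
}

fn main() {
    let mut root = FlowNode {
        children: vec![
            FlowNode { children: vec![], inline_size: 0.0 },
            FlowNode { children: vec![], inline_size: 0.0 },
        ],
        inline_size: 0.0,
    };
    assign_inline_sizes(&mut root, 1024.0);
    println!("{}", root.children[0].inline_size); // 1024
}
```

A bottom-up pass is the mirror image: process all the children in parallel first, and only then compute the parent's value from theirs.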
We first go from the bottom up, in parallel, in a pass we call bubble-widths: we take the intrinsic widths of all the things we know about and propagate them up the tree. The next pass goes top-down, also in parallel, and it's called assign-widths; this is where we actually assign how wide everything on the page is going to be. And once we know how wide everything on the page is going to be, we can go back bottom-to-top, also in parallel, and start assigning heights. Servo is actually smart enough that when it gets to, say, the bottom of one of these traversals, it can immediately start the next calculation; it doesn't wait for the whole traversal to finish. As soon as it reaches a leaf node on a top-down traversal, it can immediately start running the task that goes bottom-up, and that task knows it has to wait for its parent to finish the top-down traversal before it can move farther up the tree. This turns out to be really, really fast.

There are a couple of problems with this. The first is that, like I said, you have to make sure the data access patterns are sound. In C++ this would not work very well, because you'd constantly be watching in code review for somebody doing something they weren't supposed to and mutating data they shouldn't touch. So in Rust we actually have different types for these elements depending on which kind of traversal you're in, so that you're simply unable to call the wrong methods; those methods aren't defined on the type you have. You only get the methods that are safe to call in the traversal you're in, and there's a sort of trusted kernel of code that switches the types behind the scenes, so you don't have to keep track of it yourself, but it means you can't really make a mistake. There's a rough sketch of this idea below.

The other problem is that our lovely spec editors and the designers of the web weren't thinking about parallelism way back then, and of course they designed structures that don't have clean data access patterns. CSS floats are one example: if you float something left, that affects all kinds of things that aren't in the immediate subtree or the immediate ancestry. When that happens, what we basically do is hold off processing until we get to the root of whatever subtree the float affects, and then we do an in-order traversal there. We just save up all the work. Of course this is terrible: all of that work ends up in serial, in-order traversals. But it turns out that on most web pages we've tried, and it seems like on most of the internet, these parallelism hazards are either rare edge cases or don't affect enough of the page to keep us from getting a big performance win.
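Here's a hypothetical sketch of that "different types per traversal" idea. These aren't Servo's real type names, just the shape of the trick: the same underlying node is wrapped in a different view for each pass, and each view simply doesn't have the methods that would be unsafe in that pass.

```rust
// Hypothetical names, not Servo's actual API.
struct LayoutData {
    intrinsic_inline_size: f32,
    inline_size: f32,
}

struct FlowNode {
    data: LayoutData,
    children: Vec<FlowNode>,
}

// View handed to the bottom-up bubble-widths pass: it can read its children
// (they have already been processed) and write its own data, but there is no
// method on this type that reaches an ancestor or a sibling.
struct BubbleWidthsView<'a>(&'a mut FlowNode);

impl<'a> BubbleWidthsView<'a> {
    fn bubble_inline_sizes(&mut self) {
        let children_total: f32 = self.0.children.iter()
            .map(|child| child.data.intrinsic_inline_size)
            .sum();
        self.0.data.intrinsic_inline_size =
            self.0.data.intrinsic_inline_size.max(children_total);
    }
}

// View handed to the top-down assign-widths pass: ancestor data arrives as a
// plain argument, and there is deliberately no accessor for child data here.
struct AssignWidthsView<'a>(&'a mut FlowNode);

impl<'a> AssignWidthsView<'a> {
    fn assign_inline_size(&mut self, containing_inline_size: f32) {
        // Stand-in for the real computation.
        self.0.data.inline_size = containing_inline_size;
    }
}

fn main() {
    let mut leaf = FlowNode {
        data: LayoutData { intrinsic_inline_size: 120.0, inline_size: 0.0 },
        children: vec![],
    };
    BubbleWidthsView(&mut leaf).bubble_inline_sizes();
    AssignWidthsView(&mut leaf).assign_inline_size(960.0);
    println!("{}", leaf.data.inline_size); // 960
}
```

The "trusted kernel" mentioned above would be the small piece of code that constructs these views and drives the traversals; everything outside it can only do what its view allows.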
I like this image for explaining some of this. In World War II, I think it was the RAF, they were losing a bunch of planes to German anti-aircraft fire, so they wanted to put armor on the planes, and they were trying to figure out where to put it. They got a bunch of mathematicians involved. The first idea was: okay, look at where all the bullet holes are on these planes, put armor where the bullet holes are, and then they won't get shot down. And this one mathematician said, no, no, you're thinking about it all wrong; those bullet holes didn't make the planes die. Those planes came back. Put the armor in the other places, because if a plane got shot in those other places, we never even got to mark it down.

It's totally counterintuitive, and in the same way, one way to try to make web browsers faster is to go look at where everyone else sees the slow parts, or what the teams at Mozilla or Google are already working on, and concentrate on those places. The counterintuitive thing we discovered is that what you need to do is go look at the areas that are so hard or so terrible that they won't even look at them. It's not like the Gecko team or the Chrome team never had the idea of parallelizing layout; it's just a non-starter when you realize you have C++ to work with and 20 years of legacy code you would have to change. So we try to look in the weird corners.

So let's talk about robustness a little bit. This is more important in the modern web than it was way back then, and it means a lot of things. We want to prevent security breaches: we don't want someone to be able to take over your computer, or make your browser crash, or any of these things. We want to make sure that if something bad happens, malicious or not, we can isolate the failure. So if you're watching YouTube in one tab and moving some money around in your bank in another tab, and the YouTube video causes its tab to crash, it doesn't take down the other tab where you're doing something important. Beyond that, we want to loosely couple all the components in the browser, because that makes it easier to refactor; it would be nice if, at the end of this research project, we not only had a new browser engine, but one that's easier to maintain and add things to. It also means there aren't these big dependencies between pieces, so when people make mistakes, the boundary conditions are better known. And then we want to prevent and tolerate programmer error. You saw that David Baron doesn't think he's tall enough to write multi-threaded code safely, so what hope do the rest of us have? Browsers are big; it takes a lot of people to write them. Mozilla has a relatively small team, but we're a thousand-person company building a browser, and I think we're still the smallest player on the block by a large margin. If you're going to have a thousand programmers working on building a browser, they're going to make a thousand times however many mistakes, and you want to make sure those mistakes don't all turn into zero-day security vulnerabilities or make users very sad.

And just like with parallelism, you need this at every level. People design security in layers, and basically we're trying to add more of those layers. We have process isolation, which I think Chrome started, with tabs in different processes; this is nice for all the reasons you're probably familiar with. And of course, if someone does somehow compromise one of your tabs, you want to make sure the amount of damage it can do is minimized, so you sandbox things. But then we get into the things we're trying to add, like memory safety. Obviously, if you have a malicious programmer, you have no hope; but if you have a normal programmer who's just trying to do their job, we want to prevent, as much as possible, their mistakes from destroying your computer.
Hopefully we do that with the type system, and if not that, then with these other layers. We'd also like to live in this beautiful world where, when you write code that has a bug in it, it just doesn't compile in the first place. We can't quite get there, but we can certainly get a lot closer than C++ gets us right now. We can turn a lot of things you shouldn't be doing into actual compiler errors, which makes everybody safer.

A good example of this: we wanted to see how effective Rust's memory model would be at preventing these kinds of errors, so we went and looked at all of the security-critical bugs in the Web Audio API implementation. It turned out that all of them were either array out-of-bounds errors, whether because someone was careless, or because of integer overflow, or any number of other reasons, or they were use-after-frees, where some memory gets freed but something else is still trying to use it, and then you can do some stack smashing or whatever. All of these errors would have been prevented had the code been written in Rust. The reason we looked at Web Audio is that Web Audio has no security properties of its own. There's no reason for it to be especially trustworthy code; it's just some functionality exposed to web programmers through JavaScript, and it shouldn't have any security implications at all, and yet there were 34 critical vulnerabilities in it. So if we can prevent all of those, that'll be great. And there are tons of new APIs like this appearing all the time. They don't have security properties of their own, but a careless implementation, or even a careful one, can still leave you with bad bugs, and we think we can prevent those.
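To illustrate the two bug classes (this is a toy example, not the actual Web Audio code), here's how the same mistakes play out in Rust:

```rust
fn main() {
    // 1. Out-of-bounds access: indexing is bounds-checked, so a bad index
    //    becomes a handled case (or a deterministic panic) instead of silently
    //    reading or smashing adjacent memory.
    let samples = vec![0.0f32; 128];
    let idx = 512usize; // e.g. the result of an integer overflow somewhere
    match samples.get(idx) {
        Some(v) => println!("sample = {}", v),
        None => println!("index {} is out of bounds, handled safely", idx),
    }
    // `samples[idx]` would also compile, but it would panic at runtime rather
    // than corrupt memory.

    // 2. Use-after-free: ownership means the buffer is freed exactly once,
    //    when it goes out of scope, and the compiler rejects any attempt to
    //    keep a reference to it past that point.
    let first = {
        let buffer = vec![1.0f32, 2.0, 3.0];
        buffer[0]
        // `buffer` is dropped (freed) right here...
    };
    println!("first = {}", first);
    // ...so there is no way to reach the freed allocation again; code that
    // tried to hold onto a dangling reference simply would not compile.
}
```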
So, current status. We've been working on this for a couple of years, although the team size has grown over that time. In March of 2013 we were able to pass Acid1, which is a pretty good basic web layout test. In March of last year we were able to pass Acid2, which tests even crazier things; we regress on Acid2 every now and then, so it may or may not look right if you try the current master. After that, the Acid tests stopped being interesting, so we started testing real sites. These are sites from our static testing suite: basically static snapshots of web pages that we downloaded from the Alexa Top 100. Here we have the Guardians of the Galaxy page on Wikipedia, the CNN homepage I believe, a Google search for the word "servo", and a random Reddit thread. You can see that Servo can already render these pages pretty close to what they're supposed to look like, and I believe these screenshots were taken last September, and we've added tons of stuff since, so they probably look way better now and I should make new screenshots.

We have over 100 CSS features supported already. We have landed mix-blend-mode, filter, text-rendering, outline-offset and word-break. The reason these probably don't mean anything to you is that we don't go after the well-known features: we look at the use counter data that Google publishes about how often features are used, and we roughly go in order of how often web pages actually use them. So while these may not sound like amazing things, they're used on almost all web pages. We have pull requests open but not yet reviewed for things like text-overflow, text justification, CSS counters (which are pretty cool) and text-shadow, and we just got CSS 2D transforms, which I can use once they land to update this slide deck and make really sweet things happen. There's a whole bunch of other pull requests with new stuff. We're aiming to have all the features used by 50% of the web: the use counters rank features by how many of the pages they observe use a particular feature, and we want to get to the 50% mark this year. I think Patrick has made it a personal goal to get to the 75% mark by the end of this quarter, so we should make a lot of progress there very soon.

Architecturally, there's a lot to consider, because we need all the parts and all the optimizations that traditional browsers have so that we're just as fast. We just finished incremental layout. There are two ways to speed something up: you can make it faster, or you can just not do it. Incremental layout is about just not doing it: you skip recalculating things that haven't changed since the last time you laid out the page, and that's really important.

We also have partial writing mode support. This matters because writing modes basically flip web pages into some other orientation. So inside Servo we never talk about top, bottom, left and right, because those don't have a fixed meaning once writing modes are involved: somebody might have flipped the writing mode, and now when you talk about the top of something, it's really at the physical bottom. Internally in Firefox, everything is still called top and bottom, and you have to keep in your head which writing mode the code you're looking at is working with and translate. In Servo, things are called inline-start, inline-end, block-start and block-end, and we don't talk about width and height, we talk about inline size and block size, names that stay the same no matter what writing mode you're in. We wanted that in there from the beginning so that people don't get confused.
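Here's a small sketch of what that logical geometry looks like, with hypothetical type names rather than Servo's exact ones. The layout code speaks only in inline/block terms, and the writing mode is consulted in one place, at the point where you actually need physical coordinates:

```rust
// Hypothetical types illustrating logical vs. physical geometry.
#[derive(Clone, Copy)]
enum WritingMode {
    HorizontalTb, // ordinary horizontal text, blocks stacking top to bottom
    VerticalRl,   // vertical text, blocks stacking right to left
}

#[derive(Clone, Copy)]
struct LogicalSize {
    inline: f32, // inline size: width in horizontal writing, height in vertical
    block: f32,  // block size: height in horizontal writing, width in vertical
}

#[derive(Clone, Copy, Debug)]
struct PhysicalSize {
    width: f32,
    height: f32,
}

impl LogicalSize {
    // The only place that knows how logical directions map to physical ones.
    fn to_physical(self, mode: WritingMode) -> PhysicalSize {
        match mode {
            WritingMode::HorizontalTb => PhysicalSize { width: self.inline, height: self.block },
            WritingMode::VerticalRl => PhysicalSize { width: self.block, height: self.inline },
        }
    }
}

fn main() {
    let size = LogicalSize { inline: 300.0, block: 40.0 };
    println!("{:?}", size.to_physical(WritingMode::HorizontalTb)); // 300 wide, 40 tall
    println!("{:?}", size.to_physical(WritingMode::VerticalRl));   // 40 wide, 300 tall
}
```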
We've also done a lot of work trying to unify, and make safe, the memory and interaction interface between the systems code in Rust and JavaScript. If you don't know: all the JavaScript APIs you use are implemented in C++, or in our case Rust, generally for speed reasons, and C++ and Rust don't have garbage collectors, while JavaScript is garbage collected. What Gecko does, for instance, is garbage-collect all the JavaScript objects while the C++ side is reference counted; but of course it's very easy in JavaScript to get yourself into a reference cycle that would then never be freed, so they have a cycle collector that goes around and tries to break cycles. In Servo, the Rust DOM objects are actually managed by the JavaScript GC: it tells us when we're allowed to free things, and it has hooks to trace through all the Rust pointers. We've done a lot of work on making that interface safe, which is pretty hard; we use programmable compiler lints, for instance, and we have several layers of safety just in that area.

We're also trying to make Servo embeddable from the start, while all the other browsers seem to be removing embeddability, and we've done this by implementing the Chromium Embedded Framework API. I can't remember who made that, I think it was the Steam folks or somebody, to embed Chromium into their clients, and we use the same API. It's 32-bit only right now, but we're working on adding 64-bit Android this year, and recently Michael Wu at Mozilla got all the Servo code booting on the Flame reference Firefox OS device, so now we have boot-to-Servo, although it can't do very much yet; I think it just displays Acid2 as soon as it comes up. So we've tried to get all these big architectural questions answered as early in the project as possible.

So what does this look like? Now we have pretty graphs. This shows layout performance with Servo running on one thread, two threads and four threads, broken up into what I've been calling layout (it's called reflow here), style recalculation, which is processing the CSS information, and display list construction, which is creating the list of commands for the graphics system. This is for the Wikipedia Guardians of the Galaxy page, and you can see that we get some improvement the more threads you add, but not as much as we would like; it varies from page to page, depending on how many parallelism hazards the HTML contains. Here's the same thing for the CNN homepage, where you can see we're spending lots of time in display list construction. And for Reddit we actually get extremely good results, which I'm sure Redditors will be happy about: this is a browser they can use.

Compared to Gecko, we're doing amazingly well so far, although I have some caveats about that. Here you can see Gecko's version of those same three phases, and you can see Servo with just one thread is already faster. We're trying to make these tests as realistic as possible, but this should be taken with a grain of salt, because we don't support as much of the web as Gecko does, and we're not sure yet whether we're way faster because we're really that good, or because we have control of the compiler and can help along our own optimizations, or because there's some edge case we haven't handled yet that would actually slow us down. But the good news is that we have to be at least as fast as Gecko, and so far we seem to be doing very well in that regard, which is great. It's the same story for CNN; the gap is even bigger, because Gecko spends a long time in layout, and that's the place where we'll probably always be faster: since we're able to parallelize layout, we can do it much faster than they can, and that page is layout-heavy. And on Reddit we just do amazingly well; I don't know why that is, off the top of my head.

This next one is a timeline view, all Servo, that shows how much time we spend in each functional area, whether it's networking or JavaScript or layout. You can see that CNN especially has tons of JavaScript, so we can't expect a huge amount of improvement there. The reason to show this is that you really do have to parallelize all of these things. For pages like Wikipedia and Reddit, you can see that parallelizing style and layout is going to help a lot; for CNN, parallelizing lots of things would help, but especially JavaScript, if we could ever do that. And here's the same thing without network activity and without JavaScript, which is basically the set of things Servo can actually make a useful dent in, and you can see there's a lot we can do that's not just layout.

Okay, last thing: power usage. We found something interesting if you run CPU cores at a lower voltage (this is going to vary by CPU, of course, but these measurements were on MacBook Pros).
If you turn off turbo boost, you lose 30% of your performance but you get 40% more battery life. It turns out that Servo parallelizes well enough that we can bridge that gap and make all of that performance back. So even though we have 30% less clock on every core, by using more cores we can be just as fast while using 40% less power. We only just started looking at this and we were floored at how good the numbers were, and we'll try to do more.

So, the future. This year we're going to try to do some kind of alpha release, in 2015. Also, because we can't boil the whole ocean in a day: we only have a team of, I don't know, 10 or 12 people working on Servo, and even if we had 50, Gecko has way more than that and the web keeps getting bigger, so we would never catch up. So instead of trying to catch up, or assuming the web will stop and then we'll do a rewrite or whatever, we're going to start landing Rust and Servo technology inside Gecko, so we can start training up Gecko developers and get some of the benefits of this research out into production as soon as possible. We also need to add a bunch of CSS features, and we prioritize the things that have architectural implications. Pagination, for example, is where you have text in a more newspaper-like style, where it flows through one element, stops, and the overflow goes into some other element over there, which has all kinds of architectural implications. Flexbox also has a bunch of layout implications. After that, we work on the things that most web pages use.

You can help us. It's very easy to start contributing to Servo: it's all on GitHub, and we have an issue tracker there. We make regular forays into the source code to pick out issues that would be good for people contributing to the project for the first time, which we define as issues where most of the work is making a pull request and learning how to use Git and those kinds of things. There's a lot of work to be done here: if there's a DOM API you know how to use, it probably doesn't work fully in Servo yet, and you could add it. So there's a lot of low-hanging fruit all over the engine. You can find us on IRC on mozilla.org, and if you don't have time to ask questions here, or you have more things you want to ask, there's my contact information. I'll answer any questions anybody has.

Question: Thank you, that's a really interesting talk. How fast are you compared to Chromium? So, I don't have benchmark numbers to show you against Chromium, but we compare against all the browsers that are easy to test, which is everything except Safari, and we're faster than all of them. Take it with the caveat that we don't support the whole web yet, and it may turn out tomorrow that we did something stupid, but as far as we can measure, we're faster.

Question: I understand that sometimes with Rust you'll need to write unsafe code that doesn't have the memory protections, and then you use that to write certain abstractions that other developers can use; as long as you get those small sections right, it'll be safe, but some of them you'll get wrong. You mentioned 34 bugs in Web Audio; some of those you'd resolve this way and some you might not? Yeah. So, we use two big pieces of C and C++ code, and this will grow, because WebRTC alone is like millions of lines of code.
We're probably not going to rewrite those in Rust, certainly not before this thing actually ships, if it ships. We use the SpiderMonkey JavaScript engine and we use the Skia rendering library, which Google uses in Chrome, and if there are bugs in those, we can't save you, but hopefully we can save you from everything else. We also have things that we write in the unsafe subset of Rust, like all of the fancy pointer stitching we have to do to make the JavaScript memory management work. But because it's specifically marked in the code as unsafe, if there is a bug we know exactly where to look, and we try to keep that boundary as small as possible. So it's not a panacea, but it's another layer, and hopefully it's a very big layer that will stop things.
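As a generic illustration of the pattern being described (this isn't Servo code), here's the usual way a few lines of unsafe get wrapped so that callers only ever see a safe API, and reviewers know exactly where to look if something goes wrong:

```rust
/// Return mutable references to two *different* elements of a slice.
/// The borrow checker can't prove on its own that the indices are distinct,
/// so a tiny unsafe block does the pointer work, but only after the checks
/// that make it sound.
fn get_two_mut<T>(slice: &mut [T], a: usize, b: usize) -> Option<(&mut T, &mut T)> {
    if a == b || a >= slice.len() || b >= slice.len() {
        return None; // these checks are what make the unsafe block below correct
    }
    let ptr = slice.as_mut_ptr();
    unsafe { Some((&mut *ptr.add(a), &mut *ptr.add(b))) }
}

fn main() {
    let mut widths = [10.0f32, 20.0, 30.0];
    if let Some((first, last)) = get_two_mut(&mut widths, 0, 2) {
        std::mem::swap(first, last);
    }
    println!("{:?}", widths); // [30.0, 20.0, 10.0]
}
```

Callers of get_two_mut can't misuse it; the unsafe part is confined to one audited line, which is the point about keeping the boundary small.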
Question: Like all software, the last 5% is where it all falls apart, and that's especially true on the web: it's easy to write a web browser that handles 90% of the websites on the planet, but it's that last 10% where everyone is a special unique snowflake and does something different. What are your thoughts on that? How are you going to deal with it? So, the glib answer is that Gecko and Chromium have the same 5% problem we do, because the web keeps growing, and they certainly have features they don't implement well. As I said, when we work on features we try to figure out, as much as we can, what that 5% will be and work on it first, so that we know it's not going to invalidate the project. But you're right that it's going to take a lot of effort to catch up, and we're not exactly sure what that looks like, which is why we're starting these projects to get Rust and Servo into Gecko now and attack it from both ends. We're going to try to write a browser from scratch, and we're going to try to take what we can and put it into Gecko, and I think that gets us to here, and there's still a giant gap, and I don't know exactly what happens there, but hopefully profit or something. So I still don't know; I hope we're able to figure that out. The hand-wavy high-level-language argument is that Rust makes you more productive, so you'll be able to catch up faster, but there's always going to be some hard section, I think, and we're just doing the best we can to identify what we think the really hard things will be and do them up front.

Question: You mentioned that you're going to be landing Rust in Gecko soon, but as you also said, Firefox is not just Gecko, so I was wondering what your plans are for using Rust alongside the existing code base, especially since a lot of Firefox is JavaScript talking to C++ via the IDL and so forth. Do you have some kind of plan to let the existing JavaScript use Rust, or vice versa, the way it works now with JavaScript and C++? Not yet. Our first project is probably going to be to replace the BMP decoder, and the idea is that, just like the easy bugs we've identified in Servo to help people whose hardest part is going through the process, we wanted something where the hardest part is getting through the Firefox build system. So we picked an easy component with almost no downside if we mess it up. But we have a bunch of ideas. We wrote a couple of libraries while building Servo that are much more spec-compliant than what other people have done. One of these is an HTML parser, one of the first new HTML parsers that's been written in a while; the one Gecko uses is machine-translated from a Java reference implementation, and we wrote one from scratch in Rust that we think is much better. We also have a URL parsing library and some other stuff like that, and that's the kind of thing we'd start working on after we get over the initial hump of the BMP decoder. I don't know what comes after that. Obviously there are huge chunks of Gecko that would be really hard to replace; it's no easier to swap in our layout than it would be for them to rewrite their own. So I don't know what we do to clear that middle 50% of the difficulty, but at least we can start attacking it from the ends we can reach.

Okay, sorry guys, we're out of time for questions. So I'd like to thank you very much for your presentation. Thanks.