Okay, hi everybody. So this is going to be another security talk. Carlos, my excellent colleague, talked to you this morning about the security model in general and how to work with the security team. I'm going to be doing something much more specific, which is what happens when security goes wrong. Specifically, I'm going to be talking through one particular vulnerability we had: how it was caused, how it was found, what happened about it, that sort of thing. But I want to start off with a quote from one of our excellent Vulnerability Rewards Program researchers. These are people who find and file security bugs, thank you. He said: "I also want to thank you for being the unsung heroes of internet security. I know the whole Chrome team is keeping the hounds of hell at bay from instantly rootkitting our machines. And for that, I seriously have to take my hat off to you all. Thank you for everything you do and for working so hard to keep the entire internet and its users safe." I put that out there, a completely unsolicited comment, because we're fighting the good fight. We are the goodies here. And you've got to remember that when we in the security team are giving you a hard time. Thank you back to him and all of our other researchers for working so very hard to find problems we have introduced. This is the case study I'm going to be talking through. If you're wondering what that number means, that's just an example of the sort of confusing terminology we like to use in the security industry to put off the rest of you. Specifically, it's a globally assigned CVE number, so that people can talk uniquely about vulnerabilities in public without getting confused about which one's which. This is a number assigned for externally reported Chrome vulnerabilities, and other software packages have the same. This is the timeline and, as with everybody else's slides, the fonts have gone slightly wrong.
On the left-hand side, we've got four different steps, four different things that happened together to cause this eventually to occur, and I'm going to talk through those four different stages. Now, two of those four were in fact security improvements. But together, those four things were the preconditions for this problem occurring. It was then discovered, and it was actually used to attack our users for real. I'll be talking through that as well, and I'll be talking through how we discovered it and fixed it. Sam, could I borrow you? I can't see my notes. Can you see if you can figure out what's up there? How are we so bad that we have vulnerabilities? I've stolen one of the excellent slides from Steve this morning to explain that. We've got a really, really hard job. We take all this crazy web content, an entire industry, an ecosystem of web content: HTML, CSS, JavaScript, WebXR, WebGL, TLS certificates, SPDY, QUIC, a vast array of insane bits of data from all of the other companies outside this building and the entire rest of the world. And we try to run that safely, to make these pixels and interactivity and MIDI and all the other things we make. That's very, very hard. I think that would be the hardest job in the security industry in itself. But we make it that much harder by running it on the user's own device, where they've got their bank details and photos of their kids and all of their most precious data. So we've got to get it right, and sometimes we don't. But that's because it's really, really hard. It doesn't help that this industry is changing and evolving all the time, so we constantly have to parse new formats and things like that. This is how it's supposed to work: we have a website which delivers some data to the browser, and then your code runs and handles that data. Of course, perhaps there's somebody evil who has crafted some malicious data on that web server to deliver to the browser.
And it's very important that you think about where your data comes from. That's the main point here, because that's where the attacks will come from as well. You can be fairly confident that the badness is coming from the web server rather than having been injected on the wire; as Carlos talked about with TLS and all of the stuff that we do around that, you can be moderately confident that nothing bad is being injected on the wire. So think about whether your connection is over TLS as well. However, a modern website is of course very complex and often involves iframes, for adverts and for other things as well. So there might be badness in one of those iframes even if there isn't in the main website. The reason that's important is that the user might have some level of trust in the main website, but they may have less trust in some of those iframes. So you can't necessarily rely on what the user trusts to make decisions about what may be safe or not. So, things for you to remember here when you're trying to write secure code. Above all, remember where your data comes from. Think about which bits of your code are parsing and handling untrustworthy data versus data that you trust. Perhaps you can cryptographically guarantee that some of your data comes from Google, and then you might handle it differently from data that comes from some untrustworthy website. And HTTPS is very important; we do a lot of work to make sure we can rely upon it. So, I've talked about the data which is used to exploit the bugs that we all introduce. Now I want to talk about those bugs. We have bugs. Here's the number of high- and critical-severity security bugs we fixed over the past few years. You can see those numbers are pretty high. High and critical means these are bugs which we feel could alone be enough to compromise our users.
We also have lower-severity bugs, mediums and so on, which can be chained together to compromise our users, to steal all their data or get code running. But each of these individually we thought was probably enough to cause badness for our users. The good news is we catch most of them in HEAD and in beta before they get to stable. I want to talk a little bit about how they're found. You can see the big blue stripes: that is ClusterFuzz, our automated fuzzing. I'll talk a little bit more about that; Carlos already did. The yellow stripes, however, are also very important. That's our external researchers, who are constantly hammering on Chrome and finding creative and cunning ways of breaking everything that we do. And we are really grateful for that. They're extremely helpful people. We obviously give a reward; many of them even donate that reward to charity. They are absolutely awesome. They're very helpful. We're extremely grateful. A little bit more on ClusterFuzz. Carlos talked about it, so I'll make this quick. It evolves data; this is the one bit Carlos didn't mention, which I think is super cool. It will try to evolve the data that it injects into your code to increase code coverage until it finds the mistakes you've made. So it will monitor code coverage and try to exercise all your branches and all your functions until it finds what you've done wrong, which I think is awesome. Fuzzers are tiny bits of code which take that data that ClusterFuzz builds and inject it into your code. So if you're writing a parser, or anything that handles new formats of untrustworthy data, please write a fuzzer. They can be two lines of code, because they just take that data that already exists and put it into your code. There is a Fuzzing 101 which we have not had time to do in this Chrome University, but please check out the recording there if you want to know how to do that.
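To make the "fuzzers are tiny" point concrete: real Chrome fuzz targets are C++ libFuzzer entry points (the whole target is essentially `LLVMFuzzerTestOneInput(data, size) { ParseMyFormat(data, size); }`). Here is a toy sketch of the same shape in JavaScript; every name here is hypothetical, the buggy "parser" is invented for illustration, and the random byte source stands in for the coverage-guided engine.

```javascript
// Toy sketch of the fuzz-target pattern. All names are hypothetical;
// real Chrome fuzzers are C++ libFuzzer targets, not JavaScript.

// A deliberately buggy little "parser": it reads an untrusted length field.
function parseMyFormat(bytes) {
  if (bytes.length === 0) return null;
  const claimedLength = bytes[0]; // untrusted length field
  // Stand-in for a memory-safety bug: in C++ a bad length would be an
  // out-of-bounds read; here we throw to represent the crash.
  if (claimedLength > bytes.length - 1) {
    throw new RangeError('length field exceeds buffer');
  }
  return bytes.slice(1, 1 + claimedLength);
}

// The "fuzz target": take engine-supplied bytes, feed them to the parser.
// This is the part that can genuinely be two lines of code.
function fuzzTarget(bytes) {
  parseMyFormat(bytes);
}

// Stand-in for the fuzzing engine: a deterministic pseudo-random byte
// source. ClusterFuzz instead evolves inputs guided by code coverage.
function* randomInputs(seed, count) {
  let state = seed;
  for (let i = 0; i < count; i++) {
    const bytes = new Uint8Array(8);
    for (let j = 0; j < bytes.length; j++) {
      state = (state * 48271) % 2147483647; // MINSTD generator
      bytes[j] = state & 0xff;
    }
    yield bytes;
  }
}

let crashFound = false;
for (const input of randomInputs(1, 1000)) {
  try {
    fuzzTarget(input);
  } catch (e) {
    crashFound = true; // in real life: a sanitizer crash, filed as a bug
    break;
  }
}
console.log('crash found:', crashFound);
```

The point of the sketch is that the target itself is trivial; all the cleverness (mutation, coverage feedback, deduplication) lives in the engine, which you get for free.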
The other thing that's really important about ClusterFuzz: if you get a bug from ClusterFuzz, you might think it's not that important. It's generated by a robot; why does it matter as much as if a human had filed it? But the whole point is that if a robot can find the bug, that makes it all the more serious, because no cunning has been applied here. So it's, if anything, even more serious than a bug reported by a human; and if ClusterFuzz can find it, then definitely the internet of real humans can find it. Okay, back to our example. So I've said that the badness tends to come from crafted data. In this case the crafted data was some malicious JavaScript code designed to exploit some bugs in the JavaScript-C++ bindings. There were four different precondition steps that made this bug possible, and I'm going to talk about all four of them. As I said already, two of them were actually security improvements, so by no means were any mistakes made in this process. This is just stuff that happens with the emergent behavior of this crazy web ecosystem that we try to handle. So I'm going to talk about a couple of different JavaScript APIs. One of them is postMessage. That's a really, really old API from 2009. It was a security improvement, added to allow one window to send messages safely to another window. Very, very good. A couple of years later it grew the ability to transfer objects to that other window, with this parameter at the bottom: transfer, which it says is "a sequence of transferable objects that are transferred with the message. The ownership of these objects is given to the destination side and they are no longer usable on the sending side." Please remember those last words, right? I imagine some of you can start to see what goes wrong here. Those objects are no longer usable on the sending side. The next API we care about came along a year later, in 2010, which was FileReader. FileReader is an API to... shout it out! Amazing.
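Those transfer semantics are easy to see for yourself. This sketch uses `structuredClone` (available in Node 17+ and modern browsers), which applies the same transfer rules as `postMessage`: once an ArrayBuffer is in the transfer list, the sending side's copy is detached and no longer usable.

```javascript
// Transferring an ArrayBuffer detaches it on the sending side.
// structuredClone uses the same transfer semantics as window.postMessage;
// a minimal sketch, runnable in Node 17+ or a modern browser.
const buf = new ArrayBuffer(8);
new Uint8Array(buf)[0] = 42;

console.log('before transfer:', buf.byteLength); // 8

// "The ownership of these objects is given to the destination side
//  and they are no longer usable on the sending side."
const received = structuredClone(buf, { transfer: [buf] });

console.log('after transfer:', buf.byteLength);     // 0: detached
console.log('receiver side:', received.byteLength); // 8: moved, not copied

// Trying to view the detached buffer now throws a TypeError.
let threw = false;
try {
  new Uint8Array(buf);
} catch (e) {
  threw = true;
}
console.log('detached buffer unusable:', threw);
```

On the JavaScript side this is all perfectly safe: the detached buffer just becomes unusable. The trouble, as we'll see, was in what the C++ side did when it tore down the transferred object.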
Very good. Read files. You guys know this exploit already. One of the things you can do with that class in JavaScript is to read the file data into an ArrayBuffer, which is a JavaScript object that is, obviously, a buffer of bytes. We'll see what went wrong with that. The final step was actually a security improvement, but also a binary-size improvement, around the bindings between JavaScript and C++, and I want to talk a little bit about them. I'm sure you've heard the metaphor of a duck or a swan that's swimming calmly on the surface but is crazily thrashing around underneath the water, and that's kind of what we've got. JavaScript is a managed, sane, sensible, safe API without direct hardware access, no sensitive access to cameras or your files or whatever, with all the access restricted by permissions; and then you've got C++, which, as we have seen, is not like that, and has pointers and things. Those pointers should never be visible in JavaScript land. However, pretty much all the JavaScript APIs are going to be backed by some kind of C++ implementation. Here is the C++ implementation for FileReader, specifically for that readAsArrayBuffer function. This is the only C++ I'm going to show you in this talk. You can see the top function is called FileReaderSync readAsArrayBuffer, and it is just the implementation of that readAsArrayBuffer API. It pretty much immediately calls through to ArrayBufferResult, which is the bottom code, and you can see I've stripped out some of the lines here, but as you can clearly see, there's a catastrophic bug in this line of code. I hope you can all see that. No? Exactly, I wouldn't be able to spot it either, and that's kind of my point. We are all going to introduce C++ memory-safety errors. It's just a thing. We're going to do that, and that's why we in the security team are constantly going on about fuzzing and sandboxing and things like that.
You will introduce C++ memory-safety errors, and I'm going to show you how this error manifested and how it was exploited to turn into a powerful attack primitive. You can see it pretty much just creates a C++ object called a DOMArrayBuffer; that's pretty much all it does. Here's the memory layout behind that, in a fictional computer which inexplicably has 19 bytes of RAM. We've got a JavaScript object at the top, and that, as I say, is backed by a C++ implementation. The JavaScript object is the ArrayBuffer; the C++ implementation is that DOMArrayBuffer object we saw already. That, however, does not store the actual data. Instead it points through to a backing store, which stores the actual data. So far so good. That little black dot there is a C++ pointer. Pointers are not a thing in JavaScript; that value, that memory address, should never be visible to JavaScript. Now, suppose somebody calls that API twice. You've now got two JavaScript ArrayBuffer objects, each of them backed by a C++ DOMArrayBuffer object; but for efficiency we don't make another copy of the backing store, because of course the file might be four gigabytes long and we don't want to make an extra copy of that four gigabytes. So we have two DOMArrayBuffers, each pointing at the same backing store. Now along comes postMessage. Do you remember what I said about postMessage? The object that is transferred is no longer usable on the sending side. And what that means in practice is that C++ would deallocate that DOMArrayBuffer; off it goes. And here was the mistake: the DOMArrayBuffer believed it owned the backing store, so it also deallocated the backing store. Sad times. So you can see what's happened here. We've got a valid JavaScript object, backed by the C++ DOMArrayBuffer, which is now pointing at some memory that C++ believes is not allocated and is not assigned to anything. That's called a use-after-free (or, so far, it's just a free, but we'll see that the attacker can use it).
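The ownership confusion described above can be modeled in a few lines. This is purely a toy simulation, not real Blink code, and every name in it is hypothetical: two wrapper objects share one backing store, both believe they own it, and "transferring" one frees the store out from under the other.

```javascript
// Toy model of the ownership bug: NOT real Blink code, just a simulation
// of the object graph described above. All names are hypothetical.

class BackingStore {
  constructor(bytes) {
    this.bytes = bytes;
    this.freed = false;
  }
  free() { this.freed = true; } // stands in for the C++ deallocation
}

class DOMArrayBufferModel {
  constructor(store) {
    this.store = store;    // the "little black dot": pointer to the store
    this.ownsStore = true; // the mistake: BOTH wrappers believe they own it
  }
  // Called when postMessage transfers the buffer away: the sending side's
  // wrapper is torn down, and because it "owns" the store, so is the store.
  transferAway() {
    if (this.ownsStore) this.store.free();
    this.store = null;
  }
}

// readAsArrayBuffer called twice: two wrappers, ONE shared backing store
// (no copy, because the file might be four gigabytes long).
const store = new BackingStore([1, 2, 3, 4]);
const buf1 = new DOMArrayBufferModel(store);
const buf2 = new DOMArrayBufferModel(store);

buf1.transferAway(); // postMessage(..., [buf1]) on the sending side

// buf2 still points at the store, but the store has been "deallocated":
console.log('buf2 dangling:', buf2.store.freed); // true: a use-after-free
```

The fix, conceptually, is to make ownership explicit (shared, reference-counted ownership of the backing store), so tearing down one wrapper cannot free memory another wrapper still points at.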
Key points to take away: you're not clever enough to write safe C++ code. Sorry. I'm definitely not; if I'm not, you're not. Or the other way around. But anyway, I'm just a TPM; I can't be trusted to code, only to talk. But it's not your fault. C++ is really, really, really hard. It's really hard to get it right, and that's why, as I say, we constantly look at mitigations and make sure that we do the right things in different processes and so on. So, back to what happened. All of these preconditions existed, and the bug existed, by 2015. Somebody found it, and I want to talk a little bit about who found it. We don't know exactly who. We also don't know when; we know it was sometime between 2015 and 2019. So you might be wondering who these mysterious researchers are who find these bugs, and maybe you're thinking they're spotty teenagers in bedrooms. Maybe some of them are. Sometimes they're nation states. Sometimes they're an entire industry. All of these little stars represent companies that are involved in buying and selling security vulnerabilities. That's really important to understand: the adversary here is an entire well-organized industry. And many of these companies have great ethical scruples and will only sell these exploits to highly democratic governments and so on. Others, maybe not so much. It's of course subject to market forces. Now, I know this is a tiny font which you won't be able to read, so I'll read out bits of it. This is taken from the website of a company called Zerodium, who have a price list for exploits in different software packages. At the bottom here, which you won't be able to see, it says $10,000, and at the top it's $1 million. The different rows are different purchase prices for different types of vulnerabilities. At the bottom we've got things like Drupal, phpBB, and antivirus products, which are presumably cheap to buy vulnerabilities in because either they're insecure or they're not used very much.
At the top, market forces mean it's $1 million to buy a Windows remote code execution with zero clicks. We in Chrome are in the second row, with that little purple box: $500,000 to buy a Chrome vulnerability which allows remote code execution and local privilege escalation. That's quite a large amount of money relative to most of the packages here, but we are extremely popular. We don't know the extent to which that price is because Chrome is very popular versus because it's very secure, but either way, market forces result in that number popping out at the end of it. Yes, so that's us. I would point out, for those of you who can't see, the other browsers are down in this row. Somewhat cheaper. But I don't necessarily want to claim that's because we're more secure; it may just be because we're more popular. But anyway, that's the kind of adversary we're up against: an entire industry where these vulnerabilities are bought and sold. Make no mistake, our bugs are exploited. There are some excellent blog posts out there. These people at Exodus Intelligence have three fantastic blog posts about Chrome vulnerabilities. They are beautifully written; I would recommend reading them. This one's very recent, from Blue Frost Security, the 8th of August. It's called "Escaping the Chrome sandbox via an IndexedDB race condition". I don't know, but the first couple of paragraphs suggest that they knew about this vulnerability and were perhaps sitting on it, and then we fixed it through a refactoring, and that's when they published it. I don't know; that's speculation. You can read the blog post yourself. So, somebody discovered this vulnerability. We don't know how. We don't know exactly who. We don't know how it was traded across the industry. But then somebody came to use it. Who came to use it? This is the exciting bit I get to reveal.
The answer was that a South Korean attack group used it to attack targets in North Korea. I want to thank our Threat Analysis Group for allowing me to share this with you today, because normally we keep this stuff really close to our chest, but they wanted you to understand that we really are contending with adversaries: people really do use our stuff to attack other people. It was delivered to these North Korean parties via watering-hole websites and email campaigns that took their users to that location. Let's look at the actual attack code that was used. There are two small changes on this page from the attack code that was actually used by one party to attack another; I'll come to what those changes are in a moment, but you can see it's pretty simple. First line: it creates a FileReader. Because JavaScript is basically insane, I now have to talk about the last line of code, where it's going to use that FileReader to call readAsArrayBuffer; but in the meantime, it's lodged a callback for progress updates. Reading a file might take a long time, so there's the ability for JavaScript code to get progress notifications, so it can update a progress bar or some such. Within that progress callback, it's going to ask for reader.result, and that's going to give it that ArrayBuffer that we've talked about at so much length. It's going to do that twice, so it's going to get two ArrayBuffers. And what we're going to get here is two ArrayBuffers, each backed by a C++ DOMArrayBuffer, and those are backed by that single backing store that contains one copy of the data. At least, sometimes that's the case; sometimes it's not. Sometimes JavaScript is going to give you two identical objects. So the attack code is first of all going to check that it's got two different objects.
It's timing-dependent, but that's fine, because the attacker can repeat this as much as they want. And if it finds that it does indeed have two different objects, which are backed by that same backing store, then it's going to free one of them by using postMessage. Bob's your uncle (that's an English expression): you end up with this buff2 object, which you can then use to write to unallocated memory. We're going to put the number 42 here. That number 42 is one of the two changes I have made from the real attack code. The other one is that, for reasons I genuinely have no idea about, it was necessary to call readAsArrayBuffer twice for the attack to work. But apart from that, that's the exact attack code that was used. Pretty cool, right? So, just to reiterate, this is the memory situation we have: we have buff2, which is an ArrayBuffer backed by a DOMArrayBuffer pointing to some memory that's not used, and we're going to put the number 42 in there. You can see now why I used the number 42: because it fits in this tiny box. And they can read or write whatever they want to that memory space, anywhere from address 8 up to 13; they can put whatever they want there. Is this typical? Yes, yes, it's typical. These are the bugs that we've paid rewards to our Vulnerability Rewards Program researchers for. 30% of them are use-after-frees, 561 of them. When you include all of the other C++ memory-safety errors (uninitialized data, buffer overflows and other general memory-safety doom, plus integer overflows) you get to about 65% of bugs. We would love it if we were in a world where our researchers had to come up with incredibly cunning protocol bypasses and things like that, but the fact is the majority of our bugs are these kinds of basic things, which is why we are constantly going on about them. Sorry about that. But as an example of why that's important: we are, for example, always wanting to put bounds checks on std::vector and things like that, and you can see why, right? It's a big deal.
I know what you're thinking. They found a way to write to a tiny bit of memory. Big deal! This is deallocated memory, Adrian. It's intrinsically pointless memory. What does it matter if they write to this completely irrelevant area? So the main point of this talk is so I can show you how they can use that seemingly innocuous and irrelevant vulnerability to write to anywhere in memory, and then get the ability to do whatever they want. I'm not going to teach you about the wide array of techniques that attackers can use, because I don't understand them myself, but I'm going to show you one example in which this small vulnerability could be used to achieve much more powerful code execution. This is the basic way they want to do it; this is a typical kind of chain. They obtain a specific out-of-bounds read or write, which is what we've got with this FileReader bug. They're going to try to convert that to the ability to read or write anywhere in the process memory. They're then going to try to use that to get code execution of any arbitrary native code that they want to run, and finally they'll try to break out to compromise the whole machine. I'm going to try to show you each of those stages with varying degrees of detail. Now, the attack code I showed you previously was the real attack code used. From here on, I'm taking ruthless liberties, because the attackers did some very cunning things which can't possibly be explained in the remaining however-much time I've got. So I'm not actually going to tell you what they really did from here on. It's going to be somewhat made up, but it does work, and it can be used to take this vulnerability and get code running. I'm also going to talk about some of the mitigations (or, as this slide puts it, "mitigation"), which are the ways we try to stop the attackers getting between these stages, and that's primarily what we in the security team are focusing on, at least in parts of the security team.
So, back to the memory layout. We've got buff2, which is backed by a DOMArrayBuffer, which is pointing to nothing useful. The attacker is going to do something called a heap spray, which simply involves allocating an awful lot of little things. It's basically as simple as that. At some stage, one of them by chance will be allocated in this bit of memory that they can access. That's called a heap spray. So now they're going to throw away all the ones they don't care about; one of them happened to be allocated in this bit of memory that they can refer to. So there are now two different JavaScript objects we care about. There's the original DOMArrayBuffer, which is green, which enables them to write to any memory location between 8 and 13; and then there's this new JavaScript array which happened to be allocated within that area, and they just kept going until one was. The key thing here, as you can see: at memory location 11 there is a C++ pointer. Pointers are not a thing in JavaScript, but now, of course, the attacker has the ability to write a new pointer to that location. It's kind of as simple as that. And of course, when they can write any value to that pointer, they can then change this second JavaScript array to address any memory anywhere else in the process memory space. Basically as simple as that. There are a ton of different techniques which attackers may use to take a tiny vulnerability and make it powerful. I'm not going to talk about any of the rest, and this is also just one way heap sprays can be used. My point is simply: when you think you see a tiny, unimportant vulnerability, the attacker probably has cunning ways to make it powerful. As you can see, some of them are ridiculously simple. So please be afraid: if you have an out-of-bounds write or an out-of-bounds read, you've got to assume the attacker is going to find a way to make use of it.
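The heap-spray pattern above can be sketched as a toy simulation. None of this is real exploit code: the "heap", the allocator, and the freed region are all invented, and the point is only the allocation pattern, allocate lots of small objects and keep the one that happens to land where the dangling pointer can reach.

```javascript
// Toy model of a heap spray: NOT a real exploit, just the allocation
// pattern. We simulate a tiny heap with a deterministic allocator; the
// "freed region" is the range the dangling DOMArrayBuffer can still touch.

const HEAP_SIZE = 64;
const freedRegion = { start: 8, end: 13 }; // the bytes buff2 can still write

// A stand-in allocator that hands out addresses pseudo-randomly (real
// allocators are more predictable, which actually helps the attacker).
let state = 7;
function toyAlloc() {
  state = (state * 48271) % 2147483647; // MINSTD generator
  return state % HEAP_SIZE;
}

// The spray: keep allocating small objects, remembering where each landed.
const sprayed = [];
let winner = null;
for (let i = 0; i < 1000 && winner === null; i++) {
  const addr = toyAlloc();
  const obj = { addr, payload: new Array(4).fill(0) };
  sprayed.push(obj);
  if (addr >= freedRegion.start && addr <= freedRegion.end) {
    winner = obj; // this object now overlaps memory the attacker controls
  }
}

// All the others are thrown away; the overlapping one is kept. In the real
// attack, its internal pointer sits inside the writable region, so the
// attacker can redirect it to any address in the process.
console.log('object landed in freed region at address', winner.addr);
```

If no object lands in the region, the attacker simply sprays again; as the talk says, they can repeat as often as they like, so an unreliable step costs them nothing.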
So, what could the attacker do when they've got the ability to read or write anywhere in the process? Well, they may be able to read data from other websites. Maybe they can steal bank details; maybe they can steal your secrets; maybe they can even write over some important data and cause money to go to the wrong place, or some such. We've talked previously about the multi-process architecture and Site Isolation and so on; that helps a bit, and I'll talk a bit more in a moment about why it's not perfect. The next stage, however, is that the attacker wants to achieve code execution of their own arbitrary native code. Generally speaking, that's just a matter of overwriting any function pointer. If you have the ability to write to anywhere in the process memory space, just find a function pointer and overwrite it. There are plenty: they're all over the stack as your function return addresses, and there are also vtable pointers. Of course we do have mitigations in place here, like permissions that are applied to different memory pages, but that's the basic idea, so I'm not going to talk about that any more. So, yes, we've talked about how the attacker can get code execution running in the renderer process. The goal of the attacker in this case was to move beyond that and compromise the whole machine, which I'll talk about in a moment, but here are some things to remember first. Please do remember that even a single out-of-bounds read or write, or use-after-free, can probably be converted into a powerful attack primitive. Don't require evidence that it's being exploited or that it is exploitable; you just have to assume it is. We in the security team don't know what's being exploited; there's no point asking us. Sometimes we in Google have some tremendous intelligence about what the attackers are doing, but as often as not we don't know. We just assume it's got to be exploited if it is exploitable. And a renderer compromise alone can allow data theft.
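The "overwrite a function pointer" step can only be modeled in a memory-safe language as a simulation. In this toy (all names hypothetical), a vtable is just an array of functions, and the arbitrary-write primitive from the previous stage is a function that can write any value into any slot it can reach.

```javascript
// Toy model of control-flow hijack via a function pointer: a simulation
// only. In C++, a vtable slot or return address holds a code address;
// here we model it as an array slot holding a function.

function legitimateRender() { return 'rendered page'; }
function attackerShellcode() { return 'attacker code running'; }

// A "vtable": slot 0 is the function the program will call later.
const vtable = [legitimateRender];

// The arbitrary-write primitive from the previous stage: the attacker can
// write any value to any "address"; here, any slot they can reach.
function arbitraryWrite(table, slot, value) {
  table[slot] = value;
}

// Normal behavior before the overwrite:
console.log(vtable[0]()); // 'rendered page'

// The attacker overwrites the pointer...
arbitraryWrite(vtable, 0, attackerShellcode);

// ...and the next legitimate-looking indirect call runs attacker code.
const result = vtable[0]();
console.log(result); // 'attacker code running'
```

This is also why the page-permission mitigations mentioned above matter: non-writable code pages and related defenses are aimed at making exactly this redirection step harder.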
So, code execution in the renderer process isn't enough to own the machine completely, because we have this multi-process architecture, which you've heard about a whole bunch of times. Some of those processes are sandboxed, which means they have limited access to the operating system. It's our goal to make sure that the untrustworthy web content always arrives in one of those processes which is sandboxed, and ideally in the renderer process, so that the bugs we will inevitably introduce have limited effects. We're not always perfect at that. Untrustworthy web content does get to other processes; in particular there's the network process, and there's currently work in progress on sandboxing that, at different stages on different platforms. But yes, please think about where your untrustworthy data is coming from. Let's look at a couple of these processes in more detail. The renderer process: we've talked about Site Isolation several times over the past couple of days. The goal is that even iframes end up in different processes, which is great, because when there's malware in suspiciouscatpictures.com, it can't steal your credit card details from bank.com. However, we can't apply that universally. On Android, right now, we tend not to have Site Isolation at all. It's not quite that simple, because there are always different experiments in progress; from Chrome 77 the goal is to have limited Site Isolation on high-end Android devices. But for one-gigabyte devices there's effectively no Site Isolation. So you can't rely on that to keep you safe. You've got to assume that a renderer process compromise is serious, and does enable data to be stolen from one website by another. Badness. Okay, let's look at the browser process. The browser process holds the keys to the kingdom. It arbitrates all the access between other processes. It's got unrestricted access to the file system and the network and so on.
So it's extremely important that if you're working in the browser process, you're super careful. In particular, we have a hard and fast rule called the Rule of Two; Carlos mentioned it. You must never, ever parse untrustworthy data in the browser process unless you can use a safe language. You might be thinking: what safe languages can I possibly use? As Carlos said, on Android you can sometimes use Java. That might seem like a niche solution, but it's important because, on the other platforms, you can usually spin up a sandboxed utility process, and that takes quite a long time on Android. So sometimes you might have to go for a kind of dual-mode option, where you use a utility process on all the desktop platforms and use Java for safe parsing on Android. The other thing that's very important: you've got to assume all the other processes are evil and compromised and out to trick you. So, all those Mojo IPCs you get: please assume that they are trying to fool you. So, to reiterate: our attacker used a FileReader use-after-free to get a specific out-of-bounds read and write. They converted that to the ability to read and write anywhere by doing a heap spray, and then they overwrote a function pointer to get code execution of their own code within the renderer process. This is not strictly true: they actually did something cunning with an AudioContext object that I do not understand, but they could have done this. Did they achieve compromise of the whole machine? Well, before I talk about that, I want to emphasize that this didn't work reliably for them. But that's okay, because they're an attacker, so they can just repeat it; they could put it in a loop. In this case they used a whole load of iframes to do it in different circumstances and with different permutations of timing until it worked. It's really annoying, because we are on the losing side here: for you to be able to debug a problem, you need to be able to consistently reproduce it.
That does not apply to the attacker; they only need to win once. So when you get something that's timing-dependent and annoying, that doesn't make it less serious. Sorry, that sucks, I know. So, did they compromise the whole machine? The answer is yes, but it wasn't using a Chrome bug: they used a Windows kernel bug. Now, we are not by any means blame-free here. As you know, the whole point of sandboxing the renderer process is to restrict access to operating system facilities. On Windows 7 32-bit, that didn't work out: our mitigations were not enough to prevent them from accessing the Windows kernel APIs, which enabled them to trigger this Windows kernel vulnerability. On other platforms, however, it was good enough. On Windows 10 we use the latest OS facilities to restrict the access that the renderer process has to OS APIs (called Win32k lockdown), so this particular attack would not have worked. And on 64-bit machines, some of the earlier stages wouldn't have worked either, because there's a big heap and we use a very cunning allocator called PartitionAlloc, which will try to make sure different types of memory allocations are in different places, so you can't get this kind of confusion where pointers here can be accessed from there, and so on. We can't do that on 32-bit, just because there's not enough memory. So the attack wouldn't have worked on any of those other permutations of Windows. Now, just to be clear, I'm not saying that no variant of heap spray would have worked on 64-bit Windows; it's just that this particular permutation wouldn't have worked. So it's kind of a mixed story, right, in that we failed to prevent the attack happening altogether, but our mitigations were fairly successful on some platforms. So, back to the timeline. We've talked about who attacked whom; what did we do when we discovered it, and how did we fix it? Let's zoom in on that a little bit, in 2019. February the 27th, at 4 a.m., we found it was being exploited. I'm not going to talk about how.
By 7:53 a.m. a crbug had been raised; if you look at these slides you can follow the link and see the entire saga. The very first thing that happened, I believe at 8 a.m., was that we put it into ClusterFuzz. ClusterFuzz is not just there to generate new bits of untrustworthy content; we can also feed existing known bits of content into it to see what happens. And indeed, by 9:05 a.m. ClusterFuzz had reproduced a crash, which is great, because that means we had evidence to go on: we had a stack trace we could start to dig into. And by 2:31 p.m. we had a CL up, which is super cool. That's really, really quick; that's absolutely epic. Thanks to Marine, whose name I'm going to mispronounce, and Adam for getting that done. But the story doesn't end there: it got merged by 5:51 p.m., also pretty epic, and then two days later it was in a release that we were starting to push out to the public. Again, pretty epic. But it took about a week for it to get out to 50% of users, because that's how long it takes to get fixes out there, and that was for Windows users; if we were talking about Android it would be much, much longer.

And here's the really bad news: we have an open-source development process, so during that time we effectively made things worse, because we were publicizing details of the fix in our Git repositories, and we know there are companies, organizations, and probably all sorts of other baddies who monitor that. So it's super important not just to fix things quickly, but also to help us get fixes merged back to beta and to stable as rapidly as possible, so we can close that vulnerability window.

Yeah, question? That's a great idea, I love it: the suggestion is that all the different changes should include words like "Spectre" and "Meltdown". A security genius. So, specifically on this need to merge things in: you can expect the release TPMs and the security TPMs to be pestering you, you can expect Sheriffbot to be nagging you to merge stuff back to beta and to stable, and you can expect us to be getting on your case as well. So you don't necessarily need to do anything active here, but you really do need to be responsive when we start pestering you.

All right, that's basically it; two more slides. There are other types of bug that I haven't talked about at all, a ton of different things; I don't want to imply everything is memory safety. There's UI spoofing; we have guidelines on even things like how to display URLs. Question? Oh, how do we know what to feed into ClusterFuzz? The means by which we determined this was being exploited also gave us the attack code that we could feed in. And yeah, on UI spoofing there's a ton of stuff there I don't know about; if you have questions on that, I really hope you do, because Carlos is here and he'll have to answer them. There are protocol and TLS vulnerabilities, all sorts of stuff like that. There are various headers which prevent one website from attacking another website, like CSP and so on. There are all the terrifying timing attacks and side-channel attacks, which I also don't know enough about, but my boss Andrew is at the back, so please also ask questions about those. And there's stuff about certificates: if you haven't been following all the interesting developments around Kazakhstan, I suggest you Google for "Chrome plus Kazakhstan", because it's fascinating.

So finally, stuff to remember. We will all introduce security bugs. You have to assume that people will be able to exploit them; as you've seen, trivial bugs can be exploited, and if you can take one thing away, please take that away. Please write fuzzers: they're one of our best lines of defense, and they can be two lines of code. We have 450, and it'd be great if we had twice that many. Any time you write a new parser or handle untrustworthy data in a new way, please write a fuzzer. Don't parse untrustworthy data in the browser process. Don't trust other processes. Act quickly when bugs are reported, and do act quickly to merge things back to beta and stable. And please do get security review early. Above all, just remember that we really have an exceptionally hard job here, and it's exceptionally important that we get it right, because we're in an incredible position of trust on the user's machines. So please do reach out to us as early as possible for security reviews, and generally with any questions you have, because we really do want to work with you to make sure we all do the right thing here. That is it; thank you very much for listening.
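The fuzzers mentioned above really can be tiny: in Chromium they are typically libFuzzer harnesses, where the whole harness is a single `LLVMFuzzerTestOneInput` function that hands bytes to your parser. As a toy illustration of what a fuzzer does (not Chromium's actual tooling), here is a minimal mutation-based fuzz loop in Python; `parse_header` is an invented, deliberately buggy parser standing in for real untrustworthy-data handling.

```python
import random

def parse_header(data: bytes):
    """Invented toy parser with a planted bug: it trusts a length byte."""
    if len(data) < 2 or data[0] != 0x7F:
        return None
    length = data[1]
    body = data[2:2 + length]
    checksum = data[2 + length]   # BUG: may index past the end (IndexError)
    return body, checksum

def fuzz(target, seed=b"\x7f\x04abcd", iterations=2000):
    """Repeatedly mutate known-good inputs and record ones that crash."""
    rng = random.Random(0)        # fixed seed for reproducibility
    corpus = [seed]
    crashes = []
    for _ in range(iterations):
        sample = bytearray(rng.choice(corpus))
        if sample:
            # Flip one byte at a random position.
            sample[rng.randrange(len(sample))] = rng.randrange(256)
        try:
            target(bytes(sample))
            corpus.append(bytes(sample))   # survived: keep it as a new seed
        except Exception:
            crashes.append(bytes(sample))  # crashed: save the reproducer
    return crashes

crashes = fuzz(parse_header)
```

Even this crude loop quickly finds inputs whose length byte sends the parser past the end of the buffer, which is exactly the class of bug that becomes an out-of-bounds read in C++. Real fuzzing engines like libFuzzer add coverage guidance, sanitizers, and corpus management, but the harness you write is not much bigger than `parse_header`'s call site here.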