So, watching some of the presentations this morning, it's a pretty exciting time to be a performance developer, a performance engineer, because, frankly, there's just so much data becoming available. It wasn't long ago, just a couple of years ago, that we couldn't even talk about something like RAIL because, frankly, we couldn't surface the right data for you to act on. And I remember the first iterations, when we started working on it, where we were exposing this data as just big data dumps, and it was very hard to process, right? Now you have DevTools threading it together, bringing it into a cohesive picture, and it's becoming a really viable thing that you can tackle. So this is all exciting, and this is necessary, and it's critical to our success, because you need that sort of controlled environment where you can capture the data. You can take the trace, you can bring in a trace from another developer, analyze it, look at all the parts of the stack, and piece them together. So you're not looking at memory versus frames versus network in isolation. All of those things interact. And really, the big breakthrough that we had in the last year or so is starting to piece all those things together, so you can actually figure out what's going on. So this is critical, but I'm going to claim that it's actually not sufficient, because once you've done all of this controlled testing, and you have all your regression monitoring, and you've run all your builds and all the rest, you then deploy your application to the real world, and the real world is kind of messy, as you probably realize, because there's just such big variability in the devices, in the networks, in the conditions that your application may run in. And it's a little bit crazy, the kinds of reports that you'll get back from real users. 
For example, you might never expect to hear that, hey, my application runs slower when it's sunny outside. But that is exactly the kind of scenario that at least one of the teams I worked with has encountered. This was an obvious head scratcher, but after doing some extensive user research, they realized that it's an application many users use while driving, so they like to mount their phone on the dashboard. The dashboard gets hot, thermal throttling kicks in, the phone is throttled, and the application runs slower. So you get slower framerates on a sunny day, and that's kind of crazy. And it's not the kind of thing you would model in DevTools. I don't know, maybe at the next Chrome Dev Summit we'll add a new weather emulation mode. You heard it here first, right? This is funny, but it's true. This is the kind of stuff we have to deal with. And this is why we need real user measurement: both to capture the real-world data coming from users, and also the APIs to introspect and figure out what's going on on the device. So this is the part I want to focus on here. We've heard a lot about debugging in DevTools and other things. The part I'm interested in is: what can we gather from real users? It turns out this is an area we've been focusing on for a while. We have the W3C Web Performance Working Group, where all the different browser vendors talk about this. And internally in Chrome we spent quite a bit of time over the course of the last year digging into all of the subsystems, trying to understand: do we have the right APIs to identify and measure each component of RAIL? If we do or if we don't, how can we make them better? And if necessary, can and should we add new capabilities to the platform, so you can actually bring this data in from the real world? The good news is, we're not starting from scratch. 
The Web Performance Working Group has been hard at work for quite a while, and we have a collection of APIs that already fill in some of these gaps. In particular, we do a pretty good job on the loading part, the networking part. If you're not familiar with these APIs, I definitely recommend you check out the performance timing primer, which threads the whole story together: what's a high resolution timestamp and why do you want it? What's the timeline? How do these events get emitted? And how do you measure, say, the loading time of a web page versus its resources and all the rest? So we have some good resources there. But of course, it doesn't capture the full arc of the RAIL experience, which is not just the first load but also the interactions afterwards. So we started looking at, first of all, how are developers using the current APIs today? Are they sufficient even for the load part? And can we do better? One pattern that we found, maybe not this exact pattern but something very similar, is that due to the nature of how these APIs have been implemented and specified, many developers are actually polling for performance data. The typical pattern is that you want to observe that some event has happened. For example, a resource fetch has finished, or maybe the application is emitting custom metrics. So if I'm an analytics vendor, I'll just sit there and poll the page periodically. I'll pull all the entries out of this global buffer, and then try to diff them against the previous state to figure out if something is new. And this creates a lot of unnecessary work for the platform, because oftentimes there are just no new events. It also has some funny race conditions: because it's a global buffer, if somebody else comes along and clears that buffer, some other consumer loses data. So this is not great. 
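To make that polling pattern concrete, here's a minimal sketch of what many analytics snippets end up doing. `diffNewEntries` and the timer wiring are hypothetical illustration code, not a platform API, but the append-only-buffer diffing and its race condition are exactly the problem described above.

```javascript
// Anti-pattern sketch: poll the global buffer on a timer and diff
// against the previous snapshot to find "new" entries.
function diffNewEntries(previous, current) {
  // The buffer is append-only, so anything past the previous length is
  // new -- unless another script cleared the buffer in between, which
  // is the race condition mentioned above.
  return current.slice(previous.length);
}

// Browser-only wiring; guarded so the helper stays testable anywhere.
if (typeof document !== 'undefined' &&
    typeof performance !== 'undefined' && performance.getEntriesByType) {
  let previous = [];
  setInterval(() => {
    const current = performance.getEntriesByType('resource');
    const fresh = diffNewEntries(previous, current);
    previous = current;
    // Usually `fresh` is empty, so this tick was pure wasted work.
    fresh.forEach(e => console.log('new resource:', e.name));
  }, 1000);
}
```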
So after some work in this space and thinking through the plausible solutions, we've actually introduced a new interface. This is already available in Chrome Canary, so you can boot it up and play with it today. It's an observer interface: instead of polling for these events, you declare what types of events you would like to listen to. For example, if you're interested in fetches going on in the browser, you can subscribe to resource events, which will be emitted when a resource has finished fetching. Or things like marks and measures, which are user timing metrics that the application may be emitting. So this is an API that an analytics vendor, or your application, can subscribe to and get these notifications. And we can actually deliver these notifications in a smart way. You've already heard about the concept of idle time and pushing non-critical work into these idle blocks. We can schedule the delivery of these events into idle time such that your rAF loop does not get interrupted. So you can play with this today, and I encourage you to; it's a very nice addition to the platform. Now let's move on to response. Response is a big component of RAIL, right? We've learned that we want to respond to all user input within 100 milliseconds. So where exactly is that 100 milliseconds spent? We spent quite a bit of time with the input team trying to figure out all the different stages where the latency goes today. When you tap that glass, the hardware actually has to register that fact and deliver it to the operating system. The operating system then needs to dispatch that event to the browser. The browser has a bunch of event loops, and needs to deliver the event to the correct one. Finally, that event loop dispatches the callback to your handler, and you execute your code. So there are quite a few layers in here. And we did some testing; this was just one device measured in the lab. 
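Going back to the observer interface for a moment, a minimal subscription might look like this sketch. The entry types match what's described above; `handleEntries` is just a hypothetical formatting helper.

```javascript
// Sketch of the PerformanceObserver interface described above (in
// Chrome Canary at the time of this talk): declare the entry types you
// care about, and the browser calls you back with batches of entries.
function handleEntries(entries) {
  // Summarize each entry: marks/measures come from user timing,
  // resource entries from resource timing.
  return entries.map(e => `${e.entryType}: ${e.name} @ ${e.startTime}ms`);
}

if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  const observer = new PerformanceObserver(list => {
    // list.getEntries() holds only this batch, so there's no global
    // buffer to diff against and no race with other consumers.
    handleEntries(list.getEntries()).forEach(line => console.log(line));
  });
  observer.observe({ entryTypes: ['resource', 'mark', 'measure'] });
}
```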
The actual numbers will vary depending on the hardware and other things, but we found that the hardware-to-operating-system latency can be as high as a couple of frames, so 30 to 50 milliseconds, which is quite significant. Similarly, getting the event from the operating system to the browser, and then finally to the application, can also take another couple of frames. So if you think about our original 100 millisecond budget, we're already spending quite a bit of time just getting the event to your application. And then, finally, your application has to run. Then we looked at how developers are measuring this today. You would use something like the user timing API: you get the callback, you do a mark at the start, you finish your work, you do a mark at the end, and then you have the duration of your handler. And that's your response time. Of course, that's not great, because we want to provide better accountability, both for the browser engineers and for the developers building all these applications. So there's a big question mark here: how do we surface this? The proposal that we have on the table right now, which is actually being implemented in Chrome and a few other browsers, is to change the definition of the event timestamp. There's an existing timeStamp property on the event object, but it turns out that today, due to various implementation bugs, it's not terribly helpful: it uses a different time base, and it's not terribly accurate. So we're exploring changing this to a DOMHighResTimeStamp, a high-resolution timestamp with the same time origin as all the other performance events. And more importantly, for this particular case, for input events it would reflect the timestamp of when the operating system got the event. So this is a much more accurate, higher-fidelity timestamp that you can use in your applications, which is quite nice. 
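Assuming the proposed high-resolution event.timeStamp lands as described, measuring input delivery latency becomes a simple subtraction; `inputDelay` is a hypothetical helper name for illustration.

```javascript
// Sketch: with a DOMHighResTimeStamp on the event, the time from the OS
// receiving the input to your handler running is just a subtraction,
// since both values share the same time origin as performance.now().
function inputDelay(eventTimeStamp, handlerStart) {
  return handlerStart - eventTimeStamp;
}

if (typeof document !== 'undefined') {
  document.addEventListener('touchstart', event => {
    const delay = inputDelay(event.timeStamp, performance.now());
    // Beacon this to analytics to see real-world input latency.
    console.log(`touchstart dispatch delay: ${delay.toFixed(1)}ms`);
  });
}
```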
And then for other events that are not input events, it would be equal to the time when the event is created by the browser. This is going through the implementation phase right now, and I'm hoping to see it come to Chrome soon. The way you would use this is: if you're trying to track the response time of, say, your touch events, you subscribe a touchstart listener, and you can then compute the duration by subtracting the event timestamp from performance.now(), which also returns a high-res timestamp. That gives you the full duration, or at least a much more accurate duration, which is nice. Ideally, this is where we want to be. Except, as we started digging further, we realized that there are some gotchas here. When we look at scrolling, which is one of the most fundamental and important interactions on your device, we found that in many cases there's a huge delay, 100 milliseconds, sometimes even 500 milliseconds, before the browser can actually do the scroll. And if you dig in to understand why, it's actually quite tricky and interesting. So let's go back to our original flow diagram. You get the kernel event, you dispatch it to the browser, and the browser says, hey, are there any touch handlers registered on this page? Because if there aren't, great, I'm done, and I can just do compositor scrolling. I don't have to talk to the main thread, or do any work at all. This is the fast path, and this is nice and beautiful. But in practice, almost every application will have some sort of handler registered. So really, what happens is we say, OK, fine, we can't take the compositor path, now we need to dispatch this event to the main thread. And just this one hop, this synchronization to the main thread, is already a bottleneck, because adding one single handler will force this entire path. 
So we dispatch it to the main thread, we run your handler, and then we have to check: did you call preventDefault? Because preventDefault allows you to stop the browser from performing its default action. So we have to block on this, and this is critical. And of course, this is not great, because now if, let's say, your application is executing something on the main thread, scrolling is blocked. And even just the synchronization is expensive. So clearly, we need something better. This is a good example of some missing functionality, some missing APIs, that would allow us to build much better and more responsive applications. One proposal that's currently being discussed and is under development is this idea of passive event listeners, where when you register the event listener, you can declare it to be passive. Effectively, it does two things. It promises that you will not call preventDefault; or rather, even if you call preventDefault, it is effectively a no-op, so you will not prevent the default action. And that allows the browser to proceed with scrolling without blocking on that execution. If we go back to our original diagram, there are a lot of pointy arrows here, but the important part is at the bottom. We say: is there a touch handler? Yes, there is. Is it passive? If it is declared to be passive, then we can continue with the scrolling. But we will still dispatch the event, so you'll still get it in JavaScript land, and you can still process what you need. And ideally, you would take that handler and move its work into some idle time, where you can process it and unblock all of your critical work, such that your animations run as fast as they can. And this is really nice. As I said, this is one of the proposals on the table, and I'll be curious to see if there are others, but this is what we're currently experimenting with. 
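A sketch of what registering a passive listener might look like, assuming the options-object form of addEventListener that's under discussion. The feature-detection trick and the `listenerOptions` fallback helper are illustration code for browsers that still treat the third argument as a boolean.

```javascript
// Fall back to the legacy boolean `capture` argument when options
// objects are not understood.
function listenerOptions(passiveSupported) {
  return passiveSupported ? { passive: true } : false;
}

function supportsPassiveListeners() {
  let supported = false;
  try {
    // The getter fires only if the engine actually reads the options
    // object; older engines never touch it.
    const probe = Object.defineProperty({}, 'passive', {
      get() { supported = true; return true; }
    });
    if (typeof window !== 'undefined') {
      window.addEventListener('probe', null, probe);
      window.removeEventListener('probe', null, probe);
    }
  } catch (e) { /* ignore: no options-object support */ }
  return supported;
}

if (typeof document !== 'undefined') {
  document.addEventListener('touchmove', event => {
    // preventDefault() would be a no-op here, so the browser is free
    // to start the compositor scroll without waiting on this handler.
  }, listenerOptions(supportsPassiveListeners()));
}
```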
So as a quick summary for response: we have the event timestamp, which provides much better granularity for response time. Do be careful with event listeners; this is something you may want to audit on your application today, just to see: am I registering these things? Are the things I'm including on my page registering these things? Because even just including one may have a much higher cost than you expect. And finally, you probably want to start experimenting with PerformanceObserver, at least checking if it's there and leveraging it where possible, because that'll allow you to eliminate a lot of this extra polling work. So, moving on: animation. Here's a big use case that we found. Perhaps you may not think of it as animation, but it has come up here because it overlaps with scrolling, with animation, and with a lot of other use cases: visibility. There are a lot of best practices that we've crafted around some concept of visibility, where we say, look, we have these long pages, but not everything on them is equally important, in the sense that some content is visible, and you may want to prioritize it. Maybe that's fetching, maybe that's rendering, maybe it's doing some other work. There are things like lazy loading and infinite scrolling, which depend on knowing the position on the page. Then there's analytics: I want to know if a certain thing was seen, and how long it was seen for, among other use cases. And this is very, very common. We've certainly had many discussions around this topic in the Web Performance Working Group, but we never really had a good solution for it, or even a good collation of all these use cases. What it really boils down to is the core use case of the developer saying: I want to know when this particular element or section of the page intersects with the visual viewport, if you will. 
And if we think about the properties of such an interface, if it were to exist: we would like it to be simple and easy to use, because that's certainly not the case today. If you want to do visibility patterns today, it is possible, many people have built them, but it's actually very, very hard and tricky. And oftentimes it comes with a very grievous cost in terms of performance, because you have to listen to scrolls and everything else; all the things we just talked about, you're doing all of them. So you're pretty much guaranteed to be on a slow path some fraction of the time. So we want great performance, obviously. Another property is that we probably want it to be expressed as a passive query. This is a common thread that I think you're going to start to see here: instead of polling and asking, we want you to just express what you want to know, and we will notify you. This interface also allows us to delay or schedule the delivery of such events into appropriate time blocks, where you're not interfering with other critical work. And because we're very inventive with our names, we call it IntersectionObserver. This is another API that's currently being implemented and discussed. The idea here is pretty simple. As the name implies, you have an observer, and you can give it an element; here I'm using a document query selector and just passing it an element. The root margin option is a funny name, but really what it lets you do is define a margin. So if you think of that green block on the right in the diagram, there are the red blocks, which effectively set the margin. So I can express things like: I would like to know when this thing is one and a half, or half, or however many viewports away. Which is quite nice, because if you're building something like, say, a lazy loading section, you could actually say, well, I don't want to start loading the assets only when that section becomes visible. 
I would like to start preloading them some time before, such that those resources are actually done by the time the user sees them. So you can play with this, and we definitely welcome your feedback, because there are a lot of different use cases that I think this would make much, much more efficient. Take a look at the spec; there are implementation bugs open on Chrome, and I'm hoping to see it land soon, because it'll certainly be very helpful. Now, another funny thing I noticed. As I was watching the presentations this morning, I think I figured out Paul's master plan. A couple of summits ago, or maybe at the last summit, we started talking about requestAnimationFrame and how you have 16 milliseconds to do all your work. Then, when Paul wrote his rendering course, he actually changed that number to 10 milliseconds; now we have only 10 milliseconds to do this work. And then, today, in his presentation, he lowered the number again, to 8 milliseconds. So I think his master plan is to just keep lowering it, gradually turning up the heat, by which point, at some Chrome Dev Summit 2020, it'll just be zero: you're not allowed to execute rAF callbacks at all. But more seriously, this does get to an important point, which is: how much work can you do in 16 milliseconds? And why is it that we say 10 milliseconds, 8 milliseconds, and you'll hear 5 milliseconds thrown around? The answer is, frankly, that there's just such high variability in the CPUs and the GPUs and all the rest. What runs efficiently on your particular hardware may run slowly somewhere else. And there are other criteria that may change this. We're now actually seeing devices that have small cores and big cores, so your application may migrate between a small core, which is a slower core, and a big core, based on the temperature, the battery, and the other environmental factors around you. 
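Returning to the lazy-loading idea for a moment: a sketch of the IntersectionObserver shape described above, where the root margin expands the viewport rectangle so you hear about elements before they scroll into view. The exact entry fields track the draft spec and should be treated as assumptions; `visibleTargets` and the `data-src` convention are hypothetical illustration code.

```javascript
// Sketch assuming the draft IntersectionObserver API: the callback
// receives entries, and rootMargin lets us ask to be notified while an
// element is still half a viewport away.
function visibleTargets(entries) {
  // Keep only elements that now intersect the (expanded) viewport.
  return entries.filter(e => e.isIntersecting).map(e => e.target);
}

if (typeof IntersectionObserver !== 'undefined' &&
    typeof document !== 'undefined') {
  const observer = new IntersectionObserver(entries => {
    visibleTargets(entries).forEach(el => {
      // Start the fetch early so the image is ready by the time the
      // user actually scrolls to it.
      el.src = el.dataset.src;
      observer.unobserve(el);
    });
  }, { rootMargin: '50% 0px' }); // notify half a viewport ahead
  document.querySelectorAll('img[data-src]')
    .forEach(el => observer.observe(el));
}
```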
So it's really quite unpredictable. And the important part here is that it's actually an unknowable answer. I cannot tell you how much work you can execute, even if you ask me right before, because there are other things that may kick in. The operating system may schedule us out, and another application may want to run. So it's really hard to reason about. What you end up with is a picture that looks something like this. We have different graphs; these are just different scenarios, where we execute some of your code and then the browser has to do some work. And depending on what kind of operations you executed, the browser may have to do different amounts of work. You may have invalidated a huge section of the page, which means we need to repaint and do other expensive operations; or maybe you touched something that's very efficient, and it's all very quick. And if everything runs nicely, and you follow Paul's advice and say, great, I'll limit myself to eight milliseconds, and I'll be very careful about the kinds of operations being executed, then everything will finish quickly, and I know, I'm pretty much guaranteed, that I'll get my smooth and solid 60 FPS all the way through. Except there's another problem here, right? Now we have this unused time. We're leaving this time on the table. This is time that I could have spent making my application better, because I could execute better animations and do more work. If I'm running a spell checker, I could actually do more work and provide a better service to my user. So we have this odd trade-off, and there's no one number that really makes sense. So one question to ask here is: do you even know when you're missing frames out in the wild? When we deploy applications today, we have pretty good tooling for getting data on network performance, but we don't have any way to gather data on the animation part. And that's what frame timing is all about. 
Effectively, what we're doing here is exposing this concept of a frame, which is loosely the time between two vsyncs in the browser. We want to capture that and deliver an event when we exceed that budget, such that you can subscribe to it and get that data back to your developers. The important part here is that we're not exactly telling you why you're slow; that's the kind of data you would get in DevTools. We're just telling you that you're slow, such that you can identify that in some part of the world, when the temperature is really high, you are actually missing frames. Sounds kind of crazy, but that's what it is. And once again, you would use PerformanceObserver to get at this data. You subscribe to the new frame entry type, and periodically you get events that give you the duration of the slow frame, such that you can aggregate it and run your own logic on it. You could even adapt to it if you wanted, because if you're consistently getting these slow frames, perhaps you want to throttle your canvas, or something else, and run it at a slower refresh rate. So this could be both a runtime optimization and also just a beacon back to analytics, such that I can understand why and where this problem is happening. I think this will be very useful. So, some takeaways here. IntersectionObserver: I think this is going to be a very powerful and very big feature for many different applications, so I strongly encourage you to take a look at the explainer and give us feedback; this is the right time to give us feedback. And the Frame Timing API will definitely help as well, for identifying animation problems. Idle is something that we've already talked about a little bit, so let's dive a little deeper. This is that earlier problem that I surfaced: we execute our work in some limited amount of time, because we said we're going to fix it to eight milliseconds or less. 
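Returning to frame timing for a moment: a sketch of consuming the proposed frame entries via PerformanceObserver. The 'frame' entry type and its fields follow the draft described above, so treat the shapes as assumptions; `slowFrames` is a hypothetical helper.

```javascript
// Sketch: count frames whose duration blew the vsync budget, using the
// proposed 'frame' entry type (a draft at the time of this talk).
const FRAME_BUDGET_MS = 1000 / 60; // ~16.7ms per frame at 60Hz

function slowFrames(entries, budget = FRAME_BUDGET_MS) {
  return entries.filter(e => e.duration > budget);
}

if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver(list => {
    const missed = slowFrames(list.getEntries());
    if (missed.length > 0) {
      // Aggregate and beacon back to analytics, or adapt at runtime,
      // e.g. throttle a canvas animation to a lower refresh rate.
      console.log(`missed ${missed.length} frame budget(s)`);
    }
  }).observe({ entryTypes: ['frame'] });
}
```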
There's some unknown amount of time that the browser will take to do its work, and it's really hard to reason about how much time that will actually be. If you constrain yourself to some fast operations, you can kind of guarantee that you're not going to hit the really expensive paths, but it's still hard to reason about. But then there's that chunk of time left at the end. And the better performance you want to get, the more of that chunk is going to be left there, which is kind of unfortunate, because we'd like to make use of it; that's a lot of time just being left idle. So we have this new API, requestIdleCallback, and Paul already talked about it, so I'm not going to go into great detail. Effectively, it allows you to move some or all of the work that is not critical to the actual rendering out of your rAF callbacks and other logic, and into these idle blocks, where you know you're not going to blow the budget as long as you stay within the provided deadline. This is the kind of thing where you want to be very precise about how much time you have left, and make sure you don't overrun, because if you do, then we're going to miss a frame, and that's definitely the kind of thing you want to avoid. Paul has a great write-up with a number of different use cases for how to use it, where to use it, and all the rest, so I encourage you to check that out and experiment with it. This is already available in Chrome; you can start leveraging it, and I'm hoping to see it in other browsers soon as well. So: move your non-critical work into idle callbacks. You've heard this theme; we're moving garbage collection, we're moving the scheduling and delivery of these performance events into idle time. And I think you're going to find that we're going to move more and more work there, such that your application is unblocked to do the critical work it needs to do to make everything nice and smooth. 
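As a concrete sketch of staying within the deadline: drain a queue of non-critical tasks from requestIdleCallback, checking timeRemaining() before each one. The task queue itself is hypothetical application code.

```javascript
// Sketch: run deferred, non-rendering-critical tasks only while the
// browser says there is idle time left in this frame.
const idleTasks = [];

function processIdleTasks(deadline) {
  // Re-check the deadline before every task, so a long queue can't
  // overrun the idle period and cause a missed frame.
  while (idleTasks.length > 0 && deadline.timeRemaining() > 0) {
    const task = idleTasks.shift();
    task();
  }
  // Still work left? Ask to be called back in the next idle period.
  if (idleTasks.length > 0 && typeof requestIdleCallback !== 'undefined') {
    requestIdleCallback(processIdleTasks);
  }
}

if (typeof requestIdleCallback !== 'undefined') {
  idleTasks.push(() => console.log('warming a cache, sending analytics, ...'));
  requestIdleCallback(processIdleTasks);
}
```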
And then finally, load. This is something that we actually have pretty good tooling for already, but we're not ignoring it either. Yesterday, we learned a lot about some of the new and exciting architecture primitives that we're getting with Service Worker, and in particular, this idea that you can now take your entire critical path and move it onto the device itself. So instead of being at the mercy of the network and at the mercy of the RTT, you can take your HTML, your CSS, and your JavaScript, the things you need to actually paint some pixels on the screen, and put them right on the device. That's the app shell, and you can make it visible regardless of the network you're on, or whether you even have connectivity at all. So even if you're offline, you can paint those pixels and then put up something, like a loading spinner, to let the user know that, hey, I'm fetching content. I think this is very, very important. But of course, the next question is: I need to start the worker, and that also takes some time. There's another thread running; maybe even many threads, because we may have many different service workers. So we'd like to know how long that takes. And you can actually measure this now. We've added the workerStart attribute to resource timing and navigation timing. Based on the timestamps, you'll be able to tell whether a worker was active, and how long it took to start the worker if this request was blocked on starting it. So the difference between workerStart and fetchStart, if you're familiar with those specifications, is the time it took to start the worker, which is very handy. This is already enabled in resource timing in Chrome, and we're working on adding it to navigation timing as well. So I think this will be critical for augmenting your existing RUM analytics once you enable service worker. 
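A sketch of reading the new workerStart attribute out of resource timing; `workerBootTime` is a hypothetical helper, and the zero check follows the behavior described above, where workerStart stays at 0 when no service worker handled the request.

```javascript
// Sketch: approximate service worker startup cost per request.
// workerStart is 0 when no active worker intercepted the fetch;
// otherwise fetchStart - workerStart covers the worker boot time.
function workerBootTime(entry) {
  if (!entry.workerStart) return 0; // no service worker in the path
  return entry.fetchStart - entry.workerStart;
}

if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  performance.getEntriesByType('resource').forEach(entry => {
    const boot = workerBootTime(entry);
    if (boot > 0) {
      // Beacon this alongside the usual network timings in your RUM data.
      console.log(`${entry.name}: worker startup cost ${boot.toFixed(1)}ms`);
    }
  });
}
```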
And then similarly, you also need the same kind of tooling and capabilities within the worker itself. Because now that you're scripting a lot of your fetches and other logic within the service worker, you probably want to move some fraction of your RUM analytics to live in the worker as well. For that, you need things like user timing, which allows you to specify custom metrics: you can put marks along the timeline, or measures, which capture the time span between two marks. And you also get the same fidelity of resource fetching data, for things like how long the TCP connection took, and the DNS lookup, and all the other components of making the request. This is now also enabled in service workers, and you can start using it. An important point here: we have resource and user timing, and there's also navigation timing, which is specifically for the HTML document. But navigation timing is not enabled in the worker, because from a worker's perspective, every fetch is a resource fetch. Even a navigation request itself is no different; you get the same fetch event. So there are no plans to enable navigation timing, because everything looks like a resource fetch. If you're wondering, that's why. And this is available in the worker today. So if you're running a service worker, definitely enable this: workerStart, and resource and user timing. And with that, I think we're at the end. Thank you, guys.