In the summer of last year, Chrome began collaborating closely with frameworks. It started off tentative, because there's history there, right? But over time, it has grown into a really important way that we develop new APIs and that we ensure that the things we're building are actually going to be helpful both for frameworks and for developers. We're going to share a bunch of the cool collaborations that have come out of that this year, but first I'd like to tell you a story.

One day when we were all working together, Sebastian, who's on the core team at React, and Addy, who works at Google, were talking about image perf. Now, these are two smart people who deeply care about performance. Addy was talking about bytes over the wire, saying we've got to reduce the number of bytes over the wire. Sebastian, on the other hand, was saying we need layout stability: we need images to appear all at the same time, and we need to preload them so that the user never gets that weird uncanny valley thing where the page is visible but the images haven't loaded yet. Again, two smart people who both really care about performance, prioritizing completely different things. What do we make of that?

And then it occurred to me: this is the 10-year anniversary of BigPipe. It's a revolutionary technology that allowed Facebook to deliver parts of the page independently from the others. So, for example, if you look at the screen here, the compose view could load completely independently from the feed component. It led to massive performance improvements, as you can see from this graph, particularly on Chrome. And this is when it struck me. Facebook has had BigPipe for 10 years now. Of course they see performance differently. They haven't dealt with the mess of route-level code splitting in ages. I think that we can bring all of the power of Facebook and Google internal tools, via frameworks, to everyone on the open web. Working together, we can take it even further.

We're on the brink of a performance revolution, led by frameworks, inspired by powerful, battle-tested internal tools and technology. And we all have a role to play in it. No matter what your role on the web is, we're all going to need to play a part in making this successful. So whether you're a framework, or what we tend to call a meta framework, which is like your Next or your Nuxt or even the Angular CLI, those sorts of wrapper frameworks have a huge role to play. Bundlers, package managers, application authors, and of course node module authors, and we'll get to what folks who author node modules can do as well. And of course, browsers. At Chrome, we're just super excited about how we can help.

So today, we're going to start off with a sort of year in review, sharing all the collaborations that we have going on with frameworks. Then we're going to talk about adding a little more nuance to our performance goals. Next, we'll talk about the secrets of Facebook and Google internal tools. What this will be useful for is that most of the frameworks are going in the direction of these internal solutions anyway, so this will give us a peek into the future of what frameworks will bring us. And then we'll finish up with a bit about bundle bloat and NPM modules. By the end, everyone should have an idea about how they can participate in this future.
So when I joined Chrome in June of last year, I started reaching out to frameworks, asking questions like, what's your wish list for the web platform? And then connecting them with engineers working in those areas. I also started reaching out about new APIs, asking, hey, is this going to work for you? Do you want this one or that one? Which one looks better? It grew from there. We can't possibly talk about all of the collaborations we've done this year, because so many frameworks have helped us out. Shubhie and I have reached out more times than I can count to different frameworks to ask them little questions about APIs, or what would be a better way to handle X or Y, and they've been incredibly helpful. But we picked four things to talk about today, and it was hard to pick: code chunking, scheduling, isInputPending, and display locking.

So first up, let's talk about code chunking. One of the fundamental goals of the browser is to handle user interaction instantaneously. If someone clicks or taps, we never want them to feel like there's any distance between what they're doing and the thing they're trying to interact with. JavaScript runs on the same thread where user input is handled, mostly, with caveats for the compositor thread. So in order to keep the UI snappy, JavaScript tasks need to be broken up into small chunks. If the user clicks while a long task is executing, represented by the long yellow bar, they might have to wait a very long time. This graph is really hard to understand on a slide, so don't worry about that at all. What we really want you to notice is the pink part. An engineer on our team, Katie Dillon, did a big analysis of tap latency, queuing, and handling time, and found that the top contributor to it was actually v8.execute. That means the application's JavaScript execution. At any given time, you have between 10 and 50 milliseconds to execute JavaScript before you'll block user interactions, depending on what they're doing. So how are we supposed to fit into that deadline?

Realistically, this is where frameworks can help. Frameworks like Vue and React have been starting to break up their render work into tiny chunks. For example, React experimented with yielding between nodes of the render tree. Yielding means pausing so that other queued-up work or browser work can execute, and that includes taps, clicks, and other scripts on the page. This is great because it allows the browser to process user clicks when they happen during the framework's render cycle. We now have these tiny chunks of JavaScript being executed, and the application author doesn't need to understand it at all. The framework just manages it for you. This is fantastic. React built a scheduler to allow these bits of code to be efficiently executed. Vue is also experimenting in this space, and Ember has built a scheduler, so everybody's working towards some common goals. But unfortunately, a lot of the code is outside of the framework's control. That means that we have a coordination problem. A single framework doesn't control the entire app, and any other code on the page can starve the framework scheduler. Another challenge for framework schedulers is that they lack adequate signals that would let them know when to schedule things and when not to, for example around things that the browser is doing, like garbage collection. As a result, frameworks reached out to Shubhie and me and wanted to talk about the idea of making an in-browser scheduler.
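As a rough sketch of the chunking-and-yielding pattern described above, in plain JavaScript rather than any particular framework's scheduler:

```js
// Process a long list of work items in small chunks, yielding back to the
// browser between chunks so that clicks, taps, and rendering aren't blocked
// by one long task.
async function processInChunks(items, processItem, budgetMs = 5) {
  let deadline = performance.now() + budgetMs;
  for (const item of items) {
    processItem(item);
    if (performance.now() >= deadline) {
      // Yield: give the browser a chance to handle pending input and other tasks.
      await new Promise(resolve => setTimeout(resolve, 0));
      deadline = performance.now() + budgetMs;
    }
  }
}
```

Even so, every yield costs a task round trip whether or not input is actually waiting; that coordination gap is part of what motivated both the in-browser scheduler conversation and the isInputPending API we'll get to in a moment.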
We both thought the idea of an in-browser scheduler was pretty interesting, and so we decided to pursue it. We spoke to Maps. We spoke to Airbnb, to Ember, Angular, React, and many others to get a sense of their scheduling needs. We had a design session with the React core team. We studied a bunch of different scheduler implementations, and we think we're starting to get the shape of the problem.

So how does browser scheduling work today? Let's take a walk through it. There are four basic priority levels: immediate, render blocking, default, and idle. The first two, immediate and render blocking, lead to bad user experience. They both block clicks and rendering and taps and everything else like that, so we need to use these two task priorities as little as we can. The last queue, idle, is often too late for important work. It's also vulnerable to being starved by basically anything going on in the other queues. So this priority can't really help us. The default queue is sort of the junk drawer of the web. It contains almost everything: script, async callbacks, browser-side async work, internal work, garbage collection, network fetches, and script loading. That's a lot, so what can we do with this mess? First we need to move to non-render-blocking queues for anything that isn't absolutely urgent. Instead we should defer everything we possibly can to that normal default task queue. But that means more tasks are going to fall into the default bucket, and we already said that the default bucket is the junk drawer of the web, right, completely full. To make that work, we also want to add three more priority levels to default, high, medium, and low, so that we can begin to manage that work more efficiently. Have you written a web scheduler for your product or project? We'd love to hear from you, please reach out. We're starting to prototype. We'd also love framework authors to try early versions. We're already talking to a lot of you, but if not, we'd love to work together.

The next API I would like to talk about is isInputPending. It's a shorter-term solution to some of the scheduling difficulties we've had. It's something that we were able to ship quickly, and it allows a framework or a developer to check if a user action is pending. Remember how we told you that frameworks are experimenting with yielding between nodes of the render tree? It's a lot more efficient for them if they can check whether they need to yield, rather than actually yielding every time they can. We've been collaborating on a short-term solution to make that more performant. If the framework calls isInputPending, it can tell if its work will be user blocking without having to yield. So instead of yielding five times in this example, maybe it yields only once. Andrew and Nate from the Facebook team committed this code to Chromium, and we're pretty excited about that.

The next API I'd like to talk to you about is called display locking. It allows updates to a locked subtree to not be rendered immediately. This is super important when you want to do things like have a scroller with stuff off screen. Virtual DOM implementations can use this for finer-grained control when doing framework rendering, and it's also useful for any kind of widget like a scroller or tabs or a carousel, anything that has content that isn't being shown, because that content can be updated without paying any rendering costs. The feedback we've gotten so far from the React team helped shape the API, and we'd be very excited for more folks to try it.
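Circling back to isInputPending for a second, here's a hedged sketch of how a scheduler might use it to yield only when it actually has to. The API was experimental at the time, so the sketch feature-detects it and falls back to a plain time budget:

```js
// Yield only when the browser reports that user input is waiting,
// falling back to a time budget where isInputPending isn't available.
function shouldYield(deadline) {
  if (navigator.scheduling && navigator.scheduling.isInputPending) {
    return navigator.scheduling.isInputPending();
  }
  return performance.now() >= deadline;
}

// Walk a render tree (or any list of work) and yield between nodes only when needed.
async function renderNodes(nodes, renderNode) {
  let deadline = performance.now() + 50;
  for (const node of nodes) {
    renderNode(node);
    if (shouldYield(deadline)) {
      await new Promise(resolve => setTimeout(resolve, 0));
      deadline = performance.now() + 50;
    }
  }
}
```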
What's super clear to us is that when frameworks and browsers work together, the result for both developers and end users is significantly better. We're really excited about continuing to collaborate with frameworks. These are links to details about the particular APIs we talked about in this section. Please open GitHub issues for comments and questions and ideas. Let us know if you have a scheduler that we should check out. We'd love to hear from you.

So we just finished talking about some of the collaborations that we've done in the last year with frameworks. Next up, we'd like to talk about our goals for user experience and how we want to add some nuance to those performance ideas. In particular, we want to talk about page load time and single-page apps, and we really want to talk about budgets for total resource sizes. Loading perf is an incredibly important aspect of user experience, but on the other hand, today's metrics don't tell the full story of the trade-offs. Let's dig in.

Absolutely everyone wants users to be able to interact with their application as soon as possible, but developers have had to make a really difficult trade-off, and by and large they've chosen slower initial load time in order to have really snappy single-page app transitions afterwards. Our metrics don't capture that trade-off because single-page app transition timing is really hard to measure. What we want is for application authors to no longer need to make that trade-off. And we want to be really clear: loading perf is a very important aspect of user experience. So how can we meet budgets that were designed for average phones and still have feature-rich apps? We've said that you need to have all your critical resources for a route loaded in 170KB, and that includes CSS, JavaScript, HTML, and data. But that isn't super realistic for a feature-rich application, especially if it needs to compete with native apps that don't have code-loading constraints, though they have other issues. And the answer really can't be, let's cut all the features. That would be sad and the application wouldn't succeed at its business goals. So what do we do? In fact, 170KB is realistic when we consider it only for the initial code and data. What if we loaded everything else on the page only when we needed it? We could achieve that first-impression experience and those snappy single-page app transitions afterwards. We'd meet that initial budget without limiting features. Obviously, route-level code splitting is a good step if you aren't already code splitting, but it's still too much code. We need component-level code splitting. Keep in mind, this is going to mean that some parts of the page continue to rely on server-side rendering until we're able to get the required resources. That's okay, we'll show you how. We just shared a vision of incremental loading for the page, and now I'll hand it over to Shubhie to talk about keeping initial sizes under budget.

Thanks, Nicole. So that was a really nice vision of how we might achieve progressive loading so we can hit those initial resource targets. So yeah, let's look under the hood and see how Google and Facebook are tackling this. There are a few differences, but quite similar goals, and hopefully this can give us some inspiration for what we can bring to the larger ecosystem.
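Before diving into the internal systems, here's a minimal sketch of what component-level code splitting can look like with today's open tooling, using React.lazy and Suspense purely as an example; the component and path names are hypothetical, and this is not the Google or Facebook implementation:

```jsx
import React, { lazy, Suspense, useState } from 'react';

// The reviews panel is split into its own chunk; its code (and its
// dependencies) is only fetched when the component is actually rendered,
// so it stays out of the initial bundle.
const ReviewsPanel = lazy(() => import('./ReviewsPanel'));

export function HotelPage() {
  const [showReviews, setShowReviews] = useState(false);
  return (
    <main>
      <button onClick={() => setShowReviews(true)}>Show reviews</button>
      {showReviews && (
        <Suspense fallback={<p>Loading reviews…</p>}>
          <ReviewsPanel />
        </Suspense>
      )}
    </main>
  );
}
```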
Now, there's always a place for early ideas and experiments, but everything we want to cover in this section is really about battle-tested technologies that have been proven on large-scale production applications. At Google, these are apps like Image Search, Google News, Hotels, Photos, and many more. And at Facebook, this is Facebook.com as well as the new Facebook.com. I will caveat this by saying that I've personally worked on and led many parts of the Google side of the infrastructure, so I'm deeply familiar with that. But on the Facebook side, my knowledge is from watching a couple of videos and tech talks and talking to a couple of Facebook engineers. So with that said, let's dive in.

Let's imagine a user planning a trip to India, and they visit our Hotels product. Now, we could load all of the code upfront, but then there could be a ton of features in there that the user will never interact with or unlock. So let's start with the most simplistic loading scenario. Imagine that this was all written with a simple, naive client-side framework, and now on loading Hotels, you have to go download all the code. So this is typically the HTML, followed by the JavaScript and CSS, followed by the data. And then once we have all of these resources, the browser can render the page. The problem is that now the user is waiting a really long time before they can see or interact with anything. Plus, there's a ton of features we have pushed down that the user doesn't care about.

Looking at a basic server-side rendering scenario, we might get to visually complete sooner, because now we have the server working for us, doing all the heavy lifting of getting the data, rendering all the markup, and shipping that down. However, the page is not necessarily interactive yet, because client frameworks can often take some time to refetch the data and hydrate themselves, redoing a lot of the work that the server has already done. At this point, I would recommend watching Jason and Houssein's talk at 9:30 tomorrow. They cover the full spectrum of loading and rendering techniques, everything from client-side rendering to server-side rendering, static rendering, and pre-rendering, and dive into the nuances and trade-offs. Our talk today is not about all of that. It's primarily about what has worked at scale for Google and Facebook. So server-side rendering can be an improvement, but it creates this uncanny valley problem where the page looks ready, so the user starts interacting with it, but it's not interactive yet. This has been coined as rage clicks in the community: the users are clicking away in frustration. A second problem with server-side rendering is that it can be slow to get pixels on the screen if the page is quite complex and there are a lot of backends to talk to and some of those backends are slow.

So going back to our Hotels example, let's zoom in and say the user clicks on the more filters widget. Now we know for a fact that this is an interesting feature, so we know to go download the code for that. So in a nutshell, we are sending down the minimal code initially and letting user interaction, like those interactions with the filter or the slider, dictate which code needs to be fetched later. In practice, a lot of this stuff has been preloaded. So let's look at a loading scenario. Initially, we send down the minimal code.
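A rough sketch of that interaction-driven fetch, using standard dynamic import; the module path and function name are hypothetical, and as noted below, the real systems also preload what's likely to be needed so the user rarely waits a full round trip:

```js
// The "more filters" code isn't in the initial bundle; the first click
// triggers the fetch, and only then is the panel's code downloaded and run.
document.querySelector('#more-filters').addEventListener('click', async (event) => {
  const { openFiltersPanel } = await import('./filters-panel.js');
  openFiltersPanel(event.currentTarget);
});
```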
Now the page is able to get to visually complete soon, the user starts interacting, and as they interact with those specific features, the filters or the slider, we go and fetch the code that is necessary. So now there are a ton of features on this page that are never sent down, because the user did not care about them. In practice, though, after the initial render, we'll go figure out what's in the viewport and preload that content for you, and this avoids unnecessary round trips. So to summarize, this is sort of our state here. It's not literally route-level code splitting. It is much finer-grained, interaction-driven late loading, and this allows us to send the minimal code initially and stay within our budgets.

So how do we avoid losing those early clicks? The answer to this lies in the contents of that critical inline JavaScript in the initial HTML. Diving into that, we basically split event handling into three parts. There is a tiny event delegation library called JS Action. It's open-sourced, available at this link, and it allows us to start queuing up those early clicks. The second piece is the dispatcher, and this is the part that knows how to figure out what handlers are needed for the user's clicks. And the actual event handling code needed for interactivity is all late-loaded, on demand. So the dispatcher is part of the framework bootstrap, and that's a really important piece here. It is fast, the code is small, it's less than 47 kilobytes, it's fixed, it doesn't bloat, and this is important. One aspect here is that this is different from traditional-style hydration: we don't need to redo all of the work that the server has already done. This is what makes the framework bootstrap fast, and it's a really important piece of getting to this constant initial size: a small bootstrap that doesn't load any of the app-specific logic. And enforcements are actually important here. Enforcements help us keep this initial JavaScript constant and clean. So at Google, we have tests that forbid certain dependencies, making sure that application code doesn't sneak in here. At Facebook, they use budget-monitoring tooling to keep this clean.

So we've talked a lot about JavaScript and CSS, what about data? For initial data, the server will figure out what data is needed, and it will embed this data in the footer of the initial page itself, and it is streamed. This makes sure that on single-page app navigations and view navigations, the client already has the data that it needs right there. Late-loaded data is powered by the component system. A component is a self-contained piece of UI. It declares its JavaScript and CSS as well as its data. It knows how to fetch its data. Components can be composed in a hierarchy, and the children know how to fetch their data. So this starts to fill in more pieces of our picture here with data fetching. Like I said, the initial data is sent early on in the footer of the initial HTML, and it is streamed. Late data is fetched concurrently, at the same time as the late code. And the component system helps us here by telling us exactly what code and data is needed, so resources are never more than a round trip away.

The next piece I want to talk about is streaming server-side rendering. This is really important because it allows us to flush early chunks.
So for example, the header bar at the top of our Hotels page is flushed super early, followed by the left navigation and parts of the body, and eventually the footer coming in. So hopefully you can sort of see the chunks, the early chunks that are getting flushed in sequence. And this ensures that our content starts rendering quickly and progressively. Initial data is sent down in the footer, and it is streamed, so if there are some slow backends, we don't wait for them. We go ahead and flush. And then there's a small script that patches up the server-rendered HTML, and this keeps the page interactive. So it's interesting that Google and Facebook are solving very similar problems here, and they have both arrived at quite a similar end state. I like to call this smart server-side rendering with interaction-driven late loading.

There's a final important aspect of this shared end state that we've arrived at, and that is not having the problem of the HTTP cascade, or waterfall effect, that can happen from suboptimal late loading of code and data. Naively using APIs like dynamic import can get us into this situation where we start rendering something, and then we encounter a code split point. Then we figure out we need to go fetch something, so we go fetch it, and then we continue rendering, and then encounter another code split point, and then go fetch the code for that. And this sort of results in a cascade. This is the HTTP cascade that our system prevents for both code and data.

So looking under the covers at how we get to the solution here, declaratively declaring nodes in our dependency graph is a really important piece of this, and it is what drives late-loaded code. Let's look at an example using code that is conditionally loaded: when you're running an A/B experiment, you need to conditionally load code. This is an example from Facebook. In a naive situation, you might do a dynamic import and conditionally load your experiment code. But the Facebook syntax here is declarative, and this makes it easy to infer what's needed ahead of time. This can be picked up by the build system and the runtime, and it makes it possible, while fetching, to know the full set of deps and get it all together in a single round trip. Now, this is the Google side of the syntax, a different design but a very similar principle. It's a declarative annotation that we put at the top of the file that has the experimental version of the code, and it indicates the experiment name and the original code path. This hint is sufficient for the build, the serving, and the runtime to serve the correct code at the right time when the user is in an experiment.

So at Google, for effective code splitting, we separate our code into separate phases: what's needed for rendering and what's needed for interactivity. First, we only load what's needed for rendering. And then later, as the user interacts, we go fetch the code that's needed for interactivity. Facebook has a bit more sophisticated approach here. They have three phases: in addition to the two that I mentioned, they have a third phase that shows an initial placeholder while loading, before anything has even been rendered. So, for example, this could be showing a spinner before a bit of content is ready. A comprehensive dependency graph underlies and powers all of this stuff. It knows all the code and all the dependencies in the application.
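To make the cascade problem concrete, here's a hedged sketch with hypothetical module names. Each nested dynamic import is only discovered after the previous chunk has arrived and run, so the fetches serialize:

```js
// feed.js — naive late loading: each import() is discovered at runtime,
// so the network requests happen one after another (a waterfall).
export async function renderFeed(container) {
  const { renderPost } = await import('./post.js');          // round trip 1
  const postEl = renderPost(container);
  const { renderComments } = await import('./comments.js');  // round trip 2 only starts after post.js arrives
  renderComments(postEl);
}
// With a build-time dependency graph, the server already knows that feed.js
// pulls in post.js and comments.js, so all three can be delivered together
// in a single round trip instead of cascading.
```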
This dependency graph is consumed at build time, and it is deployed and consumed by the serving system and the runtime. And after bootstrapping the initial page, the client has learned how to do late loading without cascading, by receiving a small JavaScript library that knows how to do modular code loading.

So this is the full set of features. I've already talked about almost everything here. The three things that I haven't touched on yet, and am not going to get into, are the last three bullets on the right. There is an integrated A/B testing system that is deeply integrated with all of this. There is serving of minimal initial CSS, and at Google we have a CSS module system that allows us to figure out what this minimal CSS is. And finally, there's technology for images, deferring them and avoiding unnecessary image bytes from being sent. This incredible feature set comes at a cost: the cost of complexity. So it might not be the right trade-off for every app, especially if the app is simple or has mostly static content.

And this is the overlap with what Facebook has, and it's really interesting that it's practically 100%. They achieve a very similar feature set, but somewhat using different techniques. It's really interesting that they've independently arrived at the same list, even though they have completely different backing implementations and designs. This validates our approaches and the list itself, and it starts to give us a general template for the desirable characteristics of a scalable, feature-rich app. Facebook also has some unique sophisticated features that I don't have time to get into. I do recommend checking out their talk at this link from the recent Facebook conference. Now, the Hotels example was a demonstration of how our system works. It's by no means perfect; there's certainly room for improvement here. For example, just last week, we saw that the One Google header bar was re-requesting resources that had already been requested, at quite an inopportune time.

So how could we bring all this cool stuff to the larger ecosystem? Luckily, there's already a bunch of work underway. Angular has been attempting to do this, exploring and figuring out how to bring some of the Google feature set and integrate it into Angular. React has been forging ahead with features like lazy, Suspense, and most recently, selective hydration, and they have this really cool data story with GraphQL and Relay. Airbnb is an example of a React app using current ecosystem tooling and experimenting with an early selective hydration technique. Again, I recommend watching Jason and Houssein's talk to see that demo and to learn about what can work with today's tooling.

So we've shared a vision of what we want to have available in the ecosystem, inspired by the techniques of Google and Facebook plus the gaps we are seeing in apps today. And this slide is intentionally chaotic to indicate that there are a lot of moving parts here, and there's a lot of work ahead and a long road. To really bring this to the ecosystem, we need collaboration and deep integration between frameworks and their CLIs, like Angular and Create React App, as well as meta frameworks like Next and Nuxt, as well as bundlers like webpack and Rollup, et cetera. It's great that frameworks have already started down this road. But meta frameworks have a really big role here.
They have a unique vantage point, a unique position with access to both the client and the server, and control over the build system, the deployment, and the serving pipeline. Traditionally, they've focused on the getting-started experience and DX. But this is a much bigger role and responsibility, and we'd love to see them succeed at it. And as we've seen at Google and Facebook, doing this requires an end-to-end opinionated system, and this can include enforcements, policies, and budgets. And we're not just giving an academic talk here. We're actively participating in this space, focusing on constant initial bundle size and smart code splitting, and starting with some simple initial changes to Next.js.

So moving on to the next segment of this talk. We've discussed some exciting technology; how could it come to the ecosystem for better end-user outcomes? Let's take a moment to note that outcomes are not great for everyone in the ecosystem today. There are a lot of users that are not having a great experience. There's a large fraction of Chrome users in emerging markets, where device characteristics are not great; they are actually quite similar to large parts of middle America. And network conditions can, of course, not be taken for granted anywhere, including right here. So these are Lighthouse scores from a popular meta framework. And clearly, it is possible to achieve good outcomes, as we can see from the green box at the right. But what we really want to do is come together as a community to move this baseline and figure out how to get more people shifted towards the right bucket.

So these are loading metrics for various frameworks. This is a recent, ongoing study on our team using Wappalyzer, a tool integrated into WebPageTest. It's run on HTTP Archive origins, about 4 million origins, and we were able to infer the libraries and frameworks in use for tens of thousands of URLs. We've hidden the names of the frameworks, as that's not important, and it's too early to draw conclusions yet. But I just wanted to note a couple of early observations. First, scores are not wildly different. They're actually quite similar. And frameworks don't have a big role in first contentful paint, but they do in the difference between time to interactive and first contentful paint, because that gap roughly reflects the hydration cost of that framework. However, we're finding that this difference is not widely varying. It's quite similar still.

So our team has started looking at how to serve application JavaScript better. But as we dug in further, we found some surprises. In practice, there's a ton of truly unnecessary JavaScript getting shipped down, things like polyfills the browser doesn't even need. Collectively, our team has spent a lot of time deep-diving into bundles and looking at breakdowns, and we're finding that it's not unusual to see 20% to 30% of unnecessary JavaScript. Another interesting data point is that NPM modules are a big part of the app. Google and Facebook didn't have to deal with these problems. So digging in, these are the top three reasons that we are finding for unnecessary JavaScript. The first and foremost is over-transpilation, both in first-party application code as well as code in the installed NPM modules. Second, polyfills: large volumes, and lots of duplication there. And finally, both over-transpilation and polyfills lead to this duplication of modules in the bundle.
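A big contributor to over-transpilation is build configs that target very old browsers by default. As a hedged sketch, here's what a babel.config.js might look like when you only transpile and polyfill for the browsers you actually support; the exact targets are just an example:

```js
// babel.config.js — transpile and polyfill only for the browsers we support.
// Targeting modern browsers avoids rewriting async/await, classes, and
// arrow functions into much larger ES5 output, and avoids injecting
// polyfills those browsers don't need.
module.exports = {
  presets: [
    ['@babel/preset-env', {
      targets: { esmodules: true },  // roughly: browsers that support <script type="module">
      useBuiltIns: 'usage',          // only include polyfills the code actually uses
      corejs: 3,
    }],
  ],
};
```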
So NPM did a study of a few thousand client-side applications, and they found that 97% of bundle size was installed node modules. We've personally seen a high variation in that number, but for almost everyone, it's greater than 60% to 70%. NPM also said that the average application has 1,000 NPM dependencies, and it is not at all uncommon to have 2,000 NPM dependencies. When we learned this, we had this moment of surprise, but also not surprise, because in our own experience developing apps, we've found that we use all sorts of things from NPM. At the same time, the magnitude of it was a little bit surprising. Now, I want to be super clear: the answer is not to stop sharing code. That's what makes our community stronger. At this point, when we build apps, we focus on the thin layer that makes our app unique, that's going to make users want to use it, that's going to make it different or exciting or better. We don't spend a lot of time on boilerplate because we have a whole pile of boilerplate that we can just use from NPM. This isn't an unqualified positive, though. So you might want to know what kind of dependency bloat you have.

Maybe you have big dependencies. With Webpack Bundle Analyzer, you can figure out how big your dependencies are, because they actually show up bigger in this view. You might also have duplicated code. Source Map Explorer shows you all the details of your minified bundle. For example, you can see that this application contains two copies of React. You can also check if your dependencies make sense by route, in particular if you're doing route-based code splitting. Webpack Bundle Analyzer shows that this application includes three copies of Moment.js in different routes. Now, there might be a case where that makes sense, but it probably doesn't.

So how could it be easier to ship JavaScript to modern browsers? Differential loading is a technique that works today for loading different bundles to different browsers based on their support level. How does it work today? The module/nomodule pattern works well with the tooling available today, and there's a link here with details. But the core idea is that you generate a second bundle using a Babel preset, making a second configuration for ES2015+ code, and then update the HTML as shown here: set appropriate entry points using type=module for modern browsers and nomodule for older browsers. Module/nomodule really works in the real world. It's especially effective when meta frameworks and CLIs support it out of the box. Angular just launched support for this in version 8, and they're seeing big wins. Here's a user seeing significant size wins, and this is a slide that was presented at a recent Angular conference. Basically, they are finding savings of anywhere from 7% to 20%, and in apps with lots of polyfills, this can be up to 30%.

So as Nicole said, a large part of our app is installed NPM modules, and tools are transpiling these by default. There's this wide expectation that these NPM modules contain ES5, and this means huge portions of our apps are stuck in ES5, even though bundling and delivery techniques are in place for shipping modern syntax. So we really need to tackle this on the publishing side for NPM modules. Today, for example, package.json shows you what version of Node a module requires, but gives no indication of which version of JavaScript it ships. So clearly, there's something missing here. We need more information. So this is a current proposal that our team is pursuing.
What if we could add a syntax field in package.json to directly indicate ES module support? And this is not literally about ES modules; ES modules are a good proxy for JavaScript files with modern ES2015+ features, everything from async/await, classes, and arrow functions, to fetch and promises. However, a few years from now, this might leave us in an awkward position. So we really need to get creative and think about what a longer-term solution could be. We don't know what the long-term solution is yet. We have folks on our team that are deep in this space thinking about solutions; I have their Twitter handles here, so feel free to reach out if you have thoughts and ideas. But at the very least, we think we want these properties in a compelling long-term solution: designed for an evolving set of platform APIs, not penalizing newer UAs by sending them tons of unnecessary code, aligned with edge caching and performance needs, and compatible with existing tools without significant modifications. There are a few things module authors can do today: I encourage you to publish modern JavaScript if possible, and even to compile to modern JS when writing TypeScript, shipping down-level code as a backup.

So let's revisit where we want things to be. We've added a few things since the last section: differential bundling, publishing modern and down-level JavaScript, and adding browser primitives. We all have really important roles to play here. Frameworks have a really big role, and we've taken our guess at highlighting various areas where they are helping or wanting to help. Meta frameworks have a big role, same for bundlers, especially on both the code-splitting and differential-loading sides, and also app authors, module authors, and package managers. And finally, we have a big role here as well in terms of making all of these people successful, everything from shipping new browser primitives to direct PRs to open source projects. It's going to take a rainbow to get this done.

So let's go back for a moment to the dream we talked about in the beginning. We hope for a world where feature richness and performance aren't so squarely opposed to one another. We believe in the possibility, and we're ready to make that happen, both with PRs to the ecosystem and with the framework fund. At CDS this year, we announced that we're starting a framework fund to help support the kind of work that we've talked about here today. If you are working on frameworks and tooling, or any of the areas that we've talked about, this link that we're sharing is how you can apply to the framework fund and get Chrome's continued support for your good work. Thank you.