Welcome to our Ask Us Anything session for web runtime performance. We are web developers from the Chrome team, which built performance-conscious apps like Squoosh and PROXX, and now we have a comparison site, tooling.report. So I'd like to introduce the panel. Usually when we do this thing at Google, people say their name and their title, which means like nothing outside of Google. So I'm just going to start with questions. There was a really good question from, do we have a name, Kevin? So please say your name and then answer this question. How did you convince your manager that you need to build a Minesweeper clone to learn or demonstrate something? Start with Jake. Yeah. Jake Archibald. That's my name. My job title is irrelevant. So yeah, we built a Minesweeper clone. How did we justify that? Well, partially because I was on a flight to China for one of our events in Shanghai, and the entertainment system on the flight failed, and I thought what better to do than some coding. So I built a Minesweeper clone. Not with any of these graphics, but we thought, let's see where we can push the boundaries of the web here in terms of runtime performance with decent graphics, but also making the same code base work on feature phones, like the kind of $10 phones that are quite popular in places like India. How did we convince our manager to do it? It was an exploration for us to see if we could do it, but it's also about giving us the experience of developing for these devices, and it was a good way for us to know what we are talking about. DevRel teams differ from company to company, I think, but certainly the mantra we have here is that if we're going to talk the talk, we need to walk the walk as well.
We can't just tell people to go and make your websites faster, make them work on phones, make them work here and here; we need to be able to demonstrate that we can do it ourselves, but also get a good feeling for where all the pain points are: where are our DevTools lacking, which things are hard in the browser, that kind of thing. And it's building these tools that gives us that experience. Okay. Next, Surma. Oh, was that your rating there? It was like, oh, that was just okay. That's exactly how I expected it to go. A 2.5 score there, Jake. Yeah, if you could try harder on the next question, that would be fine. I was just thinking, like, oh, Jake is talking so long. Oh, I'm sorry, I'm sorry. Yeah, a little bit too much self-indulgence. Well, I'm glad we sorted that out now by rambling. No, no, your time is over, Jake, shut up. Surma. Hi, I'm Surma. Yeah, I also worked on PROXX, by the way; it's at proxx.app, with a double X. And part of it is, I always looked up to game engineers, because they really squeeze every last bit of performance out of the devices that they're running on. And also, I think often game developers are at the forefront of architecture; at least to me, it feels like they really try to use software architecture to the fullest extent to parallelize work across the team and make sure it still fits together in the end. And I wanted to see if game development on the web works; I think the web is an underdog when it comes to game platforms. And so we wanted to see, with us not necessarily having huge experience in game development, if we could make this work. And I think we did a decent job, at least. Cool. Jason, I know you weren't part of PROXX per se, but is there any silly project that you convinced your boss to let you do at your job? All the things I've ever worked on. I mean, I was only involved in PROXX from the standpoint of it using Preact.
And I think the fact that it was supposed to run on super low-powered devices was actually a really interesting case study in, like, OK, so even just the raw DOM manipulation for this is kind of out of the question performance-wise; is it even possible to use a framework at that point? Or do you just have to give up all the trappings of that and just move everything to Canvas? So I didn't do that work, but it was interesting to see, and to consult a very tiny amount on it. Because it's kind of a performance non-starter that makes you have to think about things very, very differently. Yeah. And then I guess I forgot to introduce myself. Hi, my name is Michael. I work with these three people, and more, on a sub-team of Google Chrome. I worked on PROXX because Jake and Surma had already convinced our boss that we were going to do it. So I just did. But I guess in that project, I became the person who just obsessed about esoteric devices. So I was the one who bought all of the esoteric phones whenever we went to China or India, and then tested our app on them and convinced people that we should care about those devices. So yeah, that's just me. Now that the introductions are out of the way, let's go to the next question. This is a really good one from Marv. Has the approach you took to creating apps changed, since we've created many apps? Or is it a fairly consistent approach every time? Who wants to take that first? Well, since I panic-talked all the way through the last question, I'm just going to answer yes. OK, but I'm not just going to leave it at yes. My approach definitely has changed. I mean, I never really did web development before I started at Google. So I think it's constantly evolving, because we have so many people on the team, and also in the community, honestly, who pioneer new techniques or new tools, and all of it affects how you develop apps.
I think the most consistent part for me has become that I want to be more respectful and more frugal with the resources my user has to commit to whatever it is that I'm building and publishing to the web. It's quite easy to allocate all the memory, drain all the battery, block the entire device with a for loop. All of these things are too easy. And more often than not, the user notices too late. And so I feel like it's on us to make sure that we are respectful of users who might be traveling and don't have a charger, and they just want to play a game to pass the time. And we shouldn't be draining all the battery just so they can have some nice sparkly effects. But yeah, I think basically that's the one constant that I have: be respectful of the user's battery and the other resources that they might be unconsciously committing by using your app. In terms of changes that we made to our approach between the various apps we've built: in the first version of Squoosh, and in PROXX, our static render was a little bit of an afterthought, especially in PROXX. And we found that really hard, and it was a source of a lot of bugs. I approached later projects, like the second version of Squoosh and things like tooling.report, making sure that we had a solid static render story from the very start, so that for all of our code base, we knew we could run it in two different environments. And I've seen that happen in industry as well. It's very hard to take a dynamic client-side app and add a static render to it afterwards. If you have that baseline from the very start, it's much easier. Yeah, in Squoosh, I think the thing I came away from v1 with was how valuable the initial screen, which is so much more than a splash screen, was in shaping the entire performance of the app. Because, I think it was 15k, you get 100% of the functionality you could possibly use in the initial load, because you haven't given us an image yet. Oh, we can present you with that file-drop dialogue or whatever.
And we just ship that to you as one file. It's an HTML file that is self-contained. And I think that stuck around in Squoosh v2. And part of Squoosh v2 was just like, oh, OK, everything branches out from that in our performance story, because that is how we avoid loading Wasm binaries and stuff upfront. It's just the app as a single screen. One thing that changed, though we will probably take the same approach if we do something like that again, is the tools. So like, in Squoosh v1 we used webpack. And then we shipped it, and we identified pain points with it. So when we were considering the next project, we were starting from the point of: to address these pain points, we need to switch tools. I feel like we have that conversation whenever we start a new project. We are not a team that has a template and then uses that template for every project's setup. No. I mean, we consolidated kind of on Rollup as a tool. But I don't think that was necessarily because of the bundling aspect as much. I think it was more because Rollup is also a really good substrate for doing build orchestration, where, like Jake was mentioning, in Squoosh v2 the front end is kind of an output of this earlier thing that in a dynamic app would be your back end. But in our case, it's a build job. And we kind of use Rollup's plugin API both as a bundler plugin system and as a build tool plugin system, and it just sort of lets us keep that line blurry. Yep. True. Well, Jake wants to say something, but I feel like we could keep going on and on on this one question, so let's just move on. I'll get more time to talk, don't worry about it. Yes. In fact, the next question is Jake's favorite question. Oh. Yes. So the next one is from Dan. Full-screen images are now considered background and ignored for Largest Contentful Paint (LCP).
But an image that covers 98% of the viewport, because menus at the top are pushing it down or that kind of thing, is not ignored. Problematic, no? Yeah. So Dan sent a lot of really interesting questions digging into the specifics of the Core Web Vitals metrics. And we're not the most expert on those; there was another session covering the metrics. But what I will say is that the way these metrics are used, especially in Search, is very similar to how Search analyzes content: the whole idea is that the thing with the best score is the thing that is going to offer the user the best experience for a particular search term. And Core Web Vitals is no different there. So if you feel like you're having to compromise on user experience in order to meet the metric scores, then the metrics are wrong. And the best thing to do is to let us know about that and to file a bug, a Chromium bug. And this is something that we did. In Squoosh, we were getting a really bad Cumulative Layout Shift score. And it was actually down to the range inputs, because we use custom range inputs. When users were moving them, it was causing a layout shift and giving us a bad score. But it's not a real layout shift; it's expected from the user's perspective. Now, obviously, we're in a position of privilege in that we were able to go directly to the engineers and say, this seems unfair, can you fix it, please? But externally, it should be a very similar process: file a bug, with a reduced test case if you can, and say, look, this is creating a bad score, but it's not a bad experience. Please fix it. Oh, go ahead, Surma. I think it's important to note that the metrics are not necessarily set in stone in the way that they're implemented. At the heart of Core Web Vitals is wanting to quantify user experience. As Jake said, if the number is good, the user experience should be good.
If that goal isn't met, that is considered a bug and something they want to hear about. And we're not saying file a bug so we can ignore it and let it sit. They're actually looking for these bugs and trying to adapt the metrics. That's also why they have this yearly update cadence for Core Web Vitals, so they can incorporate all these changes. And so, yeah, we definitely are looking to hear from the community if anything isn't working. And basically, if we see people gaming Core Web Vitals, then there is something that needs to be fixed. Yeah, I mean, our job title, which doesn't really matter, is developer advocate. So if you flag an issue, it's our job to bring it to the table with the engineers. Okay, let's move on to the next question. So this one is probably for Jason: our React SSR (server-side rendering) is pretty slow, and so I want to give the browser something to do while our servers are busy. Are HTTP 103 Early Hints ready yet? Early Hints are coming. I think, Jake, you had said that they're not shipped in Chrome yet? Yeah, there's work happening on them now. You can activate them behind a flag, but we're doing experiments with them in stable Chrome right now. So it's, yeah, actively developed. Jason, do you want to say what Early Hints are? Yeah, so Early Hints is basically a way to flush some headers before you've determined the HTTP response code that you want to send. So, you know, in this example, if you're doing SSR and your React SSR pass is actually the thing that determines whether something's gonna be a 404 or not, you might want to kickstart loading your JavaScript and CSS in the browser before you've determined whether something's gonna be a 404. Early Hints is a spec that allows you to do that.
I would say in this case, if your SSR is actually the thing that is slow, in a lot of cases I've seen it's possible to at least pull out that status code check as the sole thing that you don't do in your React tree. And if you can do that, then you can do a full header flush prior to, or during, your SSR. Another thing you could do is, if you turn on streaming server-side rendering, even if you don't flush the SSR output as a stream, you could potentially grab the status code the second you generate it, flush that with the headers, send that to the browser, and then continue doing the rest of your SSR work; hopefully not paying whatever your major cost is prior to actually having to flush that status code. All right, let's move on to the next question. This one is about Web Workers, from Tiger. Is it better to spin up a new Web Worker when I need to load data, then terminate it, or keep it around and reuse it? I feel like this one is for Surma. The satisfying answer is probably: it depends. Always, always. Currently, my gut feeling is to say keep them around. In most browsers, and Chrome is one of them, creating a worker takes time and consumes resources, and you want to free up those resources. We've been doing some research, Jason and I specifically, into how much consuming memory can affect other performance metrics of your app. And so freeing up memory is important. That being said, Safari, for example, as far as I know, has a worker pool ready, so creating a worker is near instant, and terminating and creating is rather cheap. I think it boils down to measuring. But in general, as I said earlier, be respectful of the user's resources and don't keep things around if you don't need them.
So what we're doing in Squoosh, which I really like (we use workers for the compressing and decompressing of images), is that if a worker hasn't been given something to do for 15 seconds, and I'm making up the number because I can't quite remember it, that's when we terminate the worker. Yeah. So you have a chance, depending on usage frequency, to reuse the same worker, but you won't keep it around indefinitely if it's clear that there is no need for it anymore. Yeah. I mean, in this case, it depends on the app and what the usage pattern is, right? In this case, loading data, sure, but how is this data loading triggered? In the context of Squoosh, we know that people move a slider, see the result, move the slider, see the result, and there's a lot of frequent re-use, effectively, where you just wait for the visual response and immediately adapt to what you've just seen. But at some point it stops, and then we can terminate the worker. If your pattern is different, a different recycling pattern might be more appropriate for your app. So these are all things that you need to consider when building these kinds of things. Yeah, I was just gonna add that if you're building something like a game, where you know for sure that the user's experience involves constantly computing some data in the background for the duration of the game, there's no need to kill the worker and make a new one every time, because you know you need a worker. We terminate workers a lot in Squoosh, because if you move the slider while on AVIF, we will start encoding that AVIF, and that might take several seconds. And if you move the slider a little bit more, we now know that the work we are doing is useless. And the way we cancel that work is just by terminating the worker and spinning up a new one in its place.
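The idle-timeout pattern described above could be sketched like this. This is a minimal sketch, not Squoosh's actual code: the injectable `createWorker` factory and the 15-second default are illustrative (in a browser you would pass something like `() => new Worker('codec-worker.js')`).

```javascript
// Lazily create a worker, and terminate it after a period of inactivity.
function idleTerminatingWorker(createWorker, idleMs = 15_000) {
  let worker = null;
  let idleTimer = null;

  return {
    // Get (or lazily create) the worker and reset the idle countdown.
    get() {
      if (!worker) worker = createWorker();
      clearTimeout(idleTimer);
      idleTimer = setTimeout(() => {
        worker.terminate();
        worker = null;
      }, idleMs);
      return worker;
    },
    // True while a live worker is being kept around.
    isAlive() {
      return worker !== null;
    },
  };
}
```

While the user keeps interacting, `get()` keeps returning the same worker; once interaction stops for `idleMs`, the worker is terminated and its resources freed, and the next `get()` transparently spins up a fresh one.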
And it doesn't seem bad for performance; the browsers handle it pretty well. I think that's an interesting one. If I remember correctly, because Squoosh is a few years old: that's because Squoosh uses C (or other-language) libraries for the image compression. We compile them to Wasm modules running in a worker, and so on. But those modules don't have a terminate function. So we just decided to terminate the whole process, right? And that's how we got around to terminating the operations that we don't need anymore. Yeah, right. Yeah, WebAssembly is synchronous, so unless you build it into your code, you can't interrupt it. And since we aren't actually writing that code ourselves, we're just taking ecosystem library code, like the AVIF encoder and the MozJPEG encoder, and they often don't have the hooks to interrupt an ongoing encoding process. And so, unless we start extremely monkey-patching those libraries, just terminating the worker is easier. And honestly, I'm quite happy that the browsers are apparently optimized enough to just hard-stop the ongoing WebAssembly process and not let it finish in the background, as some of us might know from Linux processes: you send them the kill signal, but they still finish the task. For some reason Chrome, or browsers in general, manage to do better here. Yep, right. Next question. How do we make our local development environment fast? Like, I'm constantly loading and reloading 80 megabytes of un-tree-shaken JS. Load less JS. Maybe Jason has opinions. But it's a real issue, right? There might be a production system where all of your code gets optimized at deploy time, but you might be stuck in a system where the way you develop is loading that big bundle every time in dev, and you can't really make it smaller until you deploy.
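The terminate-and-respawn cancellation trick could look roughly like this. This is a sketch under assumptions: the `spawn` factory and the worker's `run` job shape are hypothetical, not Squoosh's real API.

```javascript
// Cancel in-flight Wasm work by replacing the worker wholesale:
// a running WebAssembly encode can't be interrupted from outside,
// but terminating the worker that hosts it can stop it.
function makeCancellableEncoder(spawn) {
  let worker = spawn();
  return {
    encode(input) {
      // Abandon whatever the current worker is doing...
      worker.terminate();
      // ...and start a fresh worker for the newest input.
      worker = spawn();
      return worker.run(input);
    },
  };
}
```

Every new input makes the previous encode (finished or not) irrelevant, so unconditionally killing the old worker is a simple, correct cancellation strategy, at the cost of a worker startup per input.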
So if you are stuck in that system and don't have much wiggle room to change the environment, do you have any ideas for where people should start? Yeah, there are sort of two prongs to this. One is, if you adopt lazy loading techniques, that would help in your production bundles. So using dynamic import, or require.ensure, or whatever your bundler supports. Those actually also end up being useful in dev, because unless you're working on the screen or the route or whatever that contains the lazy-loaded code, if you don't hit that import, you don't pay that cost. So the same logic that you would apply to making your production bundle smaller can also benefit development. It's a little bit unusual, and I've even seen lots of cases where folks bypass lazy loading in development, but it actually has huge value there, and it makes your development environment even more useful because you can kind of use it as a proxy for production performance at that point: basically apply a kind of scaling factor to your performance, and you can intuit whether something is going to be beneficial or detrimental. The other thing, though, is: let's say you've done that and you're still stuck with this 80 megabytes of JS. HTTP caching is kind of a wonderful fix for a lot of these things. So set up very good caching in development; basically, you don't want to be pulling files that have not changed from your development server every time you reload the page. This would mean using ETags (if it's node_modules, ETags based on the node module version, or a hash, or even last-modified times). At most, you're then paying for a bunch of 304 responses, which are very cheap. And that also enables the browser to do optimizations, like potentially caching the compiled code and reusing it, so you can skip not only downloading but potentially parsing on the JavaScript side.
And as soon as you start to enable deterministic caching like that, you can even cordon off chunks that are very unlikely to change. Some of the newer bundlers are starting to do this now, where if stuff comes from node_modules and it's not a symlinked directory, there's only one thing that's ever gonna change it, and that's npm install, which is something that you can listen for. So with all of these things stacked together, you can get to the point where you'll see all the network requests, and they may total 80 megabytes of JavaScript, but the actual download time will be negligible, and most of it will be served from the in-memory cache, reusing cached code, which is significantly faster than doing a fresh download of everything. I know this is not a good solution to this problem, but one thing that we've been doing on our team that I think has helped me a lot is to make development as similar to production as possible. Like, 80 megabytes wouldn't fly in production, so why should it fly in development? And I know it's not always easy. Sometimes your build system in prod just takes a long time, and you need something faster for development. Maybe it's worth investing the time to figure out what the bottlenecks are and making those environments more similar. And we have fared really well with basically having the safety and the confidence that when it runs in development, we are extremely sure it will run the same way in production. And then these kinds of problems also, almost by nature, go away. Yeah. Yeah, I assume the 80 megabytes was referencing before optimization, and after optimization it might be 80K or something. In terms of speeding up build times, like Surma says, I'm a big fan of keeping the development environment as close to the production environment as possible, so you don't get bugs that only appear when you deploy.
But yeah, we often turn off minification, because that's usually a big chunk of the build time. Also, TypeScript has a really good incremental mode, which a lot of build plugins don't make use of. So if you can drop down and make use of that, it shaves off a lot of time. But generally, I'm really excited about things like esbuild that are just faster than what we have currently, by being written in languages like Go and Rust; there's a series of new build tools coming up. And people do ask why they're not part of tooling.report. The answer right now is they wouldn't score very well. We've worked with the people who make these tools, and decided that it would look really bad for the tools if we put them in there now, because there are a lot of features that they're missing. So we decided not to do that. And I think some of these tools do say on their website, like, yes, we're ready to go; and then you dig further into the docs and it will say somewhere, not production-ready, which, I don't know, I feel a little bit unsure about; it seems a bit of a marketing no-no. But yeah, really excited for the future of those tools. Yeah. Right. My answer is to work with those people who care about the build process. I'm the kind of person who just runs the command I was told, and if the build process takes a minute, I will just patiently wait and not question it. It must be doing something very important. Yeah. Anyway, we're running out of time, so this is going to be the last question, and we need to be quick. The last one is from Dan. Do you optimize as you are developing the website or web app, or do you wait until the end, profile, and then do a round of optimizations? Maybe a quick round of: how do you approach optimization? Start with Jake. Yeah, definitely tools, not rules.
And analyzing is good, but I refer back to my previous answer: having a static render is always necessary, even if it is just a splash screen, and having it there from the start is really, really important. So. Both, for me: on the one hand, I profile as I go, but at the same time, I often go by "make it work, make it right, make it fast", because sometimes you just lose time optimizing something that, in the end, you have to throw out because it doesn't actually fit in your program. And at the same time, I think I have the luxury that, because I can often take the time on our apps to optimize them, I know what to avoid from the get-go, because I've made these mistakes before. I know that animating `top` is a bad idea, or something like that, you know? So I definitely encourage you to keep an eye on performance, but if you're trying new things, don't waste your time optimizing a for loop. Make sure you measure where you're actually breaking your frame budgets, or where your bottlenecks are. Like, the for loop over an array with 10 items: yes, maybe forEach is slower than a for loop, but it won't really matter in the grand scheme of things. So my answer is kind of a rephrasing of both of those, which is that I do my architectural optimization upfront, or some of it: using a static build tool, or taking a particular approach that I know will pay dividends in the long term for performance, selecting the technologies I'm going to use. I think about those upfront because I can kind of predict the effect that they'll have on things. But then I do often wait a little while before doing the profiling, to see, okay, well, how did that pay off? Did I make other, more specific technical decisions that I need to rethink? Because those are cheaper to re-evaluate later on; the architectural decisions are usually much more difficult.
My approach is that I don't do the profiling, but whenever I commit a version, I test it on a low-end phone, just so I know how it performs. And then I know whether, later down the road, we need to profile something. Anyway, we are running out of time. So that's it. I would like to remind everybody that the apps we mentioned today, Squoosh, tooling.report, and PROXX, are all available on GitHub. So go dig in, look at our source code, and see what we are talking about. Also, this year we would like to get more contributors to our projects, so check out the issues where we've asked for help. Thank you very much for joining us this morning, evening, afternoon, wherever you are. And thank you for your questions.