Welcome to our very first session of Ask Chrome Live. This is a series of live streams where you'll be able to hear from the Chrome team about how you can build better web experiences. In addition, you'll be able to ask us your questions. My name is Katie. And I'm Hussein. And we're here to talk to you about implementing performance budgets right in your build process as a web developer. Today's session will be broken down into two parts. In the first part, we'll be talking about performance budgets and how you can implement them in your projects. And the second part will be a Q&A session where we can answer any of your questions. Make sure you ask your questions by using the link to the form below, or in the description if you're watching this on our YouTube channel. We'll try our best to answer as many of your questions as we can.

Okay, so let's get started. Today's agenda will begin with us talking about what a performance budget is. We'll then move on to try and understand the different performance budget tools that are available, as well as the different approaches that they take. We'll finish off by actually talking about a number of these tools and how you can include them in your workflow, before we head on to the Q&A session.

Budgets are things that we use all the time in our lives to set limits on things we use or things we consume. Performance budgets act in the same way. Performance budgets set standards for the performance of your site. An example of a performance budget could be something like: you want your site to load in under two seconds on a specific 4G connection. Performance budgets are a lifestyle, not something you only need to think about from time to time. They're most useful when you incorporate them, and add a layer of accountability, all the way throughout your entire software development life cycle. So what are performance budget tools?
Performance budget tooling allows you to add performance budgets to your site, and it does this while actually looking for a few things. The very first is that it allows you to measure a specific budget type. This could be timing-based budgets, resource-based budgets, or even score-based budgets. It measures whatever you decide to set against a specific data source. This could be lab or synthetic data, or this could be field or user data. And finally, it needs to provide some sort of feedback so you know how well you're performing. One thing that a lot of performance budget tools do is integrate directly into your CI workflow, so that your builds will pass or fail depending on whether your performance budgets pass or fail. Some also provide some sort of alerts so you get a better idea of exactly how well you're performing.

Now it's very important to mention that many of these different tools have different use cases, advantages, and disadvantages. There is no best approach to performance budgeting. The very best performance budget tool is ultimately the one that you decide to use throughout your entire development workflow. What a lot of teams end up doing is incorporating multiple approaches, by looking at more than a single data source and even looking at more than a single budget type. And this gives them a better idea of how their sites are performing along a few dimensions.

So now to talk about actual budget types. The very first type of budget we think about is timing-based budgets. This is essentially how fast resources get downloaded for your page, how fast your page loads, and how fast it becomes interactive. Now, the reason why this is the first thing you think about is because it's a direct measure of how well your page is performing. The only problem with looking at timing metrics is that they can have a very high variance.
For example, we tried to measure the load time of YouTube.com across a number of different runs. The highest delta we noticed in terms of variance was a two-second difference. Now, this type of variance is quite significant if the entire load time of your page is also about two seconds. There are a number of strategies you can take to actually minimize this type of variance. One such strategy is setting a budget for a maximum. What this means is that instead of only running once or twice, you do a number of runs and then make sure that your maximum is never exceeded across any run. Another thing you can do is set a budget for a median instead of the very max. And lastly, although this is something you should always be doing, it's very important to try and use a consistent test environment, because you want to make sure that you minimize all the possible variables that can cause variance. This can be the device that you're testing on as well as the connection type.

The second budget type that you can measure against is resource-based budgets. This usually means things like the number of requests that get sent, as well as the size of these requests. The nice thing about using resource-based metrics is that they stay very consistent. When you load a page multiple times, you would expect the number of requests that get fired, as well as the total size of the requests that get shipped, to always remain the same. Now, the one problem with using resource-based metrics is that they're an indirect measure of page performance. They can affect page performance, but many other things can as well. If you're thinking of actually incorporating resource-based budgets, a decent first step would be to measure the total size of all the assets being shipped. The thing about this, though, is that it's not very helpful, because it doesn't really tell you exactly where you can improve and which assets you should minimize.
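To make the multiple-runs idea concrete, here is a small sketch of what a timing budget check over several runs might look like. The run values and the 3,000 ms budget are made-up numbers for illustration:

```javascript
// Given timing measurements (in ms) from several runs of the same page,
// check a budget against the median and the maximum rather than a single
// noisy run.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function checkTimingBudget(runsMs, budgetMs) {
  return {
    median: median(runsMs),
    max: Math.max(...runsMs),
    medianWithinBudget: median(runsMs) <= budgetMs,
    maxWithinBudget: Math.max(...runsMs) <= budgetMs,
  };
}

// Example: five load-time measurements with ~2s of variance between runs.
const result = checkTimingBudget([2100, 2900, 2400, 4100, 2600], 3000);
// The median (2600 ms) passes the 3000 ms budget even though the worst
// run (4100 ms) would fail a max-based budget.
```

A max-based budget is the stricter choice; a median-based budget is more tolerant of one-off noisy runs.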
The better thing to do is actually set different resource-based budgets depending on the type of asset being shipped. This way you can see what you're shipping too much of and what you can actually begin to try and minimize. Although all asset types should be minimized where possible, JavaScript is particularly expensive, because JavaScript always needs to go through parse, compile, and execute steps. One kilobyte of JavaScript is going to be more expensive than one kilobyte of image or even other asset types. So if you have to pick a specific resource type to begin applying budgets to, it should most likely be JavaScript. And this also brings us to the next point: a lot of the code that actually lives in our application is not code that we wrote ourselves. Take this statistic, for example: over a quarter of sites are actually made up almost entirely of third-party content. By applying budgets to the amount of JavaScript that we're shipping, we can keep an eye on not only the code that we write, but any third-party libraries and dependencies that we decide to pull in.

The third budget type that we can actually use is score-based budgets. Score-based budgets work like this: a tool goes about running a number of tests, testing a number of budgets of its own, and then gives us a generalized score of how well we're doing. Tools like Lighthouse and WebPageTest do this pretty well. The nice thing about this is that it's very easy to understand and communicate with your entire team. You get a score, you know you need to improve, and you can tell everyone we can actually improve along this specific thing. Now, a negative could be that if we're not very clear on exactly what budgets are being tested, we may miss out on specific things we'd like to also measure. But this could be an advantage as well: there are fewer things we need to worry about, and we don't need to do much work beyond running these reports and tests and having them do the testing for us.
Now to talk about the data sources that different tools can actually use. Like we mentioned earlier, we can look at lab data, or synthetic data, and this is data from simulated users. Some tools will use field data, or RUM data (real user monitoring), and this is data from actual users as they experience your site. Tools like Lighthouse, WebPageTest, HTTP Archive, and Calibre are all examples of tools that use synthetic lab data. Now, WebPageTest will actually run your page on a real device that lives in a real device farm, but it's still a simulated experience, because it does not represent what your actual users will experience with the conditions that they have. Examples of tools that use field data are the Chrome User Experience Report, SpeedCurve LUX, and any analytics data like Google Analytics.

One nice way to get started, if you haven't launched your site or your application just yet, is to begin by using lab and synthetic data. Here it's easier to isolate variables, and it can make sure that you keep an eye on how well your site is performing as you continue to add more and more features. Once you launch your site, that's when you can make sure that you measure field data from your users, so you can assess how well users are experiencing your site. One thing to keep in mind, though, with field data is that the numbers you see might not always be representative of how well your site is actually performing. For example, let's say you launch your application with 1,000 users, and you notice that the load time is four seconds. So you decide to optimize and improve where possible, and then you notice that you end up having 10,000 users and the load time actually went up to five seconds. This doesn't necessarily mean that your site is performing worse. If your site is performing better and better, users with weaker devices and poorer network connections can actually now take advantage of using it.
So in cases like this, even after launch, it's always good to also fall back to synthetic data and use that as a baseline of how well your site's performing in a controlled environment. By using that as well as RUM data, you can keep an eye on how well your site's performing in your own testing, as well as how your users are experiencing it. And now Katie will actually talk about specific tools to try and integrate into your workflow.

One thing that I'd like to remind you of is that if you have any questions, please drop them into the form. The link to the form is below, and if you're watching this on YouTube, it's in the description underneath the video. So let's talk about tooling for performance budgets. The good news here is that there are a variety of performance budget tools that only take a couple of minutes to get set up with.

The first is Bundle Size. Bundle Size is an NPM package that allows you to set budgets for the size of particular files. To use Bundle Size, install the NPM package, add it to your package.json scripts, and then set budgets for particular files. And you set these budgets within your package.json file. In the example up on the screen, we've set a budget of three kilobytes for the moment.js file. There are two features of Bundle Size that I want to note. One is that it supports globbing. This means that you don't have to manually set budgets for every single file in your repository. Secondly, Bundle Size allows you to specify the type of compression to be used when measuring a file. By default, Bundle Size will look at the gzipped size of a file, but it can also be configured to look at the uncompressed file size or the Brotli-compressed file size. Bootstrap, Preact, and Lighthouse are three examples of projects that use Bundle Size. So if you want to look at how budgets are being set in the wild, go check out the package.json files of these three projects. The next tool is Webpack Performance Hints.
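For reference, a package.json set up along the lines Katie just described might look something like the following sketch. The 3 kB moment.js budget mirrors the on-screen example; the dist paths, the glob pattern, and the 50 kB figure are illustrative assumptions:

```json
{
  "scripts": {
    "test": "bundlesize"
  },
  "bundlesize": [
    { "path": "./dist/moment.js", "maxSize": "3 kB" },
    { "path": "./dist/*.min.js", "maxSize": "50 kB", "compression": "brotli" }
  ]
}
```

The second entry shows both features mentioned above: a glob covering many files at once, and an explicit compression setting instead of the gzip default.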
Webpack Performance Hints is a feature that comes with Webpack out of the box. Out of the box, Webpack will warn you if an asset or an entry point exceeds 250 kilobytes. However, this default behavior is probably not aggressive enough if you're serious about performance, so you'll probably want to change it, and you do this by updating your Webpack config. In your Webpack config, specify that you want to error when a budget is exceeded. You cannot ignore errors, but you can ignore warnings. In addition, you'll probably want to lower the max entry point size and max asset size. As I mentioned, Webpack's default for both of these things is 250 kilobytes. If you're trying to ship a fast site, 250 kilobytes is way too large. However, the exact value that you should set here will depend on your particular site and what your chunking strategy is.

The next tool is Lighthouse. Lighthouse is a tool that probably needs no introduction. Lighthouse goes through and assesses various aspects of your website, for instance performance or accessibility, and provides a score as well as feedback for each of those dimensions. It's available as an NPM module, a Chrome extension, and in DevTools. However, when we're talking about using Lighthouse for performance budgets, what we care about is the performance score that it generates. So what is a good Lighthouse score? When we're talking about performance, a site with a Lighthouse performance score between 90 and 100 would be considered fast, 50 to 89 is average, and below that is slow. These scores are not arbitrary. They're derived by comparing the performance of your site against other sites within the HTTP Archive. Thus, you can actually work backwards from your score to figure out what percentile the performance of your site is in. So if you have a Lighthouse performance score of 50, you're actually in the top 25% of sites in regards to performance. If you have a score of 100, that puts you in the top 2%.
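A Webpack config with the changes described above, erroring instead of warning and lowering both limits, might look like this sketch. The specific byte values are illustrative assumptions; as noted, the right numbers depend on your site and chunking strategy:

```javascript
// webpack.config.js (sketch)
module.exports = {
  // ...entry, output, module rules, etc.
  performance: {
    hints: 'error',                // fail the build instead of just warning
    maxEntrypointSize: 170 * 1024, // in bytes; well below the 250 KB default
    maxAssetSize: 100 * 1024,      // in bytes; applies to each emitted asset
  },
};
```

With `hints: 'error'`, exceeding either limit fails the build, which is what gives the budget teeth in CI.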
Many people incorporate Lighthouse into their build processes by writing scripts that look at the JSON output generated by Lighthouse. However, there are also tools available that do this heavy lifting for you, and one of those tools is Lighthouse Bot. Lighthouse Bot is available as an NPM package, and it will go through and run Lighthouse against new pull requests. It can be set up to comment on these pull requests with information about the Lighthouse scores. It can also be used to block branches from merging if their performance scores fall below a certain threshold. For example, the code up on the screen would stop branches with performance scores below 90 from merging. To use Lighthouse Bot, install the NPM package, add it to your NPM scripts, and then lastly update your .travis.yml file to run Lighthouse Bot.

Another way Lighthouse can be used for performance budgeting is with Lightwallet. Lightwallet is a feature in Lighthouse that I've been working on recently, and it will be adding support for performance budgets to Lighthouse. So now when you run Lighthouse, you'll find that there's a performance budget section. Note that this will only appear if you set up a budgets.json file. The budgets.json file will be a way for you to specify budgets for your site. If Lighthouse sees a budgets.json file for your site, it will then conduct a performance budget audit in addition to the other audits that it runs. The budgets.json file looks like this, and it's a way for you to declare the budgets that you have set for your site. It supports three types of budgets. First, timing budgets: these are things like First CPU Idle, Time to Interactive, and First Contentful Paint. Second, page weight budgets: the amount of, say, JavaScript or other resource types on a page, the amount of overall resources on a page, or the amount of third-party content on a page. And then lastly, budgets for the number of requests made.
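A budgets.json covering all three budget types described above might look like the following sketch. The exact schema may differ slightly between Lighthouse versions, and the specific numbers are illustrative assumptions (timings are in milliseconds, resource sizes in kilobytes):

```json
[
  {
    "timings": [
      { "metric": "interactive", "budget": 5000 },
      { "metric": "first-contentful-paint", "budget": 2000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "third-party", "budget": 100 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

Here `timings` is the timing budget, `resourceSizes` is the page weight budget, and `resourceCounts` is the request-count budget.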
Like page weight budgets, this budget can be set for the page overall, for a particular resource type, or for third-party content. When you put all of this together, you get a nice little chart within Lighthouse that outlines the budgets for your site, what was actually measured for your site, and the difference between the two.

The next tool is Google Analytics. Google Analytics really isn't a platform designed for measuring performance, but it does have a couple of performance features inside it. One of those is alerts if the performance of your site drops below a particular threshold. Setting this up only takes about 15 seconds. Go into your Google Analytics account and set up a custom alert. You can pick from multiple performance metrics, and you can specify whether you would like to be alerted via text or via email.

Next is PageSpeed Insights, or PSI. PageSpeed Insights is a way for you to get both lab data and field data for your site, or other sites, all in one place. The lab data comes from Lighthouse, and PSI takes care of running Lighthouse so you don't need to worry about running Lighthouse locally. The field data comes from the Chrome User Experience Report, also known as CrUX. The Chrome User Experience Report is what I like to think of as the field data equivalent of the HTTP Archive. It provides real user metrics from a wide variety of sites. PageSpeed Insights can be consumed via its API. And the nice thing about the API is that you don't need to sign up for anything or get an API key to use it. You can just start hitting it immediately. The only time that you would need to sign up for an API key is if you planned on making a really high volume of requests, on the order of multiple requests per second. When teams use PageSpeed Insights for performance budgeting, what they typically do is write a little script that gets performance data from PSI and then incorporates that into their build process.
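A sketch of such a script might look like this. The endpoint is the real PSI v5 API; the 0.9 threshold and the build-failure behavior are assumptions for illustration:

```javascript
// Build a request URL for the PageSpeed Insights v5 API. No API key is
// needed for low-volume use.
function buildPsiUrl(pageUrl, strategy = 'mobile') {
  const endpoint = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
  return `${endpoint}?url=${encodeURIComponent(pageUrl)}&strategy=${strategy}`;
}

// Pull the Lighthouse performance score (0-1) out of a PSI response and
// compare it to a minimum score, following the v5 response shape:
// lighthouseResult.categories.performance.score.
function checkPerformanceScore(psiResponse, minScore) {
  const score = psiResponse.lighthouseResult.categories.performance.score;
  return { score, passes: score >= minScore };
}

// In a build script you might then do something like:
// fetch(buildPsiUrl('https://example.com'))
//   .then((res) => res.json())
//   .then((data) => {
//     const { passes } = checkPerformanceScore(data, 0.9);
//     if (!passes) process.exit(1); // fail the build
//   });
```

Exiting with a non-zero code is what lets a CI system treat a blown budget as a failed build.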
However, I also want to note that you can use PSI from the browser. And this is really great if you want to share performance data with people who don't consume APIs on a day-to-day basis. You can just give them this link, and they can run all of these performance tests in real time in the browser. What the browser version of PSI looks like is this. And the API contains the exact same data; it's just returned in JSON format. There are three main parts to the PSI report: one is field data, the second is the origin summary, and lastly is lab data.

So field data gives you information about the observed performance of your site over the past 30 days. In particular, it looks at two metrics, the first being First Contentful Paint. The way PSI's graphs and data are set up, it lets you know the percentage of users that had a fast experience, the percentage of users that had an average experience, and the percentage of users that had a slow experience. I want to note that the number in the upper right-hand corner of this chart is the 90th percentile FCP. It's not the median or the average FCP. The graph for First Input Delay looks exactly the same; the only difference is that the number in the upper right-hand corner is going to give you the 95th percentile First Input Delay. And if you're unfamiliar with First Input Delay, that's because it's a fairly new metric. The idea with First Input Delay is that it measures the amount of time between when a user tries to interact with a page and when the browser is actually able to respond to that interaction. This is something that's important to measure because we frequently see that sites have very long-running scripts on the initial page load. And because of this, the main thread is really busy and therefore can't respond to the user. This frustrates the user, because the page looks ready, but you're clicking everywhere and nothing's happening. The next part of the PSI report is the origin summary.
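Going back to First Input Delay for a moment: the delay itself is just the gap between the user's interaction and the moment the browser can start handling it. This is a hedged sketch using the Event Timing API entry shape; the observer snippet in the comments only runs in browsers that support the 'first-input' entry type:

```javascript
// FID = time between the user's first interaction (startTime) and when
// the browser could begin processing its event handlers (processingStart).
function computeFid(entry) {
  return entry.processingStart - entry.startTime;
}

// In the browser, you could collect the real value like this:
// new PerformanceObserver((list) => {
//   for (const entry of list.getEntries()) {
//     console.log('FID:', computeFid(entry), 'ms');
//   }
// }).observe({ type: 'first-input', buffered: true });

// Example: the user clicked at 3000 ms, but a long task kept the main
// thread busy until 3250 ms, so they waited 250 ms for any response.
computeFid({ startTime: 3000, processingStart: 3250 }); // → 250
```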
And the origin summary gives you performance data for all the pages in the origin. This is really convenient, because something that I see happen again and again is that we care about the performance of our entire site, but we may have only remembered to test the performance of the home page, or the home page and a couple of other pages. Well, the origin summary solves this by aggregating the performance of all the pages in your origin. And the last part of the PSI report is lab data. This is just the same data that you would find in a Lighthouse report, and since we've already gone over Lighthouse, I won't go into any more detail about what the lab data section contains.

That brings us to the end of our discussion of performance budgets and performance budget tooling, and now we're going to move into taking your questions.

And now, on to our very first question. So we have a few metrics questions we can begin with. There's one here that says: if you had to pick one, would you rather optimize for Time to Interactive or First Contentful Paint? So I think the idea here is, if you try to optimize for one, you're most likely going to end up optimizing for both. And I think trying to decide between the two can depend on the use case. Are you shipping something that's mostly static in your initial payload? Do you have a very interactive site that you'd rather have users engage with as fast as possible? So it can depend. But if there's a very big gap between your TTI and your FCP, you may be server rendering your initial content, but also shipping out a lot of client-side JavaScript that takes a while to hydrate. And if you have that problem, First Input Delay can also be a bit of an issue. Yeah, I'm always hesitant to say that you should only care about one of those, because what I tend to see happen a lot is that for a particular site, one of those metrics will be very flattering and one of them will be really unflattering.
And then a site will choose whatever number is flattering, care about that, and toss the other number away. In particular, I see this with sites that have a fast FCP but a slow TTI. They'll be like, oh, TTI is a horrible metric, we don't care about it. But say, for instance, you have a two-second FCP and a 15-second TTI: that should be raising some questions in your mind as to what is going on such that the browser is basically busy for 15 seconds in order to finish loading the page. So I would say you should probably care about both of them. Yeah.

How many metrics should I be using? I would say there's no harm in collecting as many measurements or metrics as you want, because it's only going to improve your understanding of the performance of your site. However, you should probably prioritize, or pick a couple of metrics that you're really going to focus on maintaining. Say, for instance, you want to avoid the situation where there's a performance issue and a bunch of metrics look bad, and you need to prioritize which one is going to get fixed first, because it's kind of like if everything's important, nothing is. And I think people can tend to find it overwhelming if they feel like they've got to hit six different metrics. Now, maybe long term you should be shooting to have good performance across all those metrics, but particularly if you're starting out with troubleshooting performance, I think it can be overwhelming to be concentrating on too many things at the same time. Yeah, the thing about metrics is that they tend to change after some time. You know, the teams are working on adding new metrics from time to time, as well as improving what we currently have. So yes, ideally, in the long run, you definitely want to use as many as possible, but I do agree that trying to focus on a core set to begin with can help you avoid getting really overwhelmed.
So we have another question here, also metrics-related, that asks: I want my TTI to be within three seconds on a slow 3G network connection on a low-end device. To achieve this target, what should I follow strictly? So if you're thinking about actually trying to hit these numbers to begin with, I feel like you're already, you know, on the very far end of making a very performant site, and you've obviously done a bunch of the essentials, like code splitting, adding budgets where you can, and aggressively caching assets. The thing I do want to mention here is that if you really want to stay under this number on a slow 3G network connection, it's going to be hard. It's a very aggressive target, and you're most likely going to have to ship an almost entirely static site with minimal JavaScript, and maybe minimal other assets, because it's not easy to hit that number on a slow 3G network connection. Yeah, I think that's possible to achieve, but you need to build from the ground up with performance in mind. You're not going to be able to take an existing legacy application using a big framework and whittle it down to something that's going to hit those targets on a low-end device on slow 3G. I guess to put that in perspective: slow 2G is formally defined, but there's not a formal definition for slow 3G. You probably can expect a round-trip latency of around a second, though. And the reason I'm highlighting that is that for these goals, under those conditions, latency is going to be a constraint way before bandwidth is. You're never actually going to hit that bandwidth constraint, because the latency affects the TCP slow start of the connection. So in three seconds, you can roughly expect to transmit only about 100 kilobytes of data. Now, that is 100 kilobytes of compressed data, but still, that's not a lot.
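That ~100 kilobyte figure can be sanity-checked with a rough back-of-envelope model of TCP slow start. The constants here (a one-second RTT, a 10-segment initial congestion window, 1,460-byte segments, the window doubling every round trip, and ignoring DNS/TLS setup time) are simplifying assumptions, not a precise model:

```javascript
// Rough estimate of how many bytes TCP slow start can deliver within a
// time budget, ignoring connection setup and bandwidth caps.
function slowStartBytes(budgetMs, rttMs = 1000, initCwnd = 10, mss = 1460) {
  let bytes = 0;
  let cwnd = initCwnd; // congestion window, in segments
  for (let elapsed = 0; elapsed + rttMs <= budgetMs; elapsed += rttMs) {
    bytes += cwnd * mss; // one window of data per round trip
    cwnd *= 2;           // window doubles each round trip in slow start
  }
  return bytes;
}

slowStartBytes(3000); // → 102200 bytes, i.e. roughly the ~100 KB above
```

Three round trips at one second each give windows of 10, 20, and 40 segments, which is where the roughly-100-kilobyte ceiling comes from.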
But it can be done, particularly if, for JavaScript, you start with something like Preact or Svelte. Those can make for super small applications, and I would shoot for having an app that's maybe under 50 kilobytes. So that gives you about two seconds to ship everything to the user, and then another second for it to be parsed, compiled, and executed. So yes, it can be done. You just have to be really, really aggressive and thoughtful about how you build your app.

Should third-party code be included in my budgets? Yes, definitely. When a user goes to your site, they can't discern whether something is first- or third-party code. It's all the same to them. Whether something is first- or third-party code, it all affects the end user experience. So I would say it's definitely important to include third-party code in your budgets. Yeah, and like we mentioned earlier in the actual session, in many applications third-party code can be the vast majority of the code that you ship. And just like Katie mentioned, your users have no idea of the difference between first-party and third-party code; code is code. And I think in the JavaScript ecosystem that we live in, it's very easy to get, you know, weighed down by a number of dependencies. It's very simple to just install whatever you want, and it's easy to not be wary of how much third-party code is being pulled into our application. And if we only focus on the first-party code that we're shipping down, we could be missing, you know, the biggest culprit. Yeah, I don't think we touched on this during the presentation, but there's also the concept of fourth-party code, which is the idea that the code you know of as third-party code is in turn loading other code. So you might only consciously be adding a couple of bits of third-party code to your site, but then those bits of code are in turn going out and adding additional code to your site.
So that's why it's really important to check that, because it can really add up, and you don't want to be caught unaware of what is being added to your site.

So now, okay, we have a Lighthouse question here. Can the Lighthouse scorecard be used to determine a performance budget? So if this question is specifically asking, can we use Lighthouse scores as a budget to start with: short answer, yes. It's a very easy way to sort of gauge metrics and see how your site's performing in a score-based way. And it's definitely a good place to start. With that being said, it's definitely nicer to have a layer of accountability, so if you could hook it into your CI workflow using Lighthouse Bot, that would be even better, because manually checking budgets is a good place to begin, but it's hard to adhere to when a lot of things are changing in your code base. And definitely keep an eye out for Lightwallet, which Katie's working on, because that will actually make the process of adding budgets to Lighthouse even better. And if you're looking for additional resources on what you should set your budget to, yeah, certainly the numbers in Lighthouse are good. I mean, they're there for a reason. Another strategy I see people sometimes use is benchmarking against themselves. Usually, maybe 20% is a good amount, so trying to achieve a 20% improvement against your current performance and then keep working down from there. Or a third strategy would be benchmarking against your competitors. So figuring out how the performance of your site compares to your competitors and trying to be the best in your class or category of sites. So those are two other options out there if you're looking for ideas on how to set your budgets.

Would you recommend third-party tools like SpeedCurve? We would definitely recommend them.
The reason why I didn't talk about tools like SpeedCurve or Calibre in this presentation is that I just wanted to keep the focus on things you can go out and download instantly; you don't have to pay for anything. I think both of those tools have free trials available, but I wanted to keep the focus on stuff where you don't have to ask your boss for permission to purchase anything. But they are great tools, and a lot of big companies use them.

So we have another Lighthouse question. Is it possible to use Lighthouse in CI for pages that require being logged in? So I don't think Lighthouse Bot provides a simple API for this. With the Lighthouse CLI, I know that people have been doing workarounds, and they've been able to test authenticated pages using Puppeteer. I think there's a pretty long issue thread in the Lighthouse repo where people are talking about different ways that they've done this, so I would highly suggest taking a look there. There should definitely be more documentation in the near future about this. I think the Lighthouse team does want to make sure that there are some good resources that anybody can take a quick look at to see how they can include it. But in the meantime, using Puppeteer can actually help. Actually, I noticed a question that got asked almost twice that's tangentially related to this, about setting up private instances of WebPageTest. You can definitely do that. It's just that right now, on air, is probably not the right time for us to explain how. But if you search for that online, you can find instructions on how to do it.

So now, okay, we have a question that's kind of strategy-related. What frameworks or methods have companies used to reduce their application size? It seems like many popular frameworks are too large if you want to aim for the best load speeds. That is an issue with a lot of the frameworks that people use in general.
It's just that with a Hello World application, you get started with an initial footprint that doesn't even contain any of your code. So it's something to be wary of if you decide to use libraries or frameworks or anything that you need to power your site on the client side. But aside from the many different approaches like code splitting and so forth, there have been some companies that have taken some nicer and different approaches to get around this. I know Netflix, for example, decided to remove React and a few other libraries on their landing page. What they did was decide to just rely on vanilla JavaScript for that specific page, but also take advantage of the browser's idle time to prefetch React and other libraries for the pages that would definitely need them, pages that the user might move to next. So I think this is an approach that we might be seeing more of in the near future: prefetching libraries like React and so forth only when they're needed, instead of just relying on them everywhere on the entire site. Companies like Twitter have done things like building a lite version of their site. So I think more and more companies now are thinking that if they can't really improve the performance of their main application, maybe they can build a lighter version that contains fewer assets, as well as other data-saving capabilities, for the users that really need it. In terms of actual frameworks and how they can improve the initial payload that they provide, we're seeing a push now towards progressive hydration. I know the React team and the Angular team are both really exploring this right now, where when you server-side render, instead of just shipping the entire payload, you stream it and only ship chunks of it while the user's interacting with your page. And this can be pretty exciting if it works well.
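The Netflix-style approach Hussein describes, a vanilla-JavaScript landing page that uses idle time to warm up the heavier app, might be sketched like this. The bundle URLs are hypothetical, and `requestIdleCallback` is feature-checked since not every browser supports it:

```javascript
// Add a <link rel="prefetch"> so the browser fetches a bundle at low
// priority for a future navigation. `doc` is injectable for testing.
function prefetch(url, doc = typeof document !== 'undefined' ? document : null) {
  if (!doc) return null;
  const link = doc.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  doc.head.appendChild(link);
  return link;
}

// On the landing page: wait until the browser is idle, then warm up the
// bundles that the React-powered pages will need later.
if (typeof window !== 'undefined' && 'requestIdleCallback' in window) {
  window.requestIdleCallback(() => {
    ['/static/react.prod.js', '/static/app-shell.js'].forEach((url) => prefetch(url));
  });
}
```

Because prefetches happen at low priority during idle time, they don't compete with the landing page's own critical resources.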
So the idea is that only when you interact with a component, say you click a button, the code for that component gets fetched, while the component has already been server-side rendered so the user can see what they're interacting with. If this keeps getting better and better, we could see real wins, in the sense of framework size no longer being huge when you start using them.

I'll continue on that question because there are a lot of different directions you can go with this. On the frameworks point we just talked about, the three I would recommend if you're looking for a really small footprint would be Preact, Svelte, and Polymer. Then the other approach would be to really analyze all the dependencies you're currently shipping. A lot of people have success by slowly picking out components that can be replaced with smaller ones. We mentioned Moment.js in the slides in passing, and that's a perfect example: there are quite a few libraries out there that can replace Moment.js, and they're much smaller. In addition to looking for libraries that are much smaller, look for libraries that can be tree-shaken so that you're only shipping what you actually need. Lodash is another one that people often try to remove as much as possible. So that's another approach.

And then Hussein mentioned the strategy that Netflix was using, or is still using, where their initial landing page is vanilla JavaScript and then they load the React app in the background. I think we're starting to see that a lot more, where your landing page is vanilla JavaScript or, as I mentioned, Preact or Svelte.
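As one concrete example of swapping out a heavy dependency like Moment.js: for simple formatting, the browser's built-in Intl API can often cover the same cases at zero bundle cost. This sketch is our own illustration, not something from the talk.

```javascript
// Simple date formatting with the built-in Intl API instead of Moment.js.
// Locale and time zone are pinned so the output is deterministic.
const formatter = new Intl.DateTimeFormat('en-US', {
  year: 'numeric',
  month: 'long',
  day: 'numeric',
  timeZone: 'UTC',
});

function formatDate(date) {
  return formatter.format(date);
}

console.log(formatDate(new Date(Date.UTC(2019, 4, 15)))); // "May 15, 2019"
```

For anything Intl can't do, tree-shakeable libraries exist where you import only the individual functions you use, so the rest of the library never reaches your bundle.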
So even if you can't whittle your entire app down to something small, or that's not feasible in the near term, you can cut the scope and make it easier to achieve by just worrying about shipping a really small landing page and then pre-caching your legacy app in the background. That's a little easier to implement than rewriting your entire app.

How do you go about establishing performance budgets in organizations that are traditionally very desktop-focused? It's actually really interesting. A lot of people, and I think this is common, I'm even guilty of it myself, when we talk about desktop, we almost use it as a proxy for fast, and we use mobile as a proxy for slow. That's not actually true. If you look at the HTTP Archive, for instance, you'll notice that there is a performance difference between desktop and mobile, but it's pretty small. I think that's probably the biggest key, because you have to establish the fact that desktop is not a synonym for fast or powerful machines. As developers, a lot of times we think everybody's working on these really powerful developer machines, and that's just not the case. Once you get past that, everything else we talk about with performance budgets applies the same in a desktop environment. Maybe only a couple of things are different about desktop apps: obviously the form factor is different, so images and so on are bigger, but in the end, I would say they're usually the same problems on both kinds of devices. I agree. If you're thinking about performance budgets, just treat them the same way, regardless of what a user is using.
Once you've set budgets that work for your users, it shouldn't matter whether they're using desktop, mobile, or anything else. But to Katie's point, something we do quite often is say, oh, desktop is fast, and that's not entirely accurate, because especially in countries outside the West, a lot of people use desktop machines but their network connections are extremely slow. Wi-Fi can be very slow; we've talked to people who actually tend to rely on their mobile data even when they're using their desktop machines. So even if you still think desktop machines are generally a bit faster, you still have to get past the hurdle of flaky or just plain poor network connections, and that's always going to be a problem.

That's a good point about the Wi-Fi. When we're talking about desktop, we assume people are on Wi-Fi and not tethering or something like that, and certainly in the US, Wi-Fi tends to be faster than mobile, but that does not hold true in a lot of parts of the world. Mobile data can be faster, so the fact that your users are on Wi-Fi doesn't necessarily mean it's fast. As an example, I'm sure a lot of you have stayed in a hotel with really slow Wi-Fi. You were on Wi-Fi, but it didn't mean it was fast. I guess one place to start would be to look at the numbers you're seeing for your site and see whether your users are actually having a fast experience.

So we have another question here that asks, how do you win the priority battle and get budgets taken seriously by your colleagues? I think it's a good question, because it's something everybody who tries to implement performance optimizations or budgets has to go through when they bring it to their team.
Especially if it's something that hasn't been considered yet. It's something I've actually had to deal with multiple times on previous teams as well. I think the one thing that really works here is to find case studies and examples online of sites and vendors that have taken steps to improve performance, and see how much improvement they've noticed in user retention, revenue, and so forth as a result. Pinterest, for example: I can't remember the numbers off the top of my head, but there's a very good case study of how much growth they saw while improving their mobile web experience. So try to get those sorts of numbers and use them as evidence that performance can help, and then make the case that setting budgets is a clear way to get there. I think that's a good place to start for sure.

Okay, we have another question here that asks, what are the performance best practices for retail homepages? A lot of the general best practices for performance also apply to retail homepages. What tends to be a little different about retail homepages is that, in general, they have far more images than the average site. Most sites these days have a lot of images, but retail sites in particular have a lot, and the quality of those images is really important, because a lot of times that's what people are using to make product decisions. So the first recommendation I would have is to really make sure you're implementing all the best practices around images: compressing them, serving them at the correct size, and so on. The second thing would be to be mindful of the volume of third-party scripts on your site. Retail is one of the verticals where you tend to see a lot of third-party scripts, and those scripts tend to have a negative impact on performance.
So those are the two things in particular I would really look for, the things I think are a little different about retail sites compared to sites in general.

This is an interesting question. Someone is asking, if you only had limited time and resources to tackle performance on your site, what would be the first thing to focus on for a long-term benefit? I'm sure different people have different opinions, and if you had to focus on one thing, it really depends on how much time that one thing might take, especially if you're short on time. Personally, the very first thing I think about when optimizing is, where can I split my code? With whatever tools you're using, it's very easy to end up shipping everything as soon as a user loads the very first page. So code splitting at the route level is always a very good first step. And then, can you code split at the component level if you're using a component framework? Just by doing that alone, you'll notice pretty good improvements. Personally, that's the first thing I look into. I don't know, Katie, if you have anything to add.

Yeah, it's interesting; it seems like every site has low-hanging fruit, and I would say go for the low-hanging fruit. For every site that tends to be a little different, though in general, JavaScript is a big issue. So I would recommend running Lighthouse and seeing what the results come back as. As an example, most sites have gzip enabled, but occasionally I'll come across sites that don't, and that's like a one-hour fix. It's really quick to do, and all of a sudden you might be shipping 30% fewer bytes. So I would say look for some of those easy wins. A lot of times there are things that take less than a day; they're not going to automatically make you super fast, but they can make a significant impact.
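The route-level code splitting Hussein describes can be sketched with dynamic import(), which bundlers like webpack turn into separately loaded chunks. This is a minimal illustration; the route module paths are hypothetical.

```javascript
// Sketch of route-level code splitting with dynamic import().
// Each route's module is only fetched the first time a user navigates
// to it; repeat visits reuse the cached promise.
const routeModules = new Map();

function loadRoute(modulePath) {
  if (!routeModules.has(modulePath)) {
    routeModules.set(modulePath, import(modulePath));
  }
  return routeModules.get(modulePath);
}

// Usage (the route module path and render function are hypothetical):
// loadRoute('./routes/checkout.js').then(({ render }) => render());
```

The same pattern extends to the component level: wrap a component's import() behind the interaction that first needs it, and its code stays out of the initial page load entirely.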
And then, I think we missed part of the question: what can you do to focus on long-term benefits? What we just answered was more like, what should you do to have a quick positive impact on performance if you have a little time. If you want a long-term benefit, we're obviously biased because we just talked to you for half an hour about performance budgets, but think about putting performance budgets in place. What we see time and again is that people get excited about performance, they realize it's important, they make those quick optimizations we were just talking about, and then life goes on. You keep shipping more code, the site gets slower, you regress, and you're back to square one again. So yeah, I'm definitely biased: I would say consider performance budgets.

Yeah, and it's something we're seeing more and more, where people say, I have this limited time now, I can really optimize my site. They do it and it works well, but it's just so easy to fall back. I know we did talk about this for 30 minutes, but actually setting budgets could be one small thing you do now that will really help you down the line. And even if you don't set full budgets, if you really just want to do something, at least start tracking the size of your bundles. Then you can say, oh wow, we shipped this thing and our bundles got 5% larger, maybe we should look into that. At least you have some understanding of what's going on and of where your performance is heading. It's a starting place at least.

How do you avoid rendering issues when loading fonts from CDNs? Typically we see this issue when people are using Google Fonts, unfortunately, because Google Fonts does not support font-display: swap.
And for those of you unfamiliar with font-display: swap, it's a CSS property that tells the browser to use a system font while the requested font is still loading, and then once that font arrives, to swap it in. The performance benefit is that text is displayed right away and just changes style once the font arrives, versus the font blocking any text from being displayed on the page. So the solution there, unfortunately, is to self-host the font, because when you're self-hosting it you can use font-display: swap.

We have another question here asking, how much JavaScript is too much? This, again, depends on your use case, on what you're shipping, on your users, on what devices they have, on the network connections they have, and so on. So there's no single answer. And as much as we, as advocates, tell people to minimize JavaScript as much as possible, if you really feel that everything you're shipping is necessary, then you're fine. But we do have rules of thumb that we try to use as a good starting point: 150 kilobytes of JavaScript on a single page is something we try to stay underneath, because that would give you a decent enough Time to Interactive on a Moto G4 or a similar device. But again, it really depends on your app, and it's something to keep an eye on, because even if the amount of JavaScript you're shipping is okay now, it could become a concern as more and more features are added.

Next question: is it good practice to monitor performance regressions from build to build? Definitely. By doing that, you'll be able to understand what introduced those performance regressions. Or I shouldn't just say regressions: sometimes we ship code and actually improve performance. So, performance changes from build to build.
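The 150-kilobyte rule of thumb Hussein mentions can be written down as a Lighthouse budget file, so a CI run fails whenever a page goes over. The numbers here are just that rule of thumb plus an illustrative Time to Interactive target, not official limits.

```json
[
  {
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 }
    ],
    "timings": [
      { "metric": "interactive", "budget": 5000 }
    ]
  }
]
```

Saved as something like budget.json, this can be passed to the Lighthouse CLI with the --budget-path flag, and the report will flag any budget the page exceeds.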
Yes, I think you definitely should, because it helps you understand the relationship between the code you're shipping and the performance you're seeing. In addition, one of the things we talked about is how timing metrics can be a little inconsistent, but as you start to collect more historical metrics and have a larger data set, hopefully it becomes easier to see trends. Even if the data varies a lot, if it's trending up over time, you know there's a performance issue. Another reason is that by tracking from build to build, it's easier to fix an issue when it occurs. I know quite a few companies have basically an on-call performance engineer: if a build has a performance issue, and they notice this because they're tracking performance from build to build, that engineer is alerted and their task is fixing the issue. That stops these issues from getting out into the wild, or staying out in the wild for a long time.

So we have a question here that says, regarding the chat about low TTIs on slow 3G, how important is it at this point for the designer to be involved in, and even versed in, this project? I think that's a good question. Even if you're just thinking about performance in general, without that aggressive a TTI target, it's good to have the designer involved as much as you can, because they can be the one deciding which assets get used and whether different image sizes are used for different form factors. Just having them in the conversation means they can say, okay, let me give you these assets or these fonts, because I know performance is a big deal.
If you're really thinking about slow 3G connections, then it's extremely important, because whoever is designing your application needs to be minimizing anything they ship that could hinder performance.

I think we have time for one last question, and that's, how do my location and bandwidth affect WebPageTest? I'll generalize my answer, because I think what I'm about to say applies to both Lighthouse and WebPageTest. With WebPageTest, you select the server location the test runs from. There are multiple locations available; it defaults to, I think, Virginia, but you can pick different locations, including ones in other parts of the world. So your personal location is not affecting the results (I'm assuming you're not running your own private instance). But location does matter. Something I get feedback on a lot is that people observe much better results locally than what they're seeing on WebPageTest, and that's sometimes due to geographic distance. If the site being tested is hosted close to the servers running WebPageTest, you can expect better performance results than if the site is located far away. The same thing applies if you're using Lighthouse through PageSpeed Insights: generally, the further a site is from the PSI servers, the worse the measured performance will be compared to a site that's much closer.

So that brings us to the end of our questions today. I'm Katie. Yeah, and I'm Hussein. Thank you so much for joining us today, and remember to register for Ask Chrome Live to receive updates on future sessions.