Hi, I'm Cheney, and I'm here today to talk about optimizing your website for Core Web Vitals. Later on in this talk, my colleague Damian will hop on and dive into a real-world case study. Today, we're going to spend time talking about the interactivity pillar, why it matters, what metric to focus on, and how to improve it. As a refresher, the other two pillars of Core Web Vitals are loading and visual stability, and you can learn more about those in Addy Osmani's talk from web.dev LIVE, linked down below. First, let's talk about interactivity. What are we actually trying to improve and optimize for the user? Web pages are more dynamic and more touch-driven than ever before. It's an ongoing dialogue between the user and the page, with multiple taps, swipes, and scrolls, all within one navigation. When we touch or drag or swipe something, as humans, we've been trained by the world all around us to expect an instant response to that input. Yet the UIs we touch digitally don't always seem to match those expectations. It's frustrating to encounter experiences that seem sluggish or outright unresponsive. Of course, certain actions like tapping a search button or a filter button may kick off non-trivial work, while others are simpler, like moving from one navigation tab to another. What this pillar looks at isn't necessarily returning results immediately, but rather measuring whether the page was able to react to that input and give feedback instantly when necessary. So we ultimately want to improve something we call input latency, and that comes in three parts. The first part is the delay. That's the time between a user interaction and when the browser actually starts processing event handlers in response to it. Contrary to popular belief, when you touch something the browser doesn't react instantly; it's still finishing up whatever it was doing beforehand. The second part is the processing time. That's the time it actually takes to execute the event handlers tied to that user input.
Notice how in part two here, the user interface hasn't actually updated. There's no feedback yet given to the user that their input was registered and that it actually did something. So the third part is rendering, and that's the time it takes to render the next frame once the browser knows what kind of UI update to put onto the screen. And this is where you can see the UI shift from "fly" to "sleep". So the three parts here summarize what we call input latency. Now, the metric input delay refers to the very first part. We measure an input delay for every discrete action a user takes, such as a tap, a click, or a key press. Scrolling the page is not counted, since it can usually still happen even when the main thread is busy. So looking at this example page load, the first input delay is the very first interaction after a user navigates to your page. Notice how there are many input delays, but we want to concentrate on the first one, because that's often when the browser is the busiest: it's parsing and executing the various large JavaScript files that you might be loading. It's also a great time for you to make a good first impression as a web developer. While we want to make sure that every user input on the page has minimal delay, in the current edition of Core Web Vitals we recommend optimizing the first input delay, or FID for short. In this page load, when the user taps at the very beginning of the page, the main thread was in the middle of a JavaScript task, and in order to start responding to that input, the browser needs to wait until the task finishes. The time between the point of the user input and the browser actually finishing that yellow task you see here is the first input delay. And in order to have a strong likelihood that you can respond to the user in a fast, expected way, we recommend having no more than a 100 millisecond delay at the 75th percentile or higher.
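As a rough sketch of how that delay part of input latency can be observed, the browser's Event Timing API exposes first-input entries where processingStart minus startTime is the delay. This is illustrative; a production setup would report the value rather than log it.

```javascript
// Sketch: observing the "delay" part of input latency with the Event Timing API.
function computeFid(entry) {
  // processingStart - startTime is the gap before event handlers could run
  return entry.processingStart - entry.startTime;
}

if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      const fid = computeFid(entry);
      // 100 ms at the 75th percentile is the recommended threshold
      console.log(`FID: ${fid.toFixed(1)} ms`, fid <= 100 ? '(good)' : '(needs work)');
    }
  }).observe({type: 'first-input', buffered: true});
}
```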
It's important to remember that first input delay is a field metric, and it requires a real user. And it's not easy to guess what a user might do. Some users are interested in X; others might browse differently and scroll first before tapping on something else. Others, like myself, are a bit more impatient and tap things immediately. And this is all impacted by what you show the user, whether that's a splash screen or a loading carousel, by that particular user's intent, and by what other work you might be doing underneath the UI, hidden from the user. So the variation in input delays shows the importance of collecting and analyzing FID data from your users in the wild, and of concentrating on high percentiles like the 75th. You can collect this data using the Web Vitals JavaScript library linked here, or check with your RUM analytics provider for any out-of-the-box support they might provide. With the data collected, you can look at every type of input, from the instant ones that don't hit a blocking task to the ones where there is a delay, and you want to answer an important question: when the user did experience a delay in response to their input, how bad was that delay? Oh, look, Damian is joining the call. Yeah, I've been trying to join for the last five minutes, but I wasn't getting a response. Sorry, I was finishing up this long task of an introduction. Maybe next time you can break it up into smaller chunks so you can respond faster? That's a great idea. But even better, maybe there was stuff I could have cut entirely. Anyways, I know you've been talking to real developers out in the field. What are some common questions about first input delay that you've been hearing? Yeah, that's true. So if FID is affected by many factors in the field, which metric can I use when I am developing and testing in the lab? That's a great question. So the problem, as we've said, is that every single user is different.
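Collecting the data with the Web Vitals library looks roughly like this. The getFID name matches the v1-era library API (newer versions rename it to onFID), and the /analytics endpoint is a hypothetical placeholder for wherever your RUM data goes.

```javascript
// Sketch: collecting FID in the field with the web-vitals library and sending
// it to a hypothetical /analytics endpoint.
function toBeaconPayload(metric) {
  // Keep only the fields our (assumed) analytics backend needs
  return JSON.stringify({name: metric.name, value: metric.value, id: metric.id});
}

if (typeof window !== 'undefined') {
  import('web-vitals').then(({getFID}) => {
    getFID((metric) => {
      navigator.sendBeacon('/analytics', toBeaconPayload(metric));
    });
  });
}
```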
You can't generalize a test case in the lab that really represents the field. Every user makes their own rightful choices on the page, and they could be very different. Taking an average or median thus wouldn't make sense, since values could be zero because a user didn't hit a long task at all, or values could be pretty high because they touched the page right in the middle of a long task, as you see here on the left. So first input delay requires a user to have interacted with the page, and we know that that could happen in the middle of some main thread work. In the ideal scenario, we'd go to our lab tool and say, well, what is my typical input delay? Now, we just explained that it's not very easy for a lab tool to do that. So instead, in Lighthouse and DevTools, we surface a companion metric for the lab called total blocking time. Total blocking time describes the root cause of a slow first input delay, which is long blocking tasks. We set a budget of 50 milliseconds for each task, and if you go beyond that amount, every millisecond after that is considered potential blocking time. You get a free 50 milliseconds because we think that gives the browser and the main thread enough time to do some work and reliably react visually to user input in that time frame. Now, that user input could happen at any time. It could hit the very first task on the page or the 50th task on the page. So it doesn't make sense to measure total blocking time for just one task. Instead, we look at all the different tasks during the timeline of the page load and sum together all the different blocking times, and this is called total blocking time. We'll give you an example here. On the slide you see a main thread with multiple tasks happening, some long, some short. The sum of all the different blocking regions, denoted in red, is what we call the total blocking time. Now, say a developer comes along and wants to improve this.
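The definition just described reduces to a small calculation. The following sketch shows how total blocking time falls out of a list of main-thread task durations; the sample numbers are made up for illustration.

```javascript
// Sketch: deriving total blocking time from main-thread task durations.
// Each task gets a free 50 ms budget; anything beyond that is blocking time.
const BLOCKING_BUDGET_MS = 50;

function blockingTime(taskDurationMs) {
  return Math.max(0, taskDurationMs - BLOCKING_BUDGET_MS);
}

function totalBlockingTime(taskDurationsMs) {
  // Sum the blocking portion of every task in the page-load timeline
  return taskDurationsMs.reduce((sum, d) => sum + blockingTime(d), 0);
}

// A 30 ms task contributes nothing; a 250 ms task contributes 200 ms;
// a 90 ms task contributes 40 ms:
// totalBlockingTime([30, 250, 90]) -> 240
```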
One way they can improve it is by optimizing, say, the hydration step of the app. That might knock 100 milliseconds off of that one task, and it will also knock 100 milliseconds off your total blocking time. So total blocking time doesn't measure FID in this case, but it does correlate with FID: if the main thread now looks a bit more open, then the probability that when I tap somewhere inside this main thread I'll have a good first input delay is higher, due to it being more free. You can find total blocking time surfaced inside Lighthouse, and it's one of the top metrics that you'll see there. But sometimes you might want to dig deeper. It might not just be your first-party code that's causing a slow total blocking time; it could also be third parties. So Lighthouse is set up to help you optimize for this with a Lighthouse diagnostic just for third parties. In this audit, you can see all the different third parties you've loaded from the different domains listed out here, the size of the network transfer, and you'll find the total blocking time contribution on the right-hand side. And you'll find that sometimes even small scripts that are relatively fast to transfer over the network can have a really large impact on your blocking time due to the work they execute on your main thread. Well, that's good. So you just mentioned that these metrics correlate. But I see sites with a good FID in the field, but a poor TBT when assessed by lab tools. What would be the reason for that? That's a great question and a very common one. I suspect that developers, and you, have seen reports kind of like this, where the top pulls data from the Chrome User Experience Report from the field and shows a green number indicating that you have a relatively good first input delay. And on the bottom, you open up Lighthouse.
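Beyond the Lighthouse audit, the Long Tasks API can hint at where blocking work comes from at runtime. Attribution is coarse, but the container fields can point to a third-party frame or script. This is a sketch; the helper name is illustrative.

```javascript
// Sketch: spotting long tasks at runtime and reading their (coarse) attribution
// to get a hint about first-party vs third-party sources.
function attributionSource(entry) {
  const attr = entry.attribution && entry.attribution[0];
  return (attr && (attr.containerSrc || attr.containerName)) || 'unknown';
}

if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // 'longtask' entries are >50 ms by definition
      console.log(`Long task: ${entry.duration} ms (source: ${attributionSource(entry)})`);
    }
  }).observe({type: 'longtask', buffered: true});
}
```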
And we just said that total blocking time is a great tool to assess FID, but here we see our total blocking time is marked red. How does that make any sense? So let's dive a little bit deeper into field data. Field data is a reflection of your actual users. So when we assess Core Web Vitals at the 75th percentile, it's checking if at least 75% of your actual user inputs fall into the fast bucket. The characteristics that make up this population can be very different from site to site. Lighthouse is a very general tool. It doesn't have access to your user base and its particular biases, and it might be emulating a target user that matches closer to a higher percentile, depending on what kind of site you've built. We also know that FID data has a very wide range. Some inputs could be as low as zero because the user just taps when the main thread is free, and sometimes an input could have a very high FID value. So the curve plotted out across all your different user inputs tends to be somewhat like this shape. What this means is that when you assess it at a lower percentile, it could actually diverge very far from something you might measure at the 95th or 99th percentile. When you start moving to the 75th percentile, it starts to predict with higher accuracy what that might be. But your tools might actually be assessing it at a higher percentile, because they're representing a different subset of your users. So we know from experiments that total blocking time and first input delay are correlated, but they might be impacting the curve in different ways. Nevertheless, what that correlation means is that an improvement in total blocking time will likely lead to an improvement in first input delay across the curve, as in this example here. So the key word here is probability. Ultimately, first input delay is reliant on many different field factors.
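The point about percentiles can be made concrete with a small calculation. On a wide FID distribution, the median, the 75th, and the 95th percentile can tell very different stories; the sample data here is made up for illustration.

```javascript
// Sketch: why the percentile you assess matters on a wide FID distribution.
// Nearest-rank percentile over a copy of the samples:
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Many instant inputs plus a long tail of slow ones (made-up sample data):
const fidSamples = [0, 0, 0, 2, 3, 5, 8, 40, 90, 400];
// percentile(fidSamples, 50) -> 3
// percentile(fidSamples, 75) -> 40
// percentile(fidSamples, 95) -> 400
```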
Consider an edge case where a page has a single call to action and it shows up very early in the page load. It's the only call to action, and it just happens to appear when the main thread is busiest. Most users will end up tapping at that moment, hitting that long main thread task; they don't wait for the page to fully settle. Now, a developer could come along and improve total blocking time in this scenario by cleaning up long tasks later in the page load. But in this case, the user is still tapping at the very beginning, hitting that same long task and getting the same delay. So this is one example where total blocking time did improve, but the likely place that a user taps the page happens to still be hitting a long task, thus leading to a bad first input delay. What you need to remember is that the key here is probability, again. When you improve total blocking time, it leads to a better chance that you improve your first input delay, but know that there are edge cases out in the field. Speaking of the field, next Damian will present a real-world case of how developers actually improved this metric. Thanks, Cheney. In this part of the talk, we'll review a real-world case of interactivity optimization. The case comes from Mercado Libre, the largest e-commerce and payments ecosystem in Latin America. Mercado Libre is a complex website developed by distributed teams with a mix of technologies. For that reason, implementing a performance strategy across the company can be a challenge. Despite this, Mercado Libre's front-end architecture team took on the job of monitoring speed throughout the site and applying performance optimizations when necessary. In this section, we'll focus on a particular optimization for one of the Core Web Vitals, first input delay. To start this journey, let's review how to monitor performance. Speed tools are divided into two major groups.
Laboratory testing tools are run in a testing environment and are critical during development. Mercado Libre used Chrome DevTools, Lighthouse, and WebPageTest while working in the lab. Real user monitoring tools collect data from the field, letting you understand how real users are experiencing your site. The Mercado Libre team combined the Chrome UX Report with other RUM tools to measure performance in the real world. When working on performance, the first step is to have a plan that allows you to identify issues, iterate on them, and analyze the results. As Cheney mentioned earlier, unlike the other Core Web Vitals, first input delay is a field-only metric. When working in the lab, you can use total blocking time as a proxy metric for first input delay. A tool that can be of great help when optimizing TBT locally is Chrome DevTools. The Performance tab lets you easily visualize long tasks, which are those that take more than 50 milliseconds and are flagged with a red triangle. In the lower left corner, the tool shows the total blocking time for that trace. After making changes locally and deploying new versions of the site, you can use tools like Lighthouse and WebPageTest to simulate how the site would load under certain conditions. Finally, you can measure the impact in the real world by querying the Chrome UX Report. There are different ways of obtaining and visualizing this data. The CrUX Dashboard, for example, lets you see how the different Core Web Vitals have evolved over time. The CrUX API is another way to dig into the data and integrate it with your own tools and solutions. For example, Mercado Libre used the CrUX API to build a tool that let them easily compare their URLs against competing sites and create a ranking based on that information. But one of the CrUX integrations that the Mercado Libre team found most useful was the Search Console Core Web Vitals report.
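A minimal sketch of querying the CrUX API for an origin's field FID distribution is shown below. The API key is a placeholder, the origin is illustrative, and the response handling assumes the documented records:queryRecord shape, where each metric carries a histogram of density buckets.

```javascript
// Sketch: pulling field FID data for an origin from the CrUX API.
const CRUX_ENDPOINT = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord';
const CRUX_API_KEY = 'YOUR_API_KEY'; // placeholder, not a real key

function fastFidFraction(record) {
  // The first histogram bin of first_input_delay covers the "good" (fast) range
  return record.metrics.first_input_delay.histogram[0].density;
}

if (typeof window !== 'undefined' && 'fetch' in window) {
  fetch(`${CRUX_ENDPOINT}?key=${CRUX_API_KEY}`, {
    method: 'POST',
    body: JSON.stringify({origin: 'https://example.com'}), // illustrative origin
  })
    .then((res) => res.json())
    .then(({record}) => {
      console.log('Share of fast FID inputs:', fastFidFraction(record));
    });
}
```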
The team started to receive Search Console warnings alerting them that product detail pages were having a poor first input delay. This helped the team understand which part of the site they should focus their optimization efforts on. After receiving this information, the next step was measuring long tasks in Mercado Libre's product detail pages. The team started by running Lighthouse on a sample of product detail pages, and they found that the only metric in red was max potential first input delay, with a value of 1.7 seconds. This metric represents the duration of the longest task. Take into account that at the moment when Mercado Libre applied these optimizations, they were using Lighthouse 5.2. In the latest version of Lighthouse, version six, the metric to use in cases like this is the one that Cheney covered at the beginning of this talk, total blocking time. To dig into this metric, the next step was running simulations on real devices and connection types. Mercado Libre is present in 18 countries; its main markets are Mexico, Brazil, and Argentina. With all these options, they needed to decide which country to work on first. The team picked Mexico to iterate on their solution, as WebPageTest offers a wide variety of devices to test from nearby locations. So to simulate the experience of users in Mexico, they decided to use the following profile. For the location, they picked Dulles, Virginia, a relatively close city with a wide variety of real devices in WebPageTest. For the connection type, they picked 4G. And for the device, they chose a Moto G4, a relatively low-end phone which can easily reproduce performance bottlenecks around interactivity. This is how the main thread looked for product detail pages. As can be seen, there was a long-running task occupying the main thread for two consecutive seconds. This explained the long values for max potential first input delay.
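A profile like this can also be driven through the WebPageTest REST API. The API key and the exact location/device label below are assumptions for illustration; check the WebPageTest documentation for the identifiers available to your account.

```javascript
// Sketch: building a WebPageTest run URL similar to the profile described.
const WPT_ENDPOINT = 'https://www.webpagetest.org/runtest.php';

function buildRunUrl(apiKey, pageUrl, location) {
  const params = new URLSearchParams({
    k: apiKey,    // API key (placeholder)
    url: pageUrl, // page to test
    location,     // e.g. a Dulles, Virginia Moto G4 profile on 4G (label assumed)
    f: 'json',    // machine-readable response
  });
  return `${WPT_ENDPOINT}?${params}`;
}

// Hypothetical usage:
// buildRunUrl('MY_KEY', 'https://example.com/product', 'Dulles_MotoG4');
```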
Analyzing the corresponding waterfall, they found that a considerable part of those two seconds came from two files: their tracking module, which is used not only on product detail pages but throughout the whole website, and the main bundle JS file, which was 950 kilobytes and took a long time to parse, compile, and execute. Based on the information obtained, Mercado Libre set the goal of optimizing the two modules that were running expensive code. Product detail pages allow users to perform complex interactions, so the challenge was optimizing these files without interfering with valuable functionality. They started by optimizing the performance of the internal tracking module. The module contained a CPU-heavy task that wasn't critical for it to work, and therefore could be safely removed. This led to a 2% reduction in JavaScript for the whole website. After that, they started to work on the main bundle. Mercado Libre used webpack-bundle-analyzer to detect opportunities for optimization. For example, initially they were requiring the full Lodash module. This was replaced with per-method requires to load only a subset of Lodash instead of the whole library. They also used the Lodash webpack plugin to shrink the library even further. After that, they applied the following Babel optimizations. They used the transform runtime plugin to reuse Babel's helpers throughout the code and reduce the size of the bundle considerably. Then they applied the search-and-replace plugin to replace tokens at build time in order to remove a large configuration file inside the main bundle. Finally, they used an additional plugin to save some extra bytes by removing the prop types. As a result of all these optimizations, the bundle size was reduced by approximately 16%. The changes lowered Mercado Libre's consecutive long task from two seconds to one second. Running Lighthouse again showed a 57% reduction in max potential first input delay.
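The Lodash and Babel changes described above look roughly like the following configuration excerpts. The file layout and exact plugin list are assumptions based on the commonly used packages, not Mercado Libre's actual build files.

```javascript
// Before: pulls the entire Lodash library into the bundle
// const _ = require('lodash');
// After: per-method require loads only what is actually used
const debounce = require('lodash/debounce');

// webpack.config.js excerpt (assumed layout): lodash-webpack-plugin strips
// unused Lodash features at build time
const LodashModuleReplacementPlugin = require('lodash-webpack-plugin');
module.exports = {
  plugins: [new LodashModuleReplacementPlugin()],
};

// .babelrc excerpt (assumed): reuse Babel helpers across modules and drop
// propTypes in production builds
// {
//   "plugins": [
//     "@babel/plugin-transform-runtime",
//     "transform-react-remove-prop-types"
//   ]
// }
```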
But one second of consecutive JavaScript was still too long, so the team set the goal of optimizing this metric even more. Digging into the main thread, they identified which parts of the code were producing long tasks. Since product detail pages were built with React, they found some inspiration in the guides and codelabs at web.dev/react. Here are some of the optimizations they made. First, continue reducing the main bundle size to optimize compile and parse time, for example by removing duplicate dependencies across the different modules. Second, apply code splitting at the component level to divide the JavaScript into smaller chunks and allow for smarter loading of the different components. Finally, defer component hydration to allow for smarter use of the main thread; this technique is commonly referred to as partial hydration. The new trace showed even smaller chunks of JS execution. This gave the browser more time to process user inputs, leading to a more responsive user interface. Running Lighthouse once again, they found that the max potential FID time was reduced by an additional 60%. But the true goal of these optimizations was to improve the experience for real users. Mercado Libre collected their own real user data to measure Core Web Vitals. This is a report obtained from New Relic showing how FID improved on product detail pages. The control group, in yellow, shows first input delay without any optimizations. The experiment group, in purple, shows a much lower first input delay after the changes were made. Every 28 days, the CrUX report presents new data from real users. Here we can see Mercado Libre's first input delay progress between January and April 2020. Before the optimization project, 82% of users were perceiving FID as fast. At the end of the journey, this number went up to more than 91%. This means that 9% more users perceived this metric as fast.
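The general idea behind these optimizations — breaking one long task into smaller chunks and deferring the non-critical ones so the main thread can respond to input in between — can be sketched generically. splitIntoChunks and runChunked are illustrative names, not Mercado Libre's actual code.

```javascript
// Sketch: splitting work into chunks and yielding the main thread between them.
function splitIntoChunks(items, chunkSize) {
  const chunks = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize));
  }
  return chunks;
}

function runChunked(tasks, chunkSize, schedule) {
  // In a browser, schedule could be requestIdleCallback; setTimeout is a
  // portable fallback that still yields the main thread between chunks
  schedule = schedule || ((cb) => setTimeout(cb, 0));
  for (const chunk of splitIntoChunks(tasks, chunkSize)) {
    schedule(() => chunk.forEach((task) => task()));
  }
}
```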
At the beginning of this section, we said that Mercado Libre was receiving warnings from Search Console about the performance of product detail pages. After fixing these issues, they stopped receiving those warnings. Let's do a quick recap of Mercado Libre's case. At the beginning of the year, the team set the goal of optimizing interactivity on product detail pages. They combined laboratory and real user monitoring tools and used an incremental approach to apply optimizations. As a result, they achieved a 90% reduction in max potential first input delay in Lighthouse and a 9% increase in users perceiving first input delay as fast in CrUX. But performance work is never finished. Mercado Libre believes that the speed of their site is a crucial aspect of their user experience, so they are constantly monitoring and applying optimizations across all the Core Web Vitals. I hope you have enjoyed this talk. Thanks for watching.