Hey there, welcome to web.dev Live. I'm Dion Almaer, I work on the web developer ecosystem at Google, and I'm delighted to kick off our online event. First, though, I want to acknowledge the times we're in. We're dealing with a global pandemic that has taken a huge toll on us all. And most recently, we've witnessed events which have once again surfaced the systemic racism in our society that we must do everything we can to eradicate. These events have been really humbling. They're showing us how much work we have ahead of us, but they also show us the power of community. So we join you today and over the next three days in the spirit of being together and helping each other. We were upset when we had to cancel Google I/O, and I kept thinking about an empty Shoreline Amphitheatre on the days that some of us would have congregated. Web developers reached out sharing those same feelings, wishing we could be discussing ideas and enjoying the hallway track. Whether you're joining us from your couch, kitchen, or hammock, we hope you're safe and ready to kick web.dev Live into gear.

Now, we'll be coming to you in different time zones each day, reaching you no matter where you are on the globe. We'll be bringing you content from across our teams as well as from members of the web community at large. Each day you'll have Googlers on standby to answer your questions in real time, so as you're watching the sessions, simply head over to the live chat on web.dev/live or on YouTube and ask away.

When coronavirus became global, we really felt the need to stabilize. This resulted in us pausing Chrome releases and temporarily rolling back the SameSite cookie changes. We also wanted to track Chrome usage and see what changed, to make sure that we could be on top of any ecosystem changes too. You probably won't be surprised that we saw surges in usage of media APIs as video chat and streaming soared. Some types of content saw large traffic surges as well, such as food, commerce, entertainment, health, and science, and many developers were focusing on making sure these sites were as resilient as possible. That's when we gathered our best practices and made them available on web.dev/covid19. We saw a lot of developers scramble to make changes to their websites, and many created new ones. Governments had to jump on this to make sure that people had all of the critical information that was changing rapidly. I remember seeing Alex Russell tweet about one of these government sites from the state of California. We were really inspired by their work and wanted to ask them about their experience, and they kindly agreed to join us. So let's welcome Aaron Hans, the engineering tech lead on the project.

Thank you. Great to be here.

So Aaron, I'm really curious how this site all came together.

The Alpha.ca.gov team was formed in December of 2019 by Angelica Quirarte to bring human-centered design processes to the state of California and improve its online services. We built a lot of prototypes for things like helping people review the safety of their tap water, see if they're eligible for subsidized phone services, and prepare for wildfires. And then when the pandemic hit the state, we were asked to stand up the public response site.

Got it. So when a government team has to build something like this, how do you go about it? What are your core principles?

The number one goal is to make something that works well for everybody.
The technical considerations are passing accessibility audits, making sure it works with keyboard navigation and with screen readers, and that it has a smooth experience on low-end hardware. We use the cheapest phone we can get from the local Cricket Wireless as our test device. The non-technical considerations are readability, what grade level all the content is written at, and whether we're really building something that users need, iterating based on their feedback.

Got it. Now, I've been trying to picture the time pressure that you had here to get this site out. Can you tell us a little bit about how you actually built the site and how you managed the trade-offs between quality and that timeline?

Sure. It was definitely an accelerated timeline. We put the site up in four days, then the governor announced a statewide lockdown and we had millions of visitors. We're really happy that we chose a static site generator, because it helped us weather the traffic smoothly. We chose Eleventy as the static site generator, and we augment that with web components and serverless APIs built on Node.js.

Got it. We're actually fans of Eleventy too. We use it on web.dev and really like it. Was this a new setup for the team, or have you been building websites like this for a while?

I remember reading the article about how web.dev was built and being really happy that we were using some of the same tools. We started using Eleventy at the end of last year for a blog on news.alpha.ca.gov, and when we used it for the COVID-19 site, it's built on content authored in the WordPress environment. We consume that with the WordPress API and use GitHub Actions for the Eleventy production build.
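For readers curious what that kind of setup looks like, here is a rough sketch, not the team's actual code, of an Eleventy data file that pulls pages from a WordPress REST API at build time; the endpoint and file name are made up:

```js
// _data/covidPages.js — hedged sketch of an Eleventy global data file that
// fetches content from a WordPress REST API during the build.
const fetch = require('node-fetch');

module.exports = async function () {
  // Hypothetical CMS endpoint; not the real covid19.ca.gov backend.
  const res = await fetch('https://cms.example.gov/wp-json/wp/v2/pages?per_page=100');
  if (!res.ok) {
    throw new Error(`WordPress API request failed: ${res.status}`);
  }
  // Whatever is returned here becomes available to templates as `covidPages`.
  return res.json();
};
```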
Got it. Now, you've got the site out there. I'm curious, what's next for you and the team?

The next things for the team are continuing to respond to the pandemic. We'll be getting back to helping improve other online services, and we're hiring. Check us out at news.alpha.ca.gov if you want to help out. Since we're talking about Eleventy and the other tools that we're using, I wanted to mention that Lighthouse is an incredibly important tool for us, because performance is such a paramount concern, and I love the way that it gamifies development. You can get the rest of your teammates to challenge each other and say, who can put up some more points today? We really need to get that score up.

Nice. I'm curious who's winning the points race, and I'm really impressed by how you think of performance as a key part of the accessibility story in general. Well, Aaron, it's been really inspiring to see the work that you and the team did here, again, at an incredibly stressful time. Thank you so much for coming on and sharing the story with us.

Thank you.

Now, it's been great to see developers like Aaron focus on accessibility, resilience, and performance. We've made some announcements over the last month about a program that brings this all together under the umbrella of Web Vitals. To hear more, let's welcome Elizabeth, a PM on the Chrome team, to explain.

Thanks, Dion. Yeah, there have been a lot of product updates and releases, and I'm really excited to go over them with you.

Yeah, it's been particularly busy here over the past couple of months, so it'd be great to have you get us up to speed on Web Vitals and what developers should really be considering here.

Great. Let's dive in. First off, what are Core Web Vitals? They are a set of user-centric metrics and thresholds that apply to all web pages, across all industry verticals and all types of experiences on the web. They are signals to developers and business stakeholders about the basic health of your site, and as such, they should be measured by everybody. But okay, I jumped straight into definitions. Let's take a step back. Why did we introduce Core Web Vitals as a thing? There are already tons of metrics and lots of guidance about how to measure your site's performance. How do Core Web Vitals help us?

Well, let's go back to our foundational goal. We want to create outlandishly phenomenal experiences for all of our users. And it's not just out of the goodness of our hearts, either. We know that every time we have a rage clicker on our site, we lose out on a reader, a customer, or a client. Also, we want a money pug. So there is this mythical, absolutely fabulous experience that we've set our sights on creating. It seems easy, until you realize that the unicorn horn requires both loading and interactivity performance measurement, and the rainbow, well, the rainbow requires an entire RUM setup for each color. So there you are, watching your flying unicorn dachshund, and you realize that you have this: it's gorged on a bit too much JavaScript, it doesn't respond when you're issuing it commands, and that's upsetting. And it's going to take quite a bit to get from this to this. So the question is, where do you start?

Well, in order to know if you've improved, we need to know what to measure. To know what to measure, we need to define our goals. Put another way, what makes a web experience shine? This is where the core dimensions of quality come in. These are foundational elements of a user experience that make a unicorn dachshund shine above the competition. Content needs to load quickly. We've all been there: the longer we have to wait, the more likely we are to bail. So your pages have to load fast. Interactivity is just as important. You're clicking and nothing is happening? No fun. You don't just need content to be visible, you need it to be available for use. Lastly, we want a page to be stable and predictable. Just a few pixels moving around can make a huge difference. These core dimensions of quality reflect user-centric signals that have long been mission critical for you and your site's success.

So we are closer to defining quality, but how do we measure these quality dimensions? That's where representative metrics come in. To represent fast loading, we have Largest Contentful Paint, or LCP. It provides insight into how quickly a user is able to see the meat of what they are expecting and wanting out of a page. For responsive interactivity, we have First Input Delay, or FID. This metric has been a critical signal for developers for some time to understand how long a page takes to respond to a user's initial input. And finally, to represent visual stability, we have Cumulative Layout Shift, CLS. CLS measures the amount that the elements within the viewport move around during load time.

Okay, so we know how to measure our core quality dimensions. Let's say my LCP is three seconds. Do I celebrate? Wait, I don't actually have any idea whether or not that's good. So I need to evaluate my performance on a spectrum for each metric, which is where the final element of Core Web Vitals comes in: our thresholds. For each representative metric, we have clear goalposts around what constitutes a good experience, one that needs improvement, and one that's poor.
So, for instance, for LCP, anything that is 2.5 seconds or less is on its way to being a unicorn dachshund. Anything between 2.5 and 4 seconds needs some work, and anything above 4 seconds needs quite a bit of love. So, to finish up our definition of what Core Web Vitals are: the initiative is a combination of three things. First is user-centric quality dimensions, then we have representative metrics for those dimensions, and finally, thresholds to help you evaluate whether or not your performance is good against any given metric.

But there is one more piece of really important information. We need to know how many page loads need to hit the thresholds for the Core Web Vitals metrics to constitute a good experience. So say we have 100 users. If only one of them has an LCP below 2.5 seconds, do I pass Core Web Vitals? The answer is no. Core Web Vitals uses the 75th percentile value of all page views in the field to evaluate against the thresholds. In other words, if at least 75% of page views to a site meet the good threshold, the site is classified as having good performance for that metric. And this applies to all three metrics, LCP, FID, and CLS: the 75th percentile is used to evaluate all three.

Core Web Vitals are a holistic package of everything you need to create the foundation of a healthy site. They are valuable because they show you exactly where to start to set yourself up for success. If 75% of your users are getting fast, interactive, stable content, it's cause for celebration. But as we know, there are other dimensions of quality that are extremely important: accessibility, security, mobile friendliness. There are a lot of dimensions that make a basic unicorn dachshund even more fabulous and are important to your site's success. So don't stop measuring these if you already are. And if you aren't already, once you've optimized your Core Web Vitals, you can begin venturing into measuring and benchmarking against other important vitals that are relevant to your business and your users. Core Web Vitals are just as the name indicates: they are core, and they provide you with a solid foundation upon which to further optimize.

Given how important it is to quantify a user's experience accurately in order to be successful on the web, we are constantly working to find ways to better measure all quality dimensions. What this evolution has often meant in the past is a stream of new metrics, tweaks to existing metrics, and new guidance, many times at an unpredictable cadence. We know how difficult this can be when trying to set goals, align roadmaps, and get organizational buy-in. Because of this, we want to set a predictable cadence of updates to Core Web Vitals. They will be refreshed once a year, around the time of Google I/O, to ensure that they reflect the latest in our learnings, and this includes adjustments to the set of metrics as well as the thresholds. Looking ahead towards 2021, we will be providing regular updates on future metric candidates, motivations, and implementation status.

Okay, so this is all fine and good, but how do I get started? To know what to optimize, you have to measure first. And Core Web Vitals are now in all of your favorite developer tools, and there are more than what is listed here, including a new web-vitals library and a bunch of ecosystem tools that have already adopted them. As you can see, Core Web Vitals are available across the board. You're able to measure them for a specific page, for your origin, locally in the lab, and from real users in the field.
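As a rough illustration of that 75th-percentile evaluation (an editorial sketch, not an official tool), here is how you might classify field LCP samples against the thresholds above:

```js
// Sketch: evaluate a metric the way Core Web Vitals does, using the 75th
// percentile of field samples against the good / needs improvement / poor
// thresholds. The sample values are made up.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.max(0, Math.ceil((p / 100) * sorted.length) - 1)];
}

function classifyLCP(samplesMs) {
  const p75 = percentile(samplesMs, 75);
  if (p75 <= 2500) return 'good';
  if (p75 <= 4000) return 'needs improvement';
  return 'poor';
}

// Five hypothetical page views: the 75th percentile here is 3200 ms.
console.log(classifyLCP([1800, 2100, 2400, 3200, 5200])); // "needs improvement"
```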
Remember that First Input Delay is only measurable in the field, so you have to have a real user clicking on your page in order to measure it. But that doesn't mean you can't use lab tools to help you improve it. Total Blocking Time, TBT, is a proxy lab metric for FID that allows you to debug and improve your interactivity in the lab before your users ever have to experience a bad FID.

The next obvious question is, again, this is great, but where do I start? What tool should I use? I'm so glad you asked. Each tool has its own strengths. For example, PSI is one of the only places you can see your lab and field data in one place, and Search Console is critical for identifying page types that need improvement. As I mentioned earlier, we're seeing so many great ecosystem players and production monitoring solutions already implementing support for Core Web Vitals, and we're really delighted. But again, you ask: you've shown me the magical unicorn dachshund, and now you've given me a palette of tools to choose from. That's amazing, but tell me what to do first. Okay, two things. First, go to PageSpeed Insights. That will give you a pulse on your Core Web Vitals performance in both the field and the lab. From CrUX, you'll be able to see whether or not 75% of your loads are hitting the Core Web Vitals thresholds for both your page and your origin in the field. Then you can take a look at your lab data from Lighthouse to see whether or not you are hitting the Core Web Vitals thresholds for each metric in a synthetic testing environment. This helps guide you towards actionable opportunities to improve your page's performance. Second, check out some more in-depth talks later today that go into detail about measuring and optimizing against your Core Web Vitals. And with that, I'm going to pass it back to Dion. Thank you so much.

Great. Yeah, thanks for showing us the context and all of the information across the whole slew of tools there, Elizabeth.

My pleasure.

One of the critical steps in modern web development with a lot of influence over your vitals is your build step. That's where your CSS modules are turned into real CSS, your bundler analyzes your module graph, and optimizations can really kick in. We wanted to go deeper here to understand the popular bundlers, how they work, what they can and cannot do, and how to set them up for success. Let's welcome Surma to tell us more. Hey, Surma.

Hey, Dion.

So there are many best practices to follow in web development. Knowing them is one thing, but getting your build system to follow them as well is kind of another beast. So do you have anything to report on that front?

There are two sides to this. On the one hand, there are many developers who want to know what build tool they should learn and use for their next project, and on the other hand, there are many projects that already have a build tool set up but are looking to improve their output. To tackle both of these problems, we built Tooling Report. Tooling Report is a website that you can go to right now, at tooling.report. We created an extensive list of best practices in web development, took what we think are the four most popular build tools, and checked, for each build tool, whether it allows you to follow that best practice. Each tool gets a point for each test that it passes. We chose to start this project with Browserify, Parcel, Rollup, and Webpack.
Now, Browserify might be surprising to some, but the data indicates that there are still many sites out there that use Browserify, and we want to help those projects improve their sites as well. Of course, we have been working with the core teams of all these tools to make sure that we not only use each tool correctly, but also represent them fairly. The tests are subdivided into categories, and in the overview you can get a quick sense of which tool is excelling at which category. You can get more information on a test from the overview and learn more about it. And now this is where I think Tooling Report gets really interesting: each test has a dedicated page, where you can compare how the tools score on that specific test. There is an in-depth explanation of why the test is important and how it relates to best practices in web development. We also explain how we codified the best practice and what the expected outcome is. And finally, at the bottom, you can find an explanation for each tool and why it passed or why it might have failed the test. If a tool is not passing a test, we also link to bug reports on the tool's issue tracker; many of them we actually filed ourselves while building Tooling Report. We also link to a minimal npm project that we use to determine the tool's behavior. This way, Tooling Report not only tells you what a tool can and cannot do, but you can also look at the configuration files and plugins to see how you can follow a best practice with that tool. So the site double-functions as a source of documentation. The entire site is open source on GitHub, and we'd love the community to help us come up with more tests and add more tools over time. So you can check this out now at tooling.report.

Thanks for joining me, Surma.

Cheers.

Now, we're all becoming more aware of the importance of security and privacy. Chrome believes in an open web that's respectful of users' privacy and maintains the key use cases that keep the web working for everyone. I'd love to welcome Rowan to have a chat and share some of what's new here.

Hey there. Thanks, Dion. My name's Rowan, and I look after web DevRel for security, privacy, payments, and identity, or SPI for short. Now, while that's a cute internal name, we are part of the wider trust and safety team within Chrome.

Great. So why don't we start with SameSite cookies and the temporary rollback that kicked into gear for us when COVID really started to hit globally. Can you share what the latest news is there?

Sure, yeah. So hopefully, as a lot of you are aware, there's an update to the cookie standard that's being adopted across Chrome, Firefox, Edge, and others to restrict cookies to first-party by default, along with requiring that cookies intended for third-party contexts be explicitly marked. That's all configured via the SameSite attribute, hence, SameSite cookies.
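For reference, explicitly marking cookies looks roughly like this; a minimal Node.js sketch with illustrative cookie names:

```js
// Hedged sketch (no framework): a first-party session cookie with the new
// default stated explicitly, and a cookie a third-party context genuinely
// needs, which must be SameSite=None; Secure. Cookie names are made up.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Set-Cookie', [
    // First-party usage: Lax is the new default behavior, written out here.
    'sessionid=abc123; Path=/; HttpOnly; Secure; SameSite=Lax',
    // Needed cross-site (e.g. an embedded widget): must opt in and be Secure.
    'widget_prefs=dark; Path=/; Secure; SameSite=None',
  ]);
  res.end('ok');
}).listen(8080);
```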
We were rolling this out to stable Chrome, but decided to reverse it at the start of April, because the COVID situation saw a huge jump in demand for online services, but also a huge shift in developers being at home without their equipment or looking after their families. We made the call that it was important to prioritize stability at that moment. Now, these changes are intended to make the web a safer place, protecting against cross-site request forgery and trying to minimize the surface for covert tracking. Sadly, during a crisis, when people are most vulnerable, you see those kinds of scams and attacks jump too. So with the Chrome 84 stable release, which is mid-July, or about two weeks from now if you're watching the stream, we are going to start rolling this out again across all Chrome versions.

Got it. So what I'm hearing here is that if you haven't tested your site yet, if you haven't made changes to make sure that everything works well, now is actually the time to get going.

Absolutely. We have documentation, examples, and samples out there right now for SameSite on web.dev, as well as on chromium.org, and we'll be covering implementation and debugging in our segment on day three.

Okay, so we all love cookies, but I'm assuming there are going to be a few more things that we'll be talking about under the general banner of trust and safety.

I'll be honest with you, I am going to talk about cookies a lot, but the rest of the team does have a healthier range of interests.

Got it. Okay, sounds good. So we're going to cover things like, you know, back in 2018, Spectre raised its head, and we as a web community started to really look at what we can do to help make sure that our users are secure. Are those the types of aspects that we'll be covering too?

Sure, yeah. So Eiji is going to be taking us through some of the new cross-origin opener and embedder policies, or COOP and COEP for short. Like you were saying, Spectre was a vulnerability that, in a super short summary, meant that malicious code running in one browser process might be able to read any data associated with that process, even if it's from a different origin. And that is super bad. Now, one of the mitigations for that is site isolation, or putting each site into a separate process. Eiji is going to be running through how these headers allow sites to opt into that, along with a bunch of other benefits that it brings as well.
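For reference, opting a page in comes down to two response headers; a minimal sketch (the server code around them is purely illustrative):

```js
// Hedged sketch: serving a page with the cross-origin isolation headers
// that the COOP/COEP segment will cover in more depth.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
  res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
  res.setHeader('Content-Type', 'text/html');
  res.end('<h1>Cross-origin isolated page</h1>');
}).listen(8080);
```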
Got it. Got it. Okay, so we've got restricting cross-site cookies, and then we've got isolating sites into individual processes. We've got this interesting evolution. So I'm sensing there's a bit of a theme here.

Yeah, there is definitely a theme. We've also got Sam and Maud on the team, and they're going to kick off our little segment to explain the link between these. It really comes down to the fact that the web today is seeing an evolution of expectations regarding privacy. That includes users expecting more transparency and control over their online data, and new regulations impacting how data can be used and collected. Now, at Google, we believe in an open web that's respectful of the user's privacy, whilst also maintaining a healthy ecosystem. So under the banner of the Privacy Sandbox, we're introducing a number of standards proposals that aim to support the use cases that let people make their living off creating web content, but do that in a way that better respects user privacy. We're also actively seeking feedback on these proposals. We're participating in all the open forums in the W3C to discuss our proposals, as well as those submitted by other parties too.

Okay, so the web's evolving, we're getting new privacy-preserving APIs coming in, and we're getting rid of the old cross-site data-leaking APIs, so they're kind of moving out.

Exactly. And one of the ways I like to think about it with our team is that we're all about the places where you create relationships on the web. People should feel in control of their data when they browse around the web, with a clear choice about what they share and where they share it. And when they do want to create a relationship, like signing into a site or making a purchase, that should be simple, secure, and only share what's needed.

Awesome. Thanks so much for the brain dump on what we're thinking about here in the realm of trust and safety, Rowan, and I'm really excited to see the content that's coming later on the stream from the team, where we can go into more of a deep dive.

Cool, thanks, and I'll see you around.

Now, the web has a great history as a content platform, with its roots in hyperlinked documents, but digital content has grown richer and richer, and we think the web has a great role to play here too. I'd like to invite Paul Bakaus to talk about a new content type that we're really excited about, called Web Stories. Hey, Paul.

Hey, Dion.

So what are these Web Stories, and why are we working on them?

My team and I have been hard at work on Web Stories, and I'm very excited to share some updates with you. And yes, I'm talking about these kinds of stories: full-screen portrait, tap to advance, swipe to move on. And if you're like, wait a second, aren't you a little late to the show? Then you'd be right. But these are not your standard walled-garden stories. Current implementations focus on ephemerality and an ultra-low barrier to creation, but our bet is that the Stories format works beyond the ephemeral use case and can become its own pillar in the open web media landscape. That's because they're cheaper to make than video and more engaging than a text article. And, really important: Web Stories are different from walled-off stories in several ways. Just like a regular webpage, you own them, you host them, and, very important, you get the money from the ads, not the platform serving the stories. Because stories are really a visual format, my friends at Google Search and Discover are showcasing them in really cool ways, and they tell me that many more integrations are coming later this year. We think these can be a great net new traffic source for web creators.

These stories look visually really compelling, but how hard is it actually to create them?

If we want the web to be able to compete with the closed platforms out there, story creation needs to be intuitive and fast for all content creators. Now, lots of people are working on making Web Stories a thing, but one of the things my own team is doing is bringing story creation to WordPress, the most used CMS in the world, in the form of a visual editor coming to you very soon. Find out more about the beta at goo.gle-story-editor. You'll hopefully see all the basic editing features you would expect, like smooth image and video handling, text controls, shape masking, and so on. But we're also working on some you might not expect, like this one we call text magic, running in real time against images from the Unsplash API here. The goal of this feature is that the editor always ensures text is readable, making dynamic decisions about the background, line height, and so on. I hope you like it as much as I do.

Yeah, it looks really cool. I can't wait to read some of these stories on the web. Thanks so much for sharing there, Paul. And thanks again to Paul and everyone who took the time to join me as we kick off the event today. I'm really excited about the upcoming sessions, starting with a focus on how to make your website hit its vitals and drive discovery through search. Now, please enjoy the show.
Remember, the whole team is here to chat with you on web.dev/live and via YouTube. I will see you there today, and I'll be back tomorrow morning for the day two kickoff.

Hello again, everybody. For those of you who don't know me yet, my name is Elizabeth Sweeney and I'm a product manager on the web platform team in Chrome. I'm excited to talk with you all today about the latest and greatest in our speed tooling. I'll be sharing some updates on how we think about measuring user experience, including metrics updates and our new Core Web Vitals initiative, as well as making sure that you're up to speed on all of the newest features, products, and updates to our developer tooling as far as speed measurement is concerned. So let's dive in.

While I know we've heard it before, it is worth reiterating why metrics change. Ultimately, it's because our understanding of how to best measure user experience evolves over time, as we learn more and work through technical hurdles. We need to make sure that our metrics and tooling are updated to reflect the latest in our learnings. Fundamentally, we view it as mission critical to give you the most accurate and effective mechanisms by which to optimize your site's experience and help you achieve your goals. And that doesn't just mean for one of your users, or a few. We want to make sure that as many users as possible, regardless of what network they are on or what hardware they're using, are in the bucket of users that want to come back to your site again and again.

And that brings us to the impetus behind Core Web Vitals. We have long been espousing performance and user experience quality because we believe that good site performance leads to better outcomes for users, businesses, developers, and for the web in general. The Core Web Vitals initiative aims to bring together a more cohesive picture of web performance, so that there is a better shared understanding of what should be prioritized first.

Let's take a moment to review the metrics themselves. Largest Contentful Paint, LCP, is a measurement of perceived loading experience. It marks the point during page load when the primary, or largest, content element has loaded and is visible to the user within the viewport. It's an important complement to First Contentful Paint, FCP, which only captures the very beginning of the loading experience. LCP provides a signal about how quickly a user is actually able to see the content of the page. To provide a good user experience, sites should strive to have Largest Contentful Paint occur within the first 2.5 seconds of the page starting to load. To ensure you're hitting this target for most of your users, a good threshold to measure is the 75th percentile of page loads, segmented across mobile and desktop devices.

First Input Delay, FID, measures the time from when a user first interacts with the page, so clicking on something, tapping a button, that kind of thing, to the time when the browser is actually able to respond to that interaction. To provide a good user experience for FID, sites should strive to have a First Input Delay of less than 100 milliseconds. To ensure you're hitting this target for most of your users, a good threshold to measure, again, is the 75th percentile of page loads.
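If you want to capture FID from real users yourself, a rough sketch with PerformanceObserver looks like this; in practice the web-vitals library (shown a bit further on) wraps this and handles the edge cases:

```js
// Hedged sketch: log the delay between the user's first interaction and the
// moment the browser could start handling it (the essence of FID).
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    const fid = entry.processingStart - entry.startTime;
    console.log('FID:', Math.round(fid), 'ms, triggered by', entry.name);
  }
}).observe({ type: 'first-input', buffered: true });
```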
Given that FID can only be measured in the field with real users, we want to make sure that you have a way to locally debug and optimize FID in the lab. That's where Total Blocking Time, TBT, comes in. TBT quantifies load responsiveness, measuring the total amount of time when the main thread is blocked long enough to prevent input responsiveness; specifically, TBT measures the total blocking time between First Contentful Paint and Time to Interactive. So, in short, you should definitely make sure that you're leveraging the signals you're getting from TBT in the lab to optimize for FID in the field.

Cumulative Layout Shift, CLS, is a measurement of visual stability. It quantifies how much a page's content visually shifts around. A low CLS score is a signal to developers that their users aren't experiencing undue content shifts. A CLS score below 0.1 is considered good. CLS in a lab environment is measured through the end of a page load, whereas in the field, you can measure CLS up to the first user interaction or including all user input.

So, that was a quick overview, but it's important to remember that our goal is to have the vast majority of our users served with fast, interactive, stable experiences. To that end, Core Web Vitals uses the 75th percentile value of all page views in the field to evaluate against these thresholds. In other words, if at least 75% of page views to a site meet the good threshold, then the site is classified as having good performance for that metric. And this applies to all three of the Core Web Vitals, LCP, FID, and CLS: the 75th percentile is used to evaluate all of them.

As I mentioned before, our ability to measure user experience quality is always improving. We expect to update Core Web Vitals on an annual basis and to provide regular updates on future candidates, motivation, and implementation status. Looking ahead toward 2021, the Core Web Vitals will be refreshed to ensure that they reflect the latest in our learnings, and this includes adjustments to the set of metrics as well as the thresholds.

Let's do a quick refresher on the value of combining both lab and field signals to diagnose, optimize, and monitor your site's performance. Lab data, which is synthetically collected in a testing environment, is critical for tracking down bugs and diagnosing issues, because it is reproducible and has an immediate feedback loop. Field data allows you to understand what real-world users are experiencing, under conditions that are impossible to simulate in the lab. The real world's messy: there are permutations of devices, network configurations, cache conditions, and the list is long. Either set of metrics taken in isolation isn't nearly as powerful as when they're combined. And that's why we try to provide you with ample coverage for both lab and field tools. We have the tools that focus on providing you with information about what real users are experiencing, the field tools, such as the Chrome User Experience Report, Search Console, and the new Web Vitals extension. And then we have our lab tools as well, providing you with mechanisms to see what needs improvement before a user ever even sees your page and giving you a reproducible environment to debug and optimize; those are tools like Chrome DevTools and Lighthouse. PageSpeed Insights is a great place to start to get a pulse on your Core Web Vitals performance in both the field and the lab, because it leverages CrUX and Lighthouse under the hood. Given that the Core Web Vitals initiative aims to help folks know what should be prioritized first, we wanted to make sure you had full support and tooling coverage for LCP, FID, and CLS.
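For field collection, the web-vitals JavaScript library is the simplest route; here is a small sketch roughly following its documented usage, with a placeholder /analytics endpoint:

```js
// Hedged sketch: report the three Core Web Vitals from real users.
// The /analytics endpoint is a stand-in for whatever RUM collector you use.
import { getCLS, getFID, getLCP } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon survives the page unloading; fall back to fetch if unavailable.
  (navigator.sendBeacon && navigator.sendBeacon('/analytics', body)) ||
    fetch('/analytics', { method: 'POST', body, keepalive: true });
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
```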
Core Web Vitals are now in all of your favorite developer tools, and there are more than what is even listed here, including a new web-vitals library and a bunch of ecosystem tools that have already adopted them. You're able to measure your Core Web Vitals for a specific page, for your origin, locally in the lab, and from real users in the field. And as I mentioned before, Total Blocking Time, TBT, is a proxy metric for FID that allows you to debug and improve your interactivity in the lab, which is why it's listed here in the FID column.

Before we go over all of the latest updates in each tool, I wanted to make sure that you had all of our tools mapped into a workflow for Core Web Vitals. Which tools do what? Where do I go first? As I said before, a good place to start to get a general pulse is PageSpeed Insights, but all of our tools have a really critical role to play. Using Search Console allows you to see across your entire site and identify which types of pages need improvement. Then you can diagnose and optimize locally with Lighthouse and Chrome DevTools, which have some really new capabilities, by the way, that I'm excited to share with you in a moment. And then you can prevent regressions with Lighthouse CI and create a custom dashboard to monitor your site with CrUX. Along the entire journey, you can turn to web.dev for guidance.

All right, let's get into the tool updates themselves. Lighthouse just announced v6 last month, which has new metrics, including Core Web Vitals, new audits, and a new performance score. Let's start with the updates to the perf score. At a high level, we want to make sure that you can get a sense of your loading performance, interactivity, and layout predictability. The metrics, and the weights of those metrics, that formulate the top-level score are intended to give you a balanced view of your user experience against critical dimensions of quality. While three new metrics have been added, the Core Web Vitals metrics, three old ones have been removed: First Meaningful Paint, First CPU Idle, and Max Potential FID. These removals are due to considerations like metric variability, as well as simply having newer metrics that offer better reflections of the part of the user experience that we're trying to measure. There are also improvements to the weights based on user feedback. For instance, the reduction of Time to Interactive's weight in the final scoring calculation is in direct response to user feedback about variability, and about inconsistencies between metric optimizations and actual improvements to the user experience. However, TTI is still a valuable signal for understanding when a page is fully interactive, and that's why we keep it. TBT serves as a nice complement to TTI, so that together you're able to more effectively optimize for user interactivity. There's also a super nifty scoring calculator to help you explore the performance score. The calculator gives you a comparison between v5 and v6 scores as well; it's not shown here, but it's in the tool. And when you run an audit with Lighthouse 6.0, the report comes with a link to the calculator with your results pre-populated, so I highly recommend you check it out.

Lighthouse v6 also offers quite a few new audits, with a focus on JavaScript analysis and accessibility.
You can now easily trace how much unused code is being shipped with your application, and there are audits to check that screen readers and other assistive technologies have all of the information they need about the behavior and purpose of the controls on your web page to serve users well. All of the products that Lighthouse powers are updated to reflect the latest version, including Lighthouse CI, which now enables you to easily measure your Core Web Vitals on pull requests before they're merged and deployed.

PageSpeed Insights, PSI, reports on the lab and field performance of a page on both mobile and desktop devices. The tool provides an overview of how real-world users are experiencing the page, powered by CrUX, and a set of actionable recommendations on how a site owner can improve the page experience, provided by Lighthouse. PageSpeed Insights and the PSI API have also been upgraded to use Lighthouse 6.0 under the hood and now support measuring Core Web Vitals in both the lab and field sections of the report. The Core Web Vitals are annotated with the blue ribbon that you see here. From the CrUX data set, you'll be able to see whether or not 75% of your loads are hitting the Core Web Vitals thresholds for each metric in the field, for both your page and your origin. Then you can take a look at your lab data from Lighthouse to see whether or not you are hitting the Core Web Vitals thresholds for each metric in a synthetic testing environment. This helps guide you towards actionable opportunities to improve your page's performance.

Now, the new Core Web Vitals report in Search Console helps you to identify groups of pages across your site that require attention, and this is also based on real-world field data from CrUX. URL performance is grouped by status, metric type, and URL group, which is basically a group of similar web pages. The report is based on the three Core Web Vitals metrics, and it's a great way to identify pages on your site that need attention.

There are many, many cool new things in DevTools, but I'm going to focus on just two of them right now that are related to Core Web Vitals support. First is the ability to debug interaction readiness with Total Blocking Time in the footer. The Total Blocking Time, TBT, metric, again the proxy for First Input Delay, is now shown in the footer of the Chrome DevTools performance panel when you measure page performance. The performance panel also has a new Experience section that can help you detect unexpected layout shifts. This is helpful for finding and fixing visual instability issues on your page that contribute to Cumulative Layout Shift. You select a layout shift to view its details in the Summary tab, and to visualize where the shift itself occurred, hover over the "moved from" and "moved to" fields. For more information on everything that's new in DevTools, see the "What's new in DevTools (Chrome 84)" link that's here.

The Chrome UX Report, CrUX, is a public data set of real user experience data on millions of websites; we just hit over seven million, so that's awesome. It measures field versions of all of the Core Web Vitals, so even if you don't have RUM on your site, CrUX can provide a quick and easy way to assess your Core Web Vitals. The newly redesigned CrUX dashboard allows you to easily track an origin's performance over time, and now you can use it to monitor the distributions of all of your Core Web Vitals metrics; to get started with the dashboard, you can check out the tutorial on web.dev. We've also introduced a new Core Web Vitals landing page to make it even easier to see how your site is performing at a glance. There is also a new CrUX API for you to use, built from the ground up to provide developers with simple, fast, and comprehensive access to field-based experience data. Developers can query for an origin or a URL and segment results based on different form factors. The API updates daily and summarizes the previous 28 days' worth of data, including your Core Web Vitals performance. We're excited to integrate more features over time to enable new ways to explore the data and discover insights about the state of user experiences.
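As a sketch of what querying that API looks like (the key is a placeholder and the response handling is illustrative, following the public API documentation):

```js
// Hedged sketch: ask the CrUX API for an origin's field LCP data.
const body = {
  origin: 'https://web.dev',
  metrics: ['largest_contentful_paint'],
};

fetch('https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body),
})
  .then((res) => res.json())
  .then((data) => {
    const lcp = data.record.metrics.largest_contentful_paint;
    console.log('p75 LCP (ms):', lcp.percentiles.p75);
  });
```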
web.dev is your go-to place for guidance on web development, and it now also hosts the canonical page for information about Web Vitals. The web.dev measure tool allows you to measure the performance of your page over time, and it provides a prioritized list of guides and codelabs on how to improve. Its measurement is powered by PageSpeed Insights, which has Lighthouse 6.0 under the hood and fully supports the Core Web Vitals metrics, as you can see here.

There are also a slew of other amazing tools to help you with measuring, optimizing, and monitoring your Core Web Vitals. The Web Vitals extension measures the three Core Web Vitals metrics in real time for desktop Google Chrome. This is helpful for catching issues early on during your development workflow, and as a diagnostic tool to assess the performance of Core Web Vitals as you browse the web. The extension is now available to install from the Chrome Web Store. The web-vitals library is a tiny, modular library for measuring the Web Vitals metrics on real users, in a way that accurately matches how they're measured by Chrome and reported to other Google tools. The library supports all of the Core Web Vitals as well as other field vitals. Site Kit, Google's official WordPress plugin, allows you to get insights about how people find and use your site, and how to improve and monetize your content, directly in your WordPress dashboard. It has also just been updated to ensure that you know how you're performing against the Core Web Vitals. And as I mentioned earlier, we're so excited to have so many amazing ecosystem players and production monitoring solutions already implementing support for Core Web Vitals. Honestly, we're delighted, and thank you so much for your amazing work; it's really cool. This is a long list of links, but I'll make sure to tweet them as well so that you can click through them more easily. There are a bunch of goodies in here. And with that, I'm just going to give you a huge thank you. I really appreciate your time.
Hey folks, my name is Addy Osmani, and welcome to Optimizing for Core Web Vitals. Today we're going to talk about optimizing user experiences on the web, with a case study on French luxury fashion house Chloé. Chloé have recently been taking a fresh look at web performance, and I'm really excited to share their learnings with you. Now, you may have seen Google Search announce an upcoming search ranking change recently that incorporates page experience metrics. These metrics include the Core Web Vitals, which, together with a few other signals, paint a pretty holistic picture of the quality of user experiences on a page.

But what are the Core Web Vitals, and how do you go about optimizing for them? Core Web Vitals are a set of metrics related to speed, responsiveness, and visual stability. These three aspects of user experience are measured using three metrics. First of all, we have Largest Contentful Paint, which measures loading performance. Next up, we have First Input Delay, which measures interactivity. And last, we've got Cumulative Layout Shift, which measures layout stability.

Let's kick things off by talking about Cumulative Layout Shift, or CLS. CLS is a pretty important metric for measuring visual stability, because it helps quantify all those times when we see really surprising shifts in the content on a page; it helps make sure that the page is as delightful as possible. Have you ever been reading an article online when all of a sudden something changes on the page, and without warning the text moves and you've lost your place? That's literally what happens: a giant chicken kicks your content away, and he has no regrets. Look at him. He's basically CLS. So what causes poor CLS? First of all, we've got images without dimensions; ads, embeds, or iframes without dimensions; dynamically injected content; and web fonts that might cause a flash of unstyled content.

Now, as I mentioned, Chloé is a French luxury fashion house, and it's become a bit of a go-to brand, not just for luxury apparel but also handbags, fragrances, and things like that. They have recently been focused on improving Cumulative Layout Shift on all their main pages: their homepage, their product listings page, and their product details page. Through a bunch of work, they've been able to reduce their CLS all the way down to zero, which is about as perfect as you can get. So how did they get here? This is the before view of the Chloé homepage, where we can observe a number of surprising layout shifts due to elements on the page not following CLS best practices. So let's dive into a few tips that worked well here.

First off, always include width and height size attributes on your image and video elements. Alternatively, you can reserve the required space with CSS aspect ratio boxes. In general, this approach just makes sure that the browser can allocate the correct amount of space in the document while the image is loading.
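As a tiny illustrative sketch of that tip (the file name and container selector are made up):

```js
// Hedged sketch: give an image intrinsic width/height so the browser can
// reserve space (and derive an aspect ratio) before the bytes arrive.
const img = document.createElement('img');
img.src = '/images/dress.jpg';   // placeholder path
img.width = 640;                 // intrinsic size: 640×360 is a 16:9 image
img.height = 360;
img.alt = 'Product photo';
(document.querySelector('.product-card') || document.body).appendChild(img);
```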
So here's a demo of this in action. These are some images that don't have width and height specified, and what you see happening is that they're pushing content on the page all the way down. This is something that's reflected in our tools like Lighthouse, and I've got a little clip out here: you can see the Lighthouse report where CLS is in the red and not quite where we want it to be. So how do we address this?

Well, in the early days of the web, developers would add width and height attributes all over the place. They'd add them to their image tags to make sure that enough space was kept allocated on the page before browsers would start fetching images. That was great, because it would minimize reflow and relayout. When responsive web design was introduced, developers began to omit these width and height attributes and started to use CSS to resize their images instead. One of the downsides to this approach is that space could only be allocated for an image once it began to download and the browser could determine its dimensions. As images loaded in, in that old world, the page would reflow as each image appeared on the screen, and a lot of us got used to our text suddenly popping down the screen, which wasn't a very great user experience.

And this is where aspect ratio comes in. The aspect ratio of an image is the ratio of its width to its height. It's pretty common to see this expressed as two numbers separated by a colon, for example 16:9 or 4:3. For an x:y aspect ratio, the image is x units wide and y units high. What that means is that if we know one of the dimensions, the other one can be determined. So for a 16:9 aspect ratio, if dress.jpg has a 360px height, the width is 360 multiplied by 16 over 9, which gives us 640px. I'm not very good at math, so hopefully that was helpful.

Modern browsers now set the default aspect ratio of images based on an image's width and height attributes, so it's really valuable to set them if you want to avoid those layout shifts. This is a change in modern browsers, and it's all thanks to the CSS Working Group: they've done some work that basically allows us to just set width and height as normal, and the browser calculates an aspect ratio from those attributes before the image has loaded. What we're seeing on screen here is something that's added to the default stylesheet of all browsers, and it calculates aspect ratio based on the element's width and height attributes. So as long as you're providing width and height, the aspect ratio can be calculated, and you'll hopefully avoid layout shifts. This is a great best practice to be following. It also works well with responsive images: with srcset, you're generally defining images that you want to allow the browser to select between, and you can define sizes for those images to make sure that your image width and height attributes can be set. Just make sure that each image is using the same aspect ratio.

And here's that demo once again, with width and height attributes added. Notice that in a modern browser you won't see any layout shifts, and the user gets a much more pleasant experience. So, another reminder: set those width and height attributes as much as you can. Here's the impact that this change has in Lighthouse. As we can see, we went from a CLS of 0.36, so we were in the red, all the way back to something that's a little bit better. There are one or two other things on this page that could have been improved, but on the whole we've had a relatively significant impact on reducing layout shift.

You might be wondering, how can I figure out which elements on my page are contributing to CLS? We've got you covered. In Lighthouse, we have an "avoid large layout shifts" audit that highlights the top DOM elements contributing the most CLS to the page, so check out that audit. We also have a good story here if you're using the DevTools performance panel: it has an Experience section that can help you detect unexpected layout shifts, which is super helpful for finding and fixing visual instability issues. They get highlighted in this Experience section with some reddish-pinkish layout shift records, and if you click on one of those records you'll get more details: what was the score, where did this element move to and from. Really great diagnostics to help you nail down how to fix your CLS.
So, Chloé's approach to image loading is that they use a skeleton pattern, with a Sass CSS mixin called bruschetta loading. Bruschetta is one of those things that are a little bit of a luxury to me during quarantine; it's right up there with toilet paper and antibacterial soap. But let's stick with bruschetta loading. So this is Chloé's approach to image loading: they have a parent container with a color similar to the final image that's being loaded. Lazy loading strategies like this, where you show a little preview of what's finally going to be displayed, are sometimes referred to as low-quality image placeholders. You can use a predominant color from the final image, or you can use a low-resolution image; sometimes people will use a one-pixel-by-one-pixel image, or ten-by-ten, something very low resolution that just gives you a preview of what's finally going to be displayed. Now, lazy loading strategies like this, which either use a color or that kind of placeholder, don't strictly improve Largest Contentful Paint, but they do improve perceived performance, so they can still be pretty good for the user experience. What Chloé did here, in addition to this skeleton loading approach, was to use responsive images and to make sure that they're setting dimensions on their images as well, to avoid CLS.

Let's go on to the next tip: reserve enough space for any of your dynamic content, things like ads or promos. Ideally you want to make sure that you are giving any of that content a container that it is not going to just bounce out of and suddenly cause shifts in the page. A related tip is to avoid inserting new content above existing content, unless it's in reaction to a user interaction. You want to make sure that any layout shifts on your page are ones that you are making a conscious decision about, and that occur as expected.

So let's try to visualize this. Here's an example of a promo that we're dynamically injecting into the page; we haven't reserved space, and it's just pushed everything all the way down. We can see this reflected in our Lighthouse callout at the bottom of the screen. This is something that very typically happens with ads, iframes, and promos, and these types of assets can sometimes be the largest contributors to layout shifts on the web. Many ad networks and publishers support dynamic ad sizes, and dynamic ad sizes can sometimes increase revenue, because you're giving people a lot of flexibility around what can go inside your ad slots, but they can also negatively impact the user experience by pushing things down. So that's something you want to avoid.

So how do we approach this? Well, one solution to the problem is statically reserving space for the slot. You can make sure that you're defining a container for these ads or embed frames so that, regardless of what goes inside, you're not shifting the content of the page around. So here I've got a container where I've set my width and my height, I've set a background color, but I've also set it to overflow hidden, just in case anything dynamic is a little bit taller than the container; I still don't want it to be able to break out of it. Ideally the content fits inside of our container, like our iframes or whatever else we might inject in there. And if you're somebody that has lots of dynamic content that gets injected into your page, you can take a look at your data, look at the medians or the 95th-percentile widths and heights for this dynamic content, and size your container accordingly. That'll just mean that you have the best chance of still being able to present that content to users without negatively impacting the rest of the user experience.
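Here is a minimal sketch of that reserved-slot pattern (the size, class name, and /api/promo endpoint are all illustrative, not Chloé's code):

```js
// Hedged sketch: statically reserve space for dynamically injected content so
// its arrival can't shift the rest of the page.
const slot = document.createElement('div');
slot.className = 'promo-slot';
// Width, fixed height, a placeholder background, and overflow hidden so
// oversized content can't break out of the reserved box.
slot.style.cssText = 'width:100%;height:120px;background:#f4f4f4;overflow:hidden;';
document.body.prepend(slot);

fetch('/api/promo')            // hypothetical endpoint
  .then((res) => res.text())
  .then((html) => {
    // Content renders inside the already-reserved box, so nothing moves.
    slot.innerHTML = html;
  });
```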
So here's what it looks like with my pattern in place: I've reserved enough space, and that content pops in, but there are no layout shifts on the page, so I'm really happy about that. Slightly better is my baseline for everything in life at the moment. This is the Lighthouse 6.0 impact: we can see that we reduced our layout shifts from 0.24 all the way down to about zero. I'm going to give myself that "about zero"; it's in the green, so that's great.

So let's talk about a production example of something like this on Chloé. Chloé had a promotion banner for shipping at the top of their product listings page, and you'll see this "free standard shipping" promotion listed at the very top. But this wasn't always there. There was a time when this product listings page had a CLS of 0.4, which is really not great, because of two things: the first was the way they approached their dynamic promo banner, and the second was the way they approached filters.

Let's talk about the banner first. This banner used to be positioned inline, underneath the main page header, and as you can see here, it looks kind of harmless. But what's the impact of having a dynamically sized banner on the user experience? We have a video here; let's take a look. As we can see, once the content is fetched and rendered for this banner, it pushes the content for the rest of the page all the way down, and that's not ideal. So how did Chloé go about fixing this? Well, they reserved space for this banner. The content for the banner was also coming from a client-side request, so those messages were causing a pretty visual layout shift a few seconds into page load. They moved this API call straight to the server, and they made sure to reserve enough space for the banner with a simple height setting. As part of this work they moved the position of the banner up a little bit, but altogether, moving more work to the server (always a good idea) and making sure to reserve space made a bit of a difference. So here's the after view, where we can see the impact on their product listings pages after these changes were made. It's a lot less shifty, so I'm happy about that.

So, we talked about their promo banner. The other big CLS issue for product listings pages was that Chloé had a filters widget for filtering products. This would rehydrate to become dynamic once it booted up, and so on the client it was pending XHR calls for data and waiting on session state based on filter choices in order to finally render on the screen. So this is what that basically looked like: we'd wait for content to be sent down for the filter widget, we'd wait for hydration, and it would still push content on the screen all the way down. What they ended up doing was adapting this widget to contain more of the information needed to render the filter widget server side. Because they rendered it with better defaults, this helped avoid those layout shifts.
It can be helpful as you're building your sites locally, or just browsing the web and wanting a sense of the performance of the sites you check out regularly. Here's what things looked like after the rehydration fix for filters: CLS reduced by a decent amount between the before and after. It was another case of paying attention to the little things in your pages that might, in aggregate, be pushing lots of content down — every little CLS fix helps. And here's the overall impact of these changes on desktop: the above-the-fold content is relatively stable and offers a much better user experience on the whole. This is also reflected in Lighthouse — got to give Lighthouse a shout-out — where Cumulative Layout Shift is in the green; we've hit zero, so it's in a really solid place. To improve CLS, Chloé acted on a number of different things, not just one: they reserved space for their promo content, they set width and height dimensions on their images and adopted a skeleton pattern to improve perceived performance, they reserved space for their promo banner before its response arrived, and they reserved space for the dynamic filters component, along with a few other optimizations to help with rendering. On the whole, it was definitely worth it.

All right, I have a big surprise for you: we've got more metrics to talk about. I put a lot of work into that slide. Historically it's been a challenge for web developers to measure just how quickly the main content of a web page loads and becomes visible to users. Thankfully we now have metrics like Largest Contentful Paint, which reports the render time of the largest content element visible within the viewport. You might be wondering what causes a poor LCP — there are lots of things. Slow server response times are a big one: this could be your back-end infrastructure, unoptimized database queries, or API responses that take a while to resolve. It could be render-blocking JavaScript and CSS. Slow resource load times are another big one; you could have unoptimized images slowing down your LCP. And then there's client-side rendering: there's a whole class of problems where those of us who love working in JavaScript with modern libraries, frameworks, and bundlers can end up with requests for assets — hero images in particular — hidden behind JavaScript fetches. The browser first has to fetch your JavaScript, then parse and process it, and only then fetch your image, and that whole process can take so long that you delay showing meaningful content to the user. Those are the kinds of things to keep an eye on, and there are plenty of tools that can help diagnose these issues.
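If you want to see for yourself which element the browser is currently treating as the LCP candidate, a small sketch along these lines — using the standard PerformanceObserver API, not any Chloé-specific code — can log it to the console:

```js
// Log each LCP candidate as the page loads; the last entry before the user
// first interacts (or the page is backgrounded) is the one that counts.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate at', entry.startTime, 'ms:', entry.element);
  }
}).observe({type: 'largest-contentful-paint', buffered: true});
```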
So let's take a look at some real-world production challenges around LCP and how to work around them. Chloé started off with an LCP of about 10 or 11 seconds; in this view we can see that their primary hero image content wasn't being fetched and rendered until about 11 seconds into the trace. Their homepage suffered from a few different things: heavy full-screen image downloads, poorly optimized images, and some images requested late in the network chain. These are very common issues — there's nothing here they were doing crazily wrong — but it's useful to be aware of what impacts LCP. The elements considered for LCP are image elements, image elements inside an SVG element, video elements, and block-level elements containing text nodes. Let's talk about images first, because they're pretty often a cause of poor LCP. For many sites, images are the largest element in view when the page is finished loading, especially as UX patterns have shifted towards using more hero images, so it's very important to optimize your images — especially anything visible within the initial viewport. There are a few techniques you can use here. Consider not having an image in the first place: if it's not that relevant, maybe remove it. Compress your images — there are plenty of image optimization tools out there. Consider converting them to more efficient modern formats. Use responsive images. And consider using an image CDN; I'm seeing an increasing number of sites leverage image CDNs just to get the ability to tweak parameters in an image URL and change what format gets served down or what quality is used. Using an image CDN can be a really good way of staying on top of modern best practices, because even those of us who are web enthusiasts sometimes have a hard time keeping up with everything happening in the image optimization world.

Now you might be wondering: how can I identify which element is my LCP? Thankfully we've got some solutions here. In DevTools, in the Performance panel, if you record a trace and go to Timings, you should find a record for LCP. Click on that record and the summary pane shows things like the size of the image and, more importantly, the related node; hover over that related node and it highlights what in your page was considered the LCP. I personally find this really valuable as a stepping stone to deciding where I should spend my optimization time, so check that out if you use the Performance panel. We also try to capture this in Lighthouse: it has a Largest Contentful Paint element audit where we try to highlight the responsible element, so if you use Lighthouse, check that out too.

Back to Chloé. Chloé discovered that they were delivering very high-resolution images, even for retina screens — and there's a cutoff point where, if you're serving 2x or 3x images, the human eye isn't going to perceive a large difference, so you get diminishing returns from serving very, very high-resolution images. In this case we're in DevTools, in the Elements panel, looking at a specific image, and we can see that the maximum width of images being served down is 1920 pixels, which is pretty large. One of the things Chloé decided to do was change things up here: they resized their images to be no more than two times the image's viewport size, removing srcset sizes over 828w to keep a maximum image size they were comfortable with. That ended up being perfectly fine on retina devices as well, so it was a nice trade-off: delivering rich imagery without negatively impacting the user experience.
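To make that concrete, here's a minimal sketch of the idea of capping the largest srcset candidate at roughly 2x the display size; the file names, widths, and the 828w cap are illustrative, not Chloé's actual markup.

```html
<!-- No candidate wider than ~2x the largest display size, plus explicit
     width/height so the browser can reserve space and avoid layout shifts. -->
<img
  src="/img/hero-828.jpg"
  srcset="/img/hero-414.jpg 414w,
          /img/hero-828.jpg 828w"
  sizes="(max-width: 414px) 100vw, 414px"
  width="828" height="466"
  alt="Product hero">
```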
By doing this work, an iPhone X or Pixel 2 XL that was previously seeing anywhere up to 245 kilobytes of image bytes being downloaded was reduced down to 125 kilobytes. That's huge — a 51% decrease in image bytes served, with no noticeable difference. So optimize your images, people.

The next thing we'll talk about is some of the other image optimizations they performed. On the product listings page, Chloé used image lazy loading, which is a relatively popular pattern. What they discovered was that there were four primary images being loaded above the fold, but one off-screen image seemed to be tripping up their lazy-loading heuristics and was still being fetched. This particular image happened to be 248 kilobytes — over 200 kilobytes — and it was negatively impacting the user experience, so they wanted to improve it. On the whole, Chloé did a number of things: they brought their above-the-fold image download size all the way down to 14.5 kilobytes, tuned their lazy-loading heuristics so off-screen images like the one I just mentioned were no longer a problem, adopted an image CDN, adopted WebP by default, and improved their image resizing strategy. The result, beyond a nice Lighthouse report with lots of green, is that each product page now weighs 57% less than it did before, which is a really nice outcome from optimizing your images. Taking a step back, here's what the homepage LCP looked like after these changes: previously those hero images weren't rendering until about 11 seconds in; now LCP happens at about four seconds into the process and completes just a few seconds later, with the request for our LCP-related node — the hero image — starting about 1.3 seconds in. On the whole this is really great; there's still work they could do here, but this is fantastic to see.

Let's switch to our next tip: defer any non-critical JavaScript and CSS to speed up loading the main content of your page. This guidance isn't new — it's been around for a few years — but for anyone who's not familiar with it, here's a quick recap. Before a browser can render any content, it needs to parse the HTML markup into a DOM tree, and the parser has to pause if it encounters external stylesheets or synchronous scripts. Scripts and stylesheets can both be render-blocking resources, which can delay your First Contentful Paint and, consequently, your Largest Contentful Paint as well. So we tell people to defer any non-critical scripts and stylesheets to speed up load. Let's take another look at Chloé's product listings page — this is a trace independent of their image optimizations — and Lighthouse highlights that there are a few render-blocking stylesheets delaying early paints. You can see this manifested in just how much white there is in the filmstrip. One approach to this problem is inlining your critical CSS and deferring the load of non-critical styles; we often call this technique critical CSS.
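As a rough sketch of what that tends to look like in the page — this is the common preload-and-swap approach with illustrative file names, not Chloé's actual build output:

```html
<head>
  <!-- Inline just the styles needed for above-the-fold content. -->
  <style>
    /* critical CSS extracted at build time goes here */
  </style>

  <!-- Load the rest without blocking render: preload it, then swap it to a
       stylesheet once it has downloaded. -->
  <link rel="preload" href="/css/non-critical.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/non-critical.css"></noscript>
</head>
```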
Critical CSS is all about extracting the CSS for above-the-fold content — ideally across a number of different breakpoints — so that you can render the above-the-fold content as quickly as possible, in the first few round trips, and then defer the load of the rest of the page's stylesheets, the styles for things below the fold, as soon as possible afterwards. So how did Chloé do this? They built some tooling: they implemented critical CSS in their Sass build process and constructed a syntax allowing their developers to specify, for each widget, which part of the CSS goes into the critical CSS — highlighted with the "critical" keyword you can see on screen right now. At build time they generate both the critical CSS and the non-critical CSS, so every single build is consistent with both. There are many ways to approach critical CSS — I've contributed to some tooling on this topic in the past — you can automate it, you can go very custom, and I see some teams that simply maintain a hand-curated critical CSS file. Regardless of the approach you take, the key is making sure you deliver the important content to the user as quickly as possible. We also talked about needing to load the other stylesheets for the page: what Chloé do is store references to their non-critical CSS stylesheets in an array and inject them with a deferred script, so they're hopefully not render-blocking but are still loaded with a relatively high priority that won't interfere with the HTML parser. What was the impact of optimizing their critical CSS? Pretty large: they brought their First Contentful Paint down from 2.1 seconds to about 1.1, and their LCP from 2.9 seconds to about 1.5. This is really great work — optimizing your critical CSS can be a bit of a time investment, but it makes sure your page gets styled as soon as possible.

Let's talk about another tip. I mentioned slow server response times when we discussed what impacts LCP. The longer it takes a browser to receive content from the server, the longer it takes to render anything on the screen; the faster a server can respond, the better every single page load metric gets, including LCP. You might be wondering how to tell if you have a slow server response time. Lighthouse has you covered with an audit called "Reduce initial server response time"; if you see it, it's a good hint to spend more time diagnosing the causes, which, as I mentioned earlier, can be plenty of things on your back end. When we're trying to optimize server response times, there's plenty we can do — optimizing DNS, preconnects, all of those things — but there are also things we can do to optimize loading priority, and this is where techniques like link rel=preload and server push come into play. If you're new to server push, here's a quick summary: to improve latency, HTTP/2 introduced server push, which allows a server to push resources to the browser before they're explicitly requested. You and I as developers — and everyone else watching, you're all awesome too — often know what the most important resources on a page are, so we can start pushing those as soon as the server responds to the initial request. This allows the server to fully utilize an otherwise idle network to improve page load times.
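For a concrete picture, here's roughly what the preload hint looks like, both as markup and as a response header; the stylesheet path is just an example. As discussed below, many HTTP/2 servers and CDNs will also treat the header form as an instruction to push that resource.

```html
<!-- As a resource hint in the document: -->
<link rel="preload" href="/css/app.css" as="style">

<!-- Or as an HTTP response header on the page, which most HTTP/2
     implementations will also use as a push trigger:

     Link: </css/app.css>; rel=preload; as=style
-->
```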
Now, server push is not without its nuance. It's one of those optimizations where you need to be careful: it's possible to over-push. Server push is not HTTP-cache-aware, so I could push resources for a particular page, the user could come back to a related page, and the server would push those exact same resources again. The way to avoid that is to use either cookies or a service worker to track what's in the cache and avoid those refetches, but that does involve a little more work. In general, server push is an optimization that can have a big impact — just be aware of the nuance; it's not always as simple as turning it on. Chloé use automatic server push, an implementation provided by Akamai that uses data to decide when to push critical CSS, fonts, and scripts. If you're manually using server push yourself, you might end up writing a Link HTTP header like the one sketched above — that's actually the preload resource hint in action, which is a separate and distinct optimization from server push, but in reality most HTTP/2 implementations will push an asset specified in a Link header containing a preload hint, so you can use that syntax to enable server push for a page. So what was the impact of this optimization? Without server push, Chloé were finding in their lab tests that LCP was closer to four seconds; with it, it was closer to 2.5 seconds, which is a huge amount of impact. On screen at the moment we're verifying that using Lighthouse, but you can also tell whether individual requests were server-pushed using DevTools or WebPageTest's network waterfall view — both are very handy.

Now we're on to our very last metric, hooray. Chloé didn't optimize for First Input Delay, but I did want to cover it very quickly. First Input Delay measures the time from when a user first interacts with a page — the moment they click a button or tap some JavaScript-powered control — to the time the browser is actually able to respond to that interaction. There are many things that cause a poor First Input Delay: long tasks on the main thread, heavy JavaScript execution, large JavaScript bundles that delay how soon scripts can be processed by the browser, and render-blocking scripts. In general I'd strongly recommend using Lighthouse and DevTools, because they try to point out areas where you might have long tasks or heavy script execution. Very often the solution is to break up this work, serve what the user needs when they need it, and look for opportunities to minimize main-thread work as much as possible. Sometimes people contextualize this as shifting some of the logic to a web worker, but regardless of the path you take, the end goal is making sure the main thread isn't busy and that user interactions aren't delayed.
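As a small, generic sketch of what "breaking up the work" can mean in practice — just one common pattern, not anything from Chloé's codebase — instead of processing a large list in one long task, process it in chunks and yield back to the main thread between chunks so input events can be handled:

```js
// Process items in small chunks, yielding to the main thread between chunks
// so the browser can respond to user input in the gaps.
async function processInChunks(items, processItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(processItem);
    // Yield: let pending input events and rendering run before continuing.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```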
So we're almost at the end of our journey with Chloé. We can take a look at Chloé's overall Web Vitals in the lab: thanks to their investments in performance and user experience, they were able to reduce their Cumulative Layout Shift down to zero and their LCP by almost half. This is mind-blowingly awesome — really, really cool. As you've seen, all of this work is the culmination of a number of smaller optimizations that, added together, make a pretty significant impact on the end user experience. And we don't have to look only at lab data; we can look at the field as well. Here is Chrome User Experience Report data for Chloé, and as we can see, the Core Web Vitals metrics for LCP and CLS are trending in the right direction — CLS went from 0.85 down to zero in the latest dataset. On the whole it's tremendous work, it's really great to see, and I know Chloé are happy to keep building on it in the future. If you're interested in building dashboards like this for your own team to measure the Core Web Vitals, check out the Chrome User Experience Report dashboard — a great solution that lets you drop in a URL and very quickly get field data and distributions for the different Core Web Vitals. It also summarizes the metrics, so if you share the report with other people on your team, they can get some familiarity with the Core Web Vitals too. We also recently shipped a new Chrome User Experience Report API — the CrUX API — which is great for programmatically building out your own dashboards, very similar to what we were just looking at, so check that out too. And that's it. I hope you found this talk useful — go and optimize your Web Vitals. There are plenty of docs over on web.dev that cover the methodology, the tools, and the best practices you can use to get fast and stay fast. My name is Addy Osmani; I hope this has been useful. Thank you.

Hi everyone, thanks for joining me. My name is Rick Viscomi, and I'm an engineer and developer advocate on web transparency projects at Google, including the Chrome User Experience Report, or CrUX for short. As you may know, CrUX is a powerful dataset containing insights about how real users experience the web. The dataset goes all the way back to late 2017 and includes data from over 18 million websites. This will be a somewhat advanced presentation, so if you want to brush up on the basics, you can visit the CrUX docs at bit.ly slash chrome ux report to learn about things like metrics, dimensions, best practices, and more. What I'll be sharing with you today are a few pro tips for mining the low-level dataset on BigQuery for insights about how users are experiencing the web. By now I'm sure you've heard of Core Web Vitals: they're the most important UX metrics we think you should be looking at in 2020, and the list includes LCP, FID, and CLS. CrUX supports all three of these metrics and has months of data across millions of websites, so let's head over to BigQuery and see what we can find. Here I'm querying the metrics summary table, which is a really quick and easy way to get high-level stats about a website's Core Web Vitals. You can see that we're extracting the percent of user experiences that meet the "good" thresholds for LCP, FID, and CLS, as well as each metric's 75th percentile. All of these stats are pre-computed for you, so you can spend more time finding insights and less time writing queries. This summary table is also much smaller and more efficient — it processes only about 100 megabytes — so you shouldn't have any concerns about exceeding your one terabyte of free monthly quota.
The raw data still exists if you need access to specific histogram bins, but almost everything you need is here in the materialized dataset. If you've ever queried the raw data, you'll know there are several useful dimensions you can drill down on, like month, device type, and country, so let's look at a few examples of doing that efficiently with the summary tables. The first thing we'll do is modify this query to see how the Core Web Vitals have changed in recent months. To do that, we change our WHERE clause to include all releases in 2020 by setting the condition to date greater than 2020-01-01, or January 2020. Next we include the year and month of the release in the SELECT clause so we can see it in the output; the difference between year/month and date is that the tables are partitioned by date, while the year and month correspond to the table names in the raw dataset. Finally, we sort the results chronologically and run the query. You can see from the results that web.dev has had a relatively stable and good user experience this year. But what if we want to break this down by desktop and phone experiences? For that, all we need to do is change over to the device summary table. We'll restrict the results to only desktop and phone — tablet is available, but it's less interesting — then add the device name to the SELECT clause and secondary-sort by it to keep the ordering of the results consistent. I'm going to run this query, but there's one thing I want to show you in the results: these percentages are out of all user experiences on the origin, not just the percent of desktop experiences or the percent of phone experiences, for boring technical reasons. So one last thing we need to do is normalize these distributions so that it doesn't matter that desktop is more popular than phone; to do that, we just divide the metric by the total. Now we have comparable results between devices, and we can see that desktop actually trends slightly better than phone. And finally, what if we want to break this down even further, by users' countries? For that we can change over to the country summary table. For demonstration purposes, let's restrict the results to two countries with very different experiences — Korea and Nigeria — and focus only on desktop. Now, we could write the country code to the results, but I wanted to show you one other cool trick: the CrUX dataset includes an experimental function to map country codes to full names. The last thing we'll do before running the query is sort by country rather than device. The results tell a really interesting story about the disparity in user experience by country, and BigQuery was able to analyze this in only a couple of seconds, using only about a gigabyte of data. So that's it — these are just a few quick examples of the power of the BigQuery dataset, and it doesn't have to be mysterious or expensive. I hope you start exploring the dataset and finding insights about the state of the web. You can find links to all the resources and queries we discussed in the description and comments of this YouTube video. If you have any questions at all, we have a whole support network set up for you: you can find me on Twitter at rick viscomi, and I also tweet from the Chrome UX Report account; we have announcement and discussion groups for important product updates and support; we have the CrUX Cookbook on GitHub, where you can find example queries for common problems; and finally we have CrUX office hours, where we can meet virtually and get your questions answered.
I hope you found this useful — please hit the thumbs up if you did. Thanks for watching, everyone.

Hi everyone, hope you're all staying safe. My name is Houssein Djirdeh, and I'm a developer advocate on the web team at Google. For this segment of web.dev Live we're going to talk about different ways to explore and analyze the JavaScript bundles on a web page. Analyzing bundles is a good first step to optimizing the amount of JavaScript shipped to the browser, which can improve page load times and directly result in better Largest Contentful Paint and First Input Delay. JavaScript bundling is a term commonly used to describe the approach many websites take of grouping multiple JavaScript files or modules into a single file, or bundle. Many tools that bundle JavaScript code for the browser also include a number of different optimization steps, such as minification and scope hoisting. This is a good thing, because code written across multiple files and modules can be combined into a single optimized bundle; but although it's useful from a developer and user experience standpoint, the process usually obfuscates JavaScript code to the point that it can't easily be read and analyzed without the help of additional tooling. Let's take a look at some examples to get a better idea.

If you're using Chrome, the Network panel in DevTools is the easiest way to look at all the JavaScript downloaded on a page. Open DevTools by pressing Control+Shift+J (or Command+Option+J on a Mac) and click the Network tab to open the Network panel. To take a look at all the network activity during page load, reload the page while DevTools is still open, click the JS button to filter requests by JavaScript, and click any URL to view the response body; the format button can make a minified file more readable. Notice how, with this simple static site, there's only a single JavaScript file, and although it's minified, it's easily human-readable. If we do the same for a site that bundles its JavaScript code, it gets much harder to understand exactly what lives in the bundle. This is an example of a site that bundles many third-party libraries and hundreds of first-party modules into just a few discrete bundles, so let's take a look at some ways to analyze this code.

The Coverage tab can show you how much JavaScript code is unused in any of your files or bundles, directly in DevTools. Open the command menu with Control+Shift+P (or Command+Shift+P on a Mac), type "coverage", and select the Show Coverage command. Click the reload button to reload the page while capturing coverage, and in the dropdown menu select JavaScript. In the table, the unused bytes field shows exactly how much JavaScript is unused in each file, and you can click any URL to see a line-by-line breakdown. Although the Coverage tab gives us a lens on how much code is being used on a page, it still isn't easy to identify which modules make up the bundle. There are other tools out there that make this possible. If you're already bundling code for your site, chances are you're using a module bundler like webpack or Rollup, and many of these bundlers provide either first-class or third-party tooling you can use to visualize and map your bundles. Let's go over an example. If you use webpack, you can generate a stats.json file that contains statistics about all of your bundled modules — a single CLI command emits the file. Reading this file yourself can give some information about what modules live in the bundle, but there are community-built libraries that can consume this file and display a more useful visualization. One such library is webpack-bundle-analyzer.
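A minimal sketch of hooking it up — here as a webpack plugin rather than the CLI, assuming a typical webpack.config.js; the exact setup will vary per project:

```js
// webpack.config.js (sketch)
// Alternatively, emit stats with `webpack --profile --json > stats.json`
// and feed that file to the analyzer CLI instead.
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  mode: 'production',
  // ...your existing entry/output/loader configuration...
  plugins: [
    // Opens an interactive treemap of the bundle after each build.
    new BundleAnalyzerPlugin(),
  ],
};
```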
It works by parsing the bundles generated by webpack and then mapping them to the module names in the stats.json file. By doing this it creates an interactive treemap visualization of an entire bundle, showing the size of each module as well as its relation to the others; gzipped and parsed sizes are also displayed to give you a better idea of how large each module really is. Bundler-specific visualization tools are great — they make it easier to see what makes up each of your bundles — but they are bundler-specific. For any site, regardless of whether a specific module bundler is used, source maps are a way to map original authored code to its transformed output. This is useful because it lets us keep obfuscating and transforming our code during the build process while still having a means to map it back to its original form. JavaScript files that have been transformed by minification or other bundling optimizations need to point to the location of their source map file, either with a sourceMappingURL comment or a SourceMap HTTP header. All newer browsers support source maps, and in Chrome you can enable them in DevTools by opening Settings and checking the "Enable JavaScript source maps" option. When Chrome detects that a source map is available, it shows a message, and we're able to open and debug the separate associated files as regular JavaScript files.

source-map-explorer is a library you can use to see a treemap visualization of a bundle. This visualization is an example of using source-map-explorer with a production build, and just by looking at it we can identify a few issues already: a few CommonJS modules here — Moment.js and Lodash — are larger than they need to be (if they were switched to ES modules they could be smaller and more optimized); there are duplicate copies of React; and code needed for multiple different routes all lives in this one bundle, when it could easily be lazy-loaded into separate bundles. These are all common issues that many sites run into, and we can spot them with a visualization tool like source-map-explorer. Other tooling you may already be familiar with is also starting to consume source maps in useful ways. Lighthouse, an open-source website auditing tool, is currently experimenting with source map support for some of its audits: with source maps, the unused JavaScript audit can show how much unused code — and how much potential saving — lives in bundled modules, and there's also a new legacy JavaScript audit being developed that uses source maps to show legacy code within the bundle, such as polyfills that new browsers don't need.

And there we have it — we just went over a number of different techniques to analyze bundled JavaScript code. To recap: the Network panel in DevTools is the easiest way to start seeing how much JavaScript is being downloaded; the Coverage tab can show you how much of that JavaScript is actually used; many module bundlers have supporting tooling that makes it easier to visualize bundles — if you use webpack, for example, you can emit a stats.json file and use webpack-bundle-analyzer; and consider enabling source maps on your site and using source-map-explorer to visualize your bundles. If you'd prefer not to emit source maps from production, you can set them up as part of your build process so that they're only generated during development.
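For example, a rough sketch of what enabling source maps and then exploring them might look like with webpack and source-map-explorer (paths and options are illustrative):

```js
// webpack.config.js (sketch)
module.exports = {
  mode: 'production',
  devtool: 'source-map', // emit .map files alongside the production bundles
  // ...rest of your configuration...
};

// Then, from the command line, point source-map-explorer at the output, e.g.:
//   npx source-map-explorer dist/*.js
```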
Lighthouse is also working on collecting source maps to display more useful audit recommendations; these changes will land in a future version, so keep an eye out. So, analyzing your bundles and limiting the amount of JavaScript on a web page reduces the amount of time the browser needs to spend parsing, compiling, and executing JavaScript. This speeds up how quickly the browser can begin to respond to any user interactions, improving First Input Delay, and results in a faster render, improving Largest Contentful Paint. Thanks for watching — I hope you found this screencast useful.

Hi everybody, I'm Paul Lewis. And I'm Philip Walton. So we thought what we'd do today is talk about the Core Web Vitals inside of DevTools. Now, I know about the DevTools side — in fact, I implemented some of the Core Web Vitals support inside DevTools — but Phil, you're more the person who knows about the actual metrics, where they came from, that kind of thing, right? That's right, I know a lot about the metrics; I work on the Chrome team with some of the people who helped define the metrics and standardize them in browsers, but I don't really know much about how they work in DevTools, so Paul, you're a great person for me to talk to. Let's dive in and see what we can find out. Okay, so I guess our plan is to have a bit of a conversation, diving in and out of DevTools, and just trying to explore, understand, and share what's going on there. The first ones we were discussing were LCP and FCP, so I guess the first thing to talk about is: what are they, and where do they come from? Yeah, well, these are both paint metrics. FCP is First Contentful Paint — it represents the first point in time that the browser is able to paint any content on the screen — and LCP is Largest Contentful Paint, which represents the largest single text node or image element on the page. The idea behind these two is that FCP represents the first time the user sees something, and LCP represents when the main content of the page has painted; in general, whatever the largest image or text node on the screen is, that's generally the thing the user is going to notice, so it roughly represents when the page has really loaded. So I guess for a lot of people, the first thing they'll think of for Largest Contentful Paint would be something like a hero element, right? Yeah, a big image at the top of the page, for example. Absolutely. Okay, but it's not always that, I'm guessing, because you could be deep-linking into some content further down the page. Yep, that's absolutely right. Okay, I'll tell you what we'll do then. I've got a page here — I've got web.dev open in the Performance panel inside DevTools — and I guess the goal is to show FCP and LCP in context. The page is in the performance section, about using image CDNs to optimize images; if you've not seen this content, it's definitely worth a look — it's a great article. And I'm going to see if I can deep-link into this section, so this, I guess, would become our hero image, right? And an interesting point to make here is that the hero image is not necessarily going to be above the fold —
like in this case, where you're loading a page scrolled halfway down, so LCP is only ever going to consider elements that are actually visible to the user on the screen. Right, great point. Now, this is what's going to make this a bit interesting. What I'm going to do is go to Fast 3G: in the Performance panel you can open the capture settings and change the network from Online to Fast 3G, so we're switching to a slowdown on the network — you see this little exclamation mark show up saying network throttling is enabled — and I'm actually going to slow down the CPU a little bit as well. And you're doing this so we can see things — to simulate maybe a lower-powered device, something like that? Correct, yeah. What I want is that, if I take a recording with things slowed down a little, it might be easier to see what's going on, because I happen to be somewhere in my house with a really good internet connection, so I don't see network latency as much as you would in other cases — say on a mobile device out and about. So let's just try this and see what happens. I'm going to hit record, hit Command+Shift+R to do a reload, and then stop, and we can discuss what we see. Let me just ramp this up here. The first thing to notice, I suppose, is the Timings row. To remind ourselves what these are: DOMContentLoaded — this has been around forever, hasn't it — then First Paint, First Contentful Paint, First Meaningful Paint (which we could talk about a little later, I suppose), Largest Contentful Paint — and you can see it's actually highlighted our screenshot here — and then the Load event. Now I can use the keys on the keyboard to come in a little closer and zoom in on this particular area of interest, and you can see here, I suppose, that the First Contentful Paint is happening and then the Largest Contentful Paint is happening slightly later. That's right. Now I think we can get a little more info about this, because First Contentful Paint is happening and then the Largest Contentful Paint, which implies to me that the image is coming in after the initial page content — we're painting something, and then we're painting the image after the fact. So let's see if we can do that with screenshots on, and we'll record again and see what we get. Okay, I'll stop there, and hopefully if I just leave this a little bit we might see — okay, so round about here — in fact, I wonder if I can just bring this in a little further, let me see if I can drag that down a little. Okay, that might be as clear as this is going to get. I'll tell you what we're going to do — we're going to make this a little bit clearer, because what's happening is we're actually seeing the page content from before I did the refresh and then slightly after. So what I can do is take this and go to about:blank — this is actually a really useful way to do this kind of testing, if you're ever curious: record from about:blank so that you start without anything on the page, which can make it easier to find your screenshots. So I'm going to paste in the URL here but not hit Enter yet, hit record, and now go there. Okay, hopefully that will make it a little easier to see what's going on.
Okay, so in the screenshots you can see we start from nothing, then we see the original page content — the top of the page — and then we go down to our deep link just below that. So my assumption is that if we bring our zoom in around here — in fact we can just do this — yes, you see we're right on this line where we go from nothing to something. Nothing to something is exactly the point where we start to see the First Contentful Paint coming in. Yeah, it's the first thing the user sees, but it's not the main thing they wanted to see when they loaded the page. Yeah — in fact it's saying that the Largest Contentful Paint at this point is actually this piece of text. Now let's try one more time, just to really dial it in: I'm going to go for Slow 3G, go to about:blank again, hit record, and see what happens — I feel like we're going to see something reasonable here. Let's process that profile. Okay, there we go; this, I think, is starting to make more sense. There we are: First Contentful Paint is here, and then, much later, there comes our image, slightly over to the right. So I can select that area and, based on the screenshots, roughly there I can see the First Contentful Paint, and then if I select later on in the screenshots, I can see that that's the Largest Contentful Paint, which is our image. Okay, that's nice — DevTools shows you exactly which element on the page is the Largest Contentful Paint. Absolutely. And I can't resist — I know we're going to talk about layout shifts next, but why not jump the gun a little: we actually have a layout shift showing up between First Contentful Paint and Largest Contentful Paint, and I think the reason is that, because we're going from no image to image, it's pushing the content down. That's right — I think we're seeing the page content move, so my guess is that if we go and find this image in the Elements panel, we're going to see that it doesn't actually have width and height attributes set. Yeah, it doesn't, and I think that's basically what's causing this to happen. We'll talk about layout shifts more in a second, but the reason this page is shifting is that we have an image here that loads asynchronously, essentially, and when it loads it pushes the rest of the page content down; if we added width and height attributes to this image, we wouldn't see that layout shift. That's a good general best practice to let everybody know about: always put width and height attributes on your images. That way the browser can allocate the space it needs to render them before it has actually finished loading the image, and you don't get that layout shift.
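In markup terms, that best practice is simply something like the following (the file name and dimensions are placeholders); modern browsers use the attributes to compute the aspect ratio and reserve the space even when CSS later makes the image responsive:

```html
<!-- Give the browser the intrinsic dimensions up front so it can reserve
     space before the image bytes arrive. -->
<img src="/img/hero.jpg" width="1200" height="675" alt="Hero image"
     style="max-width: 100%; height: auto;">
```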
Exactly. The other thing I think we should talk about, Phil, before we move on, is how to optimize for this particular situation. What would you suggest if somebody said, "I need to get First Contentful Paint and Largest Contentful Paint nearer the start — it's taking too long to get there, these numbers are too high"? Do you have a go-to list of things you'd tell them? Yeah — well, one thing is that ideally you never want to block painting on more than one network request: that initial network request that fetches the page content — you want to be able to paint at that point. If you have additional requests, like requests for fonts or stylesheets or other things, that prevent the browser from painting, that will just delay the time when that paint can happen. Sometimes, depending on the design you're working with, you don't have a choice, but in an ideal world you'd want to be able to paint right away. It looks like in this case, on web.dev, we are able to paint pretty quickly, which is why First Paint happens near the beginning, and then the browser loads this image and Largest Contentful Paint happens as soon as that image is loaded. Exactly — and I think what we're also seeing here is app.css, which is the main stylesheet, and the fonts as well. My guess is that they are render-blocking: you can see when I roll over them that the Network panel says "Highest", which is the priority that's been assigned to the CSS, and the reason, I guess, is that the CSS is going to block the render, which is what you were saying — that's why some people inline it. If we take a quick look in the head — let me search for "link rel" — there's the stylesheet: you can see a stylesheet for the fonts and, right below it, app.css. So this is a classic case of: here's a stylesheet, it's going to block render, because the browser — Chrome — is going to look at that and go, "well, I need to wait and see what the styles are before I render anything". Absolutely. So that can be something to take a look at, same with blocking JavaScript, right? Yeah, we see that one sometimes getting in the way too, and you can use something like the defer attribute there. For styles, you'll sometimes hear this referred to as critical CSS, where you identify just the CSS that's needed to lay out the page — not necessarily to style every component on your entire site — and inline just that CSS content in the head of your document, so that you're not blocking on an additional network request in order to paint something on the page. Exactly.
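On the blocking-JavaScript point, the basic fix is the markup-level one below — a generic sketch, not web.dev's actual tags — where scripts that aren't needed to render the initial content are marked `defer` (or `async`) so they stop blocking the HTML parser:

```html
<!-- A plain script tag blocks parsing and can delay the first paint: -->
<script src="/js/app.js"></script>

<!-- Marked as deferred, it downloads in parallel and runs after parsing,
     so it no longer stands between the user and the first paint: -->
<script src="/js/app.js" defer></script>
```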
Right, so that was FCP and LCP — as I say, you'll find those on the Timings track here in DevTools. Okay, so next up: layout shifts. We talked about this very briefly just now, with those two records down here, but where does it come from? What's the history of the layout shift, and "cumulative layout shift", as I think I've also heard it called? Yes, so the metric name is Cumulative Layout Shift, or CLS for short, and it's a metric that tries to capture the experience of visual stability on a page. Everyone's probably had the experience where you go to a website, you go to tap on a button or something, and right before you tap it, it shifts out from underneath you — a very frustrating experience. Even if you're not interacting with the page and you're just reading it, if some late-loading images pop in, some ads pop in, the content changes, and you lose your place as you're reading, it's just not the greatest experience from the user's point of view. So Cumulative Layout Shift is a metric that attempts to quantify that experience. There are a couple of pieces there. A layout shift is any time an element on the page changes its start position between one frame and the next. This happens in the case we just saw: an image loads in and pushes the text below it down — the layout shift wasn't on the image, it was on the text below the image, which on the previous frame had some x and y position and on the next frame was pushed lower, so its position changed. It's a bit tough to explain, but CLS is a measure of both how much of the page content moved and how far it moved. So if the entire page content shifted from being fully visible on the page to not visible at all, that would be a CLS of one; if that happened 20 times throughout the page lifecycle, that would be a CLS of 20; and if content moved half the screen distance and the content itself only filled half the screen, that would be roughly a 0.25 CLS. You can go read more about how CLS is calculated — it's a little too involved to explain now — but that gives you a sense: it's a measure of how much visual instability there is on the page. Okay. So, as we talked about before, we have this one layout shift here — and this is probably the better of the two recordings to demonstrate it — in the Experience track. If you don't see the Experience track in DevTools, it means we didn't detect any layout shifts in that particular recording; if it is there, you'll see it's populated with these kinds of records. You can click on one and it will take you to detailed information about CLS. What we try to do is give you a sense of the score and the cumulative score of what's going on, but we also try to highlight it for you: here you're going from an image that's 11 by 11 — we show this very small overlay on the left-hand side — to a much bigger 801 by 414. So that's one of the items I actually have to work on in this area, and you can see we have a few going on here, which are probably other images being shifted as we make our way through. Let me step back for a second and talk about why somebody would do this: typically you'll run Lighthouse on a page, or go to Search Console's new Core Web Vitals report or the Chrome User Experience Report, and you'll see that you have layout shifting happening on your page, and you might be wondering, "okay, but I don't see it when I visit my page — so where is this layout shifting happening?" DevTools is a great place to debug that: figure out which page on your site has layout shifting, load it up in DevTools under the throttling conditions Paul showed earlier, and then look at what DevTools is telling you is shifting, because that's how you figure out what's causing the layout shift, and then you know what you need to do to fix it. Yeah — and there's more I have to do here, to be clear. I think one of the things that's missing from this view —
which is actually available in the data, I just need to plumb it through — is which element we're talking about. I can show you that we've got these areas, but it does feel like we're missing a bit of information about exactly which element it is. Like we do with LCP, where we highlight the image we're referring to, we should be able to do the same here. So by the time this goes out and you're watching it, give it a try in Chrome Canary, because I might have been able to land the feature by then — I'm not making any promises, but that would be good, wouldn't it. And just as a quick point there: there are often two pieces to a layout shift — the element that shifted, and the element that caused it to shift — and sometimes figuring out one or the other can be helpful in fixing it. It looks like here it's showing the image that came in, but adding elements to the DOM doesn't in itself cause a layout shift; it's when adding an element to the DOM moves the elements below it that you get a layout shift. Right — because the default size of this image looks to be 11 by 11 pixels to begin with, and then when it gets populated with the actual pixel data it pushes down the rest of the page content, which I guess justifies the layout shift there. Yeah. Okay, so that's that. And as we said earlier, if you put width and height on these things, that will help. But you can also have — let me show you this other one — even on the Google homepage, this privacy reminder down here: if I take a recording and just refresh the page, we're going to see a layout shift here too. Similarly, this element starts down here, and I presume there's some JavaScript that checks whether the privacy reminder has been seen and, if not, pushes that content up. So again, this is probably JavaScript-based, and you're going to know in your own apps what's going on — is it third-party content, is it your own JavaScript, is it your own styles — and it's a case of digging into the specifics of your application to figure out exactly what's triggering it. Those are just a couple of examples of the layout shifting you could see. Yeah — and one thing to keep in mind is that in an ideal world you'd have no layout shifts on your page, but sometimes it's unavoidable, and the threshold we recommend folks stay below is 0.1. It looks like this layout shift here is quite a bit below that, so even though you still want to be at zero if you can, as long as you're below 0.1 for 75% of your users, you're usually in good shape. You say 0.1 — I guess that's for page load, because that's where a lot of these metrics are aimed right now, right? Yeah, so that's actually a really good point, I'm glad you brought it up. CLS measures layout shifts that happen during the entire lifecycle of the page, from when you load the page until you unload it — even if you leave the page open for days or weeks, it measures that entire time — whereas here in DevTools you ran a trace and saw only the layout shifts that happened during that trace.
So in this particular case, CLS was only being measured for a small period of time. It's important that developers keep that in mind, because the actual metric definition covers the entire lifespan of the page. So if you run a Lighthouse trace, a WebPageTest trace, or even a DevTools trace and you see a value below the 0.1 threshold I just mentioned, keep in mind that the measure that counts is for the entire life cycle of the page. Also, I think in this area we should talk about the metrics themselves being a bit of an evolving art. We have, for example, First Meaningful Paint up here, but this isn't one of the metrics we'd mention for something like Core Web Vitals, and there's also, as far as I'm aware, no metric yet for something like animation performance. So I guess my question to you is: what's going on there? Why have we got a metric here that we wouldn't refer to, and why do we not yet have a metric for something we might be interested in tracking? What's the history and the story? Yeah, that's a good question. So FMP — First Meaningful Paint — if you remember from a previous trace that you did, Paul, FMP was right next to FP and FCP, and LCP was later in the page load; and yeah, it looks like that's the case here too. What ended up happening was that, after a bunch of testing — FMP is essentially a different metric, it has a different meaning than LCP — after a bunch of research we found that FMP actually wasn't as accurate at predicting what most people would consider the most important, most meaningful content of the page (the metric itself has the word "meaningful" in the name), and it turns out that LCP is actually a better predictor. So as we come up with metrics that are better at capturing the user experience, we'll deprecate older metrics and replace them with newer ones. We do recognize that this has happened a bunch over the years, and I'm sure developers are getting tired of hearing new metrics announced all the time, so one of the things we did with the Web Vitals initiative, and specifically with Core Web Vitals, is commit to introducing changes to the core set at most once a year. For developers following along, that gives them a little bit of stability if they're building a business on these metrics, or predictability if they just don't want to always be chasing the latest thing. So recently we announced that LCP is one of the Core Web Vitals, and FMP is not, and over time FMP will probably be deprecated. You also asked about animation performance — this is definitely a metric we're looking at for the future, maybe in 2021 or 2022. We know the current set of Core Web Vitals doesn't capture the entire story of user experience, and we're hoping to improve it over time; animation performance is definitely an area we're exploring. I think the last one we said we'd talk about — if I've got that right, and I think I did — was first input delay,
which is not directly shown in DevTools. It's sometimes called FID, right — so what is it, and why isn't it there? Yeah, so First Input Delay, or FID for short, represents the time from when the user interacts with the page — taps on the screen, or presses a keyboard key — to the point when the browser is able to respond to that input event. You might think that's always going to be instantaneous: you click on the screen and something happens. But as users we know that's not the case; we've all had the experience of clicking or tapping on something and not getting an instant response. This can happen if there's a bunch of JavaScript running on the page — maybe a large JavaScript file the browser is currently parsing and executing — and if at that exact moment the user taps on the screen, the browser has to wait a little while before it can respond to that input event. FID quantifies that duration of time. You mentioned it's not exposed directly in DevTools, and the reason — and you're the one who helped implement this — is that First Input Delay requires an input; it requires a user. In many lab scenarios there is no user, so you can't always measure First Input Delay that way. But we have another metric called Total Blocking Time — yes, we do — and it quantifies how much of the time the main thread is blocked, and a blocked main thread, as I just mentioned, contributes to the likelihood that a user will interact with the page at a moment when the browser can't respond right away. So you said Total Blocking Time is in DevTools — can you show me where? Yes — oh, I see it there at the bottom of the screen, next to the long tasks. Yeah, it's down there, and it currently says it's unavailable; I'll talk about that more in a moment — I've actually been working on that feature today, so I can tell you a bit more about what's going on there too. So what I'll do is this: I've come to web.dev, I've cleared things, and I'm going to hit record and hit refresh. I don't expect to see any particular blocking time here, because I've got a fast machine and I'm on a good connection, and you can see, right down at the bottom, Total Blocking Time is currently zero milliseconds. What that roughly translates to, when we zoom in on these top-level tasks on the main thread, is that we have no task that goes over 50 milliseconds. Fifty milliseconds is our threshold for "hey, this task is long, and it's going to contribute to blocking time", because what we want to track is tasks over 50 milliseconds — they're the ones most likely, were the user to interact, to prevent the browser from responding in an adequate amount of time. So we currently have no blocking tasks. Blocking time is defined as any time greater than 50 milliseconds within a task: if a task is 49 milliseconds, there's zero blocking time, and if a task is 51 milliseconds, there's one millisecond of blocking time.
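As a rough sketch of that idea — not how DevTools itself computes it, and glossing over the detail that Total Blocking Time is formally measured between First Contentful Paint and Time to Interactive — you could sum the over-50 ms portion of each long task the browser reports:

```js
// Accumulate the portion of each long task that exceeds the 50 ms budget.
let totalBlockingTime = 0;
new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    totalBlockingTime += Math.max(0, task.duration - 50);
  }
  console.log('Blocking time so far:', totalBlockingTime, 'ms');
}).observe({type: 'longtask', buffered: true});
```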
Just out of curiosity, some people ask: why 50 milliseconds? What's the thinking behind that?

Yeah. You might have heard of RAIL, the RAIL performance model, and you've often heard people say you should always respond within 100 milliseconds of user input. So the question is, why is 50 milliseconds the blocking-time threshold? The idea is that if you keep all of your tasks below 50 milliseconds, there's never a situation where two tasks can't both run within that 100-millisecond window. That's why 50 milliseconds exists and why we chose it as the magic number for Total Blocking Time.

Exactly. And of course, if you're doing an animation, your task time really should be under something like 10 or 12 milliseconds. So you've got to be context-aware: 50 milliseconds is a great number to have in mind, especially for load performance, but it changes depending on whether you're animating or not.

Now, as I said, we have no tasks here that are running long, and if I got a trace like this from somebody I'd be very happy; I wouldn't complain at all. But what I can do is simulate a slower device, like I did before. In the capture settings I'll go to a six-times slowdown, and I'm expecting that this 25-millisecond chunk of JavaScript being evaluated is going to run long. So: six-times slowdown, hit record, and refresh again. I'm going to stop the recording a little earlier than last time. The first thing to notice is that our tasks are now longer because of the slowdown. If I zoom in on this task, it's 176.55 milliseconds, which means it qualifies as a long task, with 126.55 milliseconds of blocking time. After the 50-millisecond point in the task we draw this candy striping, and we also put a red triangle in the top right-hand corner, so that when you're zoomed out and looking at a glance you get a sense of how many of your tasks are running long. Almost universally, the ones running long here are JavaScript-based.

Yeah. So if you're looking at the Chrome User Experience Report or Search Console's Core Web Vitals report and you see a First Input Delay that's higher than you would have expected for a certain page, this is a great example of how you'd go about debugging it. You might be on your fast MacBook Pro and not see any long tasks, but if you go into DevTools and throttle the CPU, you start seeing a bunch of long tasks like the ones shown here, and that helps explain the field numbers: if a user tried to interact with the page during one of these long tasks, the browser would not be able to respond; it would have to wait until the task completed before it could run those event handlers.
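That wait is exactly what FID captures in the field. A minimal sketch of recording it from real users with the 'first-input' performance entry (the open-source web-vitals library wraps this same API):

```js
// Log First Input Delay for the page's first interaction (tap, click, or key press).
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Delay between the user's input and when the browser could start running the event handler.
    const fid = entry.processingStart - entry.startTime;
    console.log(`FID: ${fid.toFixed(1)} ms (first input was a "${entry.name}" event)`);
  }
}).observe({ type: 'first-input', buffered: true });
```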
So, Paul, I'm seeing it say "unavailable" there at the bottom in DevTools — what does that mean? Yeah, sometimes we do say unavailable. The reason is that we wait for Blink to tell us when it's happy to declare the page interactive, and at that point it tells us how much blocking time it measured. So sometimes, if the trace isn't long enough, we don't actually get that information. What I've been working on recently is adding an estimate, which essentially counts up the amount of candy striping we're getting in those top-level records, so that we can at least give you an estimate even if Blink hasn't given us the official answer. Hopefully you should see that in Chrome Canary soon.

Yeah, that makes sense, because Total Blocking Time is technically defined as the amount of blocking time between First Contentful Paint and Time to Interactive, so it makes sense that DevTools would wait until the browser is interactive. But an unofficial total when the page isn't interactive yet does seem like a good feature.

Exactly. So we've talked about FCP, LCP, layout shifting, long tasks, and FID. If I were a developer who wanted to know more about these things, as well as playing with them in DevTools, where would I go for more information? That's a great question: go to web.dev/vitals. It has all the information about the definitions of the metrics, links to guides on how to optimize for them, links to more information about all the tools that support them, and everything like that. So the best place to go is definitely web.dev/vitals.
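For completeness, the simplest way to collect all of these from real users is the small open-source web-vitals library documented there. A minimal reporting sketch — the function names shown follow its on* form, which may differ slightly between versions, and /analytics is a placeholder endpoint:

```js
import { onCLS, onFID, onLCP } from 'web-vitals';

// Send each metric to your analytics backend as it becomes available.
function sendToAnalytics(metric) {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon keeps working while the page unloads; fall back to fetch with keepalive.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}

onCLS(sendToAnalytics);
onFID(sendToAnalytics);
onLCP(sendToAnalytics);
```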
Thank you for joining us today. My name is Sebastian Benz, and I'm part of the AMP developer relations team. And my name is Naina Raisinghani, and I'm a product manager on the AMP Project. We want to talk about the work we're doing on AMP to make web development less painful and developers more productive. Yeah, I'm incredibly excited, so let's dive right into it.

So, Naina, we would be remiss if we talked about AMP and didn't talk about the impact of Google's recent announcement around the page experience ranking signal. Absolutely. Before we can start talking about AMP and page experience, let's first talk about what the announcement is. In May, the Google Search team announced that they're going to measure how a page is experienced by the user, in addition to existing signals such as a page's usefulness, and this whole suite of measurements is called page experience. It uses the Core Web Vitals, which the Chrome team announced earlier that month, and adds other pre-existing signals such as mobile-friendliness, safe browsing, and HTTPS on top. The great thing is that these metrics line up really well with AMP's design goals of making sure users get a content-forward experience and can consume content without having to download unnecessary resources or wait for unnecessary processing.

Okay, so how does AMP do against page experience? Good question. We did some analysis, and we saw that a majority of AMP pages already do pretty well against these criteria. This means AMP is really living up to its intention of being a well-lit path to creating a great page experience. You said a majority of AMP pages meet the criteria, but not all? Yep. In the cases where an AMP page doesn't perform well against the page experience criteria, we saw that it failed for reasons outside of AMP's control, such as overly large images being served to mobile devices, or the server response time being too slow. That's a really interesting aspect of page experience: the Core Web Vitals are measured from real user data. This means that to improve your Core Web Vitals it's a good idea, for example, to use a CDN to ensure that users around the world get your content delivered quickly.

Yeah. And just like other libraries and frameworks, the AMP Project will be monitoring these metrics closely and continuing to invest in AMP's performance through our performance working group. But more generally, it's really important to note that AMP intends to reduce the ongoing effort needed to create pages that offer a great user experience, and we intend to do so by helping offload tasks in areas such as browser compatibility, accessibility, JavaScript budgets, and so on.

At its core, AMP is a UI component library. Before using AMP, I often struggled with too much choice when it came to adding a new feature to a site: having to decide whether I should build my own carousel — which is a bad idea — or finding a suitable existing implementation could take a lot of time and energy. With AMP you get flexible, high-quality UI components out of the box, and you can be sure that they perform well, are accessible, and play along well with each other. Recently I talked to a developer from an agency that uses AMP for building most of their clients' websites. They told me that one of their design interns had been able to build a fully interactive website for one of their clients without any JavaScript knowledge. I think that's fantastic and a great example of the value of a good UI component library: it makes it easy for beginners to get started, and it allows experienced developers to focus on creating new user experiences instead of bikeshedding technical details.

And that's exactly what we're focusing on in 2020. We want AMP to be a cost-effective and simple solution that allows developers to focus on their product and not worry about other things like performance and infrastructure. This is an effort we're calling AMP as a service. The idea is to use AMP as a turnkey solution to easily create and then maintain a great page experience, and to make developers more productive at the same time.

So what exactly do you intend to do? The first thing we really want to do is address the feedback that AMP developers have given us. Some of the top complaints we've seen are, first, the need for custom JavaScript, and second, the fact that the inline CSS limit was too small at 50 kilobytes. We addressed the need for custom JavaScript by adding amp-script, a component that allows you to add custom JavaScript to AMP to fulfill any business-specific need that AMP doesn't solve. If you want to hear more, stay tuned, because our colleagues Ben Morse and Crystal Lambert will walk you through it in their talk titled "Workerized JS." As for the CSS limit, the intention was to promote CSS hygiene, but we got feedback that the limit was too tight at 50 kilobytes. So we worked with the AMP community to understand what a reasonable CSS limit could be, and after working with plugin developers, news publishers, and e-commerce site creators, we realized that most interactive experiences could actually fit within 75 kilobytes of CSS. So that's our new limit: 75 kilobytes. This really seems to have hit the sweet spot: with the 50-kilobyte limit I heard from many developers who struggled to keep their CSS under it, but I have yet to hear from anyone struggling with the 75-kilobyte limit. Yeah, fingers crossed that this limit works.

Now, aside from addressing feedback, we want to make developers more productive, and we want to help them create and maintain performant sites as well. The problem usually wasn't with AMP itself, but that developers had to maintain two versions of their pages: the canonical one and an additional AMP one.
Yep, that's by far the largest problem that AMP developers have. The problem is even more acute if you have separate teams working on the AMP and mobile web experiences, especially if they're in separate parts of the organization. To be honest, the AMP team itself advocated for paired AMP experiences when we got started: we saw it as an easy way to create AMP pages with the least amount of effort. But talking to developers over time has made us realize how much pain can be associated with maintaining this dual code base, and that it outweighs the initial gains of creating the AMP page quickly. Google's page experience announcement is a great move for AMP developers in this regard: it allows development teams to really think about how they want to continue investing in AMP going forward.

Okay, so say I'm publishing paired AMP pages because I want to be in the Google Top Stories carousel — should I continue doing this? In that case I would ask you to consider the maintenance cost you're incurring by having to maintain an AMP version and a non-AMP version of your code. Now that you have the option to be flexible with your tech stack, you should be looking to pick a setup that allows all your web developers to be productive from day one. So you're telling those who went the paired AMP route to completely drop AMP support? No. What we're telling them is to pick whatever makes them the most productive, and that could be a number of things. Developers could pick the experiences on their site that would actually benefit from AMP and only invest in AMP for those, or they could go fully AMP-first across their whole site if they believe AMP can meet their needs. We've gotten pretty positive feedback from developers who use AMP as their main library, because they think AMP makes them more productive. And this is how we see AMP's future: as a component library that helps developers be more productive. That's why we're investing in allowing everyone to use AMP components even outside of AMP pages — an effort we're calling Bento AMP — and we look forward to releasing it later this year. I'm really excited about this; focusing on AMP as your UI component library is a much healthier direction in my opinion, and I'm very happy we're making this move.

Another area where we're taking our learnings and making them available to a wider audience is server-side optimizations for AMP pages. At the beginning, AMP pages were mostly served from AMP caches, and these perform additional optimizations that enable AMP's strong user experience. However, many developers started using AMP to build their whole website, and in those cases AMP pages are not served from a cache, so there was room for improving AMP's loading performance. To address this we created AMP Optimizer, a tool that brings the AMP cache optimizations to publishers. For example, we use AMP Optimizer for the official AMP website, amp.dev, and by using it we achieve the same performance as when the page is served from an AMP cache. And what I really like is that AMP Optimizer fits really well into our idea of AMP as a service: it lets us automate web development best practices. For example, the latest AMP Optimizer release added support for image srcset generation, to make it easier to serve optimized images. Another example is JavaScript modules: the AMP Project is soon going to start serving the AMP runtime and components as JavaScript modules, and if you're using AMP Optimizer you will automatically get the benefit of the smaller module runtime once this becomes available.
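For teams that run the optimizer themselves, a minimal build-step sketch might look like this, assuming the @ampproject/toolbox-optimizer npm package (check its docs for the exact options your setup needs):

```js
// optimize-amp.js: apply the AMP cache transformations at build time instead of relying on a cache.
const AmpOptimizer = require('@ampproject/toolbox-optimizer');
const fs = require('fs').promises;

const optimizer = AmpOptimizer.create();

async function optimizeFile(path) {
  const originalHtml = await fs.readFile(path, 'utf8');
  const optimizedHtml = await optimizer.transformHtml(originalHtml);
  await fs.writeFile(path, optimizedHtml);
}

optimizeFile('dist/index.html');
```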
That sounds great, and I'm really excited about all the improvements coming to Optimizer. But what's the best way for developers to actually include AMP Optimizer? I mean, you could of course include it manually in your build pipeline or your rendering pipeline, but ideally you shouldn't have to think about how to integrate AMP Optimizer at all. Our goal is to make the integration seamless by integrating AMP Optimizer into existing frameworks and CMS publishing flows. The Next.js integration is a great example of what a good AMP development experience can look like: Next.js has a special AMP mode that you enable via a flag, and this results in the generated page being valid AMP. The cool thing is that you can start using AMP components straight out of the box, and you don't need to worry about the AMP boilerplate or about importing AMP components; all of this is added automatically in the background by AMP Optimizer, which is integrated directly into Next.js, and the resulting authoring experience is really nice — it just feels like normal web development. A great example of this is Axios: they recently launched their new site, it's completely built on AMP using Next.js, and they've been really happy with the experience. Another example of a CMS that has these features integrated is WordPress: the official AMP WordPress plugin recently started publishing optimized AMP by default, so if you build an AMP page using WordPress, you get the best serving performance out of the box.
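For reference, the Next.js AMP mode mentioned above is a per-page opt-in. A minimal sketch of a page (pages/index.js) asking Next.js to emit valid AMP — details may vary by Next.js version:

```jsx
// Telling Next.js to render this page as AMP; the boilerplate and AMP Optimizer run behind the scenes.
export const config = { amp: true };

export default function Home() {
  return (
    <main>
      <h1>Hello AMP</h1>
      {/* AMP components such as amp-img can be used directly in AMP-only pages. */}
      <amp-img src="/hero.jpg" width="600" height="400" layout="responsive" alt="Hero image" />
    </main>
  );
}
```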
Wow, it's really exciting to see so many new experiences being built using AMP, and with AMP Optimizer, and I'm really hoping to see more. But that's it, that's our time, and that's our vision for 2020: the Google page experience announcement allows AMP to focus on what it does best — being a UI component library that helps developers be more productive by helping them deploy web development best practices at scale. If you want to read more about AMP's plans for 2020, please read our blog post linked from go.amp.dev. And with that, thank you for joining us. If you want to learn more about AMP in general, you can visit amp.dev today. Thanks, everyone; we'll also be in the chat to answer your questions for a bit.

Hey there, I'm Ben Morse, a developer advocate working on the web and on AMP. And I'm Crystal Lambert, technically a writer, working for the web on the AMP Project. We're here to talk about something we think is pretty cool: a new way to run JavaScript in web workers with AMP. Awesome, let's get started. But Ben, what is this slide — "JavaScript: foe"? I love JavaScript; it lets me do whatever I want. Sure, JavaScript is amazing, it's made the modern web possible, but we both know that many websites are too slow, and that's partially caused by lots of JavaScript. That's one of the reasons why people like this are staring at their phones waiting for our sites to load. Yeah, that's no good. You'd think the more JavaScript the better — I could write more code to make things quicker! Well, it's like too much ice cream, or time spent at home: you don't want to overdo it. Well, what about these web workers? I hear you can use them to get JavaScript off the main thread, but I'm not sure how to get started. Yeah, it can be pretty intimidating. Oh, and another thing: AMP doesn't let me write my own JavaScript, period. Can we make a video about that too? Well, conveniently, Crystal, this video can be about both of those things, because AMP now provides an easy way to use workers.

So we're going to show JavaScript developers how AMP makes it easy to try web workers, and for people who are already using AMP, we'll show you how you can write your own JavaScript without breaking AMP's performance guarantees. For everyone, it's a nice way to run JavaScript in a way that's unlikely to harm your Web Vitals scores. Oh yeah, I'm hearing lots about these Web Vitals — that's our page's First Input Delay, Largest Contentful Paint, and Cumulative Layout Shift, right? Those are the three. So let's get going.

Another slide — what is this, a guy knitting? Yeah, it's a transition slide. Well, it does remind me: why is the web single-threaded? I mean, every modern OS has multiple threads; why hasn't the web caught up? Honestly, it's just how browsers and JavaScript have always been. Of course modern browsers can multitask — they can do more than one thing at a time — but each browser tab has a single thread for the UI, and only one process can make changes to the screen at a time. That means JavaScript can block the browser from doing things, and vice versa. But wait, JavaScript is asynchronous, right? So whenever an event gets fired, doesn't the event handler's code start running right away? Well, sure, but all the code on the web page still runs in a single thread. This diagram illustrates JavaScript's event loop: the browser fires an event, and if you have an event handler, that code runs until it's done; as other events fire, they get added to a queue. I see — so if my code is handling one event and another event fires, the browser can't just spin up another thread; that event has to wait in the queue. Right, it has to wait until the current code is done. Let's say the user taps a button while your code is running a long task: JavaScript can't handle any other event until your task completes, so the next bit of code will be delayed. Worse still, the browser may be unable to update the UI because it's waiting for your code. I guess if it weren't that way, everything would be fighting for control over the DOM, and you'd have race conditions and general chaos. Oh yeah — and unfortunately, to make JavaScript thread-safe you'd have to completely rewrite it.

All right, this is making some sense: not only can excessive JavaScript make your page slow to load, it can also make the page slow to respond to user interactions. I'm guessing this is where web workers come in? Yes. JavaScript in a web worker runs in a different thread, and this is not a new idea — web workers have been around for about ten years. You're kidding, ten years? That's longer than I've been working on the web; why am I just learning about them? I think it's because their limits have made them harder to use. Workers can't cause race conditions with other workers or the main thread, because they lack access to the DOM and the global scope. Instead, a worker communicates with the main thread by passing messages back and forth, where each message contains an object. There are libraries that make this simpler, notably Comlink by Surma and workerize by Jason Miller.
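Under the hood it's just the postMessage round trip described above. A minimal two-file sketch:

```js
// main.js — hand a chunk of work to the worker and get the answer back via messages.
const worker = new Worker('sum-worker.js');
worker.postMessage({ numbers: [1, 2, 3, 4, 5] });
worker.onmessage = (event) => {
  console.log('Sum computed off the main thread:', event.data);
};

// sum-worker.js — runs on its own thread; no DOM access, only messages.
self.onmessage = (event) => {
  const sum = event.data.numbers.reduce((total, n) => total + n, 0);
  self.postMessage(sum);
};
```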
But workers can't access the DOM. So workers are great for doing long tasks off the main thread, but what if you want access to the DOM? That's a big obstacle, and that's where amp-script comes to the rescue. I knew at some point we were going to bring AMP into this. We did: in 2018 the AMP Project released an open-source library called WorkerDOM. WorkerDOM makes a copy of the DOM for the worker's use, and it also recreates a subset of the standard DOM API. This lets the worker manipulate the DOM and make changes on the page using standard techniques. WorkerDOM keeps the copy of the DOM and the real DOM in sync: when something changes in the real DOM, WorkerDOM sends a message to the worker to make that change in the copy, and if your worker changes its copy, WorkerDOM sends a message over to the real DOM and the same change gets made there.

So, I heard you say AMP — is all of this only true for AMP, or can I use WorkerDOM with a different stack? You can import WorkerDOM into your own project, but WorkerDOM is super useful for AMP, since it provides a way to run JavaScript in a sandbox where it can't run rampant and break AMP's performance guarantees. AMP encapsulates WorkerDOM in a component called amp-script. This is a little abstract — can you show me some code? Code I understand. Okay, fine, let's make a basic hello-world example with amp-script. In the body we insert an amp-script component; the DOM it contains gets passed to the worker, so here the worker's entire DOM is that h1 tag. Next we put our code in a script tag. Whoa, that's weird — you set the type to plain text instead of text/javascript. Yeah, we did. That's so the browser won't see it as JavaScript and execute it immediately; instead, amp-script finds the code and puts it into a worker. The code in this script grabs the first h1 tag in the DOM and appends a comma and the word "world" right on page load. And does that work? Look: magic. That was pretty quick; let's watch it again. I'm overwhelmed. Well, okay, it's not Gmail, but that "world" was really and truly added by a web worker. Can you prove it? If we open DevTools, go to the Sources tab, and click over here, we can see our script right under the code added by amp-script. Okay, that's kind of cool.

Here's how that looks in a full web page. I've left some things out for simplicity's sake, but you can see that, as with all AMP pages, we're loading AMP's runtime script, and we're also including the JavaScript that makes amp-script work. So do you always have to include your JavaScript inline like that? It's not really a best practice. Yeah, that's a good point — we can also store the JavaScript in its own file by using amp-script's src attribute, like this.

So that example works, but it's not really that useful. Could we, say, add that "world" when the user presses a button? Okay, fine: let's add to the HTML a button that says "hello," then we'll write JavaScript that grabs that button and adds a handler for the click event. When you click the button, it works like magic. Let's try it out: there's "hello," there's our button, and look — "hello world." Okay, let's go a little crazy. Super neat-o. What else can we do — does amp-script let us do a fetch? Does it ever. Here's that hello-world example modified to retrieve the word "world" from an endpoint; workers natively support Fetch, XMLHttpRequest, and even WebSockets. Okay, this is getting pretty cool, but this is AMP, right? How does AMP just let me write any JavaScript I want? Well, that's a good point. AMP tries hard to guarantee low Cumulative Layout Shift, to keep page elements from moving around. If your code makes mutations to the page that would really disturb the page layout, AMP reserves the right to disallow those changes or even shut the worker down. If your amp-script container can't change size, it can't disturb the page as much, and that gives you more freedom — that's why I specified the height and width here in the HTML, and why I didn't choose AMP's container layout. There's a lot to this, so check the documentation on amp.dev for details.
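To give a feel for it, here's a sketch of the kind of code that runs inside an amp-script worker, combining the button and fetch ideas above. The endpoint and response shape are hypothetical, and only the subset of the DOM API that WorkerDOM re-creates is available:

```js
// hello.js — loaded via <amp-script src="hello.js" ...>; it executes in a worker, not on the main thread.
const button = document.querySelector('button');
const heading = document.querySelector('h1');

button.addEventListener('click', async () => {
  // fetch() is supported inside the worker.
  const response = await fetch('https://example.com/api/greeting'); // hypothetical endpoint
  const data = await response.json(); // assume it returns something like { text: 'world' }
  heading.textContent = `hello, ${data.text}`;
});
```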
Hold on — can I just use amp-script to inject more scripts into the DOM? Nope. You're working with a virtual DOM; that's not going to work. Fair enough. But I see something about not allowing more than 150 kilobytes of JavaScript — is that at the page level? That's right, the 150 KB limit is per page. But I could still fit jQuery into that, and, oh, I could just copy in my favorite image slider and charting libraries! Remember, WorkerDOM has re-created the part of the DOM API it supports in its own JavaScript; if WorkerDOM supported the whole DOM API, it would be cumbersome and huge, and it would slow down pages enormously. So pretty few third-party libraries are going to work right out of the box.

Okay, then what's the best way to use amp-script? Well, one way is to use vanilla JavaScript while keeping an eye on this table of supported APIs — there's quite a bit there. Wait, React — can I use React? Yes, that's the other way. React uses a very specific subset of the DOM API, so the WorkerDOM team made sure that subset is well supported. Okay, but I've used React before; my React bundle might break that 150-kilobyte limit. Yeah, that's why it's probably better to use Preact instead. Preact is highly compatible with React, but it's only about 3 KB minified and gzipped, so for projects with more code, Preact is probably the way to go. Here I've made the button example using Preact. I find it easier to write and debug the JSX in a simpler environment and then build it into my AMP page. So let's build this, start up our server, and load our page with its button — it works!

All right, that was a lot. If only there were an amp-script tutorial out there. Wait a minute, didn't you and I already make one of those? Yeah — do you want to take the next slide? Of course. That tutorial is a great introduction to amp-script: head over to go.amp.dev/learn-script to get started, and then keep on going. Remember that WorkerDOM is still quite new; if you have feature requests or find things that are missing, please get involved on GitHub and help improve it. In conclusion: web workers can help you keep JavaScript from slowing down your web pages, and amp-script is a nice way to try this technique out. You can find all the code from this talk here on Glitch. Thanks for listening, and let's get to work on putting workers to work for you.

Hi everyone, thanks for watching this session on debugging JavaScript SEO issues. In the next 15 minutes I will take you on a short journey in which we'll talk a bit about the worries a few SEOs still have about JavaScript and Google Search, then look at the tools available to SEOs and developers, and then get our hands dirty on a few case studies from the real world. Let's get started by looking at the basics: can SEO and JavaScript be friends? There is a bunch of history behind this that has contributed to various opinions and answers to this question. Today the answer is generally yes. Sure, as with every technology there are things that can go wrong, but there is nothing inherently or categorically wrong with JavaScript sites and Google Search. Let's look at a few things people tend to get wrong about JavaScript and Search. The number one concern brought up is that Googlebot does not support modern JavaScript, or otherwise has very limited capabilities in terms of JavaScript features. At Google I/O 2019 we announced the evergreen Googlebot: this means that Googlebot uses a current, stable Chrome to render websites and execute JavaScript, and that Googlebot follows the release of new Chrome versions quite closely. Another worry concerns the two waves of indexing and the delay between crawling and rendering.
Googlebot renders all pages, and the two waves were a simplification of the process that isn't accurate anymore. The time pages spend in the queue between crawling and rendering is very short: five seconds at the median, and a few minutes at the 90th percentile. Rendering itself takes as long as your website takes to load in a browser. Last but not least, be wary of blanket statements that paint JavaScript as a general SEO issue. While some search engines might still have limited capabilities for processing JavaScript, they ultimately want to understand modern websites, and that includes JavaScript. If JavaScript is used responsibly, tested properly, and implemented correctly, then there are no issues for Google Search in particular, and solutions exist for SEO in general — for example, you may consider server-side rendering, or use dynamic rendering as a workaround for other crawlers.

When I say "test your site properly," the follow-up question is usually: well, how do I test my site properly? Luckily, we have a whole toolkit for testing your site for Google Search, so let's take a look at what's available. The first tool in your tool belt is Google Search Console. It's a super powerful tool for your Google Search performance; besides a ton of reports, it contains the URL Inspection tool, which lets you check whether a URL is in Google Search, whether there are any issues, and how Googlebot sees the page. The second really helpful tool is the Rich Results Test. It takes any URL, or lets you copy and paste code to check; its main purpose is to show whether structured data is correctly implemented, but it has much more to offer than just that. Last but not least, the Mobile-Friendly Test is similar to the Rich Results Test: on top of the rendered HTML and the status of all embedded resources and network requests, it also shows an above-the-fold screenshot of the page as well as possible mobile user experience issues.

Now let's take these tools for a spin. I have built three websites based on real cases that I debugged in the webmaster forums. The first case is a single-page application that does not show up in Google at all. As I'm not the owner of the domain, I don't have access to Google Search Console for this site, but I can still take a look. I will start with the Mobile-Friendly Test to get a first look at the page in question. As we can see, the page loads but shows an error message; when I load the page in the browser, it displays the data correctly. Hmm. We can take a look at the resources Googlebot tried to load for this page, and here we see that one wasn't loaded: the api.example.org/products URL was skipped because it's blocked by robots.txt. When Googlebot renders a page, it respects robots.txt for each network request it needs to make — HTML, CSS, JavaScript, images, or, in this case, API calls. Someone prevented Googlebot from making the API call by disallowing it in robots.txt. This web app handles a failed API request as a not-found error and shows a corresponding message to the user, so we treated the page as a soft 404, and as it is an error page, we didn't index it. Take note that there are safer ways to show a 404 page in a single-page app, such as redirecting to a URL that responds with a 404 status, or adding a noindex robots meta tag to the page.
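A sketch of what those safer patterns can look like in a client-side rendered app; the /api/products endpoint and the renderProduct function stand in for the app's own code:

```js
async function loadProduct(id) {
  const response = await fetch(`/api/products/${id}`);

  if (!response.ok) {
    // Option 1: send the browser (and Googlebot) to a URL the server answers with a real 404.
    window.location.href = '/not-found';
    return;
    // Option 2 (alternative): keep the URL but mark the error view as not indexable:
    //   const meta = document.createElement('meta');
    //   meta.name = 'robots';
    //   meta.content = 'noindex';
    //   document.head.appendChild(meta);
  }

  renderProduct(await response.json()); // the app's own rendering function
}
```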
Right, we solved that one — that's pretty good. All right, on to the next one. This one is described as a progressive web app, or PWA, that didn't show up in Search except for its homepage. Let's go find out why. Looking at the homepage, it looks all right, and the other views in this progressive web app also load just fine. Let's test one of these pages; we'll use the Mobile-Friendly Test again to get a first look at what's going on. Oh — the test says it can't access the page, but it worked in the browser. So let's check with DevTools. In the Network tab I see that I get a 200 status, but it comes from the service worker. What happens when I open the page in an incognito window? Whoops. So the server isn't actually set up to serve this page; instead, the service worker does all the work to handle the navigation. That isn't good: Googlebot has to behave like a first-time visitor, so it loads the page without the service worker, cookies, and so on. This needs to be fixed on the server.

Great, two websites fixed, but I have one more to go. This one is a news website that is worried because not all of its content can be found via Google Search. To mix things up a little, I'll use the Rich Results Test for this one. The website doesn't seem to have any obvious issues, so let's look at the rendered HTML. Hmm, even that looks fine to me. So let's take a look at the website in the browser. It loads ten news stories with links to each story, and then loads more stories as I scroll down. Do we find those in the rendered HTML too? Interesting — this story isn't in the rendered HTML. It looks like the initial ten stories are there, but none of the content that is loaded on scroll. Wait, does it work when I resize the window? Oops, it only works when the user scrolls. Well, Googlebot doesn't scroll, and that's why these stories aren't loaded. That's not a hard problem to fix: it can be solved by using an IntersectionObserver, for instance. Generally, I recommend checking out the documentation at developers.google.com/search for much more information on this topic and others. I hope this was interesting and helps you test your websites for Google Search. Keep building cool stuff on the web, and take care.

I'm excited to show you in the next 15 minutes how you can use structured data to make your website stand out more in Google Search, and how that can be done with JavaScript when a static implementation isn't feasible. We will start by looking at what structured data is and why it is a good idea for your website, then we will look at ways to implement it using JavaScript, and last but not least we'll take a look at how to test and debug your implementation. All right, what is structured data and why is it useful? Structured data is a standardized set of additional markup that you can put on your pages to tell machines like Googlebot more about the content on your page. On the right side here you can see the information for a specific product being highlighted in both image search and the search results, including additional information like ratings and price; we call such results rich results. To implement structured data you can use JSON-LD, microdata, or RDFa, but we recommend using JSON-LD. Here is an example of what a JSON-LD block on your page might look like. Besides products, there are many verticals that can benefit from structured data and become eligible for rich results; here are some examples, but you should check the link for the full gallery of supported verticals. Note that implementing structured data makes a page eligible for rich results, but it does not mean we will always show them for every page that implements it.

So now that we've talked about what structured data is and how it benefits your website, let's walk through a few possible implementations. We've seen that the easiest way is to include a script tag with the JSON-LD data in the page.
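As a preview of the client-side variant discussed next, here's a minimal sketch that fetches the structured data from a hypothetical API and injects it as a JSON-LD script tag:

```js
// Inject structured data into <head> after fetching it from the application's own API.
fetch('/api/product-structured-data') // hypothetical endpoint returning a JSON-LD object
  .then((response) => response.json())
  .then((structuredData) => {
    const script = document.createElement('script');
    script.type = 'application/ld+json';
    script.textContent = JSON.stringify(structuredData);
    document.head.appendChild(script);
  });
```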
This can of course be done in the back end or straight in the HTML of a page, but what are the options if you're using client-side rendered JavaScript? First of all, it is fine to implement it dynamically with client-side JavaScript. We recommend using server-side rendering to make your website as robust as possible, but there is no issue with implementing structured data with JavaScript per se. In this session we will look at three possible implementation approaches.

Of course, you can use JavaScript without libraries or frameworks to inject structured data into your pages. Here is an example of a vanilla JavaScript implementation for a client-side rendered single-page application: it fetches the JSON-LD data from an API and injects it into the head of the page. As Googlebot renders the page, it executes the JavaScript and the structured data gets rendered; just make sure that the API is available to Googlebot and not blocked by robots.txt. When you are using frameworks such as React, Angular, or Vue.js, you very likely have helpers or built-in functionality available to insert structured data into your pages. Here is an example of a React component using a schema helper utility to create typed JSON-LD for a person's profile page. Should you not have access to the code of your pages, but have Google Tag Manager on them, you can use a custom tag and custom variables to create structured data from the information that is on the page. To do that, create a custom HTML tag in your container and insert the relevant JSON-LD, along with variables for the values of each field in the JSON-LD block; then create the necessary custom JavaScript variables to extract information from the page so it can be inserted into the custom HTML tag automatically. We advise against copying and pasting information from the page directly into Google Tag Manager, as that will likely cause a mismatch between the page content and the structured data generated by Google Tag Manager to arise in the future.

Great, so we've seen three ways of generating structured data with JavaScript. Let's find out if our implementation works as expected. There are two main tools for testing these implementations. The first one is the Rich Results Test: you can paste a URL into the tool and see what structured data is recognized, as well as whether there are any issues with the structured data on the page. When using JavaScript to generate structured data, we recommend testing a URL instead of pasting code directly into the tool. The other great tool for testing is Google Search Console: in the URL Inspection tool you can see the structured data that was detected and whether it is valid, but you can also see which pages of your site were eligible for rich results and which ones have errors or warnings to look into. If you want to learn more about Google Search and structured data, check out our documentation at developers.google.com/search, or use this short link to read more on how to use JavaScript to generate structured data for your pages. Thanks a lot for joining, and have a great day. Bye!