Hey there, welcome to web.dev LIVE. I'm Dion Almaer, and I work on the web developer ecosystem at Google, and I'm delighted to kick off our online event. First, though, I want to acknowledge the times we're in. We're dealing with a global pandemic that has taken a huge toll on us all. And most recently, we're witnessing events which have once again surfaced the systemic racism in our society that we must do everything we can to eradicate. You know, these events have been really humbling. They're showing us how much work we have ahead of us, but they also show us the power of community. So we join you today and over the next three days in the spirit of being together and helping each other, because we were upset when we had to cancel Google I/O. And I kept thinking about an empty Shoreline Amphitheatre on the days that some of us would have congregated. And web developers reached out sharing these same feelings, wishing we could be discussing ideas and enjoying the hallway track. Whether you're joining us from your couch, kitchen, or hammock, we hope you're safe and ready to kick web.dev LIVE into gear. Now, we'll be coming to you in different time zones each day, reaching you no matter where you are on the globe. We'll be bringing you content from across our teams as well as members from the web community at large. Now, each day, you'll have Googlers on standby to answer your questions in real time. So as you're watching the sessions, simply head over to the live chat on web.dev/live or on YouTube and just ask away. Now, when coronavirus became global, we really felt the need to stabilize. This resulted in us pausing Chrome releases and temporarily rolling back the SameSite cookie changes. We also wanted to track Chrome usage and see what changed, to make sure that we could be on top of any ecosystem changes too. You probably won't be surprised that we saw surges in usage of media APIs as video chat and streaming really soared.
Now, also, some types of content saw large traffic surges, such as food, commerce, entertainment, health, science, et cetera. And many developers were focusing on making sure these sites were as resilient as possible. That's when we gathered our best practices and made them available on web.dev/covid19. We saw a lot of developers scramble to make changes to their websites, and many created new ones. Governments had to jump on this to make sure that people had all of the critical information that was changing rapidly. I remember seeing Alex Russell tweet about one of these government sites from the state of California. We were really inspired by their work and wanted to ask them about their experience, and they kindly agreed to join us. So let's welcome Aaron Hans, the engineering tech lead on the project. Thank you. Great to be here. So Aaron, I'm really curious about how this site all came together. The Alpha.ca.gov team was formed in December of 2019 by Angelica Quirarte to bring human-centered design processes to the state of California and improve their online services. We built a lot of prototypes for things like helping people review the safety of their tap water, see if they're eligible for subsidized phone services, and prepare for wildfires. And then when the pandemic hit the state, we were asked to stand up the public response site. Got it. So when a government team has to build something like this, how do you go about it? What are your core principles? The number one goal is to make something that works well for everybody. And the technical considerations are passing accessibility audits, making sure it works with keyboard navigation, with screen readers, and that it has a smooth experience on low-end hardware. We use the cheapest phone we can get from the local Cricket Wireless as our test device.
And the non-technical considerations are the readability, what's the grade level of all the content, and are we really building something that users need and iterating based on their feedback? Got it. Now, I've been trying to picture the time pressure that you had here to get this site out. Can you tell us a little bit about how you actually built the site and how you managed the trade-offs between quality and that timeline? Sure, it was definitely an accelerated timeline. We put the site up in four days, then the governor announced a statewide lockdown and we had millions of visitors. Really happy that we chose a static site generator for that, because it helped us weather the traffic smoothly. We chose Eleventy as the static site generator, and we augment that with web components and serverless APIs built on Node.js. Got it. We're actually fans of Eleventy too. We use it on web.dev and really like it. Was this kind of a new setup for the team to build a website like this, or have you been doing this for a while? I remember reading that article about how web.dev was built and being really happy that we were using some of the same tools. We started using Eleventy at the end of last year for a blog on news.alpha.ca.gov. And for the COVID-19 site, it's built on content authored in a WordPress environment; we consume that with the WordPress API and use GitHub Actions for the Eleventy production build. Got it. Now you've got the site out there. I'm curious, what's next for you and the team? Next up for the team is continuing to respond to the pandemic. We're gonna be getting back to helping improve other online services, and we're hiring. Check us out at news.alpha.ca.gov if you wanna help out. While we're talking about Eleventy and the other tools that we're using, I wanted to mention that Lighthouse is an incredibly important tool for us, because performance is such a paramount concern. And I love the way that it gamifies my development.
You can get the rest of your teammates to challenge each other and say, who can put up some more points today? We really need to get that score up. And I'm curious who's winning the points race, and I'm really impressed by how you think of performance as a key part of the accessibility story in general. Well, Aaron, it's been really inspiring to see the work that you and the team did here, again, at an incredibly stressful time. Thank you so much for coming on and sharing the story with us. Thank you. Now, it's been great to see developers like Aaron focus on accessibility, resilience, and performance. And we've made some announcements over the last month about a program that brings this all together under the umbrella of Web Vitals. To hear more, let's welcome Elizabeth, a PM on the Chrome team, to explain. Thanks, Dion. Yeah, there have been a lot of product updates and releases, and I'm really excited to go over them with you. Yeah, it's been particularly busy here over the past couple of months. So it'd be great to have you get us up to speed on Web Vitals and what developers should really be considering here. Yeah, that's great. Let's dive in. First off, what are Core Web Vitals? They are a set of user-centric metrics and thresholds that apply to all web pages across all industry verticals and all types of experiences on the web. They are signals to developers and business stakeholders about the basic health of your site, and as such, they should be measured by everybody. But okay, I jumped straight into definitions. Let's take a step back. Why did we introduce Core Web Vitals as a thing? There are already tons of metrics, lots of guidance about how to measure your site's performance. How do Core Web Vitals help us? Well, let's go back to our foundational goal. We want to create outlandishly phenomenal experiences for all of our users. And it's not just out of the goodness of our hearts either.
We know that every time we have a rage clicker on our site, we lose out on a reader, a customer, or a client. Also, we want a money pug. So there is this mythical, absolutely fabulous experience that we've set our sights on creating. It seems easy until you realize that the unicorn horn requires both loading and interactivity performance measurement, and the rainbow, well, the rainbow requires an entire RUM setup for each color. So there you are watching your flying unicorn dachshund, and you realize that what you actually have is this. It's gorged on a bit too much JavaScript. It doesn't respond when you're issuing it commands, and that's upsetting. And it's gonna take quite a bit of work to get from this to that. So the question is, where do you start? Well, in order to know if you've improved, we need to know what to measure. To know what to measure, we need to define our goals. So put another way, what makes a web experience shine? This is where the core dimensions of quality come in. These are foundational elements of a user experience that make a unicorn dachshund shine above the competition. Content needs to load quickly. We've all been there. The longer we have to wait, the more likely we are to bail. So your pages have to load fast. Interactivity is just as important. You're clicking and nothing is happening. No fun. You don't just need content to be visible. You need it to be available for use. Lastly, we want a page to be stable and predictable. Just a few pixels moving around can make a huge difference. These core dimensions of quality reflect user-centric signals that have long been mission critical for you and your site's success. So we are closer to defining quality, but how do we measure these quality dimensions? And that's where representative metrics come in. To represent fast loading, we have Largest Contentful Paint, or LCP. It provides insight into how quickly a user is able to see the meat of what they are expecting and wanting out of a page.
For responsive interactivity, we have First Input Delay, or FID. This metric has been a critical signal for developers for some time to understand how long a page takes to respond to a user's initial input. And finally, to represent visual stability, we have Cumulative Layout Shift, or CLS. CLS measures the amount that the elements within the viewport move around during load time. Okay, so we know how to measure our core quality dimensions. Let's say my LCP is three seconds. Do I celebrate? Wait, I don't actually have any idea whether or not that's good. So I need to evaluate my performance on a spectrum for each metric, which is where the final element of Core Web Vitals comes in: thresholds. For each representative metric, we have clear goalposts around what constitutes a good experience, one that needs improvement, and one that's poor. So for instance, for LCP, anything that is 2.5 seconds or less is on its way to being a unicorn dachshund. Anything between 2.5 and 4 seconds needs some work, and anything above 4 seconds needs quite a bit of love. So to finish up our definition of what Core Web Vitals are, the initiative is a combination of three things. First is user-centric quality dimensions, then we have representative metrics of those dimensions, and finally, thresholds to help you evaluate whether or not your performance is good against any given metric. But there is one more piece of really important information. We need to know how many page loads need to hit the thresholds for the Core Web Vitals metrics to constitute a good experience. So say we have 100 users. If only one of them has an LCP below 2.5 seconds, do I pass Core Web Vitals? The answer is no. Core Web Vitals uses the 75th percentile value of all page views in the field to evaluate against the thresholds. In other words, if at least 75% of page views to a site meet the good threshold, the site is classified as having good performance for that metric.
And this applies to all three metrics, LCP, FID, and CLS. The 75th percentile is used to evaluate all three. Core Web Vitals is a holistic package of everything you need to create the foundation of a healthy site. They are valuable because they show you exactly where to start to set yourself up for success. If 75% of your users are getting fast, interactive, stable content, it's cause for celebration. But as we know, there are other dimensions of quality that are extremely important. Accessibility, security, mobile friendliness, there are a lot of dimensions that make a basic unicorn dachshund even more fabulous and are important to your site's success. So don't stop measuring these if you already are, and if you aren't, once you've optimized your Core Web Vitals, you can begin venturing into measuring and benchmarking against other important vitals that are relevant to your business and your users. Core Web Vitals are just as the name indicates: they are core, and they provide you with a solid foundation upon which to further optimize. Given how important it is to quantify a user's experience accurately in order to be successful on the web, we are constantly working to find ways to better measure all quality dimensions. What this evolution has often meant in the past is a stream of new metrics, tweaks to existing metrics, and new guidance, many times at an unpredictable cadence. We know how difficult this can be when trying to set goals, align roadmaps, and get organizational buy-in. Because of this, we want to set a predictable cadence of updates to Core Web Vitals. They will be refreshed once a year around the time of Google I/O to ensure that they reflect the latest in our learnings, and this includes adjustments to the set of metrics as well as the thresholds. Looking ahead towards 2021, we will be providing regular updates on future metric candidates, motivations, and implementation status. Okay, so this is all fine and good, but how do I get started?
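The threshold-plus-75th-percentile evaluation described above can be sketched in a few lines of code. This is purely an illustrative sketch, not Google's implementation; the helper names are invented, but the "good" thresholds are the published Core Web Vitals values (LCP ≤ 2.5 s, FID ≤ 100 ms, CLS ≤ 0.1):

```typescript
// Published "good" thresholds for the three Core Web Vitals metrics.
const GOOD_THRESHOLDS = {
  LCP: 2500, // milliseconds
  FID: 100,  // milliseconds
  CLS: 0.1,  // unitless layout-shift score
} as const;

type Metric = keyof typeof GOOD_THRESHOLDS;

// Nearest-rank 75th percentile of the collected field samples.
export function percentile75(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[Math.max(rank, 0)];
}

// A metric "passes" when at least 75% of page views meet the good
// threshold, i.e. the 75th-percentile value is within the threshold.
export function passesCoreWebVital(metric: Metric, samples: number[]): boolean {
  return percentile75(samples) <= GOOD_THRESHOLDS[metric];
}
```

For example, field LCP samples of 1800, 2100, 2400, and 5200 ms give a 75th-percentile value of 2400 ms, which meets the 2500 ms threshold, so the page passes for LCP even though one load was slow.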
To know what to optimize, you have to measure first. And Core Web Vitals are now in all of your favorite developer tools, and there are more than what is listed here, including a new web-vitals JavaScript library and a bunch of ecosystem tools that have already adopted them. As you can see, Core Web Vitals are available across the board. You're able to measure them for a specific page, for your origin, locally in the lab, and from real users in the field. Remember that First Input Delay is only measurable in the field, so you have to have a real user clicking on your page in order to measure it, but that doesn't mean you can't use lab tools to help you improve it. Total Blocking Time, or TBT, is a proxy lab metric for FID that allows you to debug and improve your interactivity in the lab before your users ever have to experience a bad FID. The next obvious question is, again, this is great, but where do I start? What tool should I use? I'm so glad you asked. Each tool has its own strengths. For example, PageSpeed Insights (PSI) is one of the only places you can see your lab and field data in one place, and Search Console is critical for identifying page types that need improvement. As I mentioned earlier, we're seeing so many great ecosystem players in production monitoring solutions already implementing support for Core Web Vitals, and we're really delighted. But again, you ask, you've shown me the magical unicorn dachshund and now you've given me a palette of tools to choose from. That's amazing, but tell me what to do first. Okay, two things. First, go to PageSpeed Insights. That will give you a pulse of your Core Web Vitals performance in both the field and the lab. From CrUX, the Chrome UX Report, you'll be able to see whether or not 75% of your loads are hitting the Core Web Vitals thresholds for both your page and your origin in the field. Then you can take a look at your lab data from Lighthouse to see whether or not you are hitting the Core Web Vitals thresholds for each metric in a synthetic testing environment.
This helps to guide you towards actionable opportunities to improve your page's performance. Second, check out some more in-depth talks later today that go into detail about measuring and optimizing against your Core Web Vitals. And with that, I'm gonna pass it back to Dion. Thank you so much. Great, yeah, thanks for showing us the context and all of the information across the whole slew of tools there, Elizabeth. My pleasure. One of the critical steps in modern web development with a lot of influence over your vitals is your build step. That's where your CSS modules are turned into real CSS, your bundler analyzes your module graph, and optimizations can really kick in. We wanted to go deeper here to understand the popular bundlers, how they work, what they can and cannot do, and how to set them up for success. Let's welcome Surma to tell us more. Hey, Surma. Hey, Dion. So there are many best practices to follow in web development. Knowing them is one thing, but getting your build system to follow them as well is kind of another beast. So do you have anything to report on that front? So there are two bits to this. On the one hand, there are many developers who want to know what build tool they should learn and use for their next project. And on the other hand, there are many projects that already have a build tool set up but are looking to improve their output. To tackle both of these problems, we built Tooling Report. Tooling Report is a website that you can actually go to right now, tooling.report. We created an extensive list of best practices in web development, took what we think are the four most popular build tools, and checked for each build tool whether it allows you to follow that best practice. And each tool gets a point for each test that it passes. We chose to start this project with Browserify, Parcel, Rollup, and webpack.
Now, Browserify might be surprising to some, but the data indicates that there are still many sites out there that use Browserify, and we want to help those projects improve their sites as well. Of course, we have been working with the core teams of all these tools to make sure that we not only use each tool correctly but also represent them fairly. The tests are subdivided into categories, and in the overview, you can get a quick sense of which tool is excelling at which category. You can also expand each test in the overview and learn more about it. And now this is where I think Tooling Report gets really interesting. Each test has a dedicated page where you can compare how the tools score on that specific test. There is an in-depth explanation of why the test is important and how it relates to best practices in web development. We also explain how we codified the best practice and what the expected outcome is. And finally, at the bottom, you can find an explanation for each tool of why it passed or why it might have failed the test. If a tool is not passing a test, we also link to bug reports on the tool's issue tracker, many of which we have actually filed ourselves while building Tooling Report. We also link to a minimal npm project that we use to determine the tool's behavior. This way, Tooling Report not only tells you what a tool can and cannot do, but you can also look at the configuration files and plugins to see how you can follow a best practice with that tool. This way, the site doubles as a source of documentation. The entire site is open source on GitHub, and we'd love the community to help us come up with more tests and help us add more tools over time. So you can check this out now on tooling.report. Thanks for joining me, Surma. Cheers. Now, we're all becoming more aware of the importance of security and privacy.
Chrome believes in an open web that's respectful of users' privacy and maintains the key use cases that keep the web working for everyone. I'd love to welcome Rowan to have a chat and share some of what's new here. Hey there. Thanks, Dion. My name's Rowan, and I look after web DevRel for security, privacy, payments, and identity, or SPI for short. Now, while that's a cute internal name, we are part of the wider trust and safety team within Chrome. Great. So why don't we start with SameSite cookies and the temporary rollback that kicked into gear for us when COVID really started to hit globally? Can you share what the latest news is there? Sure, yeah. So hopefully, as a lot of you are aware, there's an update to the cookie standard that's being adopted across Chrome, Firefox, Edge, and others to restrict cookies to first-party by default, along with requiring cookies for third-party contexts to be explicitly marked. Now, that's all configured via the SameSite attribute, hence SameSite cookies. We were rolling this out to stable Chrome, but decided to reverse this at the start of April, because the COVID situation saw a huge jump in demand for online services, but also a huge shift in developers being at home without their equipment or looking after their families. We made the call that it was important to prioritize stability at that moment. Now, these changes are intended to make the web a safer place, protecting against cross-site request forgery and trying to minimize the surface for cross-site tracking. Sadly, during a crisis, when people are most vulnerable, you see these kinds of scams and attacks jump too. So with the Chrome 84 stable release, which is mid-July, or about two weeks from now if you're watching the stream, we are gonna start rolling this out again across all Chrome versions.
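To make the cookie change concrete, here is a hedged sketch of the Set-Cookie headers involved. The SameSite and Secure attributes are the real ones from the cookie spec described above; the serializeCookie helper itself is purely illustrative, not a real library API:

```typescript
type SameSite = "Strict" | "Lax" | "None";

// Illustrative helper that builds a Set-Cookie header value.
export function serializeCookie(
  name: string,
  value: string,
  opts: { sameSite: SameSite; secure?: boolean },
): string {
  if (opts.sameSite === "None" && !opts.secure) {
    // Browsers rolling out the new behavior reject SameSite=None
    // cookies that are not also marked Secure.
    throw new Error("SameSite=None requires Secure");
  }
  const parts = [`${name}=${value}`, `SameSite=${opts.sameSite}`];
  if (opts.secure) parts.push("Secure");
  return parts.join("; ");
}

// First-party session cookie; Lax is also the new default applied
// when the SameSite attribute is omitted entirely:
//   serializeCookie("session", "abc123", { sameSite: "Lax" })
// Cookie that must still be sent in third-party contexts (for example,
// an embedded widget) has to opt in explicitly:
//   serializeCookie("widget", "xyz", { sameSite: "None", secure: true })
```

The practical takeaway: cookies without a SameSite attribute are treated as Lax, and any cookie you need in a cross-site context must be sent as SameSite=None; Secure.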
Got it. So what I'm hearing here is that if you haven't tested your site yet, if you haven't made changes to make sure that everything works well, now is actually the time to get going. Absolutely. So we have documentation, examples, and samples out there right now for SameSite on web.dev as well as on chromium.org, and we'll be covering implementation and debugging in our segment on day three. Okay, so we all love cookies, but I'm assuming there's gonna be a few more things that we're gonna be talking about in the general view of trust and safety? I'll be honest with you, I am gonna talk about cookies a lot, but the rest of the team does have a healthier range of interests. Got it, okay, sounds good. So we're gonna cover things like, back in 2018, Spectre kind of raised its head, and as a web community we started to really look at what we can do to help make sure that our users are secure. Are those the types of aspects that we'll be covering too? For sure, yeah. So Eiji's gonna be taking us through some of the new cross-origin opener and embedder policies, or COOP and COEP for short. So like you were saying, Spectre was a vulnerability that, in a super short summary, meant that malicious code running in one browser process might be able to read any data associated with that process, even if it's from a different origin, and that is super bad. Now, one of the mitigations for that is site isolation, or putting each site into a separate process. Eiji's gonna be running through how the headers allow sites to opt into that, along with a bunch of other benefits that it brings as well. Got it, got it, got it. Okay, so we've got restricting cross-site cookies, and then we've got isolating sites to individual processes. We've got this interesting evolution. So I'm sensing there's kind of a theme here. Yeah, there is definitely a theme.
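The two response headers discussed above can be sketched as a simple header map. The header names and values (same-origin, require-corp) are the real ones a page sends to opt into cross-origin isolation; the helper function and its name are just an illustration, not tied to any particular server framework:

```typescript
// Headers a page sends to opt into cross-origin isolation
// (site isolation benefits, plus access to APIs gated on it).
export function crossOriginIsolationHeaders(): Record<string, string> {
  return {
    // Severs window references between this page and cross-origin
    // openers/popups, so the browser can give it its own browsing
    // context group (and process).
    "Cross-Origin-Opener-Policy": "same-origin",
    // Only allow cross-origin subresources that explicitly opt in,
    // via CORS or a Cross-Origin-Resource-Policy header.
    "Cross-Origin-Embedder-Policy": "require-corp",
  };
}
```

A server would attach these to every HTML response for the pages it wants isolated; cross-origin images, scripts, and iframes that don't opt in will then fail to load, which is exactly the protection being bought.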
So we've also got Sam Dutton on the team, and he's gonna kick off our little segment to explain the link between these. And really, it comes down to the fact that the web today is seeing this evolution of expectations regarding privacy. That includes users expecting more transparency and control over their online data, and new regulations impacting how data can be used and collected. Now, at Google, we believe in an open web that's respectful of users' privacy whilst also maintaining a healthy ecosystem. So under the banner of the Privacy Sandbox, we're introducing a number of standards proposals that aim to support the use cases that let people make their living off creating web content, but do that in a way that better respects user privacy. We're also actively seeking feedback on these proposals. We're participating in all the open forums at the W3C to discuss our proposals, as well as those submitted by other parties too. Okay, so the web's evolving: we're getting new privacy-preserving APIs coming in, and we're getting rid of the old cross-site data-leaking APIs, so they're kind of moving out. Exactly. And one of the ways I like to think about it with our team as well is that we're kind of all about the places where you create relationships on the web. So people should feel in control of their data when they browse around the web, with a clear choice about what and where they share things. And when they do want to create a relationship, like signing into a site or making a purchase, that should be simple, secure, and only share what's needed. Awesome. Thanks so much for the brain dump on what we're thinking about here in the realm of trust and safety, Rowan, and I'm really excited to see the content that's coming later on the stream from the team, where we can go into more of a deep dive. Cool, thanks, and I'll see you around. Now, the web has a great history as a content platform, with its roots in hyperlinked documents, but digital content has grown richer and richer.
We think the web has a great role to play here too. I'd like to invite Paul Bakaus to talk about a new content type that we're really excited about, called Web Stories. Hey, Dion. Hey, Paul. So what are these Web Stories, and why are we working on them? My team and I have been hard at work on Web Stories, and I'm very excited to share some updates with you. And yes, I'm talking about these kinds of stories, you know, full screen, portrait, tap to advance, swipe to move on. And if you're like, wait a second, aren't you a little late to the show? Then you'd be right. But these are not your standard walled-garden stories. Current implementations focus on ephemerality and an ultra-low barrier to creation. But our bet is that the stories format works beyond the ephemerality use case and can become its own pillar in the open web media landscape. And that's because they're cheaper to make than video and more engaging than a text article. And, really important, Web Stories are different from walled-off stories in many important ways. Just like a regular web page, you own them, you host them, and, very important, you get the money from the ads, not the platform serving the stories. Because stories are really a visual format, my friends at Google Search and Discover are showcasing them in really cool ways, telling me that many more integrations are coming later this year. We think these can be a great net new traffic source for web creators. These stories look visually really compelling, but how hard is it actually to create them? If we want the web to be able to compete with the closed platforms out there, story creation needs to be as intuitive and fast as possible for all content creators. Now, lots of people are working on making Web Stories a thing, but one of the things my own team is doing is bringing story creation to WordPress, the most used CMS in the world, in the form of a visual editor coming to you very soon. Find out more about the beta at goo.gle/story-editor.
You'll hopefully see all the basic editing features you would expect, like smooth image and video handling, text controls, shape masking, and so on. But we're also working on some you might not expect, like this one we call text magic, running in real time against images from the Unsplash API here. The goal of this feature is that the editor always ensures text is readable, making dynamic decisions about the background, line height, and so on. I hope you like it as much as I do. Yeah, it looks really cool. You know, I can't wait to read some of these stories on the web. Thanks so much for sharing there, Paul. And thanks again to Paul and everyone who took the time to join me as we kicked off the event today. You know, I'm really excited about the upcoming sessions, starting with the focus on how to make your website hit its vitals and discovery through search. Now, please enjoy the show. Remember, the whole team is here to chat with you on web.dev/live and via YouTube. I will see you there today, and I'll be back tomorrow morning for the day two kickoff.