Welcome to Chrome Dev Summit 2020. I'm Dion. And I'm Ben. We work on the Chrome team and we're your hosts for this year's keynote. It sure has been a memorable year. And we're as excited as we think you are that it's nearly over. CDS has always been about bringing the Chrome team together with the community so we can learn from each other. We're really sad that we're unable to gather in person, but we're doing our best to replicate this experience online, both with streaming videos on YouTube, where our team will be monitoring and engaging with comments throughout the week, and also with a new experiment we're launching today called Chrome Dev Summit Adventure, a playful online world where you, the virtual attendees, can interact as we try to recapture some of the joy of face-to-face interactions. We'll talk more about it later today and look forward to hearing what you think about it. The Chrome team at Google has a two-fold mission. First, strengthening the web and other open ecosystems by contributing to open-source projects such as the Blink web rendering engine, the V8 JavaScript runtime, the AV1 media codec, the Skia graphics library, and developer tools like Workbox, Lighthouse, and of course DevTools and more. And these projects are used far beyond Chrome and even the web. They help power countless other products and platforms like the Node.js ecosystem with V8, online video with AV1, and Skia, which plays a critical role in Android and Flutter. Now the second part of our mission is of course building the Chrome web browser, providing a top-quality web experience across Windows, Mac OS, Linux, and Android. And on iOS, we bring as much of that experience as we can to millions of users using Apple's WebKit framework. Across all of these efforts, our focus is pretty simple. It boils down to three things: ensuring the web delivers a safe and secure environment, making the web browsing experience faster and more seamless, and adding more capabilities to the web, enabling richer and more diverse applications on the platform. In this keynote, we're going to give you an update on our progress and highlight the work that others in the community are doing as well. First, let's transition over to our colleague, Parisa Tabriz, to give us an update on the Chrome browser. Thanks, Dion. 2020 has been a year of challenges and new experiences for all of us. Back in March, we decided to skip a Chrome release for the first time in our history because we wanted to make sure that everyone had a stable, performant Chrome as our team transitioned to working from home. Now, we're proud and humbled that so many people around the world choose Chrome to help them stay connected and informed, especially in challenging times. Reflecting on the year, I want to share some of the ways that we've seen Chrome usage change, how our core principles have helped us be prepared for this shift, and how we continue to build improvements to help users. Now, people around the world relied on Chrome to virtually see and stay connected with their family, friends, classmates, and colleagues. Since January, time spent video conferencing on Chrome on desktop is up by 4x. And we know that performance is key, especially when using video on the web. So to keep Chrome fast and efficient, we introduced profile-guided optimization, which results in up to 10% faster page loads. We also launched tab throttling, and that helps speed up just about everything: Chrome startup, tab switching, tab restoring. 
And it does this by prioritizing your laptop's resources to the tabs that you're actually using. So this will also reduce power consumption and improve battery life. And finally, we've also made some progress on memory savings, which you'll hear more about from Dion later. Now, when we look at Google Search trends data, we see that queries related to health, news, science, government, and food all increased during lockdown, as people looked to stay safe, get updates on local restrictions, and just learn how to cook at home. More generally, having access to information on the open web is so critical. And we continue to focus on product inclusion and serving the diversity of our global user base. For those who are low vision or who rely on a screen reader, Chrome became the first browser to create accessible PDFs with auto-generated headers, links, and tables. All of that makes PDFs more legible for screen readers. We've also made it easier to translate entire sites to your preferred language and added support for 28 additional languages. Lastly, we've begun testing a visual-based search, which we hope can help users that are new to the internet, or who have low or no literacy. As many of us shifted to working from home, with all that comes with that (smaller screens, juggling work and home commitments, and so much multitasking), the Chrome team has continued to prioritize productivity features. Tab groups make it easier to organize your tabs by project, urgency, or, well, any way you like. And you can even collapse tab groups to just get them out of the way. We're also beginning experiments to add new modules to Chrome's new tab page to help you quickly jump back into a previous task, whether it was researching a product, planning a meal, or catching up on your favorite shows. Now, we're conscious of the increase in pandemic-related phishing attacks and other online security threats, and we're proud of both how Chrome has protected users and some new features we launched this year to make it that much harder for online attackers. For example, we've launched an additional layer of security with enhanced Safe Browsing, more secure and private DNS over HTTPS, and many password manager improvements. We've also delivered more accessible and intuitive privacy controls, starting with the redesign of Chrome settings. And for users that want additional peace of mind, we introduced a new safety check within Chrome, and we're now rolling out quick access to some of the most used privacy and security controls, like deleting your browsing history or opening an incognito window, directly in your address bar. Finally, we've continued to make progress on some ambitious ecosystem safety initiatives. You'll hear later today about the progress we've made on the Privacy Sandbox, which is an open-source initiative to develop a set of open standards to fundamentally improve privacy on the web. And Dion will announce a major update on our work with extensions. As we give users more control over the data shared with their favorite extensions, we want to give you the tools to get ready for this new reality. Now, before I hand it back over to Dion, I'd like to just conclude by acknowledging the Chrome team's really hard work this year, all while being at home, to make our users' experience better: releasing eight stable Chrome builds, meeting default browser requirements for iOS 14, and improving performance, helpfulness, and security in Chrome across all the platforms it's available on. Thank you so much. 
And now back to Dion to tell you more about the latest and greatest in the world of Chrome extensions. Thanks, Parisa. A lot of great improvements, and I'm really excited to see everything we're doing to make the browser more efficient, more secure, and more privacy-preserving. Now, we think extensions are a superpower for the web, an aftermarket that allows first and third parties to add rich capabilities that can enhance a site or the entire experience of the web. We love the extensions you've built, with over 200,000 active developers and over 250,000 extensions in the Chrome Web Store. And users love them too. Fun fact: we estimate that users have installed over 20,000 extensions just in the minutes since this keynote began. And users have an expectation now of strong privacy and security guarantees, and we really agree with that. So we've been working on updating our extensions platform to better align with our vision for a more private and secure browsing experience. We call this Manifest V3, and today we're announcing the official release. So for improved security, we'll start with disallowing remotely hosted code and provide developers with more tools to write secure extensions. To enhance user privacy, earlier this year we made controls more visible in the new extensions menu, with some sensitive permissions made optional. Users today can turn off those permissions after the extension is installed. In 2021, we'll go a step further by withholding these permissions by default and giving users the option to grant them at extension install time. Manifest V3 is available to experiment with in Chrome 88 Beta, and updated extensions can be submitted to the Chrome Web Store starting January 19th, when Chrome 88 hits stable. You can learn more at the link mentioned here. As Dion explained, privacy is a major theme of Chrome's new extensions model. We've also been collaborating with many others in the ecosystem to improve the privacy of the web platform itself. We've been working on a major initiative called the Privacy Sandbox. It's organized into two tracks. First, reducing the cross-site tracking that happens on today's web. And second, enhancing the web with new capabilities that form privacy-preserving alternatives that can power tomorrow's experiences. Some websites use third-party cookies to track people across the web. While Chrome plans to phase out third-party cookies once these privacy-preserving alternatives are in place, we've made cookies more secure for websites and users today. This year, Chrome finished rolling out a new policy known as SameSite, which limits cookies to first-party access by default, requiring developers to declare which cookies can be sent to third-party domains. Before this change, most cookies could be accessed by third parties even when they were only intended for first-party use, needlessly exposing them to security threats such as cross-site request forgery attacks. Now, 99.9% of those first-party cookies are automatically restricted to first-party use in Chrome. And all third-party cookies must be sent via HTTPS, encrypting them in transport. Microsoft Edge and Mozilla Firefox are also in the process of adopting this new SameSite policy. We've been working to address covert tracking, such as browser fingerprinting, as well. Now, this occurs when developers attempt to create a unique profile of your browser, a so-called fingerprint, that can be observed across websites. 
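To make the SameSite change above concrete, here is a minimal sketch (a hypothetical Node.js endpoint, not code from the talk) of how cookies are declared under the new policy: cookies without a SameSite attribute are now treated as first-party only by default, and any cookie that genuinely needs to be sent in cross-site contexts has to opt in with SameSite=None and be marked Secure so it only travels over HTTPS.

```js
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Set-Cookie', [
    // First-party session cookie; Lax is also what browsers now assume when SameSite is omitted.
    'sessionId=abc123; SameSite=Lax; Secure; HttpOnly',
    // Cookie that must be readable in cross-site contexts (e.g. an embedded widget);
    // it has to opt in explicitly and is HTTPS-only.
    'widgetPrefs=dark; SameSite=None; Secure',
  ]);
  res.end('ok');
}).listen(8080);
```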
With a new project called Privacy Budget, we're experimenting with a new way to enable rich web functionality while also severely constraining fingerprinting. We also launched the User-Agent Client Hints API, which is an alternative to the user agent strings that we all know and love going back to the beginning of the web. This enables developers to request only the data that they truly need about your browser. In the second Privacy Sandbox track, we've been working with the web ecosystem on a new family of web standards that provide privacy-preserving solutions to accomplish important web use cases, such as personalized content, single sign-on, and relevant ads, without the need for third-party cookies or other cross-site tracking mechanisms. I'm happy to share that the first two solutions are available for early experimentation by developers. This includes the Conversion Measurement API, a new capability that measures when an ad click leads to a conversion without using cross-site identifiers, and Trust Tokens, a new API to help combat fraud and distinguish bots from humans by conveying trust from one context to another without passive tracking. Both of these are available for testing and feedback via Chrome's Origin Trials mechanism in our current Chrome stable release. We'll be going deeper into some of these changes later today, so don't forget to catch these sessions. Users are only a low-friction tap away from the next great experience, and they traverse the web link by link. To make that journey great, we all have to ensure that our web pages load fast. And once loaded, we want the UI to feel responsive and buttery smooth. Now, achieving fast loading and runtime performance requires that the browser and web developers work together to do the right things. Earlier, Parisa shared how Chrome is getting updates like profile-guided optimization and tab throttling. In addition, we've also been optimizing memory utilization overall, and have made some really substantial progress here that we want to share. For example, the prior stable release of Chrome, Chrome 85, introduced enhancements to how the browser reclaims memory that resulted in up to a gigabyte of reduced memory usage for power users on macOS. Our next release, Chrome 87, is showing savings of up to 80 megs for each top site. And we recently updated V8 with a memory compression technique that reduces the JavaScript memory footprint significantly. This can be seen here with Gmail, which benefited with a 45% memory reduction. Finally, V8 is now able to load a page's JavaScript files in parallel, so scripts can be parsed and compiled and ready to execute as soon as they're needed by the page, eliminating parsing pauses entirely. Now, we're also continuing our work to make it easier for web developers to do their part to create high-performance websites. Earlier this year, we launched Web Vitals, providing clear, unified guidance on the attributes of a web page that are essential to delivering a great user experience on the web. We focused the program on highlighting three aspects of performance, referred to as Core Web Vitals. The Chrome User Experience Report shows that the majority of sites have good interactivity and nearly half are fast-loading and visually stable. Overall, about a quarter of all sites pass all three metric thresholds, so we've still got a way to go, but we think it's a great start. 
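If you want to check your own site against those three thresholds, the open-source web-vitals JavaScript library mentioned a little later in the keynote makes field measurement straightforward. Here is a minimal sketch using the 2020-era API of that library, reporting to a hypothetical analytics endpoint (the endpoint and payload shape are illustrative assumptions):

```js
import {getCLS, getFID, getLCP} from 'web-vitals';

function sendToAnalytics(metric) {
  // Each metric object carries a name, a value, and a unique id.
  const body = JSON.stringify({name: metric.name, value: metric.value, id: metric.id});
  navigator.sendBeacon('/analytics', body);
}

getLCP(sendToAnalytics); // Largest Contentful Paint: loading
getFID(sendToAnalytics); // First Input Delay: interactivity
getCLS(sendToAnalytics); // Cumulative Layout Shift: visual stability
```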
Now, we dive into this data much more in the Web Almanac, which is launching its new 2020 edition today, authored by experts from across the web. I actually always love finding some absurd stats in there. For example, the longest image alt attribute that we found is over 50 million characters long, which is enough to write War and Peace five times. Let's get back to Vitals. Last month, the Google Search team announced their plan over the next six months to incorporate Core Web Vitals into their process for measuring web page performance. So if you value discovery via Google Search, this provides an extra incentive to optimize your Vitals and reach the three thresholds. So get started now. We've got all of the tools to support you as you measure, track, and improve. And the RUM provider ecosystem has moved really quickly here, too. Core Web Vitals is now available to you if you use Cloudflare, New Relic, mPulse by Akamai, SpeedCurve, Calibre, and many more. But we also know that many sites are currently using Google Analytics and they want to be able to measure Web Vitals there. Last summer, we released the web-vitals JavaScript library, along with instructions on how to collect Web Vitals data and send it to Google Analytics. Now, we're excited to launch the Web Vitals Report, an open-source website and tool to let you query and visualize your Web Vitals metric data right in Google Analytics. This report also allows you to compare data across segments so you can see how much performance affects your business results. Now, we've seen the Web Vitals approach really resonate with developers, particularly in markets where network infrastructure conditions and device capabilities can be a little bit variable. So what does it take for you to do the work to improve your Vitals? Well, to share an experience of taking a site and iterating on it to meet these thresholds, please welcome Sunit Jindal, Principal Engineer from Nykaa, India's leading online marketplace for beauty and wellness products. Over to you, Sunit. Thanks, Dion. I am Sunit Jindal, Principal Engineer at Nykaa, India's leading omnichannel beauty destination. At Nykaa, our belief is that a high-performance application is a prerequisite for a great user experience, especially for mobile users. One of the things that we have found key for performance success is to be fully aligned with our business-focused teams. How soon the content is rendered to the users, and the possible benefit in SEO and conversion rates, are some of the metrics on which we ensure alignment with our marketing and business counterparts. When Web Vitals were first announced, to be honest, it meant extra work for us, but we also knew that this was a positive change. The new metrics meant that we again had to dig deeper to meet the thresholds, but once we did, it was worth the effort. We improved each of the three Core Web Vitals by making optimizations within our code, adding the right CDN where needed, replacing bulky third-party scripts with lighter implementations, and much more. As a result, we managed to improve all our Core Web Vitals within a few months of working on them. For our users, pages loaded even faster and there were minimal or no content shifts on the page. We also saw a continuous uptick in our performance-related scores across devices and network bandwidths. Post-migration, our page views per visit increased by upwards of 23% as compared to our old mobile site. 
The traffic from Tier 2 and Tier 3 cities also increased by nearly 28%, and finally, we've seen an uptick in our other key metrics as well, like average order value and search traffic. Having led my team through this journey, I feel that Core Web Vitals stand for a guideline that prompts us to provide a crisp and fast UX. We are happy with the outcome so far and look forward to what's next in the Web Vitals program. Thank you, and back to you, Dion. Thanks to feedback from partners like Sunit and the developer community at large, we found opportunities to refine the Core Web Vitals 2020 metrics and how they're measured. Watch Annie's talk later today to learn about how we're thinking about this and how you can share your feedback as we iterate on the metric set for 2021. You want to be able to create fast, responsive, and beautiful UI as easily as possible. You gave us copious feedback on areas of CSS that you wanted to make simpler. Una is going to share some of the recent changes that target this feedback, as well as giving you a peek at some new work that we're particularly excited about. Thanks, Dion. There are so many exciting things going on in the CSS world right now, and I have a lot to update you all on when it comes to making UIs easier to build and debug. We've been analyzing trends and conducting surveys, and found that styling has actually been a really big pain point for a lot of you building web interfaces. And we hear you as a web community. We want to fix the issues developers struggle with today. So what have we done recently? Well, first we've been improving tooling along with our friends at Microsoft by introducing grid DevTools and working on Flexbox tooling to hopefully relieve some of those layout debugging woes. We're also working on a lot of other small incremental changes that will make it more clear what values are being calculated and why. This year, we also launched some pretty key updates for CSS. We've made some major strides in Flexbox cross-browser behavior and shipped gap in Flexbox, meaning you can now have parent-driven control over the space between your children, just like you can with CSS Grid. We're shipping aspect-ratio to ensure consistent ratio sizing of your responsive content without hacks, and we shipped content-visibility, a CSS feature that can significantly improve rendering performance by skipping an element's rendering work, including layout and painting, until it's needed. This allows for faster interactions with on-screen content, with that content still being searchable and accessible. Along with external contributors and help from the Igalia team, we also shipped a few styling APIs, including list bullet styling with the ::marker pseudo-element, more refined text decoration styles, the :focus-within pseudo-class, which enables the ability to style a parent based on whether its children are being focused, path() support within clip-path to support a wider variety of clipping effects, and soon we're shipping hardware-accelerated SVG animations to ensure better performance across browsers. Another feature I'm excited about is @property, which shipped in Chromium 85. You can now register CSS custom properties with syntax and fallback values directly in your CSS files. This is a part of the CSS Houdini effort, specifically the Properties and Values API. If you're interested in extending CSS with Houdini even further, look out for my talk all about using Houdini in today's browsers, including a tool my team built to make it a little bit easier. 
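As a taste of what registering a custom property gives you, here is a minimal sketch using the JavaScript side of the Properties and Values API (the property name and initial value are just illustrative); declaring the same thing with the @property at-rule in CSS is equivalent.

```js
if (window.CSS && 'registerProperty' in CSS) {
  CSS.registerProperty({
    name: '--theme-color',
    syntax: '<color>',          // the browser now parses and interpolates it as a real color
    inherits: false,
    initialValue: 'rebeccapurple',
  });
}
```

Because the browser knows the property's type, a transition on --theme-color animates smoothly between colors instead of snapping at the halfway point.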
And if you're watching this talk later, the link will be below. The future is also looking extra bright for CSS lately. Moving into the coming year, the Chromium team is going to be focusing on a number of things. We're going to be looking at how we can make CSS architecture more clear for design systems and products, including cascade layers, which enable a new injection point for style sheets between the user agent and author styles. We're working on shipping a way for scroll-linked animations to be styled with CSS, an operation that's currently a pretty heavy lift with JavaScript. We're also looking into CSS nesting, a favorite from CSS preprocessors, as well as scoped styles natively in CSS. Soon, we'll be shipping the :is() and :where() selectors for more clear CSS statements when targeting multiple or longer selectors. And finally, the Chromium team is currently exploring container queries, also known as element queries. We're actively prototyping what may soon become a game changer for component-driven design, enabling elements themselves to respond to the size of their parent container wherever that is on the page, and not just the document's viewport size. This is a massive shift in styling capabilities and something I can see really changing the ecosystem to help usher in an era of more individually responsive design components and systems. All of these improvements coming to the web are exciting in their own right, but together they really tell the story of UI styling becoming a priority on the web platform. We hear you and I can't wait to see CSS and the web evolve from here. Thanks, Una. I can't wait to see the rich, fluid experiences that you'll build with these improvements. Now later at the event, we'll be talking about how content can be taken to the next level via Web Stories and more, because we're seeing a real surge in creativity as we visualize stories in new ways with creators using rich new tools. And there's another critical aspect of seamless: reducing the friction that users encounter as they transact on the web. We continue to invest here in areas such as identity and payments. With one-time passcode support, or WebOTP, users no longer have to copy and paste from their SMS app when using two-step verification, and this works in Chrome and Safari. Now, WebAuthn brought biometric sensors to the web, and with Safari recently adding support, Touch ID and Face ID are accessible to the web. We want to bring this level of simplicity to payments, and we're experimenting with Secure Payment Confirmation, which brings WebAuthn to payment flows. We've only just started to really explore here, and we can't wait to see the results of these enhanced payment experiences. So check out the sessions and the code labs that will walk you through using some of these APIs. We build these UI features and the capabilities for identity and payments to enable you to build more powerful and rich web applications. You know, we've seen what happens when we lack certain capabilities that you need. You may have to resort to using systems that let you use your HTML, CSS, and JavaScript skills, but then you have to give up the core benefit of web deployment. 
Being able to reach users on any platform, on any screen, and without the overhead of forcing a large app download that a user might actually seldom use are all things that make the web such a successful platform. Growing the success of the web means pushing the envelope, unlocking new use cases that are not even possible today while maintaining the traits that make the web great. Developers like Henrik, who I've actually known in the web community for many years now, are doing just that. So I asked him to join us to share some of his recent work that has a real human impact. Henrik, thanks for joining us. Hey, Dion. Thanks so much for having me. I wish we could be doing this in person though. I know, me too. So can you share what you've been building with us? Sure. So anytime you put somebody to sleep in a medical setting, it's considered best practice to... keep them alive. I would hope so. So I guess you have to monitor the patient's real vital signs, and here we mean human vitals, not Web Vitals, for once. Yeah, exactly. So you have to produce something called an anesthesia record, and part of that record is a chart that shows what the patient's vitals are every five minutes throughout the case, and sometimes the case goes really long. So you might do that 50 times during a given case, and a lot of people are still doing this on paper and producing these little charts like this, and so naturally we built a progressive web app at anesthesiacharting.com that actually does all this for you. Got it. So I'm actually trying to look at that paper, trying to kind of visualize how that PWA would work. Yeah, it's probably easier if I just show you. You can show us? Okay, please. So as you can see, I'm all hooked up to a patient vitals monitor over here, and not only are we able to read the data directly off of that and put it in the chart, we're actually able to issue commands as well, and we had some fun with the speech recognition API. So if I say something like "take a blood pressure now," you notice it actually starts the blood pressure cuff here on the monitor. And instead of just having a little vitals monitor off in the corner, we can actually put all this data, including my ECG waveforms and everything, up on a big screen, and that way the provider can see everything they need to. That's great. There's a lot going on here. Can you kind of chat about how you get all of these integrations working? Yeah, so we're speaking a custom binary protocol that the vitals monitor expects over Web Serial, we're of course using the Presentation API to be able to display all this content up on a separate screen that's different from what I'm looking at here, and then we're also using the Wake Lock API to make sure that this computer doesn't fall asleep on us right in the middle of a case. Absolutely. So why did you build this with web tech? So distributing software on the web is just so much easier. We actually do have an Electron app that we're using to do some of this. Some of the monitors actually require TCP connections locally, and it's great that that exists, but it also means that, you know, a lot of our support headaches actually come from that, because people don't understand they need to go download something and they're not sure whether they're running in Chrome or running it separately, and so it'd just be really nice to be able to do it all with the web. That makes total sense. Well, it's great to see how this is used in a real-world environment. What's next here, and are there any other APIs you'd love to see from the web? 
Yeah, so as I mentioned, I'm really looking forward to that native sockets API, because that would let us talk to some of these other monitors over TCP. In addition, I think the only thing that doesn't work offline right now in our app is at the very end, when we generate a PDF for you. So we actually run a Puppeteer service that will take that HTML and give you a PDF back. It'd be really awesome if we could just generate that PDF consistently, just using a JavaScript API locally. Sounds great. Yeah, feedback noted. So I really do hope that we can actually chat in person at the next CDS, although it was pretty special to be able to visit you at a dentist office, and this is coming from a Brit. Yeah, no, it was really great. Thank you so much for having me, and just so everyone knows, we took all the necessary COVID precautions here as well. I even took a COVID test yesterday just to make sure we're safe. Safety first. Listening to stories like Henrik's makes us proud of the work we've all done to bring new web capabilities through initiatives like Project Fugu. While Henrik's use case may be a little specialized, there are so many other use cases that you've told us are important. For example, Gravit Designer made it easy for their users to read and write files using the File System Access API, simplifying the open and save web experience. They've already started working on the new Local Font Access API, which enables their users to use specialized fonts that are installed on their own computer. We want to hear about the capabilities that you need to boost your web experience, so please keep them coming. When you bring this all together, you can create something special. We're super excited to share that Adobe Spark recently launched an impressive new PWA. It's performant, capable, and allows next-generation creatives to collaborate and co-create seamlessly. It's a pleasure to use. Hear more from Spark as they share their journey directly in our partner spotlight talk, which is right after this. When you put in the work to build great web experiences, you want to reach as many users as possible. Services like Google Search are a fantastic platform for discovery and a powerful differentiator for the web. However, today many users are habituated to discovering things through app stores like Google Play. Developers who build a progressive web app that meets the recommended quality bar can incorporate it into an Android app using a trusted web activity. And in Chrome 85, released earlier this year, we extended the support for trusted web activities to Chrome OS. And developers can now also publish their apps to the Play Store on Chrome OS. Chrome OS and Chromebooks are a great platform to showcase the power of the modern web and its expanding capabilities via Project Fugu and other efforts. Web apps there get seamless discovery and installation via the Play Store, and they're also able to deeply integrate into the Chrome OS launcher and the overall system experience. Over the past year, Chrome OS has welcomed many powerful new desktop PWAs into its ecosystem, from advanced graphics products like Adobe Spark to engaging media apps like YouTube TV and Hulu. Framer, a cross-device design and prototyping tool that was initially a single-platform app on Mac OS, was able to increase their user base by four times after releasing their collaborative web version. Today, we're announcing that Google Play is adding Play Billing support for PWAs published in the Play Store that use trusted web activity. 
This integration will be live in Chrome 88, available now in the developer channel and going stable in mid-January 2021. It'll be supported on both Chrome OS and Android and enables purchasing both digital goods and subscriptions, including via the new standardized Digital Goods API. You can learn more in these sessions on next-level web apps for desktop and what's new for web apps in Play. Now that we've shared the platform updates, let's discuss what's new with our developer tools and how they can help you build these great web apps. Over to you, Jecelyn. Thanks, Dion. There's a lot happening in the world of developer tools, so let me dive right in. To start with, we've got some great new features in Chrome DevTools. First off, grid tooling. As you heard from Una earlier, you can now debug CSS grid much more easily with the new CSS grid debugging tools. Yay! I'm particularly excited about this one. Secondly, we have added new emulations to the rendering and sensors tabs, including missing local fonts, inactive users, and much more. Next up, the most used panel in DevTools, the Elements panel, will now support many more features, including style editing for CSS-in-JS frameworks. In accessibility, we have added a few new features, including accessible color suggestions and emulating vision deficiencies, among others. And finally, we have added a bunch of new tabs and panels, such as the new media panel, to help you debug your site a lot more easily. And there's much more. So don't forget to catch my What's New in DevTools session tomorrow. Moving on, with the increased focus on Web Vitals, we have seen as much as 70% growth across all our insights tools over the summer. So we are making sure that we continue to make further enhancements to all of them, including Lighthouse, our tool that lets you work in a lab environment and provides actionable guidance on how to improve. First off, we have added three new audits to help you identify specific ways to keep your cumulative layout shift in check. Additionally, we know that third-party libraries have a huge impact on your Core Web Vitals. So as a first step, we have added additional audits for third-party embeds to help you clearly understand their impact on your site's metrics. We hope these changes will enable developers to debug and optimize their Core Web Vitals with much more ease. Watch Elizabeth's session later today on the State of Speed Tooling to learn more. Finally, I wanted to give a quick shout-out to the Workbox team, who launched more flexible integrations with Create React App, giving you full control over your service worker logic. And more recently, the team shipped Workbox V6, which is packed with some awesome updates, including new extensibility hooks for building custom caching strategies and plugins. It also has support for webpack v5 and a more flexible integration in Create React App v4. Finally, we have migrated even more of the codebase to TypeScript, making it easier to use from within your TypeScript projects. Don't forget to watch Jeffrey's session tomorrow for the rest. And with that, back to you, Dion and Ben. So we've talked about how to build fast, powerful, and safe web experiences and shown you some examples, but we're always looking to walk the walk ourselves. You know, we ask ourselves, can we create a rich web application that's fast to load and smooth to run? And given who we are, we often end up trying to pull this off with a developer tool. 
We announced Squoosh two years ago at Chrome Dev Summit 2018, and with the web about to get three awesome new image formats, we knew it was time to build version two. So we added JPEG XL, WebP 2, and AVIF, so you can start seeing how far image codecs have evolved in the last two decades. At the same time, we're making use of the latest and greatest WebAssembly features and more mature WebAssembly toolchains, making image compression go much faster in your browser. We've also asked our users what they need, resulting in a new and improved UI, and we wrote a Squoosh CLI, allowing you to compress many images all at once with all the codecs and settings that are available in the PWA. It was so productive to share the same WebAssembly binaries in the browser and Node for the command-line version. So check out Squoosh V2, and you can read about how the team built it and how you can use it to squish down your images at the link here. Now, as we mentioned at the beginning, the Chrome team is focused on working with the community, which includes all of you virtually here at CDS, to make the web more powerful, faster, and more seamless, and above all safe. It can be dizzying just how fast the web continues to evolve over 30 years into the platform. I think that's older than most of our team. Are we even allowed to say that? And we've continued to improve web.dev as your one stop for understanding our perspectives on how to get the best results out of the modern web, with tools like Lighthouse and the Web Vitals program explained fully on the site. And of course, during CDS Online this week, we'll be doing our best to explain all of the latest and greatest developments, and we're available to answer your questions and hear your thoughts and feedback. We have so much great content to share and bonus deep-dive material available on our YouTube channel. So next up, stay tuned to hear directly from the Adobe Spark team on how they built that lovely new PWA we got a glimpse of earlier. Now we're off to watch, too, and join you over in the Chrome Dev Summit Adventure. My 10-year-old niece was recently given a class assignment to create a poster. The topic was to raise awareness for endangered species. She and her friends formed a group, quickly adjusting to our collective virtual reality. They co-created this poster, edited, commented, shared links with each other. They used online web-based design tools to do this on their Chromebooks, and they created a piece of art as an ode to the dodo. Sadly, we can't bring the dodo back. But I was really excited and surprised to see how in today's world that just kind of worked. I promised myself that I will not use the phrase "new normal." So without using it, I'll make the point that the web's superpowers on desktop are even more relevant today. The reach, freedom, easy link sharing, and safety that users have always loved, amplified by new capabilities like windowing, system access, and WebAssembly, are making it easier to bring creative and productive experiences to the desktop and to large-screen devices, whether you're an aspiring student or a professional designer. And what's even more exciting is that these desktop experiences are coming to Chrome OS. Chrome OS has had particularly exciting growth here. Unit sales have grown 127% year-over-year, while the rest of the US notebook category has increased 40% year-over-year. It's quickly becoming the laptop OS of choice. It's affordable, it's capable, and it works just like the web. 
PWAs on Chromebooks are thriving. Clipchamp, the video editing software that allows you to compress, record, and edit your videos, shipped their PWA quite recently and is already seeing those users retain at three times the average rate. Another vector design app, Gravit Designer, has taken advantage of advanced web capabilities like file system access, and they're experimenting with local content access within their PWA, making the web app almost indistinguishable from their native app. In fact, they plan to make the PWA the primary experience across all desktop OSes in the future. And it's not just the design and creativity space; whether it's media, social, you name it, those categories of web apps are available on Chromebooks for anyone to use. But there's one app in particular that I want to shine a bright spotlight on today, and that's Adobe Spark. Spark makes it easy and fun for anyone to turn their ideas into beautiful visual stories and social content that make an impact, and it's been pushing the boundaries on Chromebooks. I'm really excited that we have Bem Jones-Bey, lead software developer at Adobe Spark, to talk about it: the motivations, the learnings, and what future projects are in store for Spark. So without further ado, welcome, Bem. Thank you, Archall. I'm really happy to be here. Bem, I know the story really well, but I want us to start at the beginning. Tell us about Spark. How did it come about? Happily. So we started working on Spark about six years ago as a start-up inside of Adobe. While we decided on native apps as the mobile solution, we saw the web as an ideal fit for the desktop experience. It took time to get product-market fit, but we succeeded, got more resources, launched experiences on Android, and greatly increased the capabilities of the web app. We recently optimized the web app to be a PWA, installable on Chromebooks and in any modern desktop browser. We're all really excited about that recent PWA launch. I know you and I have talked about the importance of Chrome OS to Spark, especially in the classroom, so can you tell us a little bit more about that? Yeah, sure. One of our foundational goals is to empower the next generation of digital creativity. So education is an important market for us. We have 30 million teachers and students using Spark around the world, and Chromebooks are used widely in education. The best part about the web app, and now the PWA, is that it works everywhere on desktop, from teachers sharing URLs with students to the student easily accessing the link when they're at home. The frictionless and seamless experience is a big draw for us, especially in these times. We were also excited about increased distribution via the Play Store and the ability to have OEMs pre-install Spark on their Chromebooks. All of this gave us a strong business case to do the work needed to have a good Chromebook PWA experience. That's great. We've seen that sometimes finding the right use case and audience-market fit for a PWA can lead to business impact. It makes it so much easier to track how much your users love your service when you can track things like installs and engagement. But it isn't easy. So what were some of the challenges Spark faced while building this PWA? Yeah, you're right. It really wasn't entirely easy. So as far as challenges, I think it's best if we start at the beginning. So we used the core PWA checklist as a reference. You bring up a great point, because the core PWA checklist is one that folks just shouldn't miss. 
There's a short link up there. Do go check it out if you're looking to build a PWA or want to use it as a reference to optimize your current experience. Not a lot of people know we've updated it, and it's important to use it as a starting point. But thanks for bringing that up. I definitely forget that sometimes not everyone is on the same page when we're talking about these things. So yeah, most of the requirements listed there really seemed fairly straightforward to us. But a couple of them stood out as challenges: offline support and performance. So for offline support, we thought that we had to make our entire app usable when the user is offline, which would be a major effort given that it was never designed with that in mind. After some research, we got more clarity. The point of that requirement is to make it seem more like a native application. There are native applications that only work with an internet connection, but when you launch them offline, they will still launch and tell you what the problem is. While it's ideal to give users as much functionality as possible when offline, the minimum requirement is that the user doesn't see a dino. Yeah, I know. As much as we love the dino and other extinct species, you don't want to see that when you're trying to get work done. For sure. It really takes you out of the app experience. So we ended up designing a very nice message that's displayed to users when they launch Spark and are offline. It still probably won't make you completely happy, but you'll know that the app hasn't crashed, and it gives you an idea of what you might do to fix the problem. In addition, it keeps you in our app experience by speaking with our voice. So to be clear, this is only the beginning. We definitely will be working to build an even better offline experience. I really love this message because it tells your users that you care, it's in your own voice, and you make an important point that it doesn't have to be the full offline functionality to start with. You can take small steps to get there. I'm glad that you have that plan in mind. What about performance? I know that's a big one. Yeah, performance is a lot of work. If we go back to the beginning, our Lighthouse performance score was pretty bad, and we knew we had to improve that to get a native feel. So we set a goal of getting a performance score of 80 on desktop, which was pretty ambitious. Once that goal was set, though, the biggest challenge was figuring out where to start and how to keep it sustainable. So the first step we took was to make sure we had good automated metrics. When doing that, we looked at both field metrics and lab metrics. Field metrics directly reflect the performance as experienced by real users. This allows you to know if the improvements you are making are actually leading to a better user experience. Our field measurement system uses a combination of New Relic and a custom performance monitoring system. Our custom system is built using PerformanceObserver and the User Timing API, and records data to Splunk. We then created dashboards in Splunk, like the one you see here, to monitor these metrics. So field metrics really give you a flavor of the real user experience, and you're able to see whether the work you're doing is actually making a difference. What about lab metrics? How do you track those? 
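Before moving on to lab metrics, here is a minimal sketch of the kind of custom field instrumentation described above, combining the User Timing API with a PerformanceObserver. This is not Adobe's actual code; the mark names and the reporting endpoint are illustrative assumptions.

```js
// Report every User Timing measure to a (hypothetical) metrics backend for dashboarding.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    navigator.sendBeacon('/perf-metrics', JSON.stringify({
      name: entry.name,
      duration: entry.duration,
    }));
  }
}).observe({type: 'measure', buffered: true});

// Elsewhere in the app: mark interesting moments and measure the span between them.
performance.mark('editor-load-start');
// ... once the (hypothetical) editor UI becomes usable:
performance.mark('editor-load-end');
performance.measure('editor-load', 'editor-load-start', 'editor-load-end');
```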
So, yeah, lab metrics are the best way to test the performance of features during development, and they are invaluable for catching performance regressions before they happen. We primarily use Lighthouse for lab measurement. It does an excellent job of presenting performance metrics in an easy-to-understand and actionable format. We then have automated runs of Lighthouse that enforce a performance budget on our PRs and our CI environments. This makes it much harder for performance regressions to make it to production. With all of these metrics in our tool belt, we were able to make a lot of meaningful performance improvements and reach our performance goal. While, you know, this is of course a lab metric, we also saw some improvement in our field metrics as well. We love that score. Not just in line with what you had in mind, but you surpassed the goal of 80 you had set out for Spark. So you have a robust performance measurement and tracking system in place. How are you going to keep up this momentum? That's the hard part. Yeah, definitely. You know, I think the next step for us is to integrate Core Web Vitals into our field measurement suite. Had they been around when we started our project, we definitely would have used them. We like the fact that, like Lighthouse, Core Web Vitals give you a simple set of actionable metrics, and they're integrated everywhere now. Search Console, PageSpeed Insights, and New Relic all measure them out of the box. And we can always use the web-vitals library to integrate them into custom systems like our custom monitoring. The rest of the day is actually focused on performance sessions and Web Vitals in particular. So folks should definitely check them out. I can't emphasize enough how important it is to keep performance front and center as you're looking to optimize any web experience. And I'm glad that Adobe is doing just that. There are many new advanced web capabilities that allow you to build native-like web experiences. What are some of those that you're looking into? So yeah, we're definitely interested in those. I'd say probably the primary thing is link capturing. Because right now, if you install the Spark PWA on your Chromebook, you launch the application from your desktop, and that's great. I mean, it works wonderfully. However, if you're browsing the web and you come across a link to Spark, that link won't launch the installed PWA. It will just open a browser tab. When we have link capturing, the link will open the installed PWA. I believe this is a game changer for the user experience. Another one that comes to mind is the File System Access API. This will fix a big gap for content creation apps like Spark, allowing seamless access to a user's local content. I know that Spark isn't the only Adobe team that's excited about this API. Speaking of things that many teams at Adobe are excited about, a big one is WebAssembly. We have a lot of technology implemented in native languages like C++. WebAssembly will allow us to bring this technology to the web to enable even better creative possibilities. And I mentioned this at the beginning: some of this work is actually just amplifying the existing benefits and advantages of being on the web. If you want to learn more about the new capabilities and how that fits in this world of next-level web applications, do catch PJ McLachlan's talk tomorrow. You heard in the keynote that PWAs are now listable in the Play Store. 
Bem, when can we expect to see Spark there? So from our conversations, it's pretty clear that Chromebook users look for apps either in the browser or in the Play Store. And we want to be where our users are looking for us. Our experience with the Play Store on Android has been positive. So while I can't give any specific timelines, we are excited to see how being in the Play Store for Chrome OS impacts user acquisition and engagement for our web application. And I'm glad that you're taking your time to do this, because it's important that the web app is high quality and meets the performance benchmarks for using a trusted web activity and listing in the Play Store. So we can't wait to see the Spark PWA in the Play Store. If I could summarize this journey: it started from a great desktop web app, you added basic install support and then offline support, then focused on performance, which is really great, and now you're looking at advanced web capabilities and, of course, discoverability, making sure that wherever users might be looking for Spark, they find it. So thank you very much, Bem, for sharing this journey and the story with us. I've learned a lot and I hope our audience has as well. Thank you very much and take care. It's been great talking to you too. Thank you for having me and I'm looking forward to our next conversation. Ladies and gentlemen, welcome to Archeculate. It is Jake explaining words of my choosing to Paul. He cannot say the word, he cannot say words that rhyme with the word, and he cannot draw the word or make sounds. He has to describe them. Are you people ready? Ready to go? Yes. Off you go. Okay, so you've got passwords for websites, but if you want to make it more secure you add this in. So you log in, you get an SMS or something, and that's called... Two-factor. 2FA? Okay. The fans of Vim really hate this text editor. Fans of Vim. Emacs? Yep, thank you. Okay. On a joypad you've got the analog sticks, but you've got another thing there to do directional stuff. The D-pad. D-pad. Thank you. Okay, so at the end of a line you will sometimes leave some punctuation. If you're doing an array you can miss one of these out on the last item, but I like to leave them in there. A comma. A comma. It's kind of just hanging around there, isn't it? It's sort of... The trailing comma. It's kind of hanging off the edge there, isn't it? I have no idea what this is. So if you were sort of holding on to a cliff edge, you would describe it as, like, well, I'm really doing this thing. I think... Okay. So it's got two syllables. The first syllable is something an American would say if they're not wanting to properly swear, so they'll say this. Oh, man. This is not a good explanation, I'll be honest with you. Okay. Comic book character in the UK, in a Scottish comic, he's Desperate... Dan. Right. Dangling. Thank you very much. Excellent. Oh, wow. Dangling comma. So you've... You can call these in JavaScript. Function? Yes. But sometimes you just do that straight away. An immediately invoked function expression? Yes. Excellent. Thank you very much. Okay. So in JS we have the window variable, you've got the self variable, and a new standard was made to join them together. All right. So if you were, say, in the late 90s and you really wanted some music but didn't want to pay for it, you would use this software. Nearly, keep naming them. Oh, what was it called? No, it begins with an S. If you had a very short sleep, what would it be? Napster. Excellent. Thank you. All right. 
So if you want to use the nightly build, it's called this. Canary? Excellent. So if you're starting a project with a set of files, you would call it this. It's also the name of a CSS framework. All the sites look the same because they all use this set of CSS. Correct. Okay. So we... Blink came from WebKit, but WebKit came from Konqueror. And what was the... Yes, absolutely. And we're out. Hello everybody and welcome to the State of Speed Tooling at Chrome Dev Summit 2020. My name is Elizabeth Sweeney and I'm a product manager on Chrome's web platform team. And I'm Paul Irish. Today we're going to talk to you about some of the latest in Lighthouse scoring, third-party audits, the Chrome User Experience Report, and Core Web Vitals actionability. Elizabeth is going to get us started. So today we'd like to start off by sharing a few things about Lighthouse's performance score, as well as some potential updates coming next year. So the goal of the Lighthouse performance score is to make sure that you have the ability to gauge how well your page is likely to deliver a good experience in real-world conditions with your users. To understand the goals of the Lighthouse score, let's take a brief moment to remind ourselves of why it exists in the first place. So here we have real-world data for a page's first contentful paint, or FCP. Because this is field data, it is recorded from real users on their real devices. Every time one of your users loads your page, it adds a single data point to this set. Because of this, a single field metric represents all of your users: thousands of data points, variable cache conditions, network and device environments. Real-world data presents you with all sorts of variables and unknowns. When you're trying to optimize based on data that represents so many different conditions, it's difficult to know where to start. And this is why synthetic or lab testing is so useful. When you run Lighthouse on your page and get an FCP value, it is a single data point collected in real time for you, calibrated to represent a user in your upper percentiles. What this allows you to do is to use a single set of values as representative of your user's experience on your page so that you can dive deep and debug against that. In other words, if you're optimizing against your Lighthouse performance score, because it is calibrated to be representative of your upper percentiles, you are optimizing for the majority of visitors to your page. The Lighthouse performance score is a tool to prepare yourself to succeed with users in the real world in the dimensions of quality that they care most about. So that's basically why we have it: the closer you are to that 100 score, the less you're leaving up to chance for what can go wrong in the field. Okay, so we quickly reminded ourselves of why we need it, but what is in it? The Lighthouse score is a weighted, blended combination of the user-centric metrics that you see in the report. It can be viewed as a recipe with all of the important ingredients for a good user experience. Those ingredients include loading performance, which is measured by metrics like first contentful paint, speed index, and largest contentful paint. One of the key ingredients for a good experience on the web is to be able to see content and to see it quickly. Interactivity is another key ingredient. Metrics like Time to Interactive and Total Blocking Time allow you to measure how quickly your page is going to be able to respond to user input. 
Another primary ingredient to make your users happy is the stability of your content, measured by metrics like cumulative layout shift. It's never any fun to have things jumping around on you. So we have all of these ingredients, but we often get asked a very good question: how do Core Web Vitals fit into the Lighthouse score? Well, they're right there. Core Web Vitals represent the table stakes of any good experience, which is why we have them included in our scoring recipe. Not only that, but there's been a lot of work done to make Core Web Vitals more actionable in the Lighthouse report, and Paul will be speaking about that a little bit later. Just a brief reminder that first input delay, which requires a real-world user to measure, can be optimized by using the lab proxy metric, total blocking time. TBT will help give you a sense of how responsive your page is going to be when you have a real user engaging with it. This is the current Lighthouse performance score. And as you can see, the various metrics are weighted differently based on what we have found to be the most important for a good user experience. Core Web Vitals, with one exception, are the most heavily weighted metrics in the Lighthouse performance score. So when you're optimizing against the Lighthouse score, you're setting yourself up to have more success with Core Web Vitals in the field. Now, that one exception is the weighting of CLS, which is weighted less than the other metrics. When Lighthouse 6.0 came out, CLS was still a new metric, and we wanted to make sure that we had time to receive feedback from the ecosystem before we weighted it more heavily. Now that it has had time to mature, we want to adjust the weighting to make sure that we're aligned with Core Web Vitals. We are still calibrating our scoring curves and analyzing thresholds, so we don't have specific figures for you today, but an increased weighting of CLS is one of the primary changes you can expect in our next scoring update in Q2 of 2021. We'll add a link at the end of our slides for where you can stay up to date with the latest changes, but we also encourage you to check out the Lighthouse scoring calculator, where you can explore the details of your current scoring composition. Okay, switching gears a little bit to third-party audits. We know that a big part of web experiences is delivered using third-party code, and developers don't have as much transparency or control over the performance impacts as is ideal. Third-party services can deliver a lot of value, but they can also come with performance costs. Our goal is to make those costs as transparent and attributable as possible so that developers can make informed decisions and reason about trade-offs when choosing what to include in their sites and how to incorporate it. An example of the work we're doing to make performance impacts transparent is the Minimize Third-Party Usage audit. This audit is designed to help you break out what third-party code is impacting your performance and by how much. As I mentioned a moment ago, the intention is to minimize the costs of third-party code on your users' experience. Another new audit we're shipping surfaces opportunities to lazy-load third-party code, and for that I'm going to pass it to Paul to share more about this new audit. Thanks, Elizabeth. So I want to take a moment and consider a YouTube embed. Now, when a YouTube embed on your page loads, it's an iframe, it loads in, but then there's scripts and stuff. 
To be honest, the amount of resources loaded in is a little heavier than you'd expect. Now an alternative to loading in that full iframe and everything with it all at once at the beginning of the page load is to load in something that looks just like it, but it's far more lightweight. It can look exactly the same with the play button. And once the user engages with that play button, then we can load in the full fat embed behind it. So we call this pattern a facade, and we've been seeing this become a little bit more popular. It's a nice web-friendly technique. We've added a brand new audit to Lighthouse that captures opportunities where you can employ this pattern. Right now the audit finds opportunities like video embeds and chat widgets, but if there's any facade that you'd like to see recommended, please go to the web.dev documentation to see how to submit them. And now we have a few updates on the Chrome User Experience Report. At each of our events, we're happy to update you on the growth of the CrUX corpus. And today we are announcing that we have field data for 8 million origins. The CrUX data is available via BigQuery, the CrUX dashboard, and the new CrUX API that was launched in June of this year. There's also some new stuff that has been added to the API: in fact, the effective connection type property was added to the payload. And if you're interested in this API, there's great documentation on how to make use of it. Moving on to Core Web Vitals actionability. It's important to us that you're equipped with the tools to not only measure how you're doing with Core Web Vitals, but to actually improve them. So I want to take a look at this trio of metrics and we're going to kind of get a cheat sheet for how we can improve these things with the tooling that we have. And for each of these metrics, I'm going to go through and look at two key steps: we're going to diagnose what's going on and we're going to ameliorate, or make them better. Alright, so LCP. Let's diagnose it. The key question here is, okay, we had a largest contentful paint. What was that paint? In Lighthouse, you can look at this specific audit. It'll tell you the DOM element that is associated with the largest contentful paint. You can see the same thing in DevTools. If you capture a trace, you select the LCP item and get the metadata about that DOM element and when it happened. Now let's make it better. Because this is a paint, we want this paint to happen earlier in the page load. This is mostly a matter of optimizing your network waterfall and your loading strategy of critical bytes. These are all the Lighthouse audits that help you out with that. Moving on to TBT and first input delay. Now the key question here is: where are the long tasks? Each of these long tasks is contributing to our total blocking time and input delay. In DevTools, if you record a trace and look at the top of the main thread, you'll see tasks. And if those tasks are above 50 milliseconds, it's a long task and DevTools will tell you that. I also want to point out something a little bit newer in DevTools: at the bottom of the screen you'll see the total blocking time metric. This is computed for you on the fly based on the trace that you're looking at. In Lighthouse, you can see the same kind of information about your longest tasks. In fact, we summarize the longest long tasks in descending order. You can see how long they are and what URL they're associated with.
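If you want to spot those same long tasks from real users' sessions rather than just in a lab trace, a minimal sketch using the Long Tasks API might look like this (the console.log is just a stand-in for whatever reporting you actually do):

```js
// Watch for long tasks (main-thread work over 50 ms) in the field.
// entry.duration is in milliseconds; during load, anything beyond the first
// 50 ms of each task contributes to Total Blocking Time.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('Long task:', Math.round(entry.duration), 'ms', entry.attribution);
  }
});
longTaskObserver.observe({ type: 'longtask', buffered: true });
```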
And if you have a lot of third parties on your page, you can look at the audit that Elizabeth mentioned before. For each third party, we also include the blocking time contribution for them. Now, how do we make this better? Well, this is mostly a matter of optimizing our main thread. We have to take inventory of all the work that's happening, and we want to take that work and break it into chunks. We want to spend less time doing it. We want to defer some of it and we want to just straight up delete some of it. Just not do any of it at all. The seven audits here help with all those things. Next, for cumulative layout shift. The key question is, okay, we had shifts, but what was it that shifted? In Lighthouse, you can see these DOM elements were the elements that shifted around. And for each of them, you can see their numeric CLS contribution. You can see a similar thing in DevTools. Record a trace, look at a layout shift event, select that, and you can see, okay, what were the shifts? What were the rects that moved around? Now, the next question is why did it shift? Because we found the shifted elements, but those shifts were actually the side effects of the real culprits. This is a little bit trickier, but we've added some new stuff to Lighthouse to help out with that. In fact, we have four completely brand new audits that attack this problem from different directions. So we have audits that look at whether you have animations that are not running on the compositor and that could be running a lot smoother without affecting layout shifts, whether you have image elements that do not have fixed dimensions, whether you have iframes that are being added and perhaps shifting things beneath them down, and whether you have any web fonts that are not loading in an optimal pattern. Thanks, Paul, for that cheat sheet on Core Web Vitals and making sure that it's actually actionable. And there is actually some other advice that will be shared in talks coming up. So stay tuned for talks like Fixing Common Web Vitals Issues with Katie and Exploring the Future of Core Web Vitals with Annie and Michael. Yep. These links here capture some of the resources in this talk. And that's it for us. Thank you all very much. Thank you. There are a handful of challenges and points of confusion about optimizing for web vitals that I see really frequently. So the goal of today's presentation is to cover as many of these issues as possible in the next 10 minutes. In particular, the three themes I'm going to be talking about are cumulative layout shift, third-party scripts and RUM. I'm not going to be giving you an overview of these topics. Instead, I really want to stick to discussing some of the edge cases and details that people tend to find confusing, as well as highlighting some performance techniques that you might not be aware of. I'm going to start today by walking you through how CLS measurement is implemented in code. I want to do this because I think seeing the implementation in code clears up a lot of the questions and confusion around when CLS is reported and finalized, particularly for single-page applications. CLS is measured by using a PerformanceObserver to observe layout-shift entries. When a layout shift occurs, the performance observer invokes a callback function, and this callback function adds to the running layout shift score. In other words, CLS. If you wanted to, you could also add code inside that callback function to report this intermediate value of CLS.
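As a minimal sketch of the implementation just described, the observer below watches layout-shift entries and keeps a running CLS score; the commented-out reporting call is a hypothetical helper, not part of any library:

```js
let cls = 0;

const clsObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Shifts within 500 ms of user input are flagged and excluded from CLS.
    if (!entry.hadRecentInput) {
      cls += entry.value;
      // Optionally report the intermediate value here:
      // sendToAnalytics({ cls }); // hypothetical reporting helper
    }
  }
});

clsObserver.observe({ type: 'layout-shift', buffered: true });
```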
Although you can do that, it's not necessary. Ultimately, the only value of CLS that matters is the final one. Final CLS is not determined by taking a bunch of CLS entries and seeing which one occurred last. Instead, it's determined by listening for the visibility state change event. User actions like navigating to a new page, switching tabs, minimizing tabs, closing the browser: they're all examples of events that cause a document's visibility state to change from visible to hidden. When a document's visibility state changes from visible to hidden, you know that the value of CLS at that moment should be reported. One tool that I really like for debugging layout shift is the Layout Shift Regions option in DevTools. You can access it from the command palette or the Rendering tab. This feature highlights page elements that have shifted as they are shifting. In other words, it's not highlighting the root cause of the layout shift, but rather the affected elements. I personally find this tool most helpful when combined with screen recording. Layout shifts can happen very, very quickly, and this can make it difficult to debug them in real time. However, with screen recordings, you can step through the page load process afterwards as many times as you want at your own pace until you figure out what's causing the layout shift. Another thing that I want to mention is that you can augment your web vitals reporting to provide you with more information about the circumstances under which a particular performance measurement was observed. This includes things like reporting on connection type or scroll position, as well as pieces of data that might be unique to your app. For example, if your app uses a debugging token. Layout shifts that occur within 500 milliseconds of user input do not count towards CLS. The asterisk here, though, is that scrolling is not considered an excluded user input. In other words, if a user scrolls on a page and a layout shift occurs immediately after, it's still going to count towards CLS. The reason why scrolling is treated a little bit differently than these other user input events is that, if you think about it, if a user is scrolling on the page, there's really no good reason why a layout shift should be occurring. On the other hand, when a user is clicking on the page, it's much more likely that they're trying to navigate to a new page or opening a nav bar. And these are things that can trigger layout shifts, but they're probably layout shifts that the user doesn't mind or they're wanted because the user is trying to accomplish something. Keeping with the scrolling theme, another thing that I want to mention is that most lab tools such as Lighthouse or WebPageTest do not scroll down your page. And this can be a blind spot when it comes to measuring and identifying CLS in lab environments. And I'd say this probably affects mobile a little bit more than it does desktop. At the same time, though, this blind spot might not be as big as you think it is. Keep in mind that layout shifts only count toward CLS if they're visible to the user. In other words, if there's a layout shift that occurs below the fold, but the user hasn't scrolled down the page, it's not going to count toward CLS. Lastly, keep in mind that sometimes there can be no correlation between the CLS of a mobile and desktop site. The desktop and mobile versions of the same site often use different layouts. They use different UX patterns.
And as a result, they can exhibit very different layout shifts. That brings me to my next point, which is that code is only part of the solution to layout shift. Some layout shifts can be fixed strictly through code. This usually consists of adding the width and height attributes to images, videos, and iframes. However, many layout shifts are largely the result of bad UX patterns. In other words, the product was designed that way. An example that I've seen really frequently is sites popping in banners at the top of a page to make an announcement. And when this banner pops in, it pushes everything else on the page down. Optimizing UX patterns for core web vitals is a whole topic in and of itself. Luckily for you, Garima will be discussing this in the talk immediately following this one. I highly recommend that you stick around and watch it. That's all I have to say about CLS. I'm now going to talk about third-party scripts. In the past year, we've heard a lot about lazy loading images and lazy loading iframes. Lazy loading can also be used to load third-party scripts. However, it's a bit more of a delicate art form. And the APIs are also different. On the screen, I've listed some APIs that are available for lazy loading scripts. There are a couple things I want to note here. One is that the delay set by setTimeout does not represent a guarantee as to when the callback function will execute. Instead, it's the minimum amount of time until that callback function will execute. This is because the callback function cannot be executed if the main thread is busy. This behavior sometimes makes setTimeout frustrating to use because it makes it a little bit unpredictable. But in the context of loading third-party scripts, it's actually kind of interesting because maybe you don't want to, or you shouldn't, be loading third-party scripts if the main thread is busy. In addition, I want to note that if you want to trigger lazy loading based on user scrolling, you should really use IntersectionObserver to do that rather than listening for the scroll event. IntersectionObserver is going to be much more performant than listening for the scroll event. Lastly, I don't see many people using the PerformanceObserver for lazy loading, but it does open up some really interesting possibilities for doing things like waiting for a particular performance event to occur, for example first paint or first contentful paint, and then triggering script loading. Another issue with third-party scripts that I hear a lot is that engineering teams are really frustrated because they feel like marketing teams just keep adding more scripts to their page and there's nothing they can do about it. Ideally, you would get those teams on board with performance, but if that's not an option, you might be able to make some improvements to the situation by taking advantage of some of the features available in tag managers. These are features that give you the ability to restrict tag usage as well as get greater visibility into the usage of tags. This slide lists some useful features of Google Tag Manager, but I would expect that you would be able to find similar features in other tag managers as well. In this last section, I'll be talking about techniques that you can use for improving your RUM setup. A question that I commonly get is: how can you get page-level performance data? Page-level performance data is technically available in both CrUX and PageSpeed Insights.
However, in practice, you might find that it's not available, and that's because page-level performance data will not be exposed if there's not enough performance data available for that page. If you're running into this situation, you might find it helpful to look at Search Console. Search Console is a little bit different because it exposes performance data based on URL groups. Search Console URL groups are groupings of URLs with similar HTML structure, the idea being that structurally similar pages are going to exhibit similar performance characteristics. As a result, pages that might not have enough performance data to be displayed in PSI or CrUX could potentially be displayed in Search Console. In addition, this feature provides you with a way of forecasting the performance of newly added pages to your site. For example, the screenshot on the screen shows the URL groupings that Search Console detected for all the author pages on web.dev. If I were to add a new author page to web.dev, I technically don't know what that page's performance is going to be like, but I can get a pretty good idea by looking at the aggregate page performance of all the existing author pages on web.dev. Search Console is a great tool, but if you want more detailed or more frequent performance data than what Search Console provides, you will need to collect your own performance data. There are two paths that you can go down when it comes to collecting your own performance data. You can set up the tooling yourself or you can sign up for a third-party service. If you want to set up the performance tooling yourself, we recommend using the web-vitals.js library that is available on GitHub. web-vitals.js is a small, lightweight script that you include on your page that provides an API for measuring the Web Vitals metrics. The advantage of using this script rather than something you implement yourself is that it gives you the assurance that your implementation is correct and therefore the measurements that you collect are going to match those found in Google tooling. Alternatively, there are a wide variety of third-party services that support Web Vitals. Most of these are paid services; however, Cloudflare Browser Insights is available for free and supports Web Vitals. That brings me to the end of this presentation. Thanks for watching. See you soon. Hi, my name is Garima Mimani and I'm a Web Ecosystems Consultant at Google. Today I will talk to you about a few UX patterns that you can implement to achieve good user experience while optimizing for Core Web Vitals on your site. When I visit a web site, I have certain expectations from the site. I want content that is useful and interactions that are smooth and predictable. Site owners and developers try different UI patterns to meet the customer expectations while also balancing business needs. But the big question is: what really makes for a good user experience? Well, there is no right answer to that question. User experience is different for everyone. Great user experience is intangible and can be difficult to measure. A way to measure the quality of user experience is by answering these three questions. Is it happening? Is it responsive? And is it delightful? And the metrics that correlate to those pillars are largest contentful paint or LCP corresponding to loading, first input delay or FID corresponding to interactivity, and cumulative layout shift or CLS corresponding to stability.
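Circling back to the web-vitals library mentioned a moment ago, a minimal setup might look like the sketch below; sendToAnalytics is a hypothetical helper, and the function names are those of the version of the library current at the time of this talk:

```js
import { getCLS, getFID, getLCP } from 'web-vitals';

// navigator.sendBeacon keeps working even while the page is being unloaded.
function sendToAnalytics(metric) {
  navigator.sendBeacon('/analytics', JSON.stringify(metric));
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
```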
After research on millions of pages we found that if a site meets the Core Web Vitals thresholds, users are 24% less likely to abandon the page before the first content is painted. Raise your hands if you have had this experience: you navigate to a site, it takes a long time to load, and when it does, content jumps around. Even though I don't see you, I know you are with me. Such experiences are disruptive to the user, and while they may endure it a few times, if it keeps repeating the user may not come back to your site. So let's see how this experience unfolds. A user navigates to your site; the static content is rendered first, no layout shifts here. The promotional banner comes in next and a layout shift is introduced. You think this is it and you move on to the next content, but no, an ad just popped up and added another layout shift. Nowadays we are increasingly seeing the addition of informational banners related to COVID causing further layout shifts. The overall impact of each of these shifts is a poor CLS score and a poor first impression for the user. So what can you do to mitigate this? The golden rule is: avoid any layout shifts in the active viewport, and that means setting a fixed width and height on the images or ad containers in the active viewport. So when the content comes in it will not cause any layout shift. If you have dynamic content injected below the current viewport, you can go ahead and resize those elements and it will not impact the CLS. A question that we get asked a lot is that these banners and ads are injected dynamically and it is impossible to know beforehand what their sizes will be. This is indeed a tricky question, and I want to emphasize that providing good user experience is everyone's responsibility and therefore requires a shift in business mindset. The website developers, the UX designers and the marketing folks must align on standardizing the size of the most important content in the first viewport. So what would you do if you still don't have the fixed size in advance? Well, you can set the min-height of the placeholder. With this in place, if the content returned is larger than the min-height you will see some layout shift, but it will not be drastic. To know what min-height to use, look into your historical data and see what sizes have been rendered in these placeholders and adjust the min-height based on that data. Another alternative for marketing banners that we often see are overlays. While they do not impact layout shifts, they could impact your LCP if they are too large. Overlays also do not necessarily need to be fixed to the viewport. So for a smooth transition to the content, you can have the overlay potentially scroll with the content, and the layout can be updated to insert the overlay inline at the top as soon as it is scrolled out of view. Another way layout shifts can happen is when the ad slot collapses when no ad is returned, pushing the content up. The best practice here is to avoid collapsing the reserved space. You can keep the placeholder with simple text saying it is reserved for ads, replace it with another image, or even better, set up house ads in the ad server. This ensures that something is always returned for a given ad request. Prisma Media is a French publisher. Using the techniques I mentioned they were able to improve their CLS score dramatically.
They used aspect ratio to determine the image placeholder size for responsive images and also added textual content and small iconography to help users easily identify the advertising spaces. We looked at some UX patterns for optimizing page load. Let's look into user scroll behavior now. User scrolls, infinite scrolls, or scrolls using the load more button do not by themselves initiate any layout shifts. The issue arises when you do not have placeholders for the new content that is being injected. So the experience may look like this. The user scrolls down, the footer becomes visible above the fold, new content is injected, causing the footer to get pushed down. These individual shifts in infinite scrolls can add up to a poor user experience and a poor CLS score. So how can we mitigate the shift? One, reserve enough space for the content before the user scrolls to that part of the page. You can use skeleton UI placeholders for it. Two, remove the footer from the bottom of the page, especially for infinite scrolls. And three, pre-fetch data and images for below-the-fold content so that by the time the user scrolls that far it is already there. Now let's see how the page interacts on button click. When the user interacts with your page, they expect the response to be instant. Users do not want to wait and keep guessing whether the click actually worked. And if the response to the click results in a layout shift, it adds to an already poor experience. These interactions are very critical to your business because if your user has an intent to convert, your inability to provide timely feedback to their click could result in losing them. So if you know beforehand that a given interaction will take time to respond and will cause some layout change, provide instant feedback to the user that you are processing the request, and put placeholders for the changed layout so that when the content actually comes in, it does not introduce any further layout shifts. It is important to note here that you have 500 milliseconds from the user input to make any layout change, and that will not affect the CLS score. If you are server rendering the HTML, consider making the initial state render with all buttons and controls disabled if those buttons and controls will require JavaScript to work. When the JavaScript finishes downloading and executing, remove the disabled attribute and add the necessary event listeners to make it do something. If the user sees that the button is disabled while the page is loading, they will be less likely to interact with it, and this could help you with your FID metric. Apart from these UX patterns there is so much more that you can do to optimize for the Core Web Vitals metrics. NDTV is the largest media company in India. They prioritized the largest content block by delaying third-party requests and saw increased engagement and consumption of content, directly proportional to increased revenue for their website. Yahoo Japan is the largest site in Japan and has been working on optimizing for Core Web Vitals metrics through cross-functional efforts in their organization. These efforts on their news site resulted in a strong return on investment and some great improvements in their business metrics. So before I go away, if you take three things from this talk, they would be: 1. Do not insert content above existing content unless it is in response to a user interaction. Provide a smooth and predictable experience to the user. 2.
If content needs to shift in response to a user interaction, do so immediately; do not wait for the network request to finish. For requests that may take more than 500 milliseconds to complete, use placeholders to reserve space as the content loads. 3. For all interactive elements on the page, ensure that if it looks interactive it actually is interactive. If you use server-side rendering, consider rendering your pages in a loading or disabled state and then updating them to look interactive only after hydration. Thank you and enjoy the rest of your sessions today. Hi everyone, I'm Annie and I'm here with Michael. We work on the team here at Google that develops the web vitals metrics. Today we want to share our latest thinking on the future of web vitals, specifically the core web vitals. As a reminder, the core web vitals are the subset of web vitals metrics that apply to all pages and that we believe are the most important metrics for sites to measure. In addition, core web vitals are unique in that they were all designed with a couple of key principles in mind. Principles we plan to stick to even as we evolve the metrics in the future. First and foremost, each core web vital measures a real aspect of user experience, so things that users can see, like how fast a page loads and how quickly it responds to user input. We don't plan to include metrics that are not a direct measure of user experience, like how many bytes of JavaScript were loaded or the timing of the network requests. Second, core web vitals have measurement support across both the field, for real users, and the lab, to get deeper insights. For any updates we make, we work hard to make sure that our tools and APIs give developers the insights they need to be able to understand and improve the metrics on their pages. Third, we aim for each vital to be concise and clear. We want each metric to cover a distinct aspect of the user experience. We don't plan to cover the same experience with multiple metrics. When we announced the core web vitals we said we would evolve the metrics over time, but we also promised to do so on a predictable annual cycle. As we gathered feedback about the initial core web vitals, one thing became abundantly clear: people really like having a small set of metrics that are clear across all our tooling. We really want to keep the core web vitals concise moving forward, and so with every update we're also carefully weighing the cost of changing things. So for the first annual update we're mostly thinking in terms of smaller adjustments to improve the quality of the existing metrics and respond to feedback we've received from the community. It's still several months out, but we wanted to give you an idea of what we're thinking about so we can hear feedback. We've set up an email list, web-vitals-feedback@googlegroups.com, and we'd love it if you can send us your thoughts there. Michael has put a lot of thought into where we could improve and I'm really excited for him to give you an update on where we're at. Take it away, Michael. Thanks, Annie. First, progressive loading. As we mentioned, we've tried hard to keep the core web vitals concise, but we've also gotten a lot of feedback here, and for years we've been talking about how there are multiple key moments for the user as a web page loads. Largest contentful paint measures when the main content is finally visible, but getting content on screen quickly is critical too. First contentful paint measures that part of the experience.
Here's an example of a page that's been highly optimized to have a quick progressive load. I know it's subtle, but the way the text and layout come in quickly even while the main image is still loading really makes it feel fast for the user and really makes the content usable quickly. We'd like the core web vitals to be more inclusive of this initial part of the experience as well as when the main content loads. So we're considering adding first contentful paint as a core web vital. Second, interactivity during load. To recap, first input delay measures the time from when a user clicks, taps or presses a key until the browser starts to process that input. It's designed to capture delays that result from the main thread being busy, especially during page load. Here's a quick animation that demonstrates how an input can be delayed due to the main thread being busy during load. In this example the visual update came quickly as soon as the main thread was unblocked. But since first input delay doesn't include the time to handle the event and update the screen, and since the delay is sensitive to the timing of user input, we found that some pages can have input which feels sluggish even when they meet the threshold for a good user experience on this metric. Currently first input delay has a threshold of 100 milliseconds at the 75th percentile, but our initial research shows that tightening this threshold to 75 or even 50 milliseconds could measure the user experience more accurately. Third, visual stability. We measure unexpected movement of page content and sum up all movements into a single score, cumulative layout shift. An individual layout shift occurs anytime an element which was already visible changes its position on the page, and is scored based on the size of the content and the distance it moved. 2020 saw several improvements to individual shift reporting, such as fixes for hidden content shifting and video controls. You can always follow along with updates to CLS and all other metrics in Chrome over at bit.ly slash chrome speed metrics changelog. We think it's important that cumulative layout shift captures layout shifts even after load. Some of the worst experiences are due to shifts later in the page lifetime, such as this example here. However, we also acknowledge that CLS is not always perfect, especially for long-lived pages or single-page applications. Moving forward, we're looking at options that are better suited to normalizing the individual shift data for such cases. Okay, but what about bigger changes? Beyond the next update, we're thinking a lot about the user experience after the page is loaded. First, we're looking into ways to have better support for single-page applications. Take a look at this example. As the user clicks through the site, it feels as if they're loading new pages. But the site is a single-page app, and each transition isn't counted with its own largest contentful paint or first input delay. This is a tough problem because each page can have its own method for these transitions and sometimes it's not even clear whether a transition is occurring. We're looking at how we can create metrics that work better for single-page apps. Second, we also want to better capture how well a page responds to user input. Our first interactivity metric is first input delay, but it only counts the first click, tap or key press, and it only measures the time until the browser starts processing. We really want to include all the inputs on the page, not just the first.
And we'd also like to include the full time until the screen updates in response to an input, not just the delay until it's begun processing. Third, we're digging deep into scrolling and animations. The best sites on the web have very smooth scrolling and animations, but a lot of other pages feel janky. Here's an example with the same text scrolling smoothly on the left and on the right with some hiccups. When you're trying to scroll through an article to read or keep your place on a product page, these janks are really frustrating and they can add up to a poor user experience. We'd like the Core Web Vitals to reflect this. Finally, we'd love to expand the Core Web Vitals more into other areas of user experience, like security and privacy or accessibility. We would love to hear your thoughts on the most important aspects of user experience that you'd like to see covered. We're giving this talk early, long before the first annual update, let alone the next, because we really want to hear your feedback. So please let us know what you think. Please email web-vitals-feedback@googlegroups.com to let us know your thoughts. Thanks so much for listening. I'm looking forward to hearing from you. So the next section is charades: Surma is going to be describing a word to Jake. As always, he's not allowed to make any kind of noise, draw words or letters in the air, or mouth words or letters, or any of that kind of stuff. He has to stay completely silent and describe the word of my choosing to Jake. I've sent the word across to Surma, but he hasn't seen it yet. So he's about to see it for the first time. Try and describe it, for want of a better term, to Jake. Jake's going to try and guess it. Let's get ready. Let's get ready for the contest. On your marks, get set. Let's go. You're welcome. Disappointed, Surma. Two words. First word. String. Yo-yo. Spiral. Circle. Web. Web. Second word. Web bundle. Web package. Oh, Web. So it's like build? Web builder? It's not a bundle at Web. Web tools. Web Web builder. Web construction. Web constructs. Web. Keep going. Web construction. Web constructor. Web builder. Come on, give me another clue. It's your fault, Surma, not mine. It's great. I can insult him. He can't even talk back. Surma, you're really bad at this. You're dreadful. Are we doing back or but? But. Second word. Web is... More versions of the word but, I think he's going for it. Web bottom. I mean, I only know the rude words. Apart from... Oh, yeah, you want a rude word. Web arse. Really not getting this. So we've got... We've got a big old Web arse. A Web arsenal. A Web... Web... I'm not enjoying myself right now. What being... An American way of saying arse. Web arse. Right. What's he doing? Web... Web as... Web as... Simulate... Three syllables. Web architecture. Web... Ar... Web as... Web as... Web as... I don't know. I hate this. I'm not having fun anymore. This was fun before. I'm not enjoying this. Okay. Web as... No, I don't. I'm kind of like repeating what you're saying. Web as... Yes! No! That was... I hate everything because that's so obvious now. Hi everyone and welcome to our session on Core Web Vitals and Search Engine Optimization, SEO. Today, we'll be talking about what SEO is, how the Core Web Vitals play a role there, and how you can use Search Console to help track and improve your site's metrics. For many, SEO is a weird collection of black magic spells. But once you dig in a bit more, it's not at all bad.
We've heard that Google is pretty good at figuring websites out and it's tempting to assume that having a clean website means you don't need to think about SEO. But it's not that simple. SEO is all about improving the quality and quantity of the traffic a website gets from normal search results, so excluding things like ads. There are multiple things a website can do to achieve that. It differs a bit by search engine and I can only speak for Google. In general, the main aspects are the same everywhere though. Make relevant content. Users go to search engines to find out more and your pages might be the resource that they're looking for. This is more than just writing well. It's also about picking topics and phrasing that's used by your audience. Make the content accessible to search engines. If search engines can't understand your content, it'll be hard for them to recommend it. Show why your site is awesome using the various signals that flow into SEO. There's a lot that plays in here. A good way to think about this is: given there are multiple good and relevant results, how can you show search engines that yours is a particularly good and useful answer for users right now? The core web vitals flow into that last group. For the other aspects of SEO, I strongly recommend checking out some of the SEO starter guides and getting help from experienced SEOs. It's useful to think of SEO a bit like usability. A little bit of information is helpful, but when it gets serious, you want to get help from experts. Over to the core web vitals. You've heard a lot about the core web vitals today already and this is really just a super short overview to help get you started. The core web vitals focus on three aspects. First, how quickly the page loads. Second, how soon you can interact with the page. And third, how stable the page is as it's loading and as the user is interacting with it. If you're curious about more details there, be sure to check out the other sessions and our documentation. When it comes to data sources we differentiate between field data and lab test data. Field data, also called real user metrics or RUM data, is collected from users over the course of about a month and is based on what they experience when viewing your site. This is a part of the Chrome User Experience Report, in short CrUX. Lab tests, on the other hand, are generated on demand with testing tools in your browser or on a server, using settings that try to approximate what users would see. For search rankings, we use field data, as this is what your site's users have experienced over time. This makes the data more representative for your site, taking into account where your users are located and how they access your website. The core web vitals metrics are then combined with other signals for search. We call this combination the page experience ranking factor. The additional signals are mobile friendliness, safe browsing, HTTPS security and compliance with our intrusive interstitial guidelines. These have all been around for a while and there's much written up about them so I won't go into much detail here. We plan to adjust these signals over time to best reflect a good user experience for users. We'll give a six-month heads-up before any change. The data is split by mobile and desktop and applied appropriately for search ranking. When a mobile page is shown as a separate AMP URL to users, that's what will be used then.
In other words, if you're on a mobile page, the ranking is only affected by mobile data, and the same for desktop. So in short, for search rankings we use the page experience set of metrics. These include a few existing signals as well as the core web vitals. We track these based on what users would see, separating mobile and desktop experiences. Cool, we looked at page experience, the core web vitals and how they're used in search. Let's take a look at Search Console now. Search Console is a free tool for site owners that gives insights into Google search for your website. Once you've verified ownership and given the tool a bit of time to collect all the metrics, it's time to head over to the core web vitals report. In this report you'll see graphs for mobile and desktop showing how a relevant sample of your site's pages score. The sample and the scoring is based on the Chrome User Experience Report data. That's the field data collected over time. Because of that, any changes that you make on your website will take about a month to be reflected here. Clicking through to one of these reports you see a graph of the total number of URLs tracked and can see the individual issues flagged below. Going to one of these issues you'll see a similar graph on top together with a list of sample URLs for that issue type below. Keep in mind that these reports are based on the field data, so not all of your site's URLs will be available here. It's often useful to focus on the bigger buckets of issues across both the poor and needs improvement categories. We try to recognize patterns such as shared templates and group those URLs together, as fixing the issue once can improve large parts of your website, which is pretty awesome. A good approach to improving these issues is to take a bigger issue type and to work to resolve it. After recognizing the issue, the first step is usually trying to reproduce the issue locally or within a testing tool. If that's not possible, for bigger issues it's worth narrowing things down with the various tools and scripts available elsewhere. Once you've reproduced the issue you can work to improve the vitals. I won't go into details here, but be sure to check out the other sessions for more information. As an aside, sometimes you may notice that other Google products or services are slowing down your pages. Google search doesn't give any special treatment to these, just like users generally won't care why your pages offer a bad user experience. Treat embeds from Google just like you would treat any other embedded resources. Once your live website is updated you can tell Search Console that the issue is fixed. This is done in the appropriate drill-down report that we saw before by clicking the Validate Fix button on top. Search Console will then start a review of the URLs flagged and let you know how your improvements turned out over time. Let's get back to SEO. As mentioned in the beginning there's more to SEO than just page experience. Some of the factors are listed here. When determining the rankings we have to weigh these factors appropriately. In general we prioritize pages with the best information overall, even if some aspects of page experience are subpar. A good page experience doesn't override having great relevant content. For example, if someone is searching for your company's name it would be expected to show your company's website even if it's slow or otherwise provides a subpar page experience.
In cases where there are multiple pages that have similar content, page experience becomes much more important for visibility in Search. It's not the only factor and of course there's much more to a website than just Search. We found that when a site provides a good page experience it generally performs well with users too. For example, we found users are 24% less likely to abandon page loads overall. In particular we saw 22% less abandonment for news sites and 24% less abandonment for shopping sites. There are few changes that can show this level of improvement for online businesses, and results like these are part of the reason we prioritize the web vitals metrics. The search ranking change is currently planned for the first half of 2021. With this change we're also making all pages eligible to be featured in the top stories carousel of the search results, using page experience as a guide. If you're curious about the details be sure to check out our blog post. Well, that was it. Our short excursion into the world of core web vitals and SEO. In short, while there's more to page experience than just SEO and there's more to SEO than just page experience, SEO is also definitely a good reason to work on improving your site's page experience. I hope you found this session useful. Don't forget to check out the other sessions here to find out more ways to improve your website for users. Bye. What do these do? Well, when you load a page, the browser tries to be smart about how much it paints. For example, it won't put effort into drawing things that are way outside the viewport until you scroll towards them. However, the browser has to do a load of layout work to figure out what's inside the viewport and what isn't. This is because an element may be at the very end of the document but positioned at the top. Or some deeply nested element could be positioned outside all of its parents. If you want to know the size and position of one element, you generally need to know the layout of its siblings, its parents, their siblings, their children. Basically, you need to know everything. But now there's an easy way to avoid that. Let's start with these content areas. We're going to give them a content-visibility of auto. This means the browser can skip the layout of the children while the container is outside the viewport. But skipping that layout means the container loses its height. Now, let's fix that with contain-intrinsic-size on the element. This lays the containers out as if they had a single child that's 0 pixels wide and 500 pixels tall. Now, 0 pixels, that might sound weird, but because the container is block level, it stays full width. While we're at it, we can do the same with our smaller heading elements, this time giving them a fallback height of 60 pixels. And now, when the page loads, it doesn't have to do a full layout for everything in the document, making things much faster. Then, if the user scrolls down, the other areas of the page will be laid out just in time. All the layout shifting you see here is happening outside the viewport. The only thing the user might notice is an update to the scroll bar. And of course, a much faster load time. But how much faster are we talking here? Well, I tested this out on the HTML spec, and it took the layout time down from 50 seconds to 400 milliseconds. And that's an amazing saving. But remember, the HTML spec is a 12 megabyte document, so this is quite an extreme case. But even on normal sites, you can make hundreds of milliseconds of savings with very little change.
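As a rough sketch of the styles just described (the class names are made up; use whatever your content sections actually have):

```css
/* Skip layout and rendering for sections that are off screen; the fallback
   size stands in for the skipped children: 0px wide (block-level elements
   stay full width anyway) and 500px tall. */
.content-section {
  content-visibility: auto;
  contain-intrinsic-size: 0 500px;
}

/* Same idea for the smaller headings, with a 60px fallback height. */
.section-heading {
  content-visibility: auto;
  contain-intrinsic-size: 0 60px;
}
```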
Measure it with your own content and see the difference. That's the high level. For more details, see the web.dev article that is linked in the description. But the good news is this shipped in Chrome 85. And of course, that includes the other browsers that use the Chromium engine. Other browser engines are thinking about it too. In particular, there are positive signals from Firefox. But using it today won't break other browsers, so your users can benefit from it right now. Okay, next up is the font metrics override descriptors. Now that might sound like something Captain Picard would activate to stop the Enterprise exploding, but this CSS feature tackles a longstanding frustration when it comes to text rendering. Have you ever gotten your layout looking exactly as you want it in one browser, only to try it in another, and find the text is ever so slightly misaligned? Well, this is because font layout information can come from a variety of different places within the font itself. Some fonts include all of them, but with different numbers. And sometimes browsers and operating systems disagree on which numbers to use. Thankfully, a new CSS feature tells the browser to ignore all of that in favor of the font metric override. Actually, can we just call them fmods? Like, this is supposed to be a short video. Okay. Here are the fmods. So now, for text at a given font size, I can say 73% is the ascenders, and 25%... actually, this is a bad example because it doesn't have any descenders. Let's go for 'texts'. Cool. So 25% is the descenders, and finally, 2% is shared between the gaps at the top and the bottom. And now this will be fully consistent in all browsers that support fmods, which is currently Chrome 87. But wait, what does this have to do with web vitals? Well, the quickest way to get text on the screen is to display it using a fallback font straight away, and then swap to the web font once it's downloaded. But different metrics between the fonts can cause a massive layout jump when the swap happens. We can fix this with fmods. By adjusting the metrics of the fallback font, we can reduce the change in layout caused by the font swap. Like I said, this is in Chrome 87. See the description for more information and links to demos. We also want to find ways to handle other differences between fallback fonts and web fonts, like different letter widths. But we'll talk more about that when things are more concrete. OK. Next feature. Ever been browsing around the web, reading a really interesting article, seen a link, clicked it and thought, yeah, OK, but I want to go back to the article. Well, meet the back button. OK, we've had that from the start. But it used to involve reloading the previous page from scratch. OK, that was still pretty fast, and that's actually because I optimised my site really well, and it's a really simple site. But you can still see the problem here. This bit of the page is enhanced with JavaScript, and when we go back, you can see it having to reload and re-execute. If the page involves a lot more JavaScript, it'll take longer. The layout may even shift around as elements of the previous page load back in. But let's try that again in the latest Chrome. As before, I followed the link, but this time, Chrome automatically keeps the previous page in memory. But it's frozen to stop it processing and monitoring in the background. This means when we navigate back, it's ready instantly. And that's in Chrome 86 on Android for cross-origin navigations and Chrome 87 for same-origin navigations.
This is also a gradual rollout, so not all users will get the feature at once. Also, it's worth mentioning that this feature has been in other browsers, such as Firefox, Safari, and even Internet Explorer for years now. It's been difficult for us to integrate it into Chrome's strict multi-process architecture, and let's just say in this instance, we've been fashionably late. There are some gotchas too. Using particular web features prevents this optimization, and this list differs between browsers and browser versions. In many cases, you can work around it by deactivating the feature just before the user navigates away using the pagehide event and reactivating it when they navigate back using the pageshow event. Check the link in the description for all of those terms and conditions, I guess. All right, next up it's portals. Portals are a new HTML element that lets you load a page and render it inside the current page. Here's one. I've scaled it down a bit and given it a border. Okay, I know you're thinking this is iframes, but it's not the same. For instance, portals aren't interactive. If a user clicks this, they're clicking the portal element, not the page inside the portal. Okay, I know what you're thinking: this is iframes, but worse. However, portals come with their own advantages. For one, they can be activated, either by clicking them or calling this method. When a portal is activated, it becomes the top-level page. That, as usual, changes the URL bar. It's similar to a regular navigation, except we had the page pre-rendered ready to go, so it was really, really fast. But let's try that again. And rather than have the portal sitting clumsily in the middle of the page, we'll move it out of view and wait for the user to click that article link. When they do that, we'll animate the portal, then activate it. With a little clipping and coordination, we can use this to create rather pleasant navigation transitions. But that's not all. Check this out. This is the same page as before, but I've added a button to share the article with an external service. The user clicks it and confirms they want to share it, and that's it. But what actually happened there? Let's run through that again, but with code. First up, the user taps the share button. The article site creates a portal to the sharing service, then adds it to the page, and activates it. Now example.com is in charge. The URL has changed. But the user doesn't see this blank page because example.com uses the portalactivate event to get a portal to the previous page and add it to the document. To make it clear it isn't interactive, they blur it and add their own UI over the top. Although it looks like a modal on top of the article, the sharing service is in full control here. They can trust interactions with this dialogue, and the user, they can confirm that from the URL bar. They know who they're dealing with. Now, the user clicks yes, and the sharing service captures that event, uses animations to bring the article back into focus, and then activates the portal to the article, and that gives it control of the tab once again. You can see the URL has changed. Not only can portals create navigation transitions, they can create flowing user interactions between sites. And that's in Chrome behind a flag. Which flag is it behind? Well, we might change that between me recording this and it going on YouTube, so see the description for details and links to more information. We're really interested in feedback on this feature.
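A rough sketch of the flow just described, using the experimental Portals API as documented at the time (the URL is just a placeholder):

```js
// Pre-render the next page in a portal.
const portal = document.createElement('portal');
portal.src = 'https://example.com/next-article.html'; // placeholder URL
document.body.append(portal);

// When the user clicks the preview, promote the portal to the top-level page.
portal.addEventListener('click', async () => {
  // Animate or clip the portal into place first if you like, then:
  await portal.activate();
});

// Inside the page that was loaded in the portal, detect activation and keep
// a portal back to the previous page, as in the sharing example.
window.addEventListener('portalactivate', (event) => {
  const predecessor = event.adoptPredecessor(); // an HTMLPortalElement
  document.body.append(predecessor);
});
```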
It's fair to say we've been talking about portals as a new thing for a couple of years now, but we really want to get this right, especially in terms of privacy. For instance, when one site contains an iframe to another site, browsers are moving to a model where the embedded site won't have access to a standard set of cookies and other storage. This prevents the two sites exchanging information about you, so it's a privacy win. Of course, Chrome will provide the same protection when it comes to portals, but there's an extra challenge here. If that portal activates, it's now the top-level page, so it no longer needs those storage restrictions. We need to figure out the best way to handle this so the site can react to the change in storage and update its content and how it has access to things like login state and user preferences. Solving this problem will also let us do more with page pre-fetching and pre-rendering across sites in a privacy-preserving way. Check the description for more details on that. In fact, instead of thinking of portals as a different kind of iframe, you could say they're more like pre-render tags that you can display. Okay, that's the future. But what about the present? Often the quickest and easiest way to improve the performance of your page is to ensure important content loads early. The preload and prefetch tags do this, and they're well supported enhancements. We also have the quicklink library, which automates prefetching content that's likely to be needed in the next navigation. Newegg used this, and they saw a 50% increase in conversions and page navigations that were four times faster. I mean, that's huge. See the description for links to the library and more information on prefetch and preload. And that's everything I wanted to show you today. Like I said, bit of a whirlwind tour. I don't know if I've mentioned the links in the description yet, but there are links in the description to further information about these topics. So if you're interested in further information about these topics, and you like clicking links in descriptions, be sure to click the links found in the description to further information about these topics. Where are these links I hear you ask? They're in the description. What kind of information is in these links? Further information. What should you do with these links? Well, they're not going to click themselves are they? So go on, give them a click. Okay, so it's time for a Wikipedia race. The idea is fairly straightforward. I start Surma and Jake off on one particular page of Wikipedia and I give them another page that they have to get to by clicking on the links around Wikipedia. So the starting point that they have here is the Chrome disambiguation page on Wikipedia. So this is the Chrome page. So if you just type Chrome into Wikipedia, it'll ask you which Chrome you're talking about, and that's where they're starting. They don't know where they're headed yet. I haven't told them, but you'll be able to see on screen both of their browsers, and the idea is the first one to get to the target page wins. Does that all make sense? Are you happy? Are you clear? I'm not happy but it is clear. I was going to say are you clear? Not necessarily happy. Alright, the page that you are going to go to, and I'll be obviously watching along, and all the best to you both. The page that you need to get to is Empire State Building. Oh, thanks mate. Okay, right. I've got a plan. Surma's straight into the browser. Yes. Okay.
And you're allowed to search, apparently. Yes, control F is allowed. Yes. Empire State Building, right? Yes, Empire State Building. So if I remember correctly, the Empire State Building is in New York. It is. United States. Surma's got a great plan here. Yeah, well I'm trying to do the same thing. Surma has arrived. Oh, mate. That was good work. Okay, from Margaret Hamill. Your target page is Arctic Monkeys. The bat. Oh my word. Interesting first choice there by Paul. Come on. Okay. I see. Why do some of these not have links? This is really annoying. I'm not even sure what it would be classed as. This is so upsetting. I haven't even left them. I'm thinking. Okay, Paul has made it at least across the ocean. Yeah, I'm stuck on. I have also Okay. The Beatles? That's close but not quite it. I know. I'm thinking I can find Where are they from, Arctic Monkeys? Are they Sheffield? Yes, they are. That's some very generous information sharing from Jake's side there. It was another that helped me because I'm stuck. I'm stuck, I'm stuck, I'm stuck. We have arrived at Sting, also in terms of music. I know and I'm still desperately trying to think so I have some famous Jake has arrived in Sheffield but it doesn't seem to be very helpful. Come on. Spice Girls, also another good shot. I'm just hoping that somebody somewhere is going to reference the Arctic Monkeys from one of these famous musicians. The Spice Girls are infamously inspired. Why does the Sheffield You got it. I was like, why does the Sheffield page not list the bands from Sheffield? There's loads of good bands from Sheffield and clearly not on quite the right Sheffield page. For what it's worth, mate? You can't get to Sheffield from the Arctic Monkeys, mate. You just click it. There you go. See and now I'm on the proper Sheffield page. I bet it's like, I bet the Arctic Monkeys. There they are. I hate this game. I hate this game so much. I'm not having fun. My name is Camille and I'm a software engineer on the Chrome Open Web Platform security team. And today I'm going to walk you through enabling cross-origin isolation on your website. By enabling cross-origin isolation on your website you will gain access to powerful web APIs like SharedArrayBuffer on Android or performance.memory. You will also protect your application against cross-origin attacks. So cross-origin isolation is a new security feature that provides increased isolation from other origins, and that is stricter than same-site isolation or the same-origin policy. Let's get started with what cross-origin isolation is and how you enable it on a website. So cross-origin isolation is a result of sending two HTTP headers on your top-level document. These headers are the Cross-Origin-Opener-Policy, COOP, and the Cross-Origin-Embedder-Policy, COEP. To enable cross-origin isolation you need to send a Cross-Origin-Opener-Policy header with a value of same-origin with your top-level document. You also need to send a Cross-Origin-Embedder-Policy header with a value of require-corp with each of the frames in your page. So what do those headers do? COOP isolates your page from other cross-origin pages. For example, any cross-origin pop-up you open will not be able to directly interact with your document or send it messages. You will see the pop-up window as closed and similarly the pop-up will see its opener as closed. This protects against data leakage attacks like Spectre, because the browser can put your page in a secure environment with only pages that share the same top-level origin.
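As a small sketch of the two headers just described, assuming an Express-style Node server (the middleware is made up; set the same headers however your own server does it):

```js
// Send the cross-origin isolation headers described above on every response.
function crossOriginIsolation(req, res, next) {
  res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
  res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
  next();
}

// app.use(crossOriginIsolation); // wire it up in your app
```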
COEP ensures that every subresource you load on your page is same-origin with you, or it agrees to be loaded cross-origin. Subresources agree to be loaded cross-origin by either having a CORS header, or having a CORP (Cross-Origin-Resource-Policy) header with the value cross-origin. Without this, the subresource will be blocked. So when you set COOP to same-origin and COEP to require-corp, your page becomes cross-origin isolated. You can confirm this by checking the result of self.crossOriginIsolated. Starting in Chrome 88, this will allow you to use SharedArrayBuffer on Android, as shown in this example. So this is a brief overview of what COOP, COEP and cross-origin isolation do, and if you want to know more you should follow the links on the site.

As you may have noticed, COOP and COEP impact how a web page works. So if you just set the headers on the page, it's likely that your page will not work as it used to. To help you debug the issues coming from deploying COOP and COEP, we have new DevTools functionality coming in Chrome 88. So let's have a look at that. First, you may want to check the cross-origin isolation status of your page. In the Application panel of DevTools you can check the security and isolation status of your top-level frame, and there you can see that the page is cross-origin isolated. You can also check the COOP and COEP status of the page. Here both are enabled, so my page is cross-origin isolated. Note that because I have both, my COOP status is not same-origin but same-origin-plus-coep. Let's look at COOP support in more detail. First, let me open a cross-origin pop-up. I can see the pop-up I have created in DevTools, and if I click on it I see that it doesn't have access to its opener. This is because the opener is our main page with COOP, and it is cross-origin with the pop-up, so due to COOP the main page and the pop-up it opened don't have access to each other. Okay, so let's try opening a same-origin pop-up then, and DevTools is telling me that I still don't have access to it. This is because I haven't set the COOP and COEP headers on the pop-up. So let me do that right now and open a new pop-up with the right headers, and as you can see this pop-up has access to its opener, and its icon is also different from the other pop-ups that could not access the COOP page. Now let's look at support for COEP. If I look at the Issues tab, I can see that four subresource loads were blocked. As explained in the Issues tab, those resources do not have a CORP header, so they won't be loaded by a cross-origin COEP page, and I can click on the specific subresources to get more details about the network load. So beyond support in DevTools, we have also been working on reporting APIs for COOP and COEP. With reporting you can get production reports on what needs to be changed to support COOP and COEP. If you're familiar with CSP reporting, this should feel fairly similar, as we are using the same underlying Reporting API. To enable reporting, just provide an endpoint in your Cross-Origin-Opener-Policy and Cross-Origin-Embedder-Policy headers. This is where the reports will be sent. Note that this reporting is currently in an origin trial, so you will need to either sign up for the origin trial or enable the Reporting API through Chrome flags. We also provide a report-only mode for COOP and COEP. When you enable report-only, the browser won't enforce the policies. Instead it will send you reports when it detects that something would break if it had enforced the policies you specified.
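Before we get to the report-only headers, here's a quick recap, in code, of the feature check mentioned above. This is my own sketch, not from the demo:

```ts
// Sketch: only use APIs gated behind cross-origin isolation when the page
// really is cross-origin isolated (i.e. COOP + COEP were set correctly).
if (self.crossOriginIsolated) {
  // SharedArrayBuffer is usable here (Chrome 88+ on Android, per the talk).
  const sharedBuffer = new SharedArrayBuffer(1024);
  console.log('Cross-origin isolated; shared buffer of', sharedBuffer.byteLength, 'bytes');
} else {
  console.log('Not cross-origin isolated; fall back to a non-shared code path.');
}
```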
To enable report-only mode for either COOP or COEP, you need to use different headers on your documents. These are the Cross-Origin-Opener-Policy-Report-Only header and the Cross-Origin-Embedder-Policy-Report-Only header. In those headers you also specify the value of the policy you want reports for, and an endpoint to send those reports to. In report-only mode you also get ReportingObserver notifications. Okay, so let's have a look at the additional support in DevTools for report-only mode. Here I have a page with report-only COOP and COEP. It's the same page as in the previous demo; I have just changed the headers it sends. In the Application tab I can see report-only values for COOP and COEP, and I can also see the associated reporting endpoints. And as you can see, we don't have anything in the Issues tab, since we're in report-only mode. This means that COEP is not enforced and we don't block the subresource loads. Okay, now let me open the cross-origin pop-up. It can still access the page, because COOP is not enforced. So overall, what I have shown you is the state of support for COOP and COEP in DevTools in Chrome 88. We do have more support planned to help you debug COOP and COEP more efficiently, and it's going to show up in later releases.

Okay, so let's summarize what you need to do to enable cross-origin isolation on your site. With cross-origin isolation you will be able to use powerful APIs like SharedArrayBuffer on Android starting in Chrome 88, and your site will also be more secure against attacks like Spectre that try to leak your users' data. To make your website cross-origin isolated, you need to enable COOP and COEP. To help you make the transition, enable COOP and COEP in report-only mode first. This will give you reports from production that will help you pin down the changes you need to make before deployment. Proceed with local debugging using the DevTools Application and Issues panels to ensure that your web page will support COOP and COEP. In terms of actually enabling COOP and COEP, this is all about setting the right headers. First, ensure your cross-origin subresources can be loaded with CORS or have a CORP cross-origin header. This includes your cross-origin iframes as well. Then, ensure all the documents in your app set the COEP require-corp header. That means top-level documents and child documents, including cross-origin iframes. And finally, you need a COOP same-origin header on your top-level document. And with this, your page should be cross-origin isolated and have access to powerful APIs, as well as extra isolation that protects it against cross-origin data-leak attacks. So thank you all for watching, and see you next time.

Hello and welcome to the session. My name is Maud, and together we'll take a look at the Privacy Budget. First, take a moment to think about what you see when you browse the web. You see each tab as its own isolated world. But as a developer you know that things are much more complicated than that. With an open environment like the web come risks, and browsers do a lot to create, or help you create, security boundaries. For example, with the cross-origin isolation features that Camille Lamy demoed in her talk. You can check the video link. And this is a win for your users' security. Now, creating boundaries on the web also protects users' privacy, because one problem today is that people's browsing activity can be tracked and linked across the web, sometimes in ways that users can't easily see or control. In other words, covertly.
Users typically don't know about covert tracking because it's hard to see it happening. And even if they did, there would be no way to stop it, unlike third-party cookies, which you can see and block. So things need to change. How? Well, to perform web-wide covert tracking, there are a few mechanisms that can be used, or rather abused. One of them is IP addresses. There's a proposal to mitigate this problem called Willful IP Blindness. But it's not enough, because another mechanism that can be used is browser fingerprinting. We'll look at how it works in a bit, but first let me tell you about one proposal to prevent this: the Privacy Budget. Both IP blindness and the Privacy Budget are part of the Privacy Sandbox, a set of proposals to move towards a web that's private by default. You can check the list on Chromium.org, and all of the Privacy Sandbox proposals are discussed in the open on GitHub.

Now, we believe that the Privacy Budget is how we can prevent browser fingerprinting while keeping the web powerful. But we're early. We're in a research phase, and in this talk I'll be sharing with you how we're trying to answer some hard questions about how the Privacy Budget could work. By the time you watch this talk, we won't yet have precise results or even first insights to share. So it's too early for you to take specific actions on your site to prepare for the Privacy Budget, because we don't know yet how it will work exactly. All of this will come later, and gradually. But it's not too early to share your thoughts with us if you'd like to. We want this to be a conversation, and we are open to your feedback. I'll tell you how later in this talk.

Now, let's move on and take a look at how fingerprinting works. Imagine you're trying to find a friend's friend you've never met before. You're told they're wearing a red T-shirt, but maybe 10 people in that crowd are wearing a red T-shirt. But if you also know that your friend's friend is wearing sunglasses and maybe a blue cap, then you can identify them. Now, imagine you want to recognize someone anytime, anywhere. The description of their clothes isn't helping anymore. Instead, you could use characteristics that stay the same for a while, like their main languages, and that should be unique enough when combined. Browser fingerprinting works in the same way. The fonts you've installed locally, the way your browser renders canvas elements, your browser's user-agent string, and more, are bits of information that remain somewhat stable over time for one user but vary a lot across different users. And they're easy for sites to access. You can actually quantify how much identity a piece of information exposes, in bits, with a measure called entropy. If an API is high-entropy, so highly identifying, it can be used for browser fingerprinting; it's then called a fingerprinting surface. When you combine several high-entropy surfaces, they may uniquely identify you. A few interesting facts about entropy. You can calculate it with a formula that's based on probabilities. For example, about 32 bits of entropy are needed to uniquely identify a single web user. But, and this is the tricky part, you mostly can't just sum the entropy of different pieces of information from APIs to understand whether a set of APIs would be identifying, for example whether it would expose over 32 bits of entropy. Because entropy is about probability, APIs can correlate. For example, if a user speaks Greek, the probability that they have a specific font installed is much higher.
So if you already know that they speak Greek, that font in the local font list doesn't give you that much more information, or entropy. Browser fingerprinting isn't new. There are even libraries out there, which you can check out with this demo. And this can be used for legitimate purposes like fraud detection, but also for user tracking. And not only is fingerprinting-based tracking covert and easy, but its usage may increase, because it's an alternative to the third-party cookies that are being restricted in Chrome and other browsers. So what do we do about this? Well, web APIs like canvas, local fonts and others unlock great capabilities but can hurt user privacy. So keeping the web as it is, is not an option. Now, we could remove support for highly identifying APIs, or not implement support for new APIs. Or we could add noise to all API outputs. But this risks hurting the ability to build amazing web experiences, including for sites that have no intent of identifying users, or sites that are only using one or two APIs. What if there was a middle-ground way to get both capabilities and privacy? What if sites could continue using powerful APIs normally, but if a site uses too many highly identifying APIs, the browser could impose limitations to prevent the site from moving beyond the red line, namely to prevent the site from entering the red zone where it could uniquely identify users? Well, that's the idea of the Privacy Budget. As a developer you would decide how to spend your site's budget, a bit like performance budgeting in a way, but the browser would define the upper limit and enforce it to protect user privacy.

Now, in parallel to the Privacy Budget, Chrome is working on other measures to help move sites further away from the identifiable line. By reducing entropy where possible, like for some sensor APIs. By refactoring existing APIs to make them more focused, more purpose-built and less identifying, like User-Agent Client Hints instead of the user-agent string. And by transforming passive fingerprinting surfaces (information all sites can access without running any client-side code, like HTTP headers) into active surfaces (information sites can access only by requesting it or running code on the client side, like canvas). This makes it easier for the browser to measure and control the budget.

Back to the Privacy Budget. Where are we and what's the status? Well, before the Privacy Budget can be enforced, some key questions need to be answered. Question one: where is the line? Well, very likely quite high initially, so we can monitor impact and limit breakage, and then it will gradually move down toward 32 bits, the entropy needed to uniquely identify a web user. Question two: which sets of APIs move your site closer to the line, and by how much? And question three: today, how many sites are above or below the line? We are hoping that most sites are already below, so that Privacy Budget enforcement only affects a small number of sites. But to answer these questions, we need data, like a lot of data, from the web. Which is why a large-scale identifiability study is being run by Chrome. What's really exciting is that this study is being run in real-life conditions. Right now there are some great sites that calculate how unique your browser is, but they only compare you to other visitors of that site. The Privacy Budget needs to work for any user and any site, at scale. So the Chrome study is run for real Chrome users visiting all sites, in a privacy-preserving way.
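To put rough numbers on the entropy idea from a moment ago, here's a tiny back-of-the-envelope sketch of my own (not from the study); the population figure is approximate:

```ts
// Roughly how many bits does it take to single out one web user?
// With about 4–5 billion users online, log2 of that population is ~32 bits.
const webUsers = 4_500_000_000;
console.log(Math.log2(webUsers).toFixed(1), 'bits'); // ≈ 32.1

// Entropy of one surface, given the probability of each possible value:
// H = -Σ p_i * log2(p_i). A 50/50 value gives 1 bit; skewed values give less.
const entropy = (probs: number[]) =>
  -probs.reduce((sum, p) => sum + (p > 0 ? p * Math.log2(p) : 0), 0);
console.log(entropy([0.5, 0.5]));   // 1 bit
console.log(entropy([0.99, 0.01])); // ≈ 0.08 bits

// Caveat from the talk: bits from different surfaces only add up if the
// surfaces are independent; correlated surfaces (language ↔ fonts) add less.
```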
We're looking at a small subset of surfaces that are randomly selected. We're excluding highly identifying surfaces, and the data will be deleted after a short period of time. And across these users, the team is looking at how much identity every single site is accessing, for all 300-plus identifying APIs. So, how is the team doing this? Well, first, they measure how much identity each API exposes. For example, they look at how locally installed fonts differ across users. Second, they measure how APIs correlate. Remember how we said entropy couldn't just be summed? Well, this is another special thing this study does: the team is looking at how APIs actually influence each other in practice. From here, they derive how much identity subsets of APIs expose. And in parallel, they're measuring which subsets of APIs sites are using. And finally, they combine these insights to find out how much identity is leaked to each site.

So, what's next, and what does this change for you? Well, nothing yet; we're in the exploration phase. Chrome's goal is to find a path for the web that's private by default, and some sites will need to change. But we know we can't just impose limitations overnight on the APIs you're using, because we also want to keep the web powerful. Which is why the team is running this in-depth, large-scale study: to strike the right balance between usefulness and privacy, and find the line. We're pretty excited about what this will tell us. Privacy Sandbox changes are rolling out gradually, and developer tooling will be made available for the Privacy Budget to help you out. We know you'll have lots of questions about this. So, if you're interested to hear more about the results of the study, stay tuned by subscribing to blink-dev or following ChromiumDev on Twitter. And use these channels to share your feedback. Also, take a look at some of the new, less identifying APIs that are already available, for example User-Agent Client Hints. And by the way, we have other new APIs that support third-party use cases without cross-site tracking, including an API to measure ad conversions. On this, check out Charlie Harrison's video about the Conversion Measurement API. And that's it. Thanks for watching, and I'll see you around.

Hey, everyone. I'm Charlie Harrison, a software engineer working on the Privacy Sandbox. The Privacy Sandbox is a set of proposals to satisfy third-party cookie use cases without third-party cookies or other tracking mechanisms. I'm here to talk about one new API we're developing to enable conversion attribution for ads without the need for third-party cookies. So what is conversion attribution? This is all about measuring which ads lead to things like purchases or other valuable actions on an advertiser's site. It's effectively measuring the efficacy of online advertising. And it can answer questions like: how well is a particular ad campaign performing? Or, is the campaign a good return on investment for advertisers? These are questions advertisers and ad tech companies need to answer. And we know this information is critical for a functioning ads ecosystem that helps fund the open web. Without it, advertisers and publishers are completely in the dark. And it could even lead to perverse incentives, where ads optimize for clicks rather than actually providing value to the people who click on them.
The Privacy Sandbox is a project about making the web more private by default, while still supporting critical use cases like conversion measurement, and we're committed to making this use case possible. However, before diving into the technical details of the API, it's useful to recap how this is done today, using third-party cookies. Conversion attribution at its core involves connecting two events: an ad being served to a person, and a future event when that person later converts, or buys something. With cookies, this is easy. As long as the same party observes the ad being served and the conversion, they can use a cookie with a unique ID to link the two events together on their server. In this example, the ad tech company can use the cookie to see that the very same person that saw the ad on the news site later purchased the shoes on the shoe merchant's site. As Maud discussed in her talk, we're working to improve the status quo here. The information cookies provide is so powerful that it can be used to track a person as they browse across many websites. In the example here, the ad tech platform that uses the cookie learns detailed information about both the ad event and the conversion event, joined together. That's the kind of data that could be used to build a profile based on a person's browsing history. In particular, we risk linking auxiliary data about each of these events. Let's say on news.example I use one email address to log in, and to purchase shoes I use a different email and also share my shipping address. With the power of a cross-site identifier, it is possible to link up all this information to build a profile of me online.

Now, how could we improve things here and allow for this important use case in a more privacy-preserving way? The biggest change we can leverage in a new browser API to preserve privacy is to perform attribution between the ad click and the purchase all on your device, locally within the browser, instead of using a cookie. Because the browser has control over the linked data it will report, it can apply a bunch of techniques to preserve privacy. Firstly, instead of the advertiser learning exactly what conversion happened after an ad click, we can limit the advertiser to learning only a little bit of information about that conversion. Unlike with cookies, where you can learn an ad ID alongside an associated purchase ID, in our API we drastically limit the amount of information you can learn about a purchase: the conversion data is only a small enum that describes it, like what type of conversion it is, or maybe a broad product category. This protects by default your identity on the advertiser's site from being revealed to the context that served you an ad, and avoids the problem cookies have where arbitrary information can be linked. Secondly, the browser can decide to sometimes randomize that little bit of conversion information. The goal here isn't to mislead the advertiser. In fact, we'll tell them exactly how often we plan to randomize. But they won't know exactly which conversion reports are accurate and which had some noise added by a random number generator. This will let advertisers see the big picture without them being sure about any one particular action any person did. We understand analyzing data with this kind of noise in it can be challenging. We've added a script in our repo which illustrates a technique to correct noisy data. Third, we add a delay before sending out information in conversion reports.
This helps us further disassociate information about the ad click from information about the conversion, or even what day a conversion occurred. All of these mitigations help us provide conversion measurement as a capability of the web platform via a new, purpose-built API designed to provide a certain amount of information while ensuring robust privacy guarantees at a technical level. Note that currently the API we've built only supports attributing conversions to clicks, and not yet views, but this is an enhancement we're working on supporting. Here's a diagram showing the overall flow of the API. The API allows the ad on the publisher's site to register its impression with the browser, and conversions to be registered on the advertiser's site. All this information gets stored internally by the browser and sent, linked together, in a report at a later time. Here are some code pointers for how to enable impression and conversion registration. Impression registration involves adding a few new attributes to the anchor element leading to the ad's landing page, and conversions are registered by responding to HTTP requests with a redirect to a special URL the browser recognizes. We'll go over these in more detail in a quick demo.

Remember, the two key pieces needed to use the API are configuring ad creatives and conversions such that the relevant data can be registered and stored by the browser. Let's start on a publisher page which embeds an ad iframe. Within the iframe there's an anchor element that controls navigating to the advertiser's landing page. To configure this ad for measurement, this element needs a few new attributes. The impression data specifies an identifier for this ad that will show up later in reports. The conversion destination specifies the intended destination of the resulting navigation from clicking the link. The reporting origin specifies which origin should receive the report in the event of a future conversion. Once someone clicks on an ad with these attributes specified, event data is stored in the browser and eligible for attribution. For debugging purposes, these ad clicks can be seen in a special internal page built into Chrome at chrome://conversion-internals. Ultimately, at some point in the future there will be dedicated developer tooling, but for now this Chrome internals page gives us what we need. It displays all the information from the API that is stored in the browser.

Now, let's convert on the advertiser's site. We can check out as a guest. Clicking on the buy now button takes us to the order confirmation page. This page includes a hidden single-pixel image from the reporting origin. In other words, once our purchase is complete, a request is made to the reporting origin confirming that a conversion occurred. To signal a conversion, the reporting origin's servers need to respond to this request with an HTTP redirect to a special URL recognized by the browser. We can see this in the Network panel if we look at the request for this pixel. This signals the conversion to the browser internally, and any impressions targeting this advertiser's site are marked as converted, with the conversion data set to two. Just like for the ad click, you can view the conversion in the conversion internals page too. Here you can see all the information that will end up in the report. As you can see, these quick conversions that occur right after the ad click will be delayed for a few days from the click. For debugging purposes, however, waiting this long is pretty inconvenient.
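As a rough sketch of the two registration steps just described: the attribute and URL spellings below are my assumption based on the talk, so check the web.dev article and the GitHub repo for the exact, current syntax.

```ts
// 1) Publisher side: the ad's anchor element registers an impression on click.
//    (Attribute names are assumed from the talk's description.)
const ad = document.querySelector<HTMLAnchorElement>('a#ad');
if (ad) {
  ad.href = 'https://shoes.example/landing';
  ad.setAttribute('impressiondata', '200400600');                    // ID that shows up later in reports
  ad.setAttribute('conversiondestination', 'https://shoes.example'); // where a conversion may happen
  ad.setAttribute('reportingorigin', 'https://adtech.example');      // origin that receives the report
}

// 2) Advertiser side: the confirmation page requests a pixel from the
//    reporting origin, whose server answers with a redirect to the special
//    registration URL the browser recognizes, e.g. (hypothetical):
//    HTTP 302  Location: /.well-known/register-conversion?conversion-data=2
```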
We built a mechanism within the internals page to send reports manually. Let's do that. At this point Chrome has sent reports via an HTTP request to the ad tech's servers. Let's navigate to a page that displays all the received reports. Here we can see all the information embedded in the report: the impression data, the conversion data, and whether the ad was the last one clicked or not. And there you have it. At this point the advertising platform can say that the ad associated with the given click ID led to a purchase. Maybe they'll use this information to show more ads on the publisher's site. If you're interested in trying this API, there are a few things you can do. First, you can test this out locally in Chrome today by setting a few Chrome flags, listed here. The demo shown here should also be publicly available, along with open-source code, on GitHub. If you want to run this in a production experiment, you can register for an origin trial. The conversion measurement trial also supports our new third-party origin trials. Please try out the API and give feedback. We'd love to hear how the API works to enable your measurement needs, using attribution on your device without the need for cookies or other tracking techniques. And we welcome engagement on issues. For more information and documentation, take a look at our web.dev article, and check out our GitHub repo for future enhancements we're working on, like support for view-through conversions and aggregate measurement. Thanks very much.

Have you ever tried to create an account and log into a website and found it surprisingly difficult? In this video you'll learn some simple techniques to make sure your site does a great job of handling account creation and sign-in. But before I start, do you really need your users to create an account? You know, the best login is no login. Don't gate features behind login just because you can. Asking a user to create an account is asking them a favor. And remember that every password and every item of personal data that you store carries with it something I call privacy data debt. If you just need to save information for a user between navigations and browsing sessions, use client-side storage instead of forcing the user to create an account. You can find out more about client-side storage on web.dev. For example, for shopping sites, forcing users to create an account to make a purchase has been cited as one of the major reasons for shopping cart abandonment. So if you do run an online store, make guest checkout the default and offer to save customer details and create an account once a purchase is completed. With all that out of the way, if you must get users to sign in, make it as quick and easy as possible. Firstly, make it really clear where to sign in. One big login or sign-in button is good, not just some obscure icon or vague wording. And once users have signed in, make it really obvious how they can access their account details. In particular, make it simple to change passwords. Now, you may be wondering whether to add buttons or links for account creation as well as sign-in. Well, many sites now simply display a single sign-in button, and when the user clicks on that they also get a link to create an account if necessary. That's a common pattern now, and most of your users will understand it. Make sure to link accounts for users who sign up via an identity provider such as Google or Facebook and who also sign up using email and password.
That's easy to do if you can access the user's email address from the profile data provided by the identity provider and match the two accounts. Now, I keep saying this, but whatever you do, make the most of the powerful cross-platform functionality that's built into form and input elements on all modern browsers. You can find out more from our article and video about sign-in form best practices on web.dev, which also have lots of great tips on how to improve form design, layout and accessibility. You know, in the sign-up flow your job is to minimize complexity, so cut the clutter and keep the user focused. This is not the time for distractions or temptations. Collect additional user data such as name and address only when you need to, and when the user sees a clear benefit from providing that data. Every item of data you communicate and store incurs cost and liability, and you need to remember that. And by the way, don't double up your inputs just to make sure users get their contact details right. With autocomplete, that makes no sense. Instead, send a confirmation code to the user once they've entered their contact details, and then continue with account creation once they respond. That again is a well-known sign-up pattern now, and a good approach. One simple technique you might want to consider is to allow password-free sign-in by sending users a code every time they sign in on a new device or browser. Sites such as Slack and Medium use a version of this. As with federated login, this has the added benefit that you don't need to store passwords. Now, in this case you'll need to make a careful decision about session length: how long the user remains logged in, and what might cause you to log them out. But you'll need to do that whatever approach you take to user identity. There's no one hard and fast rule here, but you need to consider mobile versus desktop, and whether users are sharing devices. You can get around some of the issues that you might face by enforcing re-authentication for sensitive features, for example when a purchase is made or an account is updated.

So here's the elephant in the room: you know what's up, passwords. Passwords are so last century. We need to wean ourselves off passwords, but that's not going to be an easy journey. You should of course offer federated login via identity providers, but the reality is that some users are more comfortable with email and password login. The problem, of course, is that users may give up on your site if they forget their password, especially with a new phone or computer, and on shared devices. So what can you do? Well, you need to help third-party and built-in browser password managers do what they do best: suggest and store passwords, so that users don't need to choose, remember or type passwords themselves. Password managers work really well in modern browsers, syncing accounts across devices, across native and web apps, and onto new devices. And this means the single most important task for you is to make sure you code your forms correctly. If you do one thing after watching this video, please double-check your HTML to make sure you're using the correct autocomplete values in your sign-up form. Sign-up forms should use autocomplete="new-password" for new passwords and autocomplete="email" for email addresses. Coding forms correctly enables browser password managers to understand your code, in order to save and to suggest strong passwords.
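Here's a minimal sketch of what that looks like in practice, built with plain DOM code; this is my own illustrative example, and the field names are placeholders.

```ts
// Sketch: a sign-up form where autocomplete hints let password managers
// autofill a known email address and suggest/save a strong new password.
const form = document.createElement('form');

const email = document.createElement('input');
email.type = 'email';
email.name = 'email';
email.autocomplete = 'email';           // browser can fill an address it already knows
email.required = true;

const password = document.createElement('input');
password.type = 'password';
password.name = 'password';
password.autocomplete = 'new-password'; // password managers offer to generate and save a strong password
password.required = true;

form.append(email, password);
document.body.append(form);
```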
Now, enabling password managers to suggest strong passwords is the best option, but many users want to enter their own passwords, so you need to implement rules for password strength. I won't go into the details here, but the US National Institute of Standards and Technology, also known as NIST, provides full guidance. Now, whatever rules you choose for passwords, you should never allow compromised passwords. Once a user has entered a password, you need to check that it's not a password that's already been compromised. The site Have I Been Pwned provides an API for password checking, or you can run this as a service yourself. Chrome's password manager also allows users to check if any of their existing passwords have been compromised. If you do reject the password that a user proposes, tell them specifically why it was rejected. Show password problems inline as soon as the user has entered a value, not after they attempt to submit the sign-up form. You should, however, allow password pasting. There are a number of sensible use cases for copying a password from one context to another. Now, just a couple of quick things about password policy on the back end, and the single most important rule here is: salt and hash your passwords, please. In other words, do not store or transmit passwords in plain text. And don't try to invent your own hashing algorithm. Also, don't force users to periodically change their password. Research has shown that forcing password updates can be costly for IT departments, of course, and doesn't really have much impact on security. It's also likely to encourage people to use insecure, memorable passwords or to keep a physical record of passwords. Instead, you should monitor account activity and warn users; you should be doing that anyway. If possible, you should also monitor passwords for your existing users, to check for passwords that have become compromised because of data breaches. You can also give your users access to their account login history, showing them where and when a login happened. You should of course make it really simple for users to reset their password if they forget it. OWASP, that's the Open Web Application Security Project, provides detailed guidance for this and lots of other identity use cases. You certainly should not use password hints to verify accounts; these are highly insecure. You should consider supporting multi-factor authentication, especially if your site handles personal or sensitive information. I really recommend taking a look at Eiji Kitamura's work on this, including his new video about SMS OTP best practices.

So, you know, in the real world you'll need to implement password handling. However, you should also enable your users to log in via a third-party identity provider, also known as federated login. Now, this approach has several advantages. For users who create an account using federated login, you don't need to ask for, communicate or store passwords. You may also be able to access additional verified profile information from federated login, such as an email address, which means the user doesn't have to enter that data and you don't need to do the verification yourself. Federated login can also make it much easier for users when they get a new device. And you really need to think about this, to consider first-day experience. Remember that many users now expect to log in from their phone, their laptop, their desktop, tablet, TV, in the car, and on other platforms as well.
And this is a moment where you risk losing users, or at least losing contact with them until they get set up again. You need to make it as easy as possible for users on new devices to get back up and running on your site, so this is another area where federated login can really help. Whether users use federated login or not, you should make account switching simple. Many users share devices, and it really reduces friction to be able to easily swap between accounts. Now, here's a thing that's crucial for keeping your users and your business safe. On many sites it's surprisingly difficult to work out how to change your password. It's especially important to help your users change their password if they discover that it's been compromised. To make this even easier, you should add a well-known change-password URL redirect to your site. This enables password managers to navigate your users directly to your password management page. This feature is now implemented in Safari and Chrome and is coming to other browsers, and you can find out more from our web.dev article. You should also make it simple for users to delete their account, if that's what they want.

So let's talk about names and usernames, and the first rule here is: don't insist on a username unless you need one. If you do need usernames, don't impose unreasonable rules on them, and allow users to update their usernames and other personal information. It sounds obvious, but on the back end you need a unique ID for every user, not an identifier based on user data such as a username. Now, you might want to validate names and usernames on the front end, but you need to be as unrestrictive as possible with characters and alphabets. So here's a top tip for names, addresses and usernames: don't use regular expressions that only match Latin characters. Use Unicode letter matching instead, and your back-end storage should support that securely, as input and as output. Okay, one last thing for sign-up forms: it's crucial to implement analytics and real user measurement. For your sign-up forms you need to monitor page analytics, like page views, bounce rates and exits, and make sure to add interaction analytics, such as goal funnels (where do your users abandon sign-up?) and events (for example, what proportion of users click your forgot-password link?). And lastly, track performance metrics so you can understand the real experience of your users. You can check out Web Vitals on web.dev to see how you can access real user performance data for core metrics. So that covers some of the basics. To find out a whole lot more, take a look at the article and the codelab that go with this video, and thanks so much for watching.

Hi everyone, I'm Eiji. In this session I will tell you how to improve your SMS OTP forms. You will be able to make a huge difference to user experience with just a few small changes. Signing users in is a process to prove that the person trying to sign in is the same person who originally registered on your website, and many websites choose to use passwords to do so. However, passwords are known to be vulnerable. There are many traps waiting for your users and your website, and asking for additional evidence of ownership will help the user to prove their identity.
One of the most popular ways to do that is to use an SMS OTP (one-time password) as a second step for authentication. The user can prove their ownership of a phone number by entering an OTP delivered via SMS. Because phone numbers are universally unique, an SMS message can be used to prove a user is who they say they are. Let me show you how it works, taking a typical two-step verification as an example. A user enters a username and a password, and then the website asks for an OTP. Because the user has already registered their phone number on the website, the service can send an SMS message to that number with an OTP. The user then opens the SMS app, copies or remembers the OTP, and enters it in the form. The website examines the submitted OTP and verifies the user. This is how SMS OTP is used in a typical two-step verification. There are a few more use cases for SMS OTP. Phone number verification: some services use a phone number as a user's primary identifier. In that case, users enter their phone number when signing in and enter an OTP received via SMS to prove their identity; sometimes it's combined with a PIN to constitute a two-step verification. Account recovery: when a user loses access to their account, there needs to be a way to recover it, and sending an email to their registered email address or an SMS OTP is a common account recovery method. Payment confirmation: in payment systems, some banks or credit card issuers request additional authentication from the payer for better security, and some of them choose SMS OTP as a way to achieve that.

Before diving into the best practices, there is one caveat. Even though I'm speaking about best practices for SMS OTP forms, you should be aware that SMS OTP is not the most secure way of authentication by itself. This is because phone numbers are known to be recycled and sometimes hijacked, and the concept of an OTP itself is not phishing resistant. If you are looking for better security, I would recommend using Web Authentication. You can learn more about Web Authentication from the talk I gave at the Chrome Dev Summit last year. With that in mind, let's dive into SMS OTP form best practices. Sam has already done an amazing job covering general best practices for building a sign-in form; you can learn more from his article and video. But I will give you three specific tips on constructing an input field for an OTP. Don't use type="number" for the OTP input, since that will add up and down increment arrows to the input on desktop, which doesn't make sense for OTPs. I recommend simply using text, but it doesn't matter much if you follow the next tip: use inputmode="numeric". Since typical OTPs are digits, using inputmode="numeric" helps mobile browser users enter an OTP by showing the keyboard optimized for entering numbers. Some good news is that Firefox recently started supporting inputmode, which makes it available on most mobile browsers. Another important attribute is autocomplete="one-time-code". This is only effective in Safari, but as it helps the browser autofill the code, I strongly recommend using it. iOS 12 and later heuristically detects an OTP from the SMS message and shows a suggestion on its keyboard for the user to enter the OTP into the form. Now let me talk a bit about the SMS text message. There's a format you can align with to get the best out of OTP messages, and the good news is that the same format works across Safari and Chrome. Just append the following information on the last line of the message: the domain you are aiming to bind the OTP to, preceded by an @, and the actual OTP, preceded by a #.
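Here's a rough illustration of those three input tips plus the message format; the domain, code and wording are placeholders of my own, not from the talk.

```ts
// Sketch: an OTP input following the tips above.
const otp = document.createElement('input');
otp.type = 'text';                  // not type="number" (avoids spinner arrows)
otp.inputMode = 'numeric';          // numeric keyboard on mobile
otp.autocomplete = 'one-time-code'; // lets Safari suggest the code it spotted in the SMS
otp.required = true;
document.querySelector('form')?.append(otp);

// Example SMS, with the bound domain prefixed by @ and the OTP prefixed by #
// on the last line:
const smsText = `Your verification code is 123456.

@www.example.com #123456`;
```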
By specifying a bound domain, the browser will assist the user in entering an OTP only when the domain matches. By specifying the OTP in this format, the browser can retrieve the exact OTP you intend to deliver, and avoid picking the wrong numbers using heuristics. Starting with iOS 14 and macOS Big Sur, Safari will respect this format: with a bound domain in the message, it will assist the user in entering the OTP. Chrome also uses the same domain-bound text format, but with a different approach, which is called the WebOTP API. Using the Credential Management API, the website can obtain an OTP delivered via SMS as an OTP credential, upon user consent, and handle it imperatively in JavaScript. You can check if the feature is supported by examining whether OTPCredential is available. Simply calling navigator.credentials.get with a type of otp and a transport of sms instructs the browser to start waiting to receive an SMS message. As soon as the user receives an SMS with the domain-bound message format, a dialog is displayed, and after the user presses the allow button, the API resolves with the OTP, which you can set as the value of the input field; or, unlike autocomplete, you can pass the OTP directly to the server. This is available in Chrome, Opera and Vivaldi for now, but we are hoping this feature will be available in more browsers in the future as well. There are a few partners seeing amazing results by adopting the WebOTP API. Tinder, a matching app, improved OTP completion rate by 2.5%. OYO, a hotel booking service, reduced time to login by 37%. Goibibo, a travel booking service, reduced sign-up retries by 25%. Other players like Shop Pay from Shopify, Twitter and Facebook are preparing to bring in this feature as well. I can't wait to see the WebOTP API available on many websites. Finally, let me recap. When you are building an SMS OTP form: use type="text", inputmode="numeric" and autocomplete="one-time-code" for the input tag; use the domain-bound message format for the SMS message delivering the OTP; and use the WebOTP API to assist the user entering the OTP. You can learn more about these best practices at web.dev/sms-otp-form, and more about the WebOTP API at web.dev/web-otp. Thank you for watching.

Hello, it's Paul here. Thank you for joining us today. We've had a great time hosting you, and we hope that you've enjoyed it. Today we covered two critical areas that we think you need to be aware of: firstly, our privacy improvements via the Privacy Sandbox, and secondly, enhancements to user experience and performance through Core Web Vitals. If you focus on these two areas, we believe that you'll be delivering better experiences for people all around the world. So please join us again tomorrow at the same time to learn how to take advantage of the latest capabilities, PWAs and modern design coming to the web. And also, don't forget to like and subscribe to our YouTube channel to stay up to date with everything the Chrome team works on.