Welcome back to Web Dev Live APEC Edition. It's been great to be together over the last two days, and now let's finish strong on day three. We've spoken about the role the web has played, with you building websites that share COVID data online, allowing work and learning to happen from home, providing much-needed entertainment, and keeping people up with world news. One of the big shifts we've seen is around retail and commerce. Online retail has nearly doubled its share of credit card spending, and e-commerce is now 30 percent of retail, growing 15 percent in just six weeks. There's been a flood of activity as businesses rush to get online. Pedro Freitas, head of Loja Integrada, which is part of the VTEX platform, shared how they saw the number of sites being created double every single day. When they saw this, they kindly offered unlimited plans for healthcare clients. When it comes to food, we're back to 1992 levels of share between groceries and restaurant orders, and my omelettes are certainly looking a lot better. Then there's the story of Foodler, where a couple of university students quickly built a site to allow people to order directly from the local street hawkers in Singapore, bridging the online-offline gap. And there's Sebastian Cabanero, a high school student from San Marcos, California, who jumped into action to create a website that helps you find local food banks. We continue to be impressed with the ingenuity that some of you are displaying as you help people in need right now. Speaking of being impressed, we're really fortunate to have a healthy framework ecosystem on the web, where different approaches can be explored in different ways. Vue is a popular and well-loved framework with particularly large usage in the APAC region, so we wanted to invite Evan You, the creator of Vue, for a chat. Hi, Dion. Nice to be here. Hey, Evan, thanks so much for joining us. I'd love to start at the beginning of the history of Vue and the story as you've gone through different evolutions, from version one to two and now what you're working on with version three. Sure. Vue started out as a personal experimental project back in 2013 and was first made public in February 2014. The initial goal was really just to create something that I would enjoy using myself. It was a very small library, combining data binding inspired by Angular 1 with an ES5 getter-setter-based reactivity system. Because we used getter-setters, it was not IE8 compatible when it came out, which a lot of people took issue with, but I'm kind of glad we made that decision in the early days. Later on, we released version two and started adding more and more supporting libraries to it. For example, we added a router, we added a CLI. As we added these parts, it started to become more like a framework, but we still loosely followed the idea that it should be incrementally adoptable. So it's not as monolithic as some other solutions, but it's also no longer just a single library that does only one thing. The major change in version two was the introduction of the virtual DOM as the underlying rendering layer, which opened up a few interesting capabilities at the time, for example server-side rendering or rendering to other platforms. Right now, we are hard at work finishing up version three, which is a major rewrite, and it brings a lot of interesting new features and changes. There are so many things, so I'll just mention a few highlights here.
We added the Composition API, which exposes the lower-level reactivity APIs inside Vue for advanced logic composition and reuse in large applications. We rewrote the reactivity system using ES2015 Proxies, which greatly improved performance. The rendering layer rewrite has also seen great performance improvements. The library itself is now fully tree-shakeable, so you will see smaller bundles. We added first-class TypeScript support and a more modular internal architecture for better maintenance and tooling integration. There's a lot more we are shipping in Vue 3. We're still hard at work ironing out some of the rough edges, but it's going to be ready very soon. Got it. That's great. So, you've been working on the web for quite a while, with 2013 being the birth of Vue, and obviously you worked on the web before then. I'm curious, as you've watched these evolutions and you look at what's going on right now, how do you feel about web development in 2020? What are the most pressing issues? What are you focusing on? What problems do you see web developers having, and how do you feel you and Vue can help there? Sure. I think the ecosystem right now is at a transition point, where a lot of new language features and new platform capabilities are finally stable, consistent, and shipped in all the evergreen browsers. All the mainstream browsers, in their latest versions, now have very consistent support for these latest features. And IE11 is finally on the way to being phased out, so that presents an opportunity for new stacks and new tooling to break free of the shackles of these legacy problems and rethink how we can best take advantage of the new features that we can finally use for the large majority of our users. For example, most of the major browsers now support native ES module imports, which presents some interesting technical possibilities we can leverage to rethink our development workflow. Got it, yeah. I've actually been watching your Twitter feed as you explore some of these things. I've seen you working on Vite and VitePress, and I was wondering if you could explain a little bit more about what these are doing. Sure. Vite is a web development build tool that combines a dev server with a build step. The interesting part is that in the dev server, we are leveraging the browser's native ES module import handling to provide a bundle-free development experience. So instead of bundling your whole app, Vite lets the browser import your modules as needed using native imports and only processes them on demand, when the browser actually requests them. This has a few advantages over traditional bundling-based dev servers. The first is that when you have a large app, you may have a lot of modules in your application, but when you are working on a specific part of it, you may only need a subset of those modules. On-demand compilation with Vite's approach means you only compile the modules needed for the part you're actually working on, which results in much faster server startup time in large applications. The second part is that Vite also supports hot module replacement on top of native ES module imports.
Now, with native ES module imports, because we don't have to do the whole bundle scope crawling when we're handling hot module replacement, the implementation is actually much simpler and much more efficient. So it stays blazing fast even as your app grows, which keeps the development feedback loop fast no matter how big your app is. I believe this presents a really interesting option, because the development experience is so close to the old days when we first got into web development, where you just have an index.html page, you import some scripts, and you get things going. Vite tries to simplify the whole development setup and present an experience as close to the original vanilla web development experience as possible, without giving up the modern tooling capabilities that we are used to. It's still kind of experimental at this stage, but we're getting a lot of good feedback from users already. Got it, that's really exciting. I love this new trend towards not having to deal with bundling and the like at dev time. In production we can obviously still do a lot of that and push out all of the optimizations we can; everything we can do for users makes sense. So that's Vite. What about VitePress? How does that tie into this? Oh yeah, so if you don't know VitePress: VitePress is a remake of VuePress, and VuePress is a static site generator built on top of Vue that allows you to write markdown, use Vue components in your markdown, and write custom themes as a Vue application. Now, VuePress was based on webpack, so VitePress is essentially a remake of VuePress using Vite as the underlying build tool. And we took the opportunity to bake some things in. First of all, it obviously provides a much faster development experience: faster server startup, faster updates when you edit a page. Another aspect is that we're baking a lot of performance best practices into VitePress. For example, we're seeing a lot of static site generators based on universal JavaScript frameworks, like Next.js and Nuxt.js, which are both excellent projects, but when we use server rendering to send static content to the client, we often face the double-payload issue: you're sending the static content as HTML, but you're also sending a lot of JavaScript that was used to render that same content, which is kind of useless on the client, and then we're spending time on the client hydrating that content using JavaScript. So there's a lot of room for improvement here, and VitePress tries to tackle that. It takes advantage of Vue 3's compiler to do static analysis, and we automatically detect all the static parts of your page that won't change. Then we concatenate them, stringify them into static strings, and safely remove them during the production build. That decouples the JavaScript payload of your page from its content, so you're only paying for the dynamic bits inside your page, and all the static content has no impact on the JavaScript payload size and no impact on client-side hydration performance. So I think that's a pretty significant improvement to how VuePress was handling things, and when we finish it, we're excited to see how much of a performance improvement this can bring to our users across the board.
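(To picture the bundle-free, native ES module setup Evan described for Vite's dev server, here's a minimal sketch; the file names and contents are just for illustration, not Vite's actual output.)

    <!-- index.html: no bundling step needed while developing -->
    <script type="module">
      // The browser fetches ./app.js (and anything it imports) on demand.
      import { render } from './app.js';
      render(document.querySelector('#app'));
    </script>

    // app.js
    export function render(el) {
      el.textContent = 'Hello from a native ES module';
    }

A dev server working this way only has to transform each file the first time the browser asks for it, which is what keeps startup time largely independent of app size.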
Got it. I have a feeling we're gonna be hitting some Core Web Vitals thresholds here. That sounds great. Thank you. Now, we're streaming live right now at an APAC-friendly time, and we talked about how Vue is particularly popular in APAC. I was curious what you think caused that and what we can learn from it. I think Vue's popularity in Asia, in APAC, definitely has a lot to do with me being Asian. Personally, I am very active in the Chinese developer community as well, and a lot of people probably heard about Vue through me. But at the same time, I think a more important aspect is good localization of our documentation. When we first worked on the documentation, because I am a native Chinese speaker, I knew a lot of Chinese developers struggle when they see really dense technical text written in English. It's not that they can't read it; it's just that when you are reading something that's not in your native tongue, it takes so much longer. It makes learning so much less efficient, and it takes much longer for things to click when you're learning something new in a language you're not used to, right? So I wrote the first version of Vue's documentation in Chinese, and later on, when we had a bigger community, community members started to contribute more and more to these translations. They took over maintenance of the Chinese docs, and we saw a lot of contributions translating the Vue docs into other languages as well. So I think this sort of good internationalization and localization effort is definitely critical in helping Vue's adoption in these areas. And from my personal experience, particularly in a lot of areas where English is not the first language, the channel for local developers to keep up to date with the latest information, with the new things happening in the front-end world, is often a few key community leaders who are proficient in English and translate the content for them. It's great that these community leaders do this, but they are not obliged to, and when there are not enough of them, it creates a bottleneck for content and information to reach developers in these areas. So I think a framework or a tool or a community that takes internationalization and localization as a first-class concern will definitely do a lot better at reaching these developers at a much bigger scale. That makes a ton of sense. I've seen that on a few developer sites too. I remember one tried to add translations just through machine translation, and they noticed that a lot of the developers were switching to English, and assumed those developers were simply comfortable with English. It was only when the community actually took the time to build really high-quality translations, like you're talking about, that everyone flipped back. So that makes a ton of sense. Now, before we go, I happened to notice that you're a karaoke fan, as am I, and I just wanted to ask if you have a go-to song. Yeah, Don't Stop Me Now by Queen. Nice, nice. Mine is Sweet Child O' Mine. So on that note, before we start singing, and I'm already getting ready to jump to the mic here, we'll have to have a karaoke session sometime. But thank you so much for joining us, Evan.
Thanks for all that you do and all that the Vue community does for the web. Thanks for having me. Evan just spoke about his experimentation with a bundle-free developer experience, and on day one we spoke about the work we've been doing to understand bundlers with tooling.report. Understanding what's going on inside these bundles is really important, and it's one area that we're looking to explore in Lighthouse. So please welcome Paul Irish to tell us more. Thanks, Dion. So we all spend a good amount of time configuring our bundles and our bundling strategies, but ultimately there's a lack of transparency when it comes to the JavaScript that gets bundled up and shipped to production. They're just these big files, and it's easy for us to treat them as black boxes. Some of us on the Lighthouse team have worked on community tools like Source Map Explorer and Source Map Visualization to help understand what's happening inside these files, and we've long wanted to bring some of that inspection into Lighthouse itself. So I'm gonna show you a little sneak peek of some upcoming Lighthouse features that we hope are gonna help. The first one is a new audit called Remove Unused JavaScript. This is actually in Lighthouse 6.0, and it lists, file by file, the top JavaScript files that are unused and gives you an idea of what percentage, how many bytes, of each is actually unused. This is great, and it's just like what you see in the coverage panel in Chrome DevTools. But we can improve on it if we know a little bit more about the actual JavaScript bundles. We can explode them and see, for each bundle file, all of the original on-disk source modules, and understand how much of each was used and unused. That really helps us understand a bit more about the kind of code that isn't actually being run. Another new audit we're introducing identifies duplicate modules across JavaScript bundles. In this case, you can see that lodash is in two separate JavaScript bundles. We're really happy that this helps you spot exactly the cases where you could reconsider your chunking strategy to make something a little more optimized. Another new audit is called Legacy JavaScript. What we're doing here is trying to find all the cases where you're shipping JavaScript to production, specifically to modern browsers, and you're including things like polyfills, or compiling down to a level that modern browsers don't really need. Modern browsers understand a lot of modern JavaScript, so we wanna make sure you're not over-compiling. Legacy JavaScript identifies polyfills and transforms and tells you exactly what it found in each file, so that you can re-optimize your deployment strategy.
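(One common way teams act on a finding like that, shown here only as a rough, hedged sketch rather than something prescribed in the talk, is to point Babel's preset-env at browsers that support ES modules, so it stops emitting transforms and polyfills those browsers don't need.)

    // babel.config.js — a minimal sketch; tune the targets to your real audience
    module.exports = {
      presets: [
        ['@babel/preset-env', {
          // Compile only for browsers that understand <script type="module">,
          // which skips most legacy transforms and polyfills.
          targets: { esmodules: true },
          bugfixes: true,
        }],
      ],
    };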
Got it. And is this all powered by source maps? Yeah, yeah, exactly. Source maps are super useful, and they enable this really powerful analysis. And actually, if I can, we're working on an interactive UI for exploring this stuff in more detail. I wanna show this: these are early mocks, but we want to provide that kind of rich, interactive treemap experience inside the tool as well. So this UI might look a little familiar to you if you've used some of these tools before. It gives you a view across all your JavaScript files, and if there's a bundle and we can understand what's inside it, we'll show that and you can explore it in more detail. We're really excited to provide a bunch more information here and augment it with data around code coverage, so that you can see exactly what's happening in a table view, prioritize the major things you should be paying attention to, and explore in detail so you can really find out what's happening and what you could change. So again, this is early and it will all change quite a bit, but we're really excited about bringing this experience to you. It's all coming soon to a Lighthouse near you. Got it. I always love opening up some of my old bundles and seeing all of the mistakes I've made in the past, so I'm excited to play around with this. Now, a few weeks ago we also shipped Lighthouse 6, and I believe that's the version where we started to include Core Web Vitals. Is that right? Yeah, that's right. All right, so for Core Web Vitals we've got these three metrics: Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift. Though it's worth pointing out that FID, First Input Delay, is a field metric, and since Lighthouse is a lab tool, in its place we have another metric, TBT, Total Blocking Time. We like to think of this as FID's lab companion. They're not measuring exactly the same thing, but they're both about interactivity, they're both about long tasks on the main thread and heavily influenced by them, so it works really well. So in Lighthouse 6, you'll see these three metrics up top. And if you're looking at it and your report looks like this one, then you're like, wow, okay, these metrics are not great; it looks like I can make some improvements here. All right, but what's the next step? We've added some new audits to help point you in the right direction, so I'll just go right through them now. The first one I'll show is avoiding long main thread tasks. Here we're just listing the longest tasks you have on the main thread, how long they took, and what URL we can attribute them to. And then two more. We have the largest contentful paint element: the metric itself tells you at what time that paint happened, but the paint was associated with a particular DOM element, and this audit tells you which one. Similarly for cumulative layout shift, you wanna know what the actual shifts were. Well, these were the DOM elements that shifted, and these ones contributed most to your total CLS value. So you can just read this, and you'll see this content container and go, okay, yeah, I know what that is. But then if you're looking at the other two items and wondering which actual DOM element each is, and you run Lighthouse through DevTools, we have a nice little surprise for you: we upgrade these element references, so if you hover over them, we apply the element inspection hover tooltip that you normally get in DevTools, and if you click through, you'll see it in the DOM tree in the Elements panel. So it's a nice integration to help you understand exactly what you're looking at. Anyway, this is just a small slice of the new stuff in 6.0. Please, everyone, check out our blog posts for all the rest of the fun stuff. Got it, I'm always a sucker for these audits.
It's great to be able to go from the metrics themselves and the scores and really go deeper to understand what's affecting them. Thanks so much, Paul. I love how we're making sure to open up as much data as we can with the Chrome User Experience Report, and I keep seeing great mashups and analysis from the community, such as the Onely map, which lets you zoom around the world and explore the performance characteristics for different device types and different locations. Now, Lighthouse is a great tool to help you audit your web apps, and it helps you not just with performance, of course, but really across the board. We want you to be free to create the highest quality web apps that give you the biggest reach to users across whatever device they're using. To hear more about the latest in PWAs and advanced capabilities, let's welcome Pete LePage. Awesome. Thanks, Dion. We believe that you should be able to build and deliver any kind of app on the web. Web apps should be able to deliver the same kinds of experiences, with the same capabilities, as native apps. Combining the installability and reliability of progressive web apps with our capabilities project, we're working to close the gap and help you build and deliver great experiences. To do that, we've been focusing on three things. First, we've been working hard to give you and your users more control over the install experience: removing the mini-infobar, adding an install promotion to the omnibox, and more. One of my favorite things about the web is how ubiquitous it is, but we know that for some businesses, it's really important to have your app in the store. At Chrome Dev Summit, we previewed a library and CLI called Bubblewrap, then called Llama Pack, which makes it trivial to get your PWA into the Play Store. In fact, PWABuilder.com now uses Bubblewrap under the hood; in just a few mouse clicks, I can generate an APK that I can upload to the Play Store. Second, we're providing tighter integration with the operating system, like the ability to share a photo, song, or whatever else by invoking the system-level share service, or, the other way around, being able to receive content shared from other installed apps. You can keep users up to date or subtly notify them of new activity with app badging, and make it easy for users to quickly start an action using app shortcuts, which will land in Chrome 84. And finally, we're working to enable new capabilities that open up scenarios that weren't possible before, like editors that read and write files on the user's local file system, or getting a list of locally installed fonts so that users can use them in their designs. Of course, there's plenty more, so stay tuned.
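(As a quick aside, two of the capabilities Pete mentions, the system-level share service and app badging, look roughly like this in code. This is only a hedged sketch: both calls need feature checks, support varies by browser, and sharing generally needs to run in response to a user gesture.)

    async function shareAndBadge() {
      // Invoke the system-level share sheet, if available.
      if (navigator.share) {
        await navigator.share({ title: 'Check this out', url: 'https://example.com/' });
      }
      // Put a badge on the installed app's icon, if supported.
      if ('setAppBadge' in navigator) {
        await navigator.setAppBadge(3); // e.g. three unread items
      }
    }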
But I hear, Dion, you've been playing with the TikTok PWA. That's right. My 10-year-old son, Josh, started to create some really fun TikTok content, even though he's pretty restricted these days to the house and the pets and the like. He kept bugging me to join in and play myself, and, you know, I was a little bit gun-shy, so to speak, but it gave me a great excuse to play with the really well-built PWA that the TikTok web team put together, which I didn't even know about. It's a really nice, rich experience. That's really cool. And all of these things that we've been working on, things like the app shortcuts, are gonna be a great addition to it. The idea that you can push and hold on the app icon and quickly jump in and create a new TikTok, or push and hold and say, hey, I wanna see my friends' TikToks, or the ability to show app badges when a friend has posted something new. So there's plenty of great stuff coming that they can take advantage of, and so can you. That's excellent. Thanks again, Pete. Thank you. Now with that, it's time to close out the last opener. It's been a great pleasure being with you over the last three days, but we're not quite done yet. We've got a great set of content coming up today. Next, you're gonna hear from Google engineers who'll help you improve the reliability of your experience with advanced patterns for building PWAs, then how you can get them into the Play Store, how to increase your conversion rate for notifications, and so much more. If you're watching live, we'll be on the chat to answer your questions at web.dev/live and of course on YouTube. But the fun doesn't end today. Thanks to our amazing Google Developer Groups all across the world, we've got a set of follow-up events in the weeks to come, where we'll have Googlers, Google Developer Experts, and experts from across the local communities joining you to share more insights and guidance. Just check out the regional events section on web.dev/live starting tomorrow to find the event in your time zone. Thank you so much. If you are building a website today, chances are you are using some sort of build tool. All right, so one question: when you are starting a new project, or you are given a brand new repo and told, hey, Surma, please start this project, where do you start? What tools do you choose? Tell me about your setup. This has evolved so much over time. I take a long time to decide, and I usually start out without a bundler. My preferred starting place is: I just want to write the latest JS and the latest CSS, and I want the rest of it to stay out of my way until I actually need it. It really depends on what the project is. I've been building a lot of static sites lately, and Eleventy is a relatively new tool for static site generation; I've been converting some Jekyll sites into it, and it's just felt so nice and not super robust, which is what you sort of need for a static site. But if I am building a more dynamic project, I'll go with either Next.js, or Gatsby if I want React with a static site, because I like how it compiles out to just HTML, CSS, and JavaScript. But there's a thousand tools, and it really just depends. That's the answer: it depends. So I was a webpack user, and I used webpack because at the time it was the only one that supported code splitting. But I knew I just needed to add this tag in the head, and the HTML plugin went, no, this is mine now, you may not touch. And then other things: in the earlier days of service worker, I just wanted a list of the output files, like, let me know what the hash is going to be for these files, but reading the webpack plugin docs I was getting so frustrated I couldn't figure it out. I used to use Gulp or Grunt a lot, because at least I understood what was happening, and with webpack I really didn't. Over time, I have kind of fallen in love with Rollup. So I have my own build system that I've been managing; it's probably the sixth build system I've made.
If you go look at my GitHub history, I have a Grunt one, I have a Make one, I have Gulp ones. I've been following all these build tools for a long time. Here's how I feel about web projects: complexity is inevitable. There's no way to get around it, and you're either injecting complexity from the beginning and making it worse, or you're waiting until the complexity confronts you. Yes, even if you start simple, complexity is inevitable. The idea of making a website is, in principle, straightforward: you make an HTML document, add style to it, and add some functionality too. But in practice, web development gets a lot more complex. Your application code may depend on outside libraries or on different modules. You might be importing web fonts, or you might be pre-rendering a portion of a page as a static site so that it can be delivered faster to users. And chances are you're using build tools to manage all of these complexities. But because tools expect a certain setup, sometimes seemingly simple tasks, like inlining something into the HTML, get harder to accomplish. There are many challenges like this in web development. So let's look at how we manage JavaScript. In the past, we wrote everything in independent files or different script tags and carefully combined them, or added them to the HTML ourselves. The way we used to write JavaScript, we'd have these massive files, and humans needed to know which one goes first and which one goes after. Yeah, and you're making me remember where I started with bundling, which was a PHP script that concatenated all my JavaScript files together, which now feels wrong, but at the time felt very powerful. But now we have modules, which means the dependencies of each module are specified in the code, which means a build tool can analyze the files and create bundles for us. Even better, some tools, like webpack, analyze which parts of the code are actually being used and extract them to make smaller bundles. When we started writing JavaScript this way, I feel like webpack came onto the scene as the tool of choice, with a lot of bells and whistles, doing things like tree shaking and scope hoisting. So could you explain how webpack handles this module JavaScript world? Yeah, so webpack supports a huge number of module formats. Some of the common ones, obviously, are ES modules and CommonJS, which everybody knows about, but it actually supports parsing and understanding the structure of SystemJS modules and AMD modules and even Wasm imports. It takes all that information and attaches it to its in-memory graph representation, and it can use that so that, if you only imported one thing from a module, it can essentially delete the exports from that module that you didn't use, and those code paths won't end up in your bundle. And when you take that further, you know, maybe that unused export was the only thing using an import from another module, and now that import is unused too. You can see how, flowing that information through the graph, you could eventually end up removing a fair bit of code. So webpack doesn't actually convert modules to an internal source format; it's more focused on understanding them and their structure as they exist on disk.
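(A tiny illustration of that analysis, with made-up file names: if only one export is ever imported, a bundler that understands ES modules can drop the other one from the output.)

    // math.js
    export function add(a, b) { return a + b; }
    export function multiply(a, b) { return a * b; } // never imported anywhere

    // app.js
    import { add } from './math.js';
    console.log(add(2, 3));
    // A tree-shaking bundler can omit `multiply` from the final bundle,
    // because nothing in the module graph uses it.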
Our JavaScript output doesn't have to be a single file, either. Tools like Rollup will split it up into smaller chunks. Can you explain why Rollup is really good at that? Yeah, so webpack has its own loader, and so does Parcel, whereas Rollup by default is ES modules. Like, that's where it lives; it lives in that world. So the output it generates is way simpler than the other tools'. In terms of code splitting, Rollup's implementation is very pure, I would say. It will create the smallest number of chunks that it can, but it will create a small chunk, maybe a chunk just containing one function, if that's the only bit shared by two entry points. That's something that webpack and Parcel don't do: they will duplicate that module in both of their bundles, whereas Rollup will always just create a separate chunk. It's very pure; it will never duplicate code. And different scripts run in different threads. Ideally, common dependencies are exported as one chunk, but some tools don't understand that, so they create duplicates. What was interesting to me was that Parcel supports worker and main thread splitting out of the box. Is there any backstory for that? There is. And I think I can take a little tiny bit of credit for Parcel supporting that, because it was, I think, February 2018, and you, me, and some others were working on Squoosh. Squoosh made heavy use of WebAssembly, put the WebAssembly in a web worker, and then used Comlink to talk to those web workers. We built Squoosh using webpack, and we figured out over time that the way webpack built that project, it put a copy of Comlink into the worker and into the main thread, so the user ended up loading that code twice. Now, Comlink isn't that big, so it's not that big of a deal, but with bigger dependencies, that could become a significant problem. So I filed a fairly long bug on webpack, with a graph and everything, explaining why that should change. It hasn't been fixed yet, but it's been discussed a lot, and I think Sean Larkin told me that webpack 5 will finally be able to address that problem. But shortly after I opened that bug on webpack, Devon Govett, who is the maintainer of Parcel, opened an issue on Parcel himself saying, I think we can do this, I think Parcel can fix this problem. I don't know how long they talked about it, but I guess a couple of months later the bug was closed, and suddenly Parcel supported this code splitting across worker boundaries. That was just really, really cool to see. Beyond making JavaScript bundles, build tools help us manage assets too, sometimes separately and sometimes through JavaScript. So I think that as CSS has evolved, the tooling around it has grown as well. I think the first really big instance of CSS tooling came about with Sass and Less and Stylus and all of those pre-processor tools. That was when Ruby was really big, and they were written in Ruby, and it was sort of clunky and slow, because you had to wait for your CSS to process before it was spit out from Sass to CSS, for example. Then Node came around and that got a lot faster, Sass was rewritten, and people were still getting a lot of the benefits of that language, the Sass language. And then PostCSS sort of replaced a lot of that need, because it allowed you to do some of these same things, but instead of pre-processing and waiting for the developer to see their CSS file be exported, you could run through the CSS file after you had already written it and apply transformations and changes with PostCSS. That also enabled you to have pluggable, very small bits of code, with a much smaller footprint in your dev files and your architecture, by enabling these small plugins, like an Autoprefixer plugin, or something like a size plugin where, if you wanted to use a size keyword, you could use that, and you could even write your own. So that was really cool.
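(As a small, hedged example of that plugin-based setup, assuming Autoprefixer is installed as a dependency, a PostCSS config is often just a short list of plugins.)

    // postcss.config.js — a minimal sketch of the pluggable approach described above
    module.exports = {
      plugins: [
        require('autoprefixer'), // adds vendor prefixes based on your browserslist targets
      ],
    };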
And I think that has continued to this day. People use PostCSS all the time, and now you're seeing a lot of additional CSS tools that allow for things like tree shaking and optimizations that go beyond just minification. That has really coincided with framework-based JavaScript becoming so prevalent in the way we architect our projects now. You know, naturally, I think CSS-in-JS happens, or maybe it's JS in your CSS, but inevitably you'll hit a moment where you need something so richly dynamic that declarative static styles might not work for it. So I've been putting CSS in my JS for a long time, and if we're talking about the newer CSS-in-JS libraries that take your object notation, or let you do extending and abstracting in different ways on the client side, those are great too. All options for writing styles still have footguns, so you just need to be careful, and you'll learn your tool the more you use it. Okay, so what are the quirks when dealing with assets? Yeah, so webpack has a super long history with various techniques for doing this. At its simplest, you can take a loader, which is a transform you apply to a module, and apply it to something that isn't JavaScript to turn it into a string in a JavaScript file. So historically, the way assets have worked in webpack has tended to center around turning them into a JavaScript thing, even for assets where the asset itself has dependencies, either on JavaScript or on other assets. It will turn the asset into a JavaScript module with a string, and turn the dependencies, you know, the CSS import statements, into JavaScript require calls. So it injects them into the graph by converting them to the equivalent JavaScript code. Now, this is changing in webpack 5, but for other tools, like Parcel, assets have been at the center from the beginning. One thing that sets Parcel apart from many others is that it doesn't use JavaScript as its entry point, but HTML. So what is considered an asset in many other build tools is the main entry point in Parcel land. And that makes sense, because on the web, that is the thing we go to: we go to HTML pages, and from there we reference assets. So whether you reference an image from JavaScript or from within HTML, Parcel will understand that, and that's actually really cool. And it does this many, many layers deep. If you reference a CSS file from HTML, and that CSS file references some images, all of these things are tracked by Parcel, they will be hashed, and they will get a version number and whatnot. So it actually builds an entire asset graph. What Surma just explained is called asset hash cascading. If one file in the graph is updated, that file's hash changes, and because of that, the hash of any file that used the updated asset should also change. This is important so that we can control caching for better performance. Let's see how Rollup handles it. Yeah. So right now, hashing is Rollup's weakness.
And it's something they know about, and it's something they are going to be working on fixing very, very soon. So when you generate the hash for a file, that little bit at the end of the file name, the hex letters and numbers, you need to do that as late in the process as possible, and you want it to be based purely on the contents of the file: not the directory it's in, not your config settings, anything like that. If you've got JavaScript which imports JavaScript which imports JavaScript, Rollup does the right thing with the hashing: you change a leaf module, and it changes all of the other hashes in the chain, because all of the URLs have updated. Whereas with assets, you use a magic string in Rollup and it will replace that with the asset's URL, but it does that after hashing. So you update your asset, its hash changes, the JavaScript file gets the new URL fine, but the hash of that JavaScript doesn't update. So it's a weakness, but they are fixing it. These differences and gotchas in our build tools are a constant source of frustration. As Una said, the best tool for the job really depends on your project, so we wanted to make it easier for you to navigate this landscape. Tooling Report is a new website that gives developers like you an overview of the features supported by different build tools. We built this website so you can evaluate and choose the right tools for your next project, or, if you are in the middle of migrating from one infrastructure to another and hitting a roadblock, Tooling Report should help you answer your questions. We've essentially written a test suite for different tools based on common web development practices, so you can read why each test is relevant and see how each tool handles it. And when you are ready to implement something yourself, you can look at our test code to see how you might integrate certain features into your build setup. We also welcome your contributions, so if you think certain features should be tested, please raise an issue on the repo. Thank you, and please reach out if you have any questions. Hi, I'm Una, a developer advocate on the Chrome team focusing on CSS and web UX. Thank you for joining me today. I am super excited to get started and talk about some magical lines of CSS that do some serious heavy lifting and let you build robust modern layouts. Before we dive into that, there are a couple of key terms that will help you in your styling journeys and are super good to know about as we walk through these techniques. Most of the items I'm mentioning today are used in conjunction with grid or flexbox layout, and you turn those on with display: grid or display: flex on the parent element. The first key terms are fr and auto. These are used with grid layouts to denote fractional units of space, that's fr, or automatic space, that's auto, based on the minimum content size of the items within that element. For grid layout, we also have the minmax() function, which lets us set a minimum and maximum value for our layout bounds to enable responsive design without media queries. We also have the separate min(), max(), and clamp() functions, available in some browsers, that bring logic to CSS: the browser determines which value to choose based on the arguments provided for min() and max(), and for clamp() we set both a min and a max, with a relative value in between. We'll definitely be covering these in the video with a demo. And within these layouts, we can also use auto-fit or auto-fill to automatically place child elements into a parent grid.
This is another tool for dynamic responsive layouts without media queries. We're going to go over all of those terms and more, with demos galore. I'm a really big nerd. I don't know why I made that rhyme and put it in this video, but I did, so here we are. Let's just dive into the demos and one-line layouts. I made a handy little site called One Line Layouts, at one line layouts dot glitch dot me, so that you can follow along or play on your own and have a reference for the power that CSS can bring to your layouts. For our first single-line layout, let's solve the biggest mystery in all of CSS land: centering things. I want you to know that it's easier than you think with place-items: center. I call this the definitely-centered layout. What we need to do is first specify the layout method, which is display: grid here, and then we write place-items: center. This is that one magical line of code, and I have these highlighted under the titles here. What happens is that no matter what you put in here, it stays centered in that parent element. So if we look at the HTML, we have this parent, which just gets a blue background, and then this child with a coral background, and it's content-editable, so we can actually type in here. We can have vertical content, hello world, and even as I'm typing, this child element stays centered within the parent box. So I think this is a really cool technique; place-items: center will solve all of your centering dilemmas. Next we have the deconstructed pancake. This is something we see all the time on marketing sites: a row of three items, and usually on mobile we want them stacked, which is why I call this a pancake, but deconstructed, because as you increase the size of the viewport, those items start spanning onto the same line. So it starts with them all on top of each other, and then they deconstruct as you increase the size of the viewport. The way we're going to do this is using the shorthand for flexbox: flex: 0 1 plus a base width is what we'll use for the look you're seeing right now, without the stretching. If we did want stretching, we would set that to flex: 1 1 and then the flex basis. The reason we do this is that the flex shorthand stands for flex-grow, flex-shrink, and then flex-basis. So here we have flex: 0 1 150px; that 150px is the basis. As we increase the size, the items are not going to grow much, and when we decrease it, they stay at that 150 pixel size. When we change that first value to 1, and I'm just going to edit this line of code here, now we see that it's going to stretch. So here you can see that it stretches to fill the space even as it wraps, and as I increase the viewport to an even larger size, these items fill that space. So this is the deconstructed pancake, a really common technique we see on marketing sites; it's usually something like an image with some text about the product, and you can write it by using the flex shorthand.
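(Written out as CSS, those first two patterns look roughly like this; the class names are just for illustration.)

    /* 1. Definitely centered */
    .centered-parent {
      display: grid;
      place-items: center;
    }

    /* 2. Deconstructed pancake */
    .pancake-parent {
      display: flex;
      flex-wrap: wrap;
    }
    .pancake-parent > * {
      flex: 0 1 150px; /* flex: 1 1 150px lets the items stretch to fill each row */
    }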
I like to call the next one Sidebar Says, and it takes advantage of that minmax() function for grid layouts. The line here is grid-template-columns with minmax() and then a min and a max value. That's pretty straightforward with minmax(), but here's what it's doing: as we increase the size of the viewport, the sidebar takes that 25% size, the max value, and as we decrease it, it hits the point where 25% would be smaller than 150 pixels, so it clamps at that minimum size. So it grows when it can take up the viewport space, but if we have content in here that we don't want squeezed, and we want it to stop being 25% and be at minimum 150 pixels, or whatever value we set, then that is exactly what minmax() does for us. In the grid-template-columns here, we are writing minmax() with that base value and then the relative value, 25%, and the second element gets 1fr. If we look at the HTML, we have these two elements, this yellow sidebar section and then this purple content section, and they take up the units of space we're specifying. In this specific case, we could actually set the second value to auto and it would look the same, because we're only setting the size on that first element. We'll go over auto versus 1fr in the next example. But I think this is pretty neat: a great way to set a minimum size but then let your element stretch on larger viewports to fit those layouts a little better, based on how your user is seeing your website. Next we have the pancake stack. Unlike the deconstructed pancake, this one does not shift when the screen changes size. This is a very common layout that we see for both websites and applications, across mobile and desktop. It looks like this: as we're increasing and decreasing the size, this content is not changing. What we are doing to create it is writing the grid-template-rows as auto, 1fr, auto. Essentially, we're telling the first and last rows to take up the space that their internal elements allow. If this header were two lines, it would take up correspondingly more space within this vertical layout. With this auto section, I could have more content here, but it's not going to change the other rows, because the auto-sized rows take up only the space dictated by the content within them, and the remaining space goes to the remaining fractional unit. So as we increase this vertically, you can see that the first and last rows are not growing; they still only take the size they need. But if I decrease the horizontal space, as the footer content wraps onto the next line, it takes up more space within the layout. So with grid-template-rows: auto 1fr auto, you can create the pancake stack. You could even have an application toolbar down there. Again, this is one we see really commonly, and grid-template-rows is a great one to know.
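(In CSS, those two patterns boil down to roughly this, again with illustrative class names.)

    /* 3. Sidebar says: the sidebar can grow to 25% but never shrinks below 150px */
    .sidebar-layout {
      display: grid;
      grid-template-columns: minmax(150px, 25%) 1fr;
    }

    /* 4. Pancake stack: header and footer size to their content, main fills the rest */
    .pancake-stack {
      display: grid;
      grid-template-rows: auto 1fr auto;
    }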
Another very common layout is the Holy Grail layout. Does this look familiar? I think we've all seen a website or two that looks exactly like this at some point on our web journey. We've got your header, your footer, your left sidebar over here, your main content, and then a right sidebar. And we can write all of this in one line of code using grid-template. grid-template allows us to write the grid template rows and the grid template columns at the same time, which is pretty neat, and it's also super responsive. Again, you see the auto content taking up what we specify internally, and the main content taking up the 1fr. So what we are specifying here is grid-template with auto, 1fr for the middle row, and auto for the footer, and then our columns of auto, 1fr, and auto. So you get the whole Holy Grail layout with grid-template: auto 1fr auto / auto 1fr auto. The slash is what separates our rows from our columns when we write grid-template. So I think this one is great to know; grid-template is something I use all the time. Now, this is not exactly one line; I mean, with all of these you have a little bit more to add. With this one, once you've written the grid-template, you then specify grid-column and grid-row for each of the elements. Here I'm specifying which grid columns the header is placed in: it's going all the way across all three, so 1 / 4, to span the grid tracks from the first track to the last track. The left side goes inside the first column, from 1 to 2; the main section goes from 2 to 3, taking up the middle; and the right side goes from 3 to 4, all the way to the end of the grid we've created. And the footer goes all the way across, just like the header. So you do specify where items are placed when it's a little bit more of a complex UI. And if we look at the HTML, I have this parent, then a header, a left sidebar, the main section, the right section, and the footer. So you can write the bones of your layout in one line using the grid-template property. Next, we have another classic, the 12-span grid. You can quickly write grids in CSS using the repeat() function, and here we're setting a repeat of 12 columns. That looks like this: grid-template-columns, repeat, the number of times we want to repeat, and then 1fr. This is the same as writing grid-template-columns with 1fr written out 12 times, but because I don't want to do that, I just write repeat(12, 1fr), and then we have a 12-column grid. Now we can place our items within this grid however we want. If we want an item to span all the way across the 12 columns, we use this span-12 element: I gave it a span-12 class and have it going from 1 all the way to 13, so it takes up the full width of the 12 columns, ending at grid track 13, which is the end of that 12th column line. For the span-6, it goes from 1 to 7. For the span-4, it goes from 4 to 9, but it can go anywhere: I could start it at the first line and make it go all the way to 9, or have it go to 5 and still span four. The cool thing is you can place it wherever you want inside of your UI; if I wanted it to start at 6 and go to 10, I could do that. Then we have this span-2, which goes from 3 to 5; again, that could be placed anywhere. And a peek at the HTML: we just have this parent element, and inside of it we give the children classes based on those span values we were just adjusting within the grid. So the repeat() function is very, very useful when you don't want to keep typing out 1fr or auto multiple times; it just lets you write things quickly. We're going to expand on the repeat() function for number seven, and this technique is super cool, super useful. If you take away anything from this video, I think this would be a great one to keep in your repertoire. I like to call it the RAM technique, which stands for repeat, auto, minmax, and the line of code here looks like this.
You have grid-template-columns, then repeat with auto-fit or auto-fill, then minmax with the base value and 1fr as the fractional unit. Here we have that base value as 150 pixels, and I'll show you exactly what's going on. As we increase the size here, these four boxes fit to take up the space. As I decrease it, they hit that 150 pixel base value and then wrap onto the next line. But here we have them auto-fitting, so they span to take up as much space as they can; there are some really cool algorithms at play here. Now, if we change this to auto-fill, it looks a little bit different, so let me just update that to auto-fill. Now, as I increase the size, the items don't span and stretch to take up the remainder of the space; they use that 150 pixels as a baseline and stay within it. At smaller sizes there's no difference, but you really see the difference at larger sizes, when you have additional space. And I use this auto-fill technique on the page that you're looking at here, so that these two segments stretch and shrink but don't exceed a specific size I wanted for them, and then I have them centered. So you can center this within the parent as it spreads to a larger size; that's always an option, and a really great technique to know. Remember: R-A-M, repeat, auto (fill or fit), and then minmax, and you get these responsive boxes. You can use these for images, you can use them for a lot of things; it's something we see all the time for cards, and it's a great use of all of these fancy new grid capabilities in CSS. For our next layout, we're heading back to flexbox land. I wanted to include this one because I just find it so useful and I use it all the time. I call it the line up. Why? I don't know. I'm tired. If you have a better name, please let me know; leave a comment. But the main thing I wanted to demonstrate here is justify-content for placement of items, and specifically I wanted to highlight justify-content: space-between to place items at the edges. In this example, we have these three cards, and you can see that as I stretch or shrink this viewport element, they maintain the same height as each other. In fact, we have them fit to the top and bottom, and this interior content, this description, which I can keep typing into, is then centered within the remaining space. The reason this happens is that for these cards we're giving them a flex-direction of column with display: flex, the flexbox mode, and then we're justifying content space-between. Because the flex direction is column, the space-between is applied right here, in between these three elements: this little box, the description, and the title. So as additional space is added or removed vertically, they space themselves out, and this makes for a much neater layout. I've used this a lot; without that justification it looks a lot messier. Because of the stretching here, because of flexbox and this justify-content: space-between, I use this all the time, and I think it's really important to know that with flexbox you can change the direction and justify your content in unique ways. It doesn't have to be just centered: you can do center, and you can do space-around or space-evenly as well. But in this case, I think the best way to justify is space-between, because this way we ensure that the first and last elements in our layout, which are this h3 and this little visual box, remain flush to the top and bottom of our cards.
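(Those two boil down to roughly this in CSS, with illustrative class names again.)

    /* 7. RAM: repeat, auto, minmax */
    .ram-grid {
      display: grid;
      grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
      /* swap auto-fit for auto-fill to stop items stretching into leftover space */
    }

    /* 8. Line up: keep each card's first and last pieces flush to its edges */
    .card {
      display: flex;
      flex-direction: column;
      justify-content: space-between;
    }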
Here's where we get into some techniques with a little less current browser support. I like to call this one clamping my style, and it's a really neat trick. Remember, at the beginning of this video we talked about min(), max(), and clamp()? Well, here's where they come into play for layout and element style. Here I'm specifying the width using the clamp() function, setting it to clamp(23ch, 50%, 46ch). What does this do? Let me show you exactly what's going on. As I increase the size of the parent container, it increases the size of this card, and as I decrease it, the card shrinks. But what we are setting here is a minimum and a maximum size for it to clamp to around that 50% value. The card wants to be at 50% width, but if that 50% means it would be bigger than 46ch, which stands for 46 characters, then it stops getting bigger. The same thing happens at smaller viewport sizes: it stops shrinking when 50% of its parent means it would be smaller than 23ch, 23 characters. These character units can be used to make for better legibility. In this case, I don't want this card to get any smaller than that, because then the paragraph gets harder to read, and the same with getting bigger than 46 characters, because then the lines get too long and hard to read. So you can use width with clamp(), with a minimum size, the preferred size, and the max size, to create some really nice responsiveness within the element itself. And you can also use clamp() for font size; that's a really great use case too. You can have responsive, flexible typography: you could have, say, clamp(1.5rem, 10vw, 3rem), using the viewport width as that middle value, and that way, as you resize your window, you have a minimum value of 1.5rem, a maximum value of 3rem, and the text grows and shrinks between them. It's pretty cool to see this actually working in a browser. Again, this doesn't have full browser support yet, but it is a really great technique, so if you're using it, make sure you have fallbacks and do your browser testing. And finally, we are at the end, and this last layout tool is the most experimental of the bunch. It was recently introduced in Chrome Canary, in Chrome 84 plus, and there is an active effort from Firefox to implement it, but it is currently not in any stable browser at the time of this recording. I do want to mention it, though, because it solves such a frequently met problem: simply maintaining the aspect ratio of an image, a video, or an iframe. What this is, is respecting the aspect, respect for aspect, that's the name I gave it, and it is the aspect-ratio property. Oh my gosh, I'm going to be so excited when this is implemented in all browsers. What it does is this: as I resize this parent element, the image here, well, this box right here that I gave a class of visual, has an aspect ratio of 16 to 9, and no matter how I increase it or decrease it, it keeps that aspect ratio.
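(Condensed into CSS, these last two techniques look roughly like this, keeping in mind the browser-support caveats above; selectors are illustrative.)

    /* 9. Clamping my style */
    .card {
      width: clamp(23ch, 50%, 46ch);
    }
    h1 {
      font-size: clamp(1.5rem, 10vw, 3rem);
    }

    /* 10. Respect for aspect (experimental at the time of this talk) */
    .visual {
      aspect-ratio: 16 / 9;
    }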
This is something that is so needed for when you are pulling in content from a CMS or otherwise and you have a specific dimension that you have that media at. And this, this for example, in the previous example, if I scroll up, as I resize here, since I'm only setting a height and it's just getting the 100% width of whatever remaining space that I'm providing here, it's actually changing the aspect ratio of the sole visual box and that's not what we want. That is going to make you force a decision if you want to either fit the content inside of there so it's smaller and fits it in the space or if you stretch it out and you have it fill the content and then you're only seeing a piece of that media, not the full image and that's also going to cause all sorts of problems. So having this aspect ratio is very exciting. You can also set this to a square if you do one over one. You could have this be one over two where you have this actually longer than it is wide within this parent here. So that's going to be maintaining that aspect ratio and it's just exciting. So I wanted to mention it and let you know that it's coming down the pipeline. And those are the 10 really powerful lines of CSS that I wanted to talk about in this one line layout video. I hope that you all learned something new and if you're looking for these demos, check out onelinelayouts.glitch.me. Thank you all for watching. If you want more CSS content and to dive deeper into layout techniques, including all the bells and whistles of CSS Grid and Flexbox, check out the CSS podcast that I do with my co-host Adam Argyle. It is at pod.link slash the CSS podcast. Thank you again. Enjoy the rest of your web day. Hello, I'm Jake. I'm Jason. Earlier in this conference, we released, is this a conference? Actually, is this a conference or is it a question like an event? I don't know. Let's call it a conference. Earlier in this conference, we released tooling.reports. This is a website. It looks at all the different JavaScript bundling build tools, shows the things they're good at and the things they're not so good at. But in this session, we are going to write our own plugin. Right. So cruising around the web, we see lots of sites with performance issues. And a lot of times, these can be solved with something simple, like a preload tag for a web font or some selective code splitting. But the thing is, when these are controlled by a build system or even deep in layers within a build system, making those changes can be really tricky. So supposedly simple changes can start to seem impossible. Yes. But once you're familiar with writing build plugins, sure, that means you can write build plugins, but it also gives you insight into how the build tool works. It makes it easier to debug things when they go wrong. And of course, it means that you can go and help with community plugins as well and do PRs and add features and fix bugs, that kind of thing. Definitely. So we're going to create the same plugin for rollup and then for webpack, just because those are the two most popular and growing build tools that we've seen. So what are we actually building? We are going to build a service worker plugin. Of course we're going to build a service worker plugin. So the actual subject matter of the service worker plugin itself doesn't matter all too much. We picked this because it touches on a couple of different parts of the build process. 
So it's one of those things that seems simple on the surface, but there's actually quite a bit of complexity when you dig in. Right, so here's the app. Now, we're not going to go through the whole app, but it has JavaScript, CSS, images, and HTML, all of which are part of the build process. Right, so the bit that we care about for this session is specifically the registering of a service worker. So the URL for service worker should never change. And this is because we need to be able to go and check for updates at that URL. But it'd be really nice if we didn't have to duplicate that URL here and in our build configuration and on disk, and instead that was just filled in by the build system. So here's a code for the service worker. We're not going to go over all about how service workers work. There are articles for that, but here are the things we need for this build system. We need a list of URLs to cache. So this should be our HTML, CSS, JavaScript, and images. These are the things that a native app would ship as part of its bundle. Now we can't just hard code these because the build system is going to change their file names, it's going to make them cacheable. It might also code split as well, which is going to generate whole new JavaScript files that we didn't know about in advance. So because we can't predict it, we need the build tool to fill it all in. Right, so we also need a version for the cache. And this is because when we're installing a new version of the app, we want to use a new cache. That way we don't potentially disrupt any current version of the application that's in use maybe in another tab. And so because of that, we need the version to be unique for each given set of assets. Yes, and if we don't change the assets, the version number should stay the same. But if we say, like, update the HTML, the version number needs to change. Easy, right? Right, so the challenge is that we really want the service worker to be part of the main build. We want it to potentially take part in code splitting. We want to minify it just like we would our application JavaScript and also apply any other plugins and optimizations that we have to that code. The thing is, it also needs to know about the final build result. And that is only available once the build is finished. Okay, here is how I would do this in rollup. We need to get that service worker URL, so I'm going to import it. Now note it looks a little bit magic. It's got this sort of fake URL scheme. This divides people, but I really like this because I want this to look weird if it's a special part of the build. Because it means you look at this code, and you're like, huh, that's odd. And it's like, yes, it is odd. The build tool is going to do something special here. And it also means, like, if the build tool doesn't do something special, it is going to fail really, really early. But like I said, this divides people. Jason, how do you feel about this? I mean, I like this for the fail early reason. Like, that's a nice thing. But if you pull this code into a new project, it's going to tell you, hey, this isn't a thing, which is sort of exactly the message you would want to receive. But I also like that, you know, on the right-hand side of that colon, it's still just a module path. So it's really clear, you know, for you when you're visually looking through, this is the special part, this is the regular part. Yes, so all we need to do now is create the build plugin, I guess. So here's the config, the rollup config. 
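To make the goal concrete before looking at the config, here is roughly what the app-side code being described might look like. The service-worker: prefix is the one discussed above; the ASSETS and VERSION global names are assumptions for this sketch, since the talk doesn't pin down exact identifiers.

```js
// main.js: the URL is filled in by the build, so it never has to be
// duplicated in the source, in the config, and on disk.
import swUrl from 'service-worker:./sw.js';

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register(swUrl);
}
```

```js
// sw.js: ASSETS and VERSION are placeholders the build plugin will inject.
addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(VERSION).then((cache) => cache.addAll(ASSETS))
  );
});
```

With that picture in mind, the next step is the Rollup config.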
You've seen this before if you've used rollup. I'm going to import this new plugin and add it to our list of plugins. Now, all we need to do is write it. Rollup has a one-pager on how to write plugins. But it's quite a long page, but I have to say, like the API is really well-designed and the documentation is really well-written. So let's get started. This is what rollup plugins look like. It's a function that returns an object. It has a name as part of that, and that's for error handling and logging. But everything else, the rest of the object, this is methods that are called throughout the build. They're essentially callbacks. Rollup calls them hooks because they're hooks into the build process. The first hook we're going to look at is Resolve ID. This is called for every module that's loaded. So it's going to be called for the main file there, but it's also going to be called for this import and everything it imports, like including dynamic imports. The ID you get is the raw unedited import ID. So it was going to start with the dot slash in this case. And in this case, it's going to include our special made up URL scheme as well. The second argument is the importer. This is the path to the module which did the importing. And this is the only time you get this piece of information. The job of the Resolver is to return a full absolute ID that doesn't need the importer anymore. So any relative paths should be turned into absolute paths. Now, if you've used Rollup before, you'll know that it doesn't support node modules out of the box. You need to add a plugin to be able to do that. It's an official plugin, but you still need to add it. All that is is a Resolve ID hook. It sees these bare modules and it goes, oh, okay, I'll try and find that in node modules. But also if you want to create like path aliases to make them work in Rollup, all you need to do is create a Resolve ID plugin. So we don't need to do anything with this first import. We just need to handle the second one with the special URL. So here we go. The first thing I'm gonna do is just exit early. If it doesn't start with that prefix, actually we're gonna do that sort of thing a few times. So I'm gonna pop that in a variable up here because it's gonna come up time and time again. So there's an early exit and it means if it doesn't start with that prefix, we can hand it off to other plugins or the default Rollup stuff. Now I'm gonna remove that prefix and Resolve it. This is a Rollup API. And what it says is like, take this ID and go and find it for me. It couldn't do it by default before but now we've removed the prefix, off it goes. And that's gonna actually pass it back through all of the plugins that do Resolving, including our own, but we're gonna ignore it because it doesn't have the prefix anymore. But it means like, if you have a service worker in node modules or if you have like path aliases, it'll all just work. So if there's no match, just bail and that will cause an error eventually because it means that we couldn't find that service worker. Otherwise, I'm gonna add that prefix back on so we can pick it up later on. But now we have an absolute path to the service worker. We have achieved a thing. We can try and build it. Here we go. Whack whack. It's not gonna work but you can see in that error message it's failed because of that special prefix but the rest of it is an absolute path to the file as it is on my system. So all we need to do is tell it how to load that script. This is another hook, load. 
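Before getting to the load hook, here's a sketch of the config and the resolveId hook described so far. The plugin file name and the exact early-exit handling are illustrative rather than the talk's verbatim code.

```js
// rollup.config.js
import serviceWorkerPlugin from './service-worker-plugin.js';

export default {
  input: 'src/main.js',
  output: { dir: 'dist', format: 'esm' },
  plugins: [serviceWorkerPlugin()],
};
```

```js
// service-worker-plugin.js
const prefix = 'service-worker:';

export default function serviceWorkerPlugin(options = {}) {
  return {
    name: 'service-worker',

    async resolveId(id, importer) {
      // Not ours? Let other plugins and Rollup's defaults handle it.
      if (!id.startsWith(prefix)) return;

      // Strip the prefix and run the rest through the normal resolution
      // machinery (node_modules plugins, path aliases and so on).
      const resolved = await this.resolve(id.slice(prefix.length), importer);
      if (!resolved) return; // no match: the build will error later

      // Re-add the prefix so our load hook can spot this module.
      return prefix + resolved.id;
    },
  };
}
```

Next up, the load hook.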
It gets the ID as it's been fully resolved. Again, we want to ignore it if it doesn't have that prefix and let other plugins deal with it. And what we need to do now is return the content for this module. So we could just actually make up a module. I'm gonna do that. I'm just gonna return hello. And now we've done that, the project will actually build. It's gonna turn that fancy custom import into hello. This is kind of silly, but just knowing those two things, knowing how to resolve and knowing how to load, you can do all sorts with that. You can do all sorts of cool code generation, loading of files, that sort of thing. Like, it's not necessarily what we set out to do, but you could totally see how this is basically a constants plugin. Yeah, exactly. And I've built constants plugins, and that's exactly how you do it. But let's improve on that. So instead of just returning hello, we need to bring the service worker into the build system and return its URL. To bring a file into a build, I'm gonna use emit file. This is a Rollup API for adding stuff into the build. I'm gonna say this is a chunk, and that's telling Rollup to treat this like a JavaScript entry point. So it should be processed by all the same plugins that deal with JavaScript, like, you know, minifiers and code splitting and all of that sort of stuff. It could also be an asset. And if it's an asset, that's like an image, CSS, I don't know, a WebGL shader. Anything that's not JavaScript, essentially, you can process in this way. I'm gonna give it the ID, and that's telling Rollup where to find it. So we've removed the prefix again. And I'm gonna give it a file name, and this is not something you would usually do. What this does is it overrides the naming system that Rollup would usually use, because Rollup would usually add a hash to this file, but with a service worker, you don't want its name to be changing. You want it to be in a static location. So that's how we achieve that. Actually, I'd say this is something that you would offer to the user as configuration. So I'm gonna do that. I'm gonna add an options object to the top here, and I'm just gonna change the variable down here so it's using that. So that means the user can change the path of the service worker, the file name, all of that sort of stuff. Now what we need to do is get the URL for this file, and strap in, it's a bit of magic. Woo, there it goes. So I'm returning import meta and then this sort of magic string that ends in a file ID. And we got the file ID from emit file. Rollup will see this and it will turn it into a URL for the resource. So here's what that looks like. Here's the input and here's the output. So it's created this URL object, and it does that so it can create a relative URL from this script to the service worker. Cause on the web, if you've got an image tag with a source or whatever, or when you're calling serviceWorker.register, by default URLs are relative to the page, but Rollup doesn't know anything about the page, so it creates this script-relative URL. You can actually configure this in Rollup so you can tell it, hey, all my stuff is actually at the root of the origin. All you need to do is just add a slash to the start, and obviously that will clean up the code a bit. It just doesn't do it by default. But also in our build, we've got the service worker. So that bit is working. All we have to do now is deal with these two things, the version and the assets. We'll handle the assets first.
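Before moving on to the assets and version, here is the load hook just described, sketched in the same style. The options.fileName name is an assumption; the import.meta.ROLLUP_FILE_URL_ placeholder is Rollup's documented mechanism for getting the URL of an emitted file.

```js
// Still inside the object returned by serviceWorkerPlugin(options):
async load(id) {
  if (!id.startsWith(prefix)) return;

  // Bring the service worker into the build as a real JS chunk, so it gets
  // minified, code-split etc., but pin its file name so the URL never changes.
  const fileId = this.emitFile({
    type: 'chunk',
    id: id.slice(prefix.length),
    fileName: options.fileName || 'sw.js',
  });

  // Rollup rewrites this placeholder into a URL pointing at the emitted file.
  return `export default import.meta.ROLLUP_FILE_URL_${fileId};`;
}
```

With the service worker emitted and its URL in place, what's left is the assets list and the version.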
To do this, we need to know the full details of the build. So, cause we need those hash file names and all of that sort of stuff. So we need a hook that is towards the end of the build process. We're gonna use generate bundle. This happens just before things are written to disk. The options program there, you know, it's the output options. It's not very interesting for us in this case, but this, this is where the party is at. This is full of interesting details about the build. It's a JavaScript object where the key is the file name and the object has details on the file. So this is an assets. So you can see it's got the file name and it's got a buffer representing the content. But for chunks, look at all this, look at all this stuff we get. This is amazing. It tells us things like the imports and the exports of the file. I've used this in other plugins for if I'm generating HTML and I'm putting a script tag on the page. I know that that script is just going to import other scripts and I want to preload those things. Well, I can do that here. Like, so this will tell me which things it's going to immediately import so I can turn those into preloads. Anyway, loads of interesting info. We're gonna make use of this. In the generate bundle hook, I'm gonna get that service worker chunk just from that output option because that's the file name of it. And then we need to figure out which assets the service worker needs to know about the things that it wants to cache. Gonna iterate over the bundle, get the values and I'm gonna get everything except the service worker item itself. If a service worker caches itself, like it doesn't cause the world to end like you might imagine, it's just a bit wasteful. We definitely designed around that. We were worried people would do that. So it's actually okay. But this is a point where people might want to configure the things which are actually cached by the service worker. So what I'm gonna do is add another option here, filter assets, returns true by default and I'm gonna call it down here. Slight complicating factor. We want these paths to be relative to the service worker itself. And unfortunately, roll up doesn't give us access to its URL resolving magic stuff. I have filed an issue for that but we're gonna have to do it ourselves this time. So I'm gonna use node's posix path resolving stuff. And the only reason I'm doing that is because posix paths have the forward slash and that's the same as URLs. So we can rely on that. And then down here, this is how I create the relative URLs. This is done by saying, give me the relative path from the folder that the service worker is in to the item from the build. And now we've got that. I can prepend that to the file. So this bundle object is live in that any changes I make here are actually going to be written to disk. So here I'm just taking the code for the service worker chunk. I'm adding that assets line to the start. I'm using JSON stringify to correctly escape everything. And now that will be part of the service worker. And if we build, there it is. There it is in the output, the assets. We've created a problem though. And this is something that catches a lot of people out when they're doing service workers. It's this, it's easy to think that the service worker is caching files but it's not, it's caching URLs. And we don't want.html in our URLs. We don't want index.html in our URLs. So we need to fix that. Back in our plugin. This is where we were before. How can we solve a problem like this? 
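Before the reveal of how the .html problem gets solved, here is a rough sketch of the generateBundle hook as described so far. The filterAssets option and the ASSETS global are illustrative names, and the index.html clean-up and the version hash come in the next steps.

```js
import { posix } from 'path';

// Still inside the plugin object:
generateBundle(outputOptions, bundle) {
  const swFileName = options.fileName || 'sw.js';
  const swChunk = bundle[swFileName];
  const swDir = posix.dirname(swFileName);

  const assets = Object.values(bundle)
    // Don't have the service worker cache itself.
    .filter((item) => item !== swChunk)
    // Let the user trim the list; default is to keep everything.
    .filter((item) => (options.filterAssets || (() => true))(item.fileName))
    // Paths relative to the service worker, using posix so they look like URLs.
    .map((item) => posix.relative(swDir, item.fileName));

  // The bundle object is live: changes made here end up on disk.
  swChunk.code = `const ASSETS = ${JSON.stringify(assets)};\n` + swChunk.code;
  // (Still to come: turning these file names into cacheable URLs,
  //  plus the VERSION hash.)
}
```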
Regular expressions. Of course it's regex. I'm not going to go through how this works. I see a back reference in there, I don't know. I love a back reference. I often say that regular expressions are write-only. You can't read them again afterwards. But trust me, this is removing the .html, and it's removing the index.html as well. This could land us with an empty string, which is a problem, because that will resolve to the service worker script itself. So in that case, we're just going to output a dot. And that's solved the problem. There we go. Index.html there is replaced with a dot. Okay, last thing. Just need to deal with that version thing. Thankfully, we're most of the way there already. This is going to be a hash. So I'm going to include Node's hashing stuff, and then down in the generate bundle hook, create one of these objects. I picked SHA-1 because I'd heard of it before, and it wasn't MD5, which I'd heard was worse. Yeah, this is very secure. I mean, yeah, security doesn't matter in this case. It's just for hashing. Oh, don't quote me on that. Like, security generally matters. "Jake Archibald: security doesn't matter." What a quote to take out of context. Okay. So now here I'm adding everything to that hash. It's either the source or the code, depending on whether it's an asset or whether it's a chunk. And now I can get a hex version of that hash and include it in the output there. And that's really it. That's it. We can see that the hash is now in the service worker. There are a couple of caveats here. By editing the source right at the end there, that was after source maps were generated. So we have just broken source maps, just for the service worker. I've filed an issue with Rollup because I would like there to be an easier way to do this. So hopefully they can add something. We could probably work around this by adding those blank lines in a transform step so we can fill them in later, because then those extra two lines at the top will at least be taken into consideration with the source maps. I don't know, it would be good if there were a proper way of doing this. But anyway, that is it. That is the full plugin. So, Jason, how would you do that in Webpack? Right. So building plugins for Webpack works a little bit differently. And to illustrate this, I'm gonna use a fairly similar approach and naming to what we saw in Jake's Rollup version. Much like with Rollup, the place where we're gonna start will be the bundler config. Now, a typical webpack.config.js file specifies some entry points, which are gonna be the modules where bundling starts, some options for the format and location of files generated on disk, and plugins, which is what we're concerned with today. All we need to do is import the plugin that we're going to write and then add it to the plugins array. At its core, a Webpack plugin is really just an object with an apply method. And that apply method gets passed the compiler instance. So wait, one second. I see you're using classes here. So in Rollup, we were using a function which returned an object, but in Webpack land, it's all class-based? So technically, Webpack does not care. It just needs an object with an apply method, but all of the Webpack core modules are classes. And I think as a result of that, all of the ecosystem plugins tend to be classes, just because when people import it, they expect to instantiate it with new. You could totally write a Webpack plugin that was a function that returns an object.
I was even actually tempted to do that for this talk to make it look more like the rollup plugin, but we'll stick with the thing people actually do. All right, so if you remember in rollup, plugins define special hook methods that get called in response to events. In Webpack, we do the opposite. Your plugin taps into events. And so what we wanna do is we wanna handle a special service worker colon import prefix. And so to do that, we need to tap into normal module factory. And just to preempt, because everybody gets confused with this, normal modules are source modules. These are the code you write, the code you get from NPM, code that you're going to put in your application. There are other types of modules, which is why there's other module factories for things like loaders. Basically, they're more on the infrastructure side of things. So what we're looking to do here is resolve our own import. We use normal module factory. And so similar to with rollup, anytime we tap into something, we wanna pass it a name, which is the name for our plugin. And this gets used for debugging and logging purposes and also when taking performance profiles. So within normal module factory, we can now hook into Webpack's resolver. And this gives us a reference to Webpack's own resolve function, which is the thing that sort of handles finding modules and loading them from disk. But it also lets us return our own custom resolver function, which gets passed a dependency description and a callback, because it's asynchronous. And so this function, we can call the original resolve function, we can do something custom, we can combine the two. And to make things easier, we're just gonna take those three bits of information that we have and pass them to a new resolve ID method that we're going to write. And that's sort of everything we needed from apply. So our resolve ID method is gonna be called for each import specifier in the app with a description of the dependency, that original resolve function from Webpack and the callback to call when we're done. A dependency description is an object with context and request properties. Context is the directory of the module that's doing the importing, so in this case, dot slash. And request is the unmodified import specifier. So in this case, it includes our service worker prefix and also the relative path to that file on disk. So similar to rollup, the first thing we wanna do is figure out whether this is an import that has our prefix that we want to handle. If it isn't, then we can just call Webpack's original resolve method. It will do what it would normally do since there's no prefix, we don't really care about that module. But if it does have the prefix, we need to remove that prefix and then pass it through resolve. And so if the module can't be resolved, we can just forward that error up, that will break the bill, there will be an error in the console saying, hey, couldn't find module SW slash index.js. If it does work, then it'll pass us back a new dependency object where the request property is a fully resolved disk path. The thing is we don't actually wanna resolve this module to a path, right? We wanna resolve this module to a string somehow, right? The string that contains the location of our service worker. And so to do that, we need to sort of, we need to come up with a way of producing a virtual module, even though we're inside of the resolver. 
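As a sketch of the Webpack side described so far, assuming Webpack 4-era hooks. The class, option and method names are illustrative, and the exact resolver-hook signature can differ between Webpack versions, so treat this as indicative rather than definitive.

```js
// webpack.config.js
const ServiceWorkerPlugin = require('./service-worker-plugin');

module.exports = {
  entry: './src/main.js',
  output: { path: `${__dirname}/dist`, publicPath: '/' },
  plugins: [new ServiceWorkerPlugin({ output: 'sw.js' })],
};
```

```js
// service-worker-plugin.js
const PREFIX = 'service-worker:';
const NAME = 'ServiceWorkerPlugin';

class ServiceWorkerPlugin {
  constructor({ output = 'sw.js' } = {}) {
    this.output = output;
  }

  apply(compiler) {
    // Remember publicPath for later, when we build the URL string.
    this.publicPath = compiler.options.output.publicPath || '/';

    compiler.hooks.normalModuleFactory.tap(NAME, (normalModuleFactory) => {
      // Wrap webpack's own resolver so we can intercept our prefixed imports.
      normalModuleFactory.hooks.resolver.tap(NAME, (originalResolve) =>
        (data, callback) => this.resolveId(data, originalResolve, callback)
      );
    });
  }

  resolveId(data, originalResolve, callback) {
    // data.context is the importer's directory, data.request the raw specifier.
    if (!data.request.startsWith(PREFIX)) {
      return originalResolve(data, callback);
    }

    const request = data.request.slice(PREFIX.length);
    originalResolve({ ...data, request }, (err, resolved) => {
      if (err) return callback(err); // module not found: fails the build
      this.swSource = resolved.request; // full on-disk path, used later
      // Instead of a disk path, hand back a virtual module (next sketch).
      callback(null, this.createUrlModule());
    });
  }
}

module.exports = ServiceWorkerPlugin;
```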
Essentially, we want to intercept this service worker prefix to request and then resolve it to a module that we create on the fly that doesn't actually exist that just exports the string URL of our service worker. And then one thing we do need to account for here is when we're working with URLs in Webpack, we need to make sure that we respect any Webpack public path magic global that has been set. And so this is either the output.publicPathConfigurationValue or some value that has been set at runtime. Right, okay, so this is another difference from Rollup. So Rollup had to do some of the magic around like resolving URLs and stuff. Whereas here, you've just got a string, which is like where in the web route is it? Right, yeah, so like Rollup, they have automation here to sort of give you module relative paths. In Webpack, it's a configuration value. And the nice thing is, because everybody uses that configuration value, all plugins tend to respect it. But so here, if we combine these two strings, we'll get the slash, which is sort of our default Webpack public path value. And that means that the result of this import will be the service worker's URL. The thing is, we haven't actually explained how to construct a virtual module yet. We know what we want to do. We don't know how to do it. So first we wanna start with that code that we wanna generate. And you know this here, I've parameterized the URL from that code string. And that's because we want to make that configurable. So in our constructor, we'll accept an output parameter. The developer can pass us the URL to use, we'll store that. And then we can inject it into our code by JSON stringifying it, which will escape it and wrap any quotes. And then the last bit is to take our piece of code and pass it back to Webpack. And to do this, we're going to use something called raw module. And we're gonna import this from Webpack core. So raw module is basically a way for us to provide code back to Webpack in the same module format as it would expect modules loaded from disk. And the reason why this matters is we need to provide Webpack with an object that has a source method that returns a code string and an identifier method that returns some sort of a unique identifier for that module. In our case, based on the code string. And in doing this, Webpack will actually grab the code that we send back to it and avoid ever going to disk to resolve this module, or just use whatever we pass. So now we have our import resolving to the URL of the service worker, but that URL doesn't exist because we have not yet generated our service worker. Let's do that. For this, we need a new plugin hook. And so when we need a new plugin hook, we have to jump back into apply. We need information that's gonna be codified into the service worker, those version and assets globals. And to get this, we need to tap into something that happens at the end of the build once that information is available. So for this, we can use the emit hook. You'll also notice we're using tap async here. And this is because our emit hook is going to be asynchronous and we want to hold back compilation until we're done working. So the emit hook gets passed a compilation instance and a callback, which we're gonna forward onto a new method just to keep things clean. In Webpack, each compile pass is referred to as a compilation. And the main thing that we're interested here is the generated assets, which are an object where the keys are file names and the values are asset descriptors. 
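Continuing that sketch: the virtual module built with RawModule, and the emit tap described above. Exactly how a RawModule gets handed back from the resolver varies between Webpack versions, so the plumbing here is an approximation.

```js
const RawModule = require('webpack/lib/RawModule');

class ServiceWorkerPlugin {
  // ...constructor and resolver from the previous sketch...

  // A module that exists only in memory and exports the service worker URL.
  createUrlModule() {
    // Respect output.publicPath (defaulting to '/') so this works off-root too.
    const url = this.publicPath + this.output;
    const code = `module.exports = ${JSON.stringify(url)};`;
    // RawModule gives webpack a code string plus an identifier, so it never
    // tries to read this "module" from disk.
    return new RawModule(code, `service-worker-url ${code}`);
  }

  apply(compiler) {
    // ...normalModuleFactory tap from the previous sketch...

    // emit fires at the end of the build, once compilation.assets is final;
    // tapAsync holds the compilation open until we call the callback.
    compiler.hooks.emit.tapAsync(NAME, (compilation, callback) => {
      this.emitServiceWorker(compilation, callback);
    });
  }
}
```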
Do you recognize this? This is the same as the Rollup bundle thing, right? Like we had in generate bundle. Yeah, it's really similar. I don't think I actually realized until we were looking at these things side by side. They contain really similar metadata. So it has the dependencies, and is this an entry chunk, and all that information. It's also for both JavaScript and non-JavaScript assets. And in both cases, there's a source method on the asset descriptor that will return either a buffer with the non-JavaScript asset contents or a string with the JavaScript code. So we need a list of files for the service worker to cache. And to get that, we could just take the keys of the assets object. Similar to the Rollup plugin, though, it's generally a good idea to make it possible to filter these assets before they get embedded into the service worker, just to avoid a huge list. So we'll add a filter assets constructor parameter that a developer can set, providing a function that takes a file name and returns a boolean indicating whether that asset should be in the list. Now we have our list of asset file names. The next step is to calculate a version based on a hash of their contents. And so to do this, we need to pull in the create hash method from Node's crypto module. Then we'll loop over each file name in our filtered list and add its corresponding asset source, which will be that code string or buffer, to the hash content. Then our version string is just going to be a hex digest of the combined contents of all of those assets. So that's the magic version global done. Now we can do the assets magic global. As Jake explained in the Rollup walkthrough, our assets array contains file names, but our service worker needs to work with URLs. And so to fix this, we need to prepend any configured public path value to the names, in case we're deploying somewhere that isn't the root domain. Right, so this is, again, gonna be simpler than it was with the Rollup case, because you've just got the string that you can add to the start. Right. So because everyone knows to configure that public path, we can basically count on it being there. If it's unset, we'll just use slash, but we're gonna be able to resolve these things at build time. So I'm showing some of my biases here, but I think this is way more complicated than the Rollup solution, but it's interesting seeing these little details where I'm like, oh, I wish Rollup had that. I mean, I know why it doesn't, but it would be so much simpler for me if it just had, like, a public path string or whatever. Again, maybe that's part of Rollup adding that, exposing their built-in resolution; maybe there's an option you could pass that's like, oh, by the way, resolve it all against this public path. That might be a convenient thing, who knows. So while we remap all of those file names to prepend that public path, we can also remove any trailing index.html, because that doesn't appear in the URLs. We wanna cache the URLs, not file names, and we will use that dot workaround to safeguard against an empty string actually caching the service worker, which we don't want. Instead, we'll have it cache the directory the service worker is in. So that's both of our magic globals prepared for the service worker, now it's time to generate it. And here we're gonna create one last function to handle compilation of the service worker.
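Before the child-compiler step, here is roughly how the version-and-assets computation just described could look, continuing the same sketch. filterAssets is assumed to be another constructor option that defaults to keeping everything.

```js
const { createHash } = require('crypto');

class ServiceWorkerPlugin {
  // ...constructor, apply and resolver from the previous sketches...

  getAssetsAndVersion(compilation) {
    // compilation.assets maps file names to asset descriptors for the build.
    const fileNames = Object.keys(compilation.assets)
      .filter((name) => (this.filterAssets ? this.filterAssets(name) : true));

    // VERSION: a hash of every kept asset's contents, so it only changes
    // when the build output actually changes.
    const hash = createHash('sha1');
    for (const name of fileNames) {
      hash.update(compilation.assets[name].source());
    }

    // ASSETS: URLs, not file names. Prepend publicPath, strip index.html
    // and .html, and fall back to '.' rather than an empty string.
    const assets = fileNames.map((name) => {
      const url = (this.publicPath + name)
        .replace(/(^|\/)index\.html$/, '$1')
        .replace(/\.html$/, '');
      return url === '' ? '.' : url;
    });

    return { assets, version: hash.digest('hex') };
  }
}
```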
So to build our service worker module, we're going to use something called a child compiler, which is essentially a nested Webpack compiler. And we'll configure this to output the sw.js file. One thing to think about when using child compilers is they can have a completely different list of plugins compared to the main compiler. So we can pull in a plugin that generates worker-compatible code, because we have a service worker target. So there's gonna be a difference here as well, isn't there? Because, well, actually you tell me: because you're using a child compiler and it's like a whole different pass, does that mean that this service worker bundle is not going to be able to share code with the main bundle? It's not gonna be able to code split there. Correct, yeah. So any split points that end up happening in the code generated by this child compiler are gonna be totally independent from the main compiler. They'll be a separate set of files. Okay, that's an important difference between the two. I mean, it's not common to share code between your service worker and main thread, but the Rollup version could do that. Yeah, exactly. So the other thing that worker template will do in this case is, if there are split points, rather than using a script tag to load those chunks of code, it will use importScripts, which is available in workers. Cool. And so the last thing we need is a second plugin, also imported from Webpack core. And this just lets us specify the entry module to start compiling from, which is, in this case, our service worker source module. So finally, we run our compiler as a child of the main compiler. And this just ensures that if our compiler takes a while to run, the main compiler won't finish and terminate the process before we're finished. And then once it's built, the callback will be called with our compilation and also any error. So just like with the emit hook, we're gonna grab the generated service worker asset from compilation.assets, and then we can pass that back up in emit. If there was an error in the child compiler, say a syntax error in our service worker code, we'll just bubble that up, that will fail the build and print it to the console. If there wasn't an error, we do now have our compiled service worker asset. The thing is, it doesn't yet have the version and assets magic global variables. So to inject those, we need to pull in something called concat source from the Webpack sources module. And this lets us concatenate strings, essentially. It's very similar to string concatenation, but with the added benefit that it's able to produce source maps. Ah, yes, okay. So this is interesting, because I broke source maps in my solution, but this is going to just work. Now, I could have done the same. Rollup doesn't support child compilers as such, but you can just call Rollup itself as a sort of separate build. That would have been a lot more complicated. So it's interesting now, some of the complexity that you're using here with Webpack is actually paying off. Right, yeah. So the initial bundling of the service worker was harder, but as it turns out, it's the same number of lines to support source maps as it is to not support them, now that you've opted into some of that complexity. And so that's sort of an interesting trade-off between the two. So yeah, the last step: we've constructed this asset, but it's only just an object in memory. So what we need to do is merge it into the compiler's generated assets.
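And the final piece, sketched with Webpack 4-era internals: a child compiler with the web worker template plugin, a single-entry plugin pointing at the service worker source, runAsChild, and ConcatSource for prepending the globals. Import paths and signatures differ between Webpack versions, so this is indicative rather than copy-pasteable.

```js
const SingleEntryPlugin = require('webpack/lib/SingleEntryPlugin');
const WebWorkerTemplatePlugin = require('webpack/lib/webworker/WebWorkerTemplatePlugin');
const { ConcatSource } = require('webpack-sources');

class ServiceWorkerPlugin {
  // ...everything from the previous sketches...

  emitServiceWorker(compilation, callback) {
    const { assets, version } = this.getAssetsAndVersion(compilation);

    // A nested compiler with worker-appropriate plugins, writing sw.js.
    const childCompiler = compilation.createChildCompiler(NAME, {
      filename: this.output,
    });
    new WebWorkerTemplatePlugin().apply(childCompiler);
    // this.swSource is the on-disk path captured while resolving the import.
    new SingleEntryPlugin(compilation.compiler.context, this.swSource, 'sw')
      .apply(childCompiler);

    // Run as a child so the main build waits for this one to finish.
    childCompiler.runAsChild((err, entries, childCompilation) => {
      if (err) return callback(err); // e.g. a syntax error in the worker code

      const swAsset = childCompilation.assets[this.output];
      // Prepend the two magic globals; ConcatSource keeps source maps valid.
      compilation.assets[this.output] = new ConcatSource(
        `const ASSETS = ${JSON.stringify(assets)};\n`,
        `const VERSION = ${JSON.stringify(version)};\n`,
        swAsset
      );
      callback();
    });
  }
}
```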
Like with Rollup, this is a live object, and anything that we merge into here will get written to disk. And so with that, our plugin is done. So we can call the callback, compilation will finish, and what we get is our generated service worker source. We can see here that we have assets, with the generated array of URLs, and we also have version, which is that string based on the hash of their contents. So we did it. Amazing. And that's it. Yeah. And I guess that's bringing this talk to a close, provided that everything has recorded properly and we don't have to do it again, because I can't tell you, this recording has been absolutely cursed. We have done so many takes due to Bluetooth stacks failing on my machine, recordings not happening on Jason's machine. So who knows? Maybe this will be the final recording. If you're watching this right now, then I am so happy, because it means we don't have to do all of this again. Right. Okay. If you want to see those implementations, they are both on tooling.report, along with how to handle assets, code splitting, hashing, transformations, all of that stuff. And not just webpack and Rollup. We also have Parcel, and we also have Browserify. Remember Browserify and gulp and all of that sort of stuff? I mean, you might laugh at that, but if you look at the stats on NPM, that is still absolutely huge and climbing. And we did actually find some cases where it really holds its own and was maybe better than some of the more modern things that we deal with. So go on and check all of that out. Thanks so much for watching this hopefully final take of this talk. Thank you very much and goodbye. Hi. Hello, mate. Hello. This is like remote two or three, isn't it? It's good. It's weird. We have almost all the technology that we need to make this work without feeling different, but it's still different. We'll let other people be the judge of whether it works or not. That's true. So I'm here to talk about image compression. It's something I've been wanting to talk about for a long time, right? I mean, we wrote an entire app around it. We did, and we are gonna be seeing that as part of this talk. Nice. I like it. If you look at the studies they do across webpages, images are the biggest thing on the page. 50%, roundabout, of each page is just... 50% of the bytes that go over the wire for any given page are, on average, images. Yeah, and sometimes it's fun. I'll go and take some images from a web page and recompress them, and usually I can halve the size of them without losing any of the visual fidelity. So there's just so much stuff on the web that could be compressed better, either by using a different format, a better encoder, or just better settings within a particular encoder. Yeah, that's what I found as well. I've recently done a little experiment and found that, even without going into the very deep settings of an encoder, often people just don't optimize their images at all, and just the default setting in Squoosh will already give you a really decent compression rate without losing visual fidelity. So if you're watching this at home, if you're the kind of person who just saves your images out of Photoshop for the web, then this is the talk for you, because I think you're missing out on huge performance gains. So I want to dive into image codecs. I want to show how they work, and I want to explore some of the lesser known settings. So here are the formats, right? We've got lossy: JPEG and WebP.
We've got lossless: PNG, GIF and also WebP. Hang on, my friends, hang on. You put GIFs under lossless? They are lossless, they are lossless. With their 256 colors. Yes, but you make that change before compression. Like, the GIF compressor won't accept images with more than 256 colors. That's a step you have to run before the compression. It's lossless, trust me. Because we are doing the same for PNG as well. But I've got a question for you. If you have a digital drawing, like a drawing that has been created on a computer, which format would you use for this? If at all possible, SVG. Yeah, I agree with you, because a vector can be any resolution, so it's ideal for drawings. But sometimes the drawing is really complex, so it's actually smaller and much faster as a lossless image. But then again, if the drawing has a lot of gradients and shading, it might actually be smaller and acceptable in a lossy format without it looking bad to the user. So a lot of the talks and articles I see about image compression try and bucket kinds of images into formats, and I think that can be misleading. So instead, what I want to do is talk about what these formats actually spend their bytes on. So lossy images: it's sharp edges. They really struggle with those. That will increase the size of the image. And also small details in undetailed areas. That sounds confusing, so we'll look at a few examples of that in a moment. For lossless, it's color change, especially unpredictable color change. That's another thing we'll look at some examples of. And the total number of colors impacts the file size as well. With vector, it's the number of shapes and also shape complexity. There's a little difference with vector, in that the complexity of the shapes will actually introduce a CPU cost in a way the other formats don't have. So that's something to bear in mind. That's the TLDR of this talk, really. That's it, that's it done. Well, thanks for doing this and I'll see you next time. Bye-bye. No, no, I wanna get to know these codecs in detail. I'm gonna dive in. Well, just to be clear, these are categories of codecs, as you said. Each of these categories has multiple codecs to offer. And I think the basic advice should be that you just try the same image, like, if you don't know which category your image falls in, just try the image in all of these codecs. That's kind of the ideal process, right? Yes, but you can make some educated guesses based on knowledge of how these things work. So, Surma, have a look at this. I know this is difficult because you're watching this over WebRTC, so some of these demos are slightly limited. Do you notice anything about this picture? Yeah, if it's not the stream compression, then I'd say the clouds have blocky artifacts dotted about. I think that is your stream compression, so... Okay, then let's try this again. When you say notice, do you want me to find something that is wrong with the image, or just something that should help me make a guess on which codec is best for this? Well, I'll tell you what, if you haven't noticed it straight away, then that's good enough for me. So, if you're watching this at home in full HD, that's 1920 by 1080, obviously you're getting video compression on top of that, so you're not seeing exactly what I'm seeing. But in this picture, the brightness data is full HD, 1920 by 1080, while the color data is 96 by 54. Like, that's 0.3% of the data, and you barely notice.
Close up, you can start to see it, a bit of discoloration around some of the fencing. One of the fence posts has turned into a ghost. Yeah, if you have that little resolution for the colors, the colors will start bleeding across boundaries, even though the light data still gives the contour. So that's, but yeah, from here it's not really, yeah. And this is because you've got bad eyes, Surma. You have these terrible human eyes that are really bad at seeing changes in color, but you're actually pretty good at seeing changes in brightness. So I'm gonna take this image and flip it around, so that the brightness detail is low resolution but the color detail is high resolution. And here it is. And the first thing you'll notice is it looks bad, but it is the same amount of data, just flipped around. Here, it looks fine. You can't go this extreme in all examples. So here's an image that has a bit more complex data around color. You can see this yellow stripe and the red stripe on the cars, like, some of the details are quite blocky, but as we start increasing the amount of color data, again, it becomes difficult to notice. With this image, once I get the color data to around 5% of the brightness data, it becomes impossible for me to notice. I actually built a little web app to research this. I had a lot of- Of course you did. This is the first WebGL I ever wrote, Surma. I hope- Oh, you did it with WebGL? That's fun. Yes, to do the color conversion. So it's something you can, I'll put a link to this so you can throw your own images at it. You can get some cool effects. You can see the individual color channels, because this is what lossy codecs do. They call it chroma subsampling. Rather than RGB, they do brightness and then two chroma channels, two color channels. And that means they can just discard loads of that color data and save a lot of file size as a result. As you mentioned earlier, you get some color bleeding, but it's barely noticeable, especially in photographic imagery. One more thing that lossy codecs do in terms of lossy stuff. It's another quiz for you, Surma. I've made an alteration to this image. Can you see what I have done? There is a very ominous circle between the trees. Yes, I've drawn a circle in the sky. There it is. But that's not all I've done. Down here, I've replaced the brightness data with noise. Just noise. But you don't really notice that at first. And here's the interesting thing. Here's a diff of the image versus the original image. Now, the circle in the sky is so subtle, it barely shows up. In fact, if you're watching this with video compression, it might not be visible at all. I can see it right now. But the noise at the bottom, that's like a huge difference. But again, you have terrible human eyes. You're good at seeing small changes in what we call low-frequency areas. Like the sky, there's not a lot going on, so you spot that small difference, whereas you struggle to see the changes in the high-frequency areas. This reminds me a bit of the blind spot that we have in our eyes, where our brain fills in the gap for us, because I saw the circle, and as you showed me the diff and faded it back to the full image, the shape of the circle started blending into the environment, and now it looks like a bush. I know the circle is there, but I can't actually quite see it. So I think it's just my brain auto-correcting, and I'm quite frustrated with my brain for doing that. It does.
And this is something that lossy image codecs take advantage of. So what they do is they'll take an image like this and divide it into eight-by-eight blocks. And rather than describe these blocks pixel by pixel, they use this: they multiply these shapes together to form exactly the same image. Not at that intensity, each block can be at a different intensity, it can even be negative, but you can reconstruct any eight-by-eight image using these shapes, and this blows my mind. Like, I don't believe this. I still don't believe this, even though I built an app to show it. Look, I've made a little pixel-art 203, for HTTP 203. I'm not very good at pixel art, but that's what I tried to do here. What you see below is all of those shapes at the intensities it uses to reconstruct that image. So we can actually apply them one by one. And you can see, as it does the low-frequency things, it starts to build up this blurry picture. And then as it goes and starts to do the high-frequency bits of data, the full image comes into view. It's incredible to me that this actually works, but it doesn't save much data, really. But let's actually take a real bit of the image. This is the picture of the woods from before. I'm gonna pick out some of the grass. Now, what lossy codecs will try and do is discard some of this data. So you can see already it's not using all of those shapes. That's where these zeros are coming from. But as I drop the quality, and this is like JPEG quality in this case, we can see more and more of those zeros appearing. And the thing about sequences of zeros: they compress really, really well. So we're left with an image that isn't 100% accurate, but this is high-frequency data, so you don't really notice, and loads of data is saved. I mean, even here in the side-by-side view, it is hard to tell which pixels are incorrect. Exactly. But let's take a different example. So this time a curve, which might be part of, like, a logo or something. And in this case, even a very small drop in quality is introducing noise that is very noticeable, especially if the color is solid on either side of this curve. That's low-frequency data. This small amount of noise is really noticeable. It's like the circle in the sky. You'll see it very, very easily. And that's why lossy compression is not very good at sharp lines next to solid color. So now, if you look at a picture like this, which is a heavily compressed JPEG and zoomed in, you can see what it's made of. You can see the color bleed from the subsampling. You can see the eight-by-eight blocks. You can see those waves that it's using to construct that data. And that's it. You now know how lossy compression works. There's only two elements of loss in both JPEG and WebP, and you've seen them both now. The rest of it is just... A lot of questions. I have a question, because I'm trying to remember: WebP, I think, also does this frequency-based transform, the discrete cosine transform as it's called, the conversion to these frequency patterns that you add up. No, that's what we just saw. Yeah. I think WebP does it also at bigger block sizes, if they are low noise. But I'm not quite sure if I'm remembering that correctly. Yeah, so WebP is just better. So there's a few different things. Like you say, it can use different sized squares. It can also, when it starts a square, use a nearby square as a starting point.
Also, JPEG has to use the same mathematical conversion for every eight-by-eight block in a given channel, whereas WebP can have four. So it can have a different strategy for high-detail areas versus low-detail areas. And it has a better lossless compression layer as well: it uses arithmetic compression rather than Huffman. And it has better post-decode filters as well. So it is just better. It also supports alpha transparency, which JPEG doesn't. So why have I wasted my time talking about JPEG? Well, Safari does not support WebP. It's the only browser that doesn't. So that's the only benefit of JPEG, really. We can still use WebP for browsers that support it. This is the picture element. So we're serving WebP to browsers that support it and falling back to JPEG for Safari and older browsers. There are also client hints to do it on the server side. I'm not gonna go into it. I'll put some links for that, if that's what you wanna do. So let's actually compress some images. Now, the compression tools in things like Photoshop are not very good, which is one of the reasons that we built this. Not just us two, we're part of a team, but yeah, Squoosh.app. This has the latest WebP build. It also has MozJPEG, a really good JPEG encoder from Mozilla. So I'm gonna use the F1 image from before, because it was a bit of a tricky one to compress. First thing I'm gonna do is zoom it out so it's the same size as it will appear on the site. So if you've got an image that's gonna be 1,000 CSS pixels wide, but you want it to look good on a high density screen, you want a 2,000 pixel wide image, but then you wanna zoom it back out so it's the size it will actually appear on the screen. Now, I'm gonna start with WebP in terms of compression, and all I'm gonna do, I think you mentioned this before, is just bring the quality down until it looks bad, you know? And especially with a high density image like this, you will be surprised how low you can go on the quality. Like, it's still barely noticeable. Now, it's really tempting to do this, but please don't, like, don't put your nose to the screen, don't zoom in. Because if you do that, it'll look bad. It's not gonna look good, right? The more you zoom in, the worse it will look. Like, oh, look at that, that's disgusting. But users can't do that. I mean, honestly, considering that we just removed 98% of the data and it still looks that good, even when zoomed in, is kind of cool. It's, yeah, absolutely incredible. And so if anyone comes at you and says, oh, I took one of the images from your website and I zoomed it in by 1,000% and it looks bad, just say, I don't know, okay, zoomer, and ignore them, because that is not what real users are going to do, and you should be optimizing things for real users. So keep it roughly the size it will be on the site. I actually wanted to look into some of these advanced settings in WebP, because when we first built Squoosh, I didn't know what these did. I just took what the code was doing and made UI for it. There's a lot, but I now know what they do. I did some research. The really interesting ones: auto-adjusting the filter. This is a good thing to do. It really slows down the encoding time, but it improves the visuals a lot. Now, the filter is actually a decoding filter. It's just what it's going to do to remove that blockiness, but it increases the encoding time because it tries to figure out, by looking at the image, what the best kind of filtering is going to be.
But yeah, it's just a one-bit flag, or not one bit, like a couple of bits. Three bits, I guess, if there's eight levels of filter sharpness. Yes, exactly. I think there might actually be more. I think it might actually go from zero to a hundred, but it's probably a byte. The other one that I always knew, like, oh, this makes a big change to the image, but I don't know what it's doing, is spatial noise shaping. Now, like I said, WebP can have these four different strategies. Did we misspell spatial? Oh, maybe. That's a classic one. Do you know what the amazing thing is? I've actually spelled it wrong in my notes for this video. Well, that's a bug report. Anyway, yeah. So the other option is spatial noise shaping. And so WebP can have these different strategies for different parts of the image. SNS, what that does is it changes the extremity between those different strategies. So higher SNS takes bytes away from the stuff that least needs them and gives them to the stuff that needs them most. It's like Robin Hood for image compression, basically. In some images, turning that up to a hundred has a positive result. It doesn't here. 50 is actually pretty good for this image. Can you go too high with SNS, like, going higher makes the image worse again? Yeah, so with this image, it does get worse. Interesting. So if I make it extreme here, it's giving more data to the road, because that's where the blockiness is, but it starts to introduce, like, it makes the tires of the car look bad. I see. Okay. You start to notice the artifacts there. Because it's taking data away from the high-frequency area and giving it to the low-frequency. The image of the woods that I had on before, actually, SNS of a hundred works really, really well, because it takes lots of data away from the leaves and stuff, where you can't see the noise, and gives it to the area of the sky. So that works really well. We're gonna need a JPEG as well. So just for fun, I'm going to make the JPEG the same size as the WebP, just to show the difference in quality of the compression here. So I'll do the same again, bring the quality down till it's about 40 kilobytes. And I'm gonna do what I said not to do. Because, okay, this looks really bad to my eyes right now at this level, but obviously with video compression, it might not be as obvious. So I am gonna zoom it in. JPEG creates this horrible blocky effect across the road. And like I said, this is low-frequency data. So this is gonna be like the circle in the sky. It's just super obvious. Around the yellow stripe, the transition from the stripe to the road makes it really obvious where the data is lost. Yeah, you can see the blockiness there. And this is where the filtering that WebP does is, yeah, much better. So yeah, I just have to increase the size of the JPEG until that horrible blockiness goes away. And at that point, it's like double the size, over double the size of the WebP. There are things you can do to improve it if you wanna spend the time. More advanced settings. The most interesting advanced setting here, I think, is the chroma subsampling. This is what we discussed before, right? This is just reducing that color data, and especially if you're on a high density screen, you can get away with less. In fact, most of the time, I would say you can just turn this up to four. We saw earlier that this particular picture is kind of sensitive to color reduction, so three was a good one there. But that knocks 10K off, so it's worth having.
I also often combine it with the separate chroma quality so that you can basically introduce the blockiness into the colors, but not into the brightness, which is often much less predictable. That's interesting. I've never had success with that. I've always tried that, and I've never, I've always been unhappy with the results, but yes, like mileage may vary, right? The option is there. So if people can play around with that. Absolutely. But yeah, I think it's worth in this case to serve both the web P and the JPEG because it's like half the size for the browsers that support it. But that's it. That's everything I really had to say about lossy images. Well, you said you had major research about the options. What exactly is pointless spec compliance? Pointless spec compliance removes the progressive encoding and some of the other things that, according to a strict interpretation of the JPEG spec, it's not supposed to do that, but everyone supports it and it's good for file size. I thought I would totally put you on the spot, but you actually know the answer, well done. Well done. Yeah, do you know what? I didn't research a JPEG, but that was one of the ones that I knew from looking into the encoder there. So lossless, completely different world. A lossless image describes pixels one at a time from top to bottom, left to right, but rather than describe each color from scratch, it will use some of the surrounding pixels to make a prediction and then it just encodes the difference. There's a number of strategies that you can use. It can look at the pixel to the left. It can look at the pixel above. It can use an average of some of those pixels, but here's the gotcha. It can't change the strategy for every pixel because that would just add too much data. Instead, it has to stick with a particular strategy until it decides to change strategy. Now, PNG can only change strategy at the start of a line. WebP can define 2D blocks of different strategies, but this is why lossless compressors can be really slow because it's just brute force going through all of these different strategies to figure out which one is best. Like trying all the combinations, but it means this compression works really well if the pixel next to it is exactly the same or the pixel above is exactly the same because there's no difference then, really easy to compress. And it's why lossless format struggle with photo data because there's lots of organically changing color. Oh, one more trick that lossless things have. If there's 256 colors or less, it just sticks those in a table and it can reference them by number. So rather than every time describing the red, green, blue and alpha, it just says it's color number five, everyone. There it is, pop that in there and that compresses really, really well. So difference between the formats then. WebP, not many people know about the lossless WebP format. It's like a completely different codec. In fact, it is a completely different codec. The only relation between the two is when you do lossy WebP with an alpha channel, it's using the lossless format for the alpha channel. Pointless fact for you all. Oh, that's good to know. Yeah, it's just better. It's got more strategies. It's got the 2D thing going on. It compresses better. But you do need the PNG for Safari. That's all we're talking about. And just never use GIF. Just don't stop it, please. Never use it. It's just, it's bad at everything. It's just bad. So stop using GIF for everything. But let's put this into practice. 
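Serving both formats, as mentioned above, usually comes down to a picture element along these lines; the file names here are placeholders.

```html
<!-- Browsers that understand WebP take the first source;
     Safari and older browsers fall back to the JPEG (or a PNG). -->
<picture>
  <source type="image/webp" srcset="photo.webp">
  <img src="photo.jpg" alt="Describe the image" width="1000" height="563">
</picture>
```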
Back in Squoosh, this time I'm going to use something with a lot of flat color. And I say thanks to Stephen Waller, who donated some of his artwork for me to use here. He's an artist I used to work with. He makes these wonderful drawings. Apparently these are his teammates that he worked with at the time. I didn't ask him about the bear, but fair enough. I would say the first thing to do here, same as before, is zoom it out. Like, keep it the same size as it'll appear on the site, so the kind of scaling that you would actually use to serve high density devices. And now I'm just going to use WebP because it's the best, and I'm going to pick lossless. Yep, completely different codec, whatever. Very few people know about it, but it's really good. We're straight down to 43K. There's this slight loss option, which I think we found very funny. It's great, isn't it? And what it will do is it will change the pixels to try and make them more predictable. I actually don't like the visual effect it introduces, so I don't really use it, but your mileage may vary. It's fun to play with. But remember when I said earlier that these formats do really well with 256 colors or less? We're going to do that. We're going to reduce the palette down. So it doesn't look like this image has a lot of colors, but because of the anti-aliasing around all of the curves, it's got lots of slight variations, blends of one color to another. I've also changed the codec to browser PNG, and that's just because it's really fast to compress, so I'm going to get a quick response when I'm changing the number of colors. I'm also going to turn off dithering. If you reduce the number of colors of something, you can get this banding effect that you can see here. Dithering tries to remove the banding by using pointillism, or these dots, to recreate the shading. It's super effective, but lossless deals well with flat color, so it will increase the file size. So I would avoid it unless you really need it. Although, I am impressed. I have compressed a couple of photographs to PNG with dithering, and if you zoom out, if you have it small enough, it will actually look almost the same. Often it's imperceptible. Yeah, and there are some times that you need to do that. So now, same as before, I'm just going to reduce the colors as much as possible until it looks bad. And this is one of the reasons we built Squoosh, because it's so much nicer to do this with a visual response than getting a computer to try and figure it out, because it's not going to do it as well. I'd say important advice, same as before: don't zoom in, because it'll look bad. You can see the harsh edge here, but zoomed out, you barely notice it, so it's fine, and that's how users are going to see it. So there we go. We've been able to reduce the colors down to 68, so now we're going to switch back to WebP, turn the effort up, let it use all of its iterations to figure out the compression and boom, there we go, it's like 13K. So it's incredible how much that saves. For Safari, we need the PNG. PNG compressors are really varied. So we use OxiPNG, which is a pretty good one, and that will take it down to, what do we get, 17K. That's a close second. Pretty good. It's close, and I would give a shout out to ImageOptim, which ships with OxiPNG but also lots of other PNG compressors that we would like to add to Squoosh at some point but haven't yet, and it brings the file size down even more.
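And here's a tiny TypeScript sketch of that 256-colors-or-less table trick. The exact-match lookup is an assumption for brevity; a real quantizer, like the palette reduction in Squoosh, also merges near-identical colors.

```ts
// Tiny sketch of palette (indexed color) encoding: instead of storing
// R,G,B,A per pixel, store each unique color once and reference it by index.
// Exact-match keys are a simplification; a real quantizer also merges
// near-identical colors, which is what reducing the palette in Squoosh does.
type RGBA = [number, number, number, number];

function toIndexed(pixels: RGBA[]): { palette: RGBA[]; indices: number[] } {
  const palette: RGBA[] = [];
  const seen = new Map<string, number>();
  const indices = pixels.map((px) => {
    const key = px.join(',');
    if (!seen.has(key)) {
      seen.set(key, palette.length);
      palette.push(px);
    }
    return seen.get(key)!;
  });
  return { palette, indices };
}

// Flat artwork uses only a handful of colors, so the index stream is short,
// repetitive and very easy to compress losslessly.
const { palette, indices } = toIndexed([
  [255, 0, 0, 255], [255, 0, 0, 255], [0, 0, 255, 255], [255, 0, 0, 255],
]);
console.log(palette.length, indices); // 2 [ 0, 0, 1, 0 ]
```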
In fact, it's only 3K off the WebP. So in this case, I would probably not bother unless I had a load of images on the page and that difference added up. I would say just serve the PNG, it's fine. Like, 3K is not worth the complexity. So yeah, rather than think of a compression format for a particular kind of image, my takeaway is: think about how the compression actually works and use that as your first guess. But as you said earlier, it's always worth trying it in something else. So I'm gonna look at a couple of edge cases to finish up on. Look at this, look at this, this is beautiful. This is SVG, believe it or not. I was about to say that it looks like one of these things that the CSS artists build with just like a billion divs and gradients and shadows, and then a photorealistic thing turns out to just be CSS. But I guess SVG can do that too. That's pretty much what is happening here, but with SVG. I think this was manually created. Yeah, very similar. If I minify it in SVGOMG, which is an SVG compression tool, well, it's a GUI I made, but it's based on SVGO, which is a node app, I can get it down to 37K. But this is so complicated and uses so many filters. Like, look at the jank there on the resizing, and this is on a high-end MacBook. It is taking a ton of CPU to render that image. So over in Squoosh, lossy WebP does a really good job of this, like 53K. It's actually bigger than the SVG, but the saving in CPU makes this a better decision. And that's actually interesting because that's something that Squoosh doesn't really allow you to quantify, the saving in CPU. Yes, exactly. Yeah, it'd be nice if we could find a way to do that somehow. But yeah, if it's struggling... With this image, you notice how slow it loads and you notice in SVGOMG how slow it is to resize. So that's a good signal. The PNG equivalent, because we can't use JPEG for this because it's got alpha transparency, the PNG is 86K. And I've had to reduce the colors, and that introduced a lot of banding, so I had to introduce dithering. And that's actually noticeable, you can see the dots. It's not great, but without the palette reduction, it's 300K. So in this case, you would kind of say, oh, I'm sorry, Safari users, you don't get quite as nice an image, you know? Yeah. But it's worth it for the file size. Although I guess you could also serve the SVG, because with the picture element, you specify the sources in order of priority. So you could put the WebP first and put the SVG second? Yeah, you could decide to say, well, you know, Safari users are likely to be on a high-end device, so the SVG is less bad, maybe. But it's still gonna cause jank to other things on the page, you know? It is. Like, as we say, on a high-end MacBook, it's bad. But yes, you could make that judgment call. Like, you could say the jank is better than the 300K or the banding or whatever. One more example I'll leave you with. This is another one of Steve's drawings. It has a lot of flat color, so my instinct for this would be to go for a lossless format. But there's also a lot of shadows, a lot of gradients as well. So again, lossy WebP does a really good job of this, and it takes it down to 28K. If you go for a lossless format, even with the palette reduction, it's 118K and you actually get a lot of banding with it. So yeah, it's like, tools, not rules. That's my takeaway from this. Think about the codecs in terms of what they're good at. Think about how they work. And with that, go and make your images smaller.
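As a rough sketch of that picture element fallback, built with plain DOM APIs in TypeScript. The file names are hypothetical; the point is just that the source elements are tried in order, so the preferred format goes first.

```ts
// Sketch of the <picture> fallback discussed above, built with plain DOM APIs.
// File names are hypothetical; what matters is that <source> elements are
// evaluated in order, so the preferred WebP version comes first and browsers
// without WebP support (Safari at the time) fall back to the PNG in <img>.
const picture = document.createElement('picture');
picture.innerHTML = `
  <source type="image/webp" srcset="team-drawing.webp">
  <img src="team-drawing.png" alt="Drawing of the team" width="800" height="600">
`;
document.body.append(picture);
```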
Please, please make your images smaller. I've actually enjoyed this. This has been like, should we do some more tutorials like this? Should we do some remote tutorials? Cause I don't know when we're gonna get back to the office. I was planning to. Absolutely. Well, we've got the equipment. So let's do it. Until next time. Until next time. Bye. Hello and welcome. My name is Stephen Fluin. I'm a developer advocate on the Angular team here at Google. My job is really fun because I get to help developers be successful with Angular and I get to spend a lot of time listening and engaging with the community so we can reflect your needs onto the team. One of the things that we've learned, having worked with a lot of developers across the world, is that staying up to date is really, really important. And so today's talk is all about how to keep your application as fast as possible and as fresh as possible using Angular. We're gonna be diving into some of the best practices that you should be following as an application developer. And we'll take a look at live code and see how you could actually go in and debug and build your application out. So let's actually get started. So if we take a look at what we've got here, this is an application that I've built using the Angular CLI. So it's a very, very basic application. This should look very, very familiar. Basically, we ran ng new and then we installed something called Angular Material. And so if you take a look at the app component, I've put in an app dashboard. And so if we ran a command like ng serve, what we would see is a normal dashboard of an application that's coming from Angular Material. So we've got a few dependencies installed. We've got a few other things installed, and all of that should just be coming to you kind of automatically. So here we go. This is just the default blank dashboard. And so even though we haven't written a ton of code, we've got an application up and running and it's pulling in a ton of code, right? We're pulling in the features of Angular necessary for these animations, for these menus, for these dialogs, as well as for all of this content around this dashboard that a user could interact with and engage with. And so the first thing that I want to do in order to make my application as fast as possible is I really want to understand my application. And so we're going to make a couple changes within our build configuration in the angular.json file. So what we're going to do is we're going to find the angular.json file. You're going to find this directly in the root of your application. And what we're going to do is, in the production build settings, we're going to change two settings here. We're going to say source map, I'm just going to change that from false to true, as well as named chunks, we're going to turn that on to true. What source map does is it will actually take my application and map all of the generated JavaScript that comes out of the build back to the code that that JavaScript came from. So if some of the generated code came from Angular Material, we'll see that. If it came from Angular itself, we'll see that. And we'll even see things like the generated JavaScript being mapped back to my HTML and into the application code that I'm writing for all of the components that I'm creating.
So source maps are a really great way for you to understand what's going on in your application and the relationship between what happens in the code you write in the IDE and the code that's being delivered to your users. And so there's a lot of steps that we end up taking along that way, right? There's the Angular compiler that takes that HTML and turns it into JavaScript. But there's also other steps that we do, like optimizations where we'll change some of the property names. You've got bundling so that we get all of this JavaScript coming down as a single kind of cohesive bundle that makes sense to browsers. And so source maps allow us to follow that path, follow that process, and give us really good insights. And so what we can do is, having made these two changes, we can actually now go and run an ng build --prod. And what that will do is create a production build of our application, which is normally what you ship down to browsers. But because we've turned on source maps and we've turned on named chunks, we'll actually see the JavaScript and be able to analyze those chunks. And so we're also gonna be using another tool today called source-map-explorer. There's a bunch of other tools out there that developers use to analyze bundle size and chunk size. But what we on the Angular team actually really strongly recommend is that you only use source-map-explorer, because there are tools out there like Webpack Bundle Analyzer that miscategorize some of those steps within our process. And so we've done that production build. And now, whenever you want to actually take a look at what a source map looks like, we can just run source-map-explorer. If you don't have this installed, you can just install it globally, yarn global add source-map-explorer, or you can install it local to your project, whichever makes sense to you. Just go ahead and do that. And what we'll do is we'll run source-map-explorer and we will pass it the JavaScript file that we want to analyze. And so we can see that we've got our dist folder here and we're gonna look at the ES2015 version of our JavaScript. So if we run that, it's gonna pop open a web browser and we're gonna see this visual interactive story about our application. So we can see our overall bundle size in terms of minified but not compressed JavaScript. So it'll actually be a little bit smaller when it goes over the network. We have about 452 kilobytes of application. And what's really, really helpful here is that you can see where that bundle size is coming from. So you can see animations is 64 kilobytes. Angular Material is about 56 kilobytes. We've got things like HTTP where we might actually need these things. And we might not even know that they're in our bundle. We might have forgotten at some point along the way. So you can see I'm pulling in forms and HTTP. If we actually go into our application, now that we know that those things are there, we can say, hey, I don't actually need that for this dashboard, right? Like, if we look at the content that I'm showing here, there's no form data here. There's no data coming from the internet. And I can go into my app module and I could pull those out, right? So I'll just comment these out because we don't need HTTP or forms. And what should happen is, when I do my next production bundle, all of that code is gonna be left out. So we're gonna be able to see that our bundle size will come down from around 450KB to a little bit less than that, which is really, really nice.
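Roughly, the app module edit being described looks like this; the exact import list is illustrative rather than the full file from the demo.

```ts
// app.module.ts, roughly the edit described above: drop the modules the
// dashboard does not actually use so they fall out of the main bundle.
// The exact import list here is illustrative, not the full file from the demo.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
// import { HttpClientModule } from '@angular/common/http'; // not needed yet
// import { FormsModule } from '@angular/forms';            // not needed yet

import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    BrowserAnimationsModule,
    // HttpClientModule,
    // FormsModule,
  ],
  bootstrap: [AppComponent],
})
export class AppModule {}
```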
And it should actually be exactly leaving out the size of the HTTP bundle and the size of the forms bundle. And so source maps are one of the best tools in your toolbox. And we really recommend that every developer be using source maps on a regular basis to understand your application, especially when you're doing any sort of performance optimization. Now, let's say that you are looking at your bundle and you're saying, it's still too big, but I need all of those features. I need all of those capabilities. Fortunately, the Angular CLI offers a lot of really smart capabilities. And one of those capabilities is being able to take an application and build out lazy loaded modules. And so let's actually go ahead and do this. And so I'm gonna use the CLI here, and I'm gonna say ng, which is the Angular command, and I'm gonna say generate module. And I'm gonna give it a root module that it's gonna connect back to. So what we're gonna do is we're gonna create a separate part of our application where we can pull in the features that we need and just leave all of that code outside of the main bundle. And so we're gonna generate a module here and I'll just link it back to the module known as app module, which is the root of my application. And now I'm gonna give it a route. And so what this does is it automatically wires up this module to be lazy loaded whenever a user hits that route. And so we'll just make, for example, an about page. And so what this is gonna do is it's gonna generate a module called about module. It's gonna be able to route to it. And then it's gonna give me an about component. So we can see all of that in our source code. So let's just close all these files. And we're gonna see it created a new about folder with our new module and our new code. And so if we do another production build, what we'll actually see, because we turned on named chunks, is we'll actually get out now not only a main chunk, a main module, we're also gonna get out an about module, and we can see and independently verify the size of that module and independently verify what the dependencies of that module are doing to the overall bundle size of our application. And so what you'll see is that by default, an Angular module is really, really small. It's very, very thin. It doesn't add a ton of bundle size to your application. It's almost always the dependencies that you are pulling in as a developer, the features that you are reaching out for. And what Angular will do is intelligently, based on where you do the imports, lazy load and split that code. So if you remember, we pulled HTTP and forms out of our main chunk; we can actually now pull those in here. And so you can see we've generated a, let's just clear this out. Let's take a look in our dist folder here. You can see we've got a main chunk and now we've got this nice new about module. And so you can see, in the ES2015 version, our about module is only about 1.1K of code. So it's just that about component, because it doesn't have any dependencies. But if we add back in more dependencies into that chunk, if we wanted to pull in, for example, Angular Material into that about module, then it's gonna pull in that code, but it's gonna do it very, very intelligently. It's gonna lazy load all of that code. So if you haven't set up lazy loading before, it's really, really easy. Just generate a new module with that command that I showed, where we generate a module and we hang it onto the routing structure of our application.
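For reference, a route generated that way ends up looking roughly like this; the dynamic import() is what lets the CLI split the about module into its own lazily loaded chunk. The module and path names mirror the about example above.

```ts
// app-routing.module.ts, roughly what a command like
// "ng generate module about --route about --module app" wires up for you.
// The dynamic import() is what tells the build to split the about module
// into its own lazily loaded chunk.
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

const routes: Routes = [
  {
    path: 'about',
    loadChildren: () =>
      import('./about/about.module').then((m) => m.AboutModule),
  },
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
})
export class AppRoutingModule {}
```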
All right, next up, I wanna talk a little bit about what happens after we build an application, after we've been building it out and we've added more and more features. It's very, very easy to backslide. One of the things that the Chrome team has seen consistently is that even applications that spend a ton of time building great performance into their apps tend to backslide, because we as developers, we wanna add more features. We wanna push more functionality to our users, and that often comes with more dependencies, which sometimes we don't realize is negatively affecting our bundle size. And so one of the features that's built into the Angular CLI is called bundle budgets. And so again, if we go back into our angular.json, so we'll just search for that, we can see this nice little budgets section. And what this does is it allows you to set several different budgets for your application. And by default, we give you two out of the box. We give you an initial budget. So this is the JavaScript that it takes to load the initial page of your application. So right now, we are warning at two megabytes. So we'll see a warning as part of our build if our application is over two megabytes. And our build will actually error out if it's more than five megabytes. And so those are very, very conservative defaults. We recommend you turn those down as small as you can, to really just give yourself the knowledge of when you're increasing your bundle size, so that it's a conscious choice rather than an accident. You'll see we also have a few other types of budgets, including any component style. So what this says is that for any scoped styles on a component, we wanna keep those under six kilobytes, and we are gonna error out if they go above 10 kilobytes. And there's a few more that you can see if you just take a look in your IDE. You've got all scripts, so that's just the total sum of all the scripts in my application; any script, so no individual script should be bigger than this; the whole bundle overall; all those sorts of things. So we've got a whole bunch of different budget types that allow you to really take control and understand and prevent bundle size increases. So that's a really, really helpful tool. So those are some of the top things that you should be doing to keep your application as fast as possible, but one of the best things you can do that doesn't take a lot of work is actually staying up to date with Angular. So if we jump back to the terminal here, what we'll see is that I actually created this Angular application on an old version of Angular. It is running version eight. So version eight is not the latest version; version nine, as of this filming, is the latest version, and version 10 is coming out, but the story is always the same. If you are on an out-of-date version of Angular, your bundle size is gonna be bigger and slower than it needs to be. Because what happens is, over time, every time we do a release, the Angular team is looking at what can we do better? With the release of version eight, we actually made a huge step forward by automatically doing something called differential loading, where, because we have this opinionated control of the entire tool chain, we understand how to make your production bundles. And so what we did is we said, hey, modern browsers are capable of loading JavaScript differently than legacy browsers that don't support something called ES modules.
And so using that knowledge, you can actually conditionally force modern browsers to load modern JavaScript and legacy browsers to load legacy JavaScript. And this is a really nice split that allows you to have two bundles, which really gives you the best of both worlds, where you can ship the smallest, fastest JavaScript to modern browsers that support the most recent capabilities like classes, like modules, all those sorts of things. And you can leave an older legacy bundle for browsers that don't have that kind of support. And so we actually did this out of the box by default. So you noticed when we were doing those builds, we were actually getting two JavaScript bundles for each of the files. So we had an about module ES2015 and we had an about module ES5. And so again, this is one of the changes that the Angular team did behind the scenes without making you change any of the code in your application. And this is something we do every single release. And so what we're gonna do now is we're gonna make our application as fresh as possible. So we're gonna update to the latest version of Angular and we're gonna automatically get more improvements to our bundle size. This is something that just happens every time you keep your application up to date using Angular. And so we're gonna again use the ng CLI and we're going to use ng update. And we're gonna update our application in a couple of stages. First, we're gonna update the Angular core packages, so the core package as well as the CLI package. And then what we're gonna do is we're gonna go and update our dependencies. So we're gonna do it as two steps to make sure that we don't accidentally introduce a mistake or an error where the application is no longer compatible with one of those dependencies. So let's go ahead and get this started. So I'm gonna run ng update. And what we're gonna do is we're gonna pass it @angular/cli and @angular/core. Now, if you were on an even older version, what we recommend is that you go one version at a time, just so we can apply things cleanly. And if you wanna do that, for example, if you were on version six and you wanted to go to version seven, you can just say @7 and that will just do that for you. But we're gonna go all the way from version eight up to version nine, which is the latest. And so I'm gonna run this command and theoretically things should work. There's a few places where this might fail. If your repository isn't clean, then what should happen is we're gonna throw this warning, which is what we're seeing right now. Repository is not clean, please commit. Because what we want is for you to have a very clean history so that you actually know what the Angular update process did, because this isn't like a normal npm update where we're just modifying the node_modules of your project. We're actually making changes to your application so that you stay compatible with Angular. And the reason we're able to do this is because the Angular team at Google, we have thousands of projects across Google that are not run by our team that are using Angular, and it's actually, according to Google policy, our responsibility to keep those applications working as we make breaking changes to Angular. And the only way that we can scale this at Google is to build really great automation and really great tooling. And so what we did is we baked that same sort of tooling into the public world, into this ng update command.
So whenever we need to change a method name, whenever we need to deprecate something, we're gonna try and update your application as best we can. And so that's why we always make sure that you have a clean history. So we'll just git add everything, commit that, and now we should be able to run our update command. So again, what's happening behind the scenes is it's gonna be downloading and installing the latest version of the Angular CLI, and then it's gonna be updating the packages of my application and changing my code where needed. And if there are any sort of migrations, what should happen is, while you're doing the update process, it's going to report what migrations it's doing. So you can see Angular workspace migration. So it made a few changes to the workspace layout. So you can see it updated my angular.json file, it updated my tsconfig.app.json, it updated the package.json. So it's making all of these migrations, even ones that don't really have any changes to my app. It's still double checking all those things so that we know that my application's gonna keep working. Now there's a couple other places that you can look for the latest information. You'll notice here at the very, very bottom of the update, it actually says, for more information, please see this link. And so we actually have guides on what changes we're making behind the scenes with every version of Angular, so you can know about things like deprecations, you can know about things like removals and changes to the way that APIs work. The other one that I wanna point you to would be update.angular.io. So behind the scenes, when you ran ng update, we made a lot of changes to your project. And so if you actually wanted to see what you need to do, you can just go to the update guide and you can say, show me how to update. And we'll say, oh, make sure you're on the latest version here, make sure you're using this version of Node. It walks you through the changes that are gonna affect you as a developer. And you can even tune this based on the number and amount of features and the depth and complexity of your usage of Angular, because most applications don't need to care about all of the changes we're making under the hood. But let's say you have a large application, maybe you have several hundred components, you have component libraries, you're using things like Universal. You can check these boxes. And what it will do is it will show you all of the information about the update, all of the changes we're making behind the scenes, so you can have a full, complete understanding of what's going on. The other way you can do it is if you just take a look at the Git history, you can now go in and see what all the changes we made were. So for example, when we moved to version nine, we turned on ahead-of-time compilation by default. So that's gonna make your build a lot faster. You can see that we've installed all of the dependencies, and not just Angular. We've also updated things like RxJS to the latest version. We've updated your components so that they actually continue working as we make changes to Angular. So this is really, really powerful. And then we can actually do a test to see if this works. So let's just run ng serve, and we can say, yes, let's let Google Analytics track some of our CLI usage anonymously. And what will happen is, when we return back to the browser window, we're gonna get the latest version of Angular and our app is generally just gonna keep working.
And then the way that this really affects you is not only keeping you up to date as a developer, making sure you're using best practices, but again, it's also gonna improve the performance of your application. One of the things that you're gonna see is that every time we make an update to Angular, we're trying to look for ways to make Angular more tree-shakeable, to make your build system better. And so there's lots of experimentation, lots of ideation going on there, because the state of bundlers in JavaScript is not static. Webpack keeps getting better, Rollup keeps getting better, Terser keeps getting better, and these tools are changing and they're evolving and the way that they relate to each other is changing. And so what we do is, as the Angular team, we're trying to stay on top of that for you. And so let's go ahead and refresh. And if we take a look in the DevTools, we should see we are now on the latest version of Angular. So 9.1.9 at the time of recording, but whenever you use ng update, it's just gonna move you to the latest version. It's gonna do that automatically for you so that you are staying up to date, staying fresh, staying fast. So once you've updated Angular, and you've updated the CLI and core, now what I'm gonna do is I'm gonna go and update all of the other packages. So if you take a look, for example, in our package.json, we did not update the CDK and we didn't update Angular Material. And so we're gonna do one more ng update command to update Angular Material and the Angular CDK. You can do this with any of your dependencies. A lot of dependencies are starting to support these automatic migrations. This is something we've been pushing very hard for in the ecosystem. All right, let's just run that ng update command again, this time with Angular Material and the Angular CDK. And again, it's fetching the latest version of those packages. It's updating their dependencies in your package.json. And then it's gonna execute any migrations that you need to be on the latest version. And I think we're gonna just run ng serve one last time and we will see that our entire application kept on working and our bundle size should have gotten a little bit better. And again, it's doing a little bit of compiling ahead of time. Generally, this is a temporary thing that you're gonna see with Ivy and with version nine, that we do this extra compilation. That's really to offer the latest version of Angular and the latest features of Angular while still staying compatible with the rest of the ecosystem. So this is just an extra compilation step that the Angular team does to optimize things. All right, as soon as this compilation is complete, what we're gonna see is we're gonna be on the latest version of Angular. And again, we updated the package.json, so just all of the dependencies, including removing things like HammerJS, and our application should just keep working. Yep, there it is. We have a great dashboard, just like we had before right in the beginning. Everything works, everything animates. We're now on the latest version of Angular. We're now on the latest version of Material and the CDK. And our application is as fast as it can be, because we're budgeting. We're making sure that we're understanding our application, making sure it doesn't grow over time. We're analyzing the application bundle size with source maps, and we're doing things like lazy loading so that we only make users pay for the features that they're using at the current time. That's gonna be it for us.
Thank you so much for watching and have a wonderful day. Hello and welcome to Security and Privacy for the Open Web. I'm Maud and I'm a developer advocate for the Chrome team, recording from Berlin. And I am Sam, coming to you live-ish from Tooting Bec in South London. Okay, first, Sam, let's travel in time a bit and understand what the open web means, because the web is open by definition, right? Yeah, well, I mean, sort of. The web was originally designed to be open and transparent. In fact, Tim Berners-Lee envisaged the web as a kind of read-write medium. You know, the browser would be an editor as much as a reader. And right from the early days, the limited scope of what browsers could do helped keep them safe for, you know, anonymous, stateless document rendering. But then capabilities and expectations grew and, you know, we wanted to log in, and then e-commerce and so on. And then we got phishing, malware, man-in-the-middle attacks and all the rest. Yeah, so in response, you know, we've had browser sandboxing, HTTPS, protections against malicious sites, plugin deprecation and all that stuff, which is great. However, some of the web technologies designed decades ago aren't being used in the simple way that was intended. You know, it was difficult in the 1990s to envisage how user agent strings and cookies would be used beyond their intended capabilities in the 2020s. And this evolution in how some web technologies are able to be utilized, paired with the rising expectations from users to have more control and transparency over their personal data, has led the whole web ecosystem to evaluate long-standing practices. And now, more than ever, I think, the web needs privacy and security by default for all users. I think we'll see an increasingly high proportion of people on the web who've never used a browser before or who are simply outside their normal browsing habits. People, you know, may be looking for information who may be in a crisis or just feeling vulnerable. And I actually think these are the hardest, most complex and most important problems to solve on the web right now. But of course, this is not easy, even in terms of user interface design. For example, how do you make quite complex concepts such as cookie management understandable for billions of users? Yes, so that's why users need transparency and control in their browsers. And it's not just browser features, right? Web standards and defaults also need to change. Yeah, that's absolutely true. I mean, cookies, for example, and data such as user agent strings that can be used for device fingerprinting and to covertly track individual users. Also features such as the referrer header that can reveal private browsing data. As developers, we need to rethink the way we handle user data. Do you really need all the data you access, and is it clear to your users what you're doing with their data? Yeah, because as a developer, you're actually best placed to understand potential problems and help fix them. So, okay, let's run through today's sessions. We'll be talking about security, privacy, payments and identity. Yeah, first up, our colleague Rowan Merewood has some recipes to help you manage your cookies. Whether or not you've heard about SameSite and changes to cookie defaults, if you're using any kind of third-party content such as ads, or you're doing anything with cookies on your site, you should definitely check out his session.
Second, thinking about the cross-origin web, you need to prevent information leaks, and that's where powerful protections like the COOP and COEP policy headers come in. To understand how these can protect your website, check out the great session from our colleague, Eiji Kitamura. Okay, so that's all nice, but how do you actually debug that stuff, like SameSite cookies and COOP and COEP? Well, the new Issues tab in Chrome DevTools is there to help you. The Issues tab makes it much easier to find and fix problems. Instead of console message clutter, you get clear instructions on how to fix problems. So with these sessions, you learn techniques to not passively leak your users' data, and there's a pattern here. Debugging is important, but how can you and your team develop a mindset around privacy and security and prepare for the future? Rowan and I thought about this and put together strategies with concrete examples of web APIs and HTTP headers you surely know, to help you make sure you're using just the data you need. And speaking of user data, a crucial entry point for this is the ubiquitous sign-in form. And this is particularly important right now, when lots of sites need to be accessible to new users. Now, in my session, I'll show you how to use cross-platform browser features to build a simple email and password login form that's secure, accessible, and easy to use. Just like the sign-in experience, you need payments flows to be clear and safe. So what's new in web payments? Eiji Kitamura will guide you through this. So if your website uses payments, make sure to check this session out. Okay, so that's it for the sessions, but we'd also like to tell you about one really important initiative that we're all involved with, the Privacy Sandbox. Now, more than ever, people need to know that private data stays private. On the other hand, people want to do stuff on the web that's private and personal. You want to go shopping, use your bank online, communicate private information, and so on. Keeping data safe can't just be about constraints. We can't just say no to everything. The problem is, keeping your users safe is not just about getting your own house in order. It's not just about first-party interactions, because most websites use services from other companies to provide analytics or videos and do lots of other useful stuff. Most notably, ads are included in web pages via third-party JavaScript and iframes. And ad views, clicks, and conversions are measured via third-party cookies and scripts. But when you visit a website, you may not be aware of these third parties and what they're doing with your data. And actually, publishers and web developers themselves may not have visibility on the entire third-party supply chain. Ad targeting, conversion measurement, and other use cases currently rely on stable cross-site identity. Historically, this has been done with third-party cookies, but today browsers have begun to restrict access to these. Other mechanisms for cross-site user tracking are also being used, such as covert browser storage, device fingerprinting, and requests for personal information like email addresses. And this is a dilemma for the web. How can legitimate third-party use cases be supported without enabling users to be tracked across sites? In particular, how can websites fund content by enabling third parties to show ads and measure ad performance, but not allow individual users to be profiled?
How can advertisers and site owners evaluate a user's authenticity without resorting to dark patterns such as device fingerprinting? Well, this is where the Privacy Sandbox comes in. And just to avoid confusion, this is not the same as the browser sandbox architecture you may have heard of, though it does build on some of the same ideas of keeping your data safe. The Privacy Sandbox is a set of proposals to implement privacy-preserving APIs to support business models that fund websites in the absence of mechanisms like third-party cookies. Now, the Privacy Sandbox supports five core use cases for a world without third-party cookies: measurement for ads and other third-party content, relevance features for advertising, fraud detection (distinguishing real humans from bots and spammers), getting rid of covert tracking, and finally, secure and simple identity management across sites. So we need browsers to support third-party use cases in a future without cookies, but whatever happens, users need cookie security and choice right now. So how do the Privacy Sandbox proposals satisfy third-party use cases? I'm not gonna go into the gory details of every single one of the Privacy Sandbox proposals here. Shameless self-promotion, but you really should read our article, Digging Into the Privacy Sandbox, which outlines all the proposals. But what we really need is feedback on the proposals, in particular to suggest missing use cases and more private ways to accomplish their goals. The Chrome team is responding to feedback on the proposals on GitHub and in W3C forums, and we really hope you'll join the discussion. And this is especially true if you work for a publisher, advertiser, or an ad tech company. And if you have other questions, feedback, or ideas to share on today's sessions, ping us on Twitter at @ChromiumDev with the hashtag #WebDevLIVE. Yeah, thanks, Maud. And if you wanna get in touch, we'll be online to answer questions in the live chat during the web.dev LIVE event, or you can just add a comment to this video. So thanks for watching. So let's get on with this, and first, cookies. Hello there, my name's Rowan, and I want to chat to you about cookies. Like most delicious things, cookies are great in moderation, but too many of them are messing up the recipe. That's bad news. Okay, so what are we gonna chat about? Two bits. First, a good HTTP cookie recipe for all occasions. And second, how to work out when your cookies are going wrong. Debugging, essentially, because discovering bugs in cookies? Gross. We'll worry about that later, though. First, let's get the recipe right. We're gonna start with a base that comes from Mike West, who's really one of the head chefs of the cookie world. This cookie recipe should be your starting point. Everything else we do is gonna be a variation on this. So let's take a look at it. It's a lot, I know, but it's mostly static. If you set this up as your default configuration, so it's just added automatically, your code still remains as clean as: set a cookie with this name and this value. And all of these attributes, or ingredients, come with benefits, so let's step through them. First, that __Host- prefix. Now, that's actually gonna set up a few rules for us. It's enforcing these first two attributes, Secure and Path=/. Secure means that this cookie will only be set or sent over HTTPS connections. If you haven't fully migrated your site to HTTPS, now is really the time to make that a priority.
Browsers are getting more and more functionality behind HTTPS, and being more explicit in communicating to users that plain HTTP is insecure. Now, Path=/ is interesting, and we need to look at that along with another attribute that __Host- is enforcing, but in this case enforcing that it is not included, and that's Domain. No Domain means that the cookie uses the current document origin and does not include subdomains. And Path=/ means that it's sent for all requests to that origin. So if I set a __Host- cookie for example.com, it will go on every request to example.com but not to images.example.com. There you go. That was three in one. The __Host- prefix gives you Secure, a top level path and no Domain. Next up is the HttpOnly attribute. This means that the cookie will only be sent in request headers and is not available to JavaScript via document.cookie. To clarify, you can trigger requests from JavaScript, like fetch or XMLHttpRequest, and if you've specified include credentials, cookies, including HttpOnly cookies, will be sent. They're just not visible to those client side scripts in any way. So in the event that any of those scripts on your site have been compromised or are malicious, you've at least limited their access to potentially sensitive cookie data. Finally, a personal favorite, it's SameSite. Specifically, SameSite=Lax. And what SameSite=Lax does is restrict the cookie to only be sent on requests that match the current browsing context. In other words, the top level site the user is currently visiting and is right there in their location bar. Putting that all together, we get a nicely contained first party cookie. This is an ideal cookie for controlling your user session, set by your server side framework. And let's talk about why. What this gives you is some pretty reasonable protection against cross site request forgery. The attack works like this. Let's say you have a blog where users need to be signed in to post content, and that content submission is just a form submission or an API call. If one of those signed in users visits a malicious site, it can trigger a request to that content posting endpoint with some kind of spammy or abusive content. If the cookies aren't protected, the browser may well send them. And when the blog server receives that request, it's gonna look like it came from the signed in user and it's gonna post that content with their name attached. But with our cookie recipe, it can't. SameSite=Lax is what's protecting us against this attack. The malicious site can make the request, but because the user isn't actually on the blogging site, the sites don't match and therefore no cookie. There are some other benefits too. The Secure attribute means that we're going over HTTPS, so other people lurking on the same network can't steal our cookie. Look up Firesheep if you want to see how someone created an extension to demo just how easy this kind of session hijacking over HTTP was. Next up, HttpOnly. Like we said, that means the cookie can't be stolen by malicious client-side JavaScript, like a third-party dependency getting hacked and then included on your site. Now let's talk about tweaking this recipe for your own taste. If the __Host- prefix is too restrictive, like maybe you have a news site and you've set it up with subdomains for each topic, like finance. and politics., but you still want one session across all of those, then you could use the __Secure- prefix instead and specify the Domain.
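Putting the base recipe together, a minimal Node-style sketch might look like this; the cookie name and value are placeholders, and in practice the page has to be served over HTTPS for the browser to accept a __Host- cookie.

```ts
// Minimal Node sketch of the default recipe above; the cookie name and value
// are placeholders. __Host- enforces Secure, Path=/ and no Domain; HttpOnly
// hides the cookie from document.cookie; SameSite=Lax keeps it first party.
// (In practice this needs to be served over HTTPS for the browser to accept it.)
import { createServer } from 'http';

createServer((req, res) => {
  res.setHeader(
    'Set-Cookie',
    '__Host-sessionid=abc123; Secure; Path=/; HttpOnly; SameSite=Lax'
  );
  res.end('ok');
}).listen(8080);
```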
So far, these examples don't have any kind of expiry date set on them. That means they expire when the browser session ends, which sounds short, but in reality a lot of people leave their browser open for a long time, or have a session restoring feature that also brings back all those cookies. Now, there isn't a default right answer here. It's down to your use case. But what I would say is, if it's something short-term, like a token for a form submission, then use Max-Age, since you just specify the number of seconds until it expires, nice and simple. If it's something more permanent, like a theme preference, then use Expires and set it maybe a year in the future. I wouldn't use it for anything short-term because it's a date format, so you have to think about time zones, clock changes, incorrect clocks. It's a nightmare. I'm not even going to show you that one. Remember, you can always reset that expiry date on future requests, but then if the user doesn't visit your site for ages, you can also ensure the browser cleans that up for you. And that just seems polite. No one wants stale cookies. We can also use the SameSite attribute to lock things down a bit more, but it's really for quite specific situations. We talked about SameSite=Lax, which allows cookies to be sent on top-level navigations. For example, I want my session to be sent when I first visit the site because I want to see my signed-in experience. SameSite can also be set to Strict, which means I really have to be on the site already or the cookie won't be sent. This is useful for protecting form submissions. So that blog hosting example: if you set up a SameSite=Strict cookie, pretty much the same as your session, but you treat it like a token for write permission and validate that it's included on that form submission, then you can be pretty sure it came from the user submitting the form actually on your site. Sometimes you do need that cross-site data, though. Now, our long-term goal is to stop supporting cross-site cookies and bring in better mechanisms for enabling that functionality. But we can still do some interim work to clean up those cookies right now. The really important thing is that you make the intent to send that cross-site data totally explicit. For that, we can turn the SameSite dial the other way with SameSite=None. That tells the browser that there are no same-site restrictions and it can send the cookie whenever. It does require Secure as well, though. No restrictions on site doesn't mean no restrictions at all. Now, even if you don't like the default recipe, I strongly recommend you move to setting an explicit SameSite value on your cookies. Chrome is moving to treat cookies as SameSite=Lax by default for Chrome version 80 and up, with the release of Chrome 84 stable. That's around July 14th, two weeks from this initial broadcast date if you're watching this on the premiere, or even sooner if you're already watching this in the future, or in the past. Anyway, the important thing is, cookies without a SameSite attribute will be treated as SameSite=Lax. In other words, first-party by default. If you need cross-site or third-party cookies, they must be set with SameSite=None and Secure. Okay, now, that's all simple enough for me to say, but I know a lot of you are maintaining legacy or complex sites where, to be honest, you don't always know how or why those cookies are being set. Told you we were gonna talk about debugging, so let's do it.
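As hedged examples of those variations, the header values might look something like this; the names, values and dates are placeholders.

```ts
// Placeholder header values for the variations above: a short-lived write
// token with Max-Age and SameSite=Strict, a longer-lived preference with
// Expires, and an explicitly cross-site cookie with SameSite=None; Secure.
const writeToken =
  '__Host-csrftoken=tok_123; Secure; Path=/; HttpOnly; SameSite=Strict; Max-Age=3600';

const themePreference =
  '__Host-theme=dark; Secure; Path=/; SameSite=Lax; Expires=Fri, 02 Jul 2021 00:00:00 GMT';

const crossSiteCookie =
  '__Secure-widgetid=abc; Secure; Path=/; HttpOnly; SameSite=None';

console.log(writeToken, themePreference, crossSiteCookie);
```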
First, let's get our environment set up, starting with the browser. Cookies are persistent, meaning that you've probably got a lot of things hanging around in your browser. On top of that, browser settings and extensions can also affect their behavior. Because of that, I strongly recommend testing with a clean slate. That means either setting up a new profile, using a separate Chrome channel like Beta or Canary, or using Incognito mode. Across all of those, though, make sure that you don't have any extensions installed and check your user settings to ensure you aren't blocking third-party cookies, or just blocking cookies in general. Now, we can make sure the browser is enforcing the new behavior. I'm going to pop open chrome://flags here. And what I'm gonna do is search for cookies. Now, the two that I want to ensure are enabled are SameSite by default cookies, here, and Cookies without SameSite must be secure, here. There are a couple of other flags in here that you can set. So if you look through, you can also see this one, Enable experimental cookie features. Now, what this will do is turn on all of the upcoming changes. So this is actually great for general testing, if you just want to check that your site is gonna behave well in the future. But when you are trying to isolate individual bugs, you can use the individual flags to toggle that behavior on and off. Now, when you've changed a flag, you are gonna have to restart your browser so that those flags take effect. Now, I already have restarted and I have a test site set up on samesite-sandbox.glitch.me. And if you've got all the flags enabled, when you go there, you should see all this nice soothing green. Any red or orange Xs in there, go back, check your cookie settings, check your flags. With all of that configured, I'm gonna walk you through this excellent debugging guide, which comes from Lily Chen, one of the engineers working on SameSite. So don't worry if you don't catch everything I do in this video, it's all written down in there and there's a link in the info. I'm gonna keep using the samesite-sandbox for testing. The code is all on Glitch. If you expand the info again, you can find a link. Feel free to remix this if you want to run your own version or try some different combinations of cookies and so on. Let's open up DevTools and start poking around. I'm gonna start on the Console tab and you'll probably see some warnings, but we're gonna be cleaning those up. So what you actually want to look for, if you're on a recent enough version of Chrome, like 84 and onwards, is the Issues tab here. So the warnings are gonna show you the same information that the Issues tab does, but the Issues tab is much clearer, much more actionable. Sam's got a video where he goes through the full tour on that. So I'm gonna go through this pretty quick. Let's take a look. Let's tap on go to issues. Now the first thing it tells us is that we want to reload the page to get that full capture. So let's take that advice and reload. Now you can see I've got some issues that have come up. Let's take a look. Now the first one we've got is mark cross-site cookies as Secure. Let's expand that and see what's going on inside. So you can see that this one is telling me that I've got a cookie that's marked with SameSite=None, but it's missing the Secure attribute. If I scroll down a bit more, I can also see I've got some affected resources, so I can expand that out. And that is gonna show me exactly the cookie that's affected.
If I click on that cookie here, you can see it takes me through to the Network tab up here. I can expand that so we can see a bit of what's going on. And it's using this filter entry to basically filter that down too. So you can see it's filtered it specifically to the cookie name that has the problem. So it's showing me all the requests that include that specific cookie. We'll come back to the Network tab in a minute, so hold on to that. I wanna take a look at the other issue first. So back in here, let's scroll down again and expand that. The second issue is telling me that I have some cookies that don't specify a SameSite attribute at all. So the browser just doesn't know what to do with that cookie. Maybe we're okay with it being first party, but we haven't told anyone. So let's take a look at the cookies that are affected in there. So I'm gonna expand this again and you can see I've got two cookies affected. Right, now we know where the problems are. Let's go over to the Network tab to take a look in detail. This is a pretty simple site, so there aren't that many requests. In reality, you've probably got far more to wade through. So let's look at some ways that you can filter that down. What I'm gonna do is right click in here and go to header options, where I can enable Cookies and also Set Cookies. So this is gonna show me, for each of the requests, how many cookies they are sending and how many cookies are being set by the Set-Cookie header in the response. The other part of the puzzle then is this 'has blocked cookies' toggle up here. Now, if I turn that on, you can see one of our requests disappears. So when I have this on, it's filtering out all the requests that are not being altered in any way. So we've got a pretty good idea that it's these two that are responsible for the issues. Back in the table, let's look at the request for the page first. So tap on there, and on the Cookies tab over here, I can see that I have one of my response cookies highlighted. So I'm gonna hover over there and that info bar is telling us that this is the cookie that's got SameSite=None but is missing the Secure flag. So I've got the name, I've got the value. Hopefully that's enough for you to track it down in the backend and get that Secure flag set. Okay, but we had another issue that was listed as well. So let's take a look at the second request, that's cookies.json. Now, this one looks fairly quiet. So the thing we want to do here is enable this 'show filtered out request cookies' option. Suddenly, there's a lot more going on here. Now, what this is showing us is all the cookies that could have been included on that request, but were restricted for some reason. Now, some of them are for the right reasons. So you can see here we've got a cookie that has SameSite=Strict. That's not included on the cross-site request. Got one that says SameSite=Lax. Same deal, we don't want it on the cross-site request. But then we've got these two where they've got a blank SameSite value. Now, that means that the browser is treating them as SameSite=Lax and therefore restricting them. This might be because you didn't specify a SameSite value, but it might also be because you've got an invalid value in there. So get into your code and look for those SameSite attributes and check the spelling to make sure there are no typos in there as well. Now, DevTools is great for this kind of interactive exploration, but we can also export the data for a whole series of requests.
This is helpful if you need to grab the data for a particular situation, like maybe it's one user with a particular account type who has been able to reproduce this problem, and you can't necessarily sit down with them at their machine for a good debugging session together. For this, we're gonna capture a network log. So open up a new tab and go to chrome://net-export. If you've been doing a whole bunch of these, then you may need to click Start Over on there. And what we want to make sure of in the options is that we say include cookies and credentials, because that's what we're trying to test. But just be mindful that including all of this means that you may be logging some sensitive user data. So treat it in exactly the same way that you would any other sensitive data. Make sure that you have the consent to log it, make sure that you're controlling access to that log, and delete it when you're done debugging the issue. Okay, press Start Logging to Disk, and I'm just gonna save that in my downloads directory. Okay, now do not close this tab, because this is what's doing the recording, but we are gonna switch back over to our test site and I am just going to reload the page to capture that behavior. Now, in reality, what you want to do is capture the entire user journey that you're trying to debug. At a minimum, it's got to be the section that is causing the problem, but often you should probably try and start maybe from user sign-in or wherever the cookies are being set, because you probably want to capture that interaction as well. Okay, once you're done, we're gonna switch back over to the net-export tab and we're gonna say Stop Logging. Now, you can say show file here and you can see there's just a JSON file in this directory. You can take a look at it. It is pretty big and it's pretty verbose. So what we want to do is make use of the netlog viewer tool, which is linked from this page. So I'm gonna go there and I'm going to go to the hosted version here. Now, I choose my file, which is just there, chrome-net-export-log. And there we go. You can see that I've got a whole bunch of information about the logging that I've just done. You can see my Chrome version, you can see the OS info. What you can also see is, if you dive into this where you've got features here, you can see all the flags that I've got enabled. I've been playing with quite a few of them, but there's also cookies without SameSite must be secure embedded in there. The stuff we're looking for, though, related to the cookies, is over in the events section. So I'm gonna go over here and, as you can see, I've got 200 events to deal with. So this is pretty verbose. This is all of the requests that Chrome is sending, kind of all of the network related events, including extensions, including other tabs. So let's filter this down to something a bit more useful. I want to type 'type:' and then 'URL_REQUEST'. Make sure there's no space after that colon, and there you go. That's a bit more manageable. You can see that there's a bunch of other things going on in there, but I can see my samesite-sandbox requests. These are the ones that I'm interested in. Let's find that cookies.json one, and inside of here it's this string that I'm interested in: cookie inclusion status. Now what that's telling me is basically why a cookie was or wasn't included in the request. And if I look through there, you can see each of the different ones. So I've got, let's see what we've got in here.
One has a cookie inclusion status of 'exclude' because of SameSite=Strict; that's one of the ones that we wanted excluded. We've got one up here, that's CK03, that is excluded because SameSite was unspecified, so it was treated as Lax. This search over here lets you do quite a bit. So if I want to find where a particular cookie is set, for example, let's say CK02, I can actually just search for that header value. So Set-Cookie and CK02: there it is. And then if I just Ctrl+F in here as well for Set-Cookie, there you go, you can see all of the response cookies. And if I look at that, that's my CK02 one: SameSite=None, but missing the Secure attribute. And that's it. So now you know how to debug your cookie issues, and when you find them, you've got the recipe to put them right. Remember, it's all about being explicit about where you want that cookie sent. We've got more on this theme. You can watch Eiji's video talking you through how you can isolate your resources with COOP and COEP, and you can also come back to see me and Maud have more header-related fun. So thanks for watching. Hi everyone, I'm Eiji, and today I'm going to talk about COOP and COEP. COOP and COEP are the names of HTTP headers that let your webpage opt into a special state called cross-origin isolated and gain access to powerful features such as SharedArrayBuffer across platforms, performance.measureMemory(), and the JS Self-Profiling API. If you're interested in using a powerful feature from this growing list, you will need to opt in with COOP and COEP. Also, if you own a CDN or provide ads, fonts, images, or other resources that get loaded on different origins across the web, you should also watch this video. I will discuss one HTTP header you can add to ensure all your customers can properly load your resources. With that in mind, let's get started. Composability is a superpower of the web. You can enhance your website's capabilities by simply adding resources from different sources. Commonly used services include ads, web fonts, images, videos, maps, identity federation, payment solutions, and so on. These services are quite handy and powerful, but their cross-origin nature contributes to an increased risk of information leakage: malicious parties could take advantage of the situation and exfiltrate information about your users. But browsers do a good job of preventing such scenarios. They keep cross-origin resources separated within a browsing context group, except where the web page allows it. A browsing context group is a group of tabs, windows, and iframes that share the same context. The same-origin policy is a security feature that restricts how documents and scripts can interact with resources from another origin, so that users' information won't accidentally leak. The same-origin policy has been doing a good job of keeping the web a safe place. Then Spectre was discovered. Spectre is a vulnerability found in CPUs that enables malicious websites to read memory contents across origin boundaries. This vulnerability can be exploited via features that can act as high-precision timers. This leaves cross-origin resources that are shared in a single browsing context group at risk: even if they are protected by the same-origin policy, Spectre attacks can bypass that boundary. As a quick response, browser vendors decided to turn off features that could be used to construct high-precision timers, for example, SharedArrayBuffer.
To mitigate the risk of Spectre and make powerful features available to a web page, its origin needs to be isolated from others. By isolating origins into separate browsing context groups, Spectre and other exploits that grant permission to read arbitrary memory in the same process are no longer able to read cross-origin contents. This in turn allows browsers to bring back the powerful features on pages that are properly isolated. Chrome on desktop enabled site isolation, which allowed us to turn SharedArrayBuffer back on on desktop. To achieve more robust isolation, the browser needs an explicit signal from the website that it wants to be isolated from cross-origin resources. That is what COOP and COEP are about. To enable cross-origin isolation, you need to do three things. First, set Cross-Origin-Opener-Policy: same-origin on the main document. Second, make sure cross-origin resources are served with Cross-Origin-Resource-Policy: cross-origin or with Cross-Origin Resource Sharing (CORS). Third, set Cross-Origin-Embedder-Policy: require-corp on the main document. Let's walk through each step to see how they work. When Cross-Origin-Opener-Policy: same-origin is set, any cross-origin window opened from the document will have no access to the opener's DOM, so window.opener will return null. This is how the document achieves isolation from the cross-origin window. To create a cross-origin isolated page, you need to make sure all cross-origin resources embedded in the document explicitly allow themselves to be loaded. There are two ways to achieve this: by setting Cross-Origin-Resource-Policy: cross-origin, or by applying Cross-Origin Resource Sharing. If you serve resources from a different subdomain that you have control of, you can simply apply the Cross-Origin-Resource-Policy header. But what about resources you don't have control of? If the resource already supports Cross-Origin Resource Sharing, you don't need to do much: just add the crossorigin attribute where it's needed. For resources that don't support it, you should ask the resource provider to enable the Cross-Origin-Resource-Policy header. So here's the action item for owners of CDNs or providers of ads, fonts, images, or other resources embedded across the web: please be prepared to serve your content with Cross-Origin-Resource-Policy: cross-origin. Without it, your resources will eventually be blocked in the browser, which will harm your customers' experience. With Cross-Origin-Embedder-Policy: require-corp, you can let the web page load only resources that are explicitly marked as shareable, unless they are served from the same origin. For example, an image served cross-origin without a Cross-Origin-Resource-Policy header will be blocked like this; by setting Cross-Origin-Resource-Policy: cross-origin, the image can be displayed. Note that if you use the report-only mode of Cross-Origin-Embedder-Policy, you can send reports to a specified URL without actually blocking those resources. I recommend deploying Cross-Origin-Embedder-Policy starting with report-only mode. That way, you can confirm that cross-origin isolation is in effect without affecting your end users. Once your page is loaded with COOP and COEP, it should be in a cross-origin isolated state. You can verify that cross-origin isolation is in effect by checking the self.crossOriginIsolated boolean flag in JavaScript. This flag is not available in Chrome yet, but it's coming soon. Let's recap. Start testing with COOP and COEP and opt into cross-origin isolation today.
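Here's a minimal sketch, assuming a plain Node.js server, of what those three steps could look like in practice; static file handling and error handling are left out for brevity.

```js
// Hedged sketch: opting the main document into cross-origin isolation.
const http = require('http');

http.createServer((req, res) => {
  // Step 1: isolate this document from cross-origin popups and openers.
  res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
  // Step 3: only load subresources that are explicitly marked as shareable.
  res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
  // Step 2 happens on the servers hosting your cross-origin subresources;
  // they would respond with: Cross-Origin-Resource-Policy: cross-origin
  res.setHeader('Content-Type', 'text/html');
  res.end('<script>console.log("isolated?", self.crossOriginIsolated);</script>');
}).listen(8080);
```

Once the flag ships in your browser, that console line should log true when both headers are in place.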
You can learn how that works and why the headers are needed in more detail at web.dev/coop-coep and web.dev/why-coop-coep. If you're an owner of a CDN or a provider of ads, fonts, images, or other resources, please start adopting Cross-Origin-Resource-Policy. You can learn more at resourcepolicy.fyi. Additionally, Sam is covering this topic by showing you how DevTools can help you with a variety of issues, which includes new functionality for COOP and COEP. Thank you for watching. Are you fed up with console spam? Sick of browser messages that tell you about problems, but not how to fix them? Well, the Issues tab is a new way for Chrome DevTools to help you find and fix problems with your website. Problems detected by the browser are presented in a structured format, separate from the console, and that also means your own console messages don't get drowned out by browser warnings. The Issues tab aggregates different types of problems. It describes problems using clear and simple language, explains how to fix them, links to affected resources within DevTools, and shows you where to find further guidance. So let's run through that from start to finish. First, you need a page with problems. Well, for this example, we have a page with lots of problems. Open the page in Chrome and then open Chrome DevTools. As you can see, issues were detected. Click the Go to Issues button in the yellow warning bar. You can also select the Issues tab from the More Tools menu, or click on the blue icon at the top of DevTools. Now, you might need to click the Reload Page button, since DevTools can't collect requests while it's closed. You'll notice that, for the moment, warnings that used to show up in the console still do, as well as appearing in the Issues tab. The initial version of the Issues tab checks for three types of issue: cookie problems, mixed content, and Cross-Origin Embedder Policy issues, that's COEP, as well. Future versions will support more issue types. So click an item in the Issues tab and you'll see that each item has four components: a headline describing the issue, a description providing the context and the solution, an Affected Resources section that links to the resources within the appropriate DevTools context, such as the Network panel, and links to further guidance. Click on items within Affected Resources to view details. In this example, there is one cookie and one request. Now, the Issues tab explains problems and tells you how to fix them, but it can also show you resources in the appropriate context within DevTools itself. So click on a resource link to view the item in context. In this example, click samesite-sandbox.glitch.me to show the cookies attached to that request. Scroll to view the item with a problem, in this case the cookie CK02. Hover over the information icon on the right to see the problem and how to fix it. You can also go the other way: right-click on an item within DevTools to show issues associated with it. We'll be adding more features to the Issues tab in the future, so let us know what you think, how we could improve the Issues tab, and what features you'd like to see. You can comment on this video or file bug reports on crbug.com, which is the engineering team's bug tracker, or send a tweet to Chrome DevTools. As ever, we'd really appreciate your feedback. So thanks very much. Hey, I'm Maud. Hey, and I'm Rowan. Now, over the past couple of videos, there's been a pattern where the changes we've been showing you have been to reduce cross-site data transfer by default.
And because some of these changes restrict existing behavior, you need to take action if your site relies on that cross-site functionality. So instead of just a list of features, we wanted to give you a little framework for thinking about them, both for now and for the future. We've brainstormed and finally distilled that down into three questions you can ask yourself and your team. We'll be running through three examples, but first let's take a look at the raw questions. One, do I need this data? Now, there's definitely a developer temptation to think, just collect all the data, it might be valuable later, but that's not really the right attitude anymore. Right, and you're opening yourself up to more risk if that data leaks. You've got more regulations to follow, and it's just more work. So the lazy option is just don't: don't collect it, no risk, no work. So we can sum this up as: more data, more problems? Yeah, on to the next question. Question two then, is there an alternative? There are so many options there, so it depends. Some things like site preferences might be better in client-side storage, like IndexedDB or localStorage. You can still sync them with the server, but maybe you don't need them on every single request. Yeah, and there might be places where query parameters in the URL make more sense, if it's related to how the content displays, because then if I share a link with you, you get the same options that I have. Sometimes a cookie is the right answer though, so question three, have I secured it correctly? Oh, I know this one. Watch my video from earlier. Yeah, yeah, okay, Rowan, okay, but let's do a summary. So first-party cookies should try using the __Host- prefix so they're Secure and scoped to a single origin. They should use HttpOnly to stop JavaScript access, and they should set SameSite=Lax to protect against cross-site request forgery. And if they do need to be third-party, then try and do the same, but with SameSite=None and Secure. And that's it. Cool, I feel good about this. I think if you're asking these questions, you're headed down the right track. We should repeat these a few times though, like good scientists, to see if it works with different topics. Good idea, I might even have a topic that I could talk about right now. Yeah, me too, okay, you go. Okay, first test drive of these questions then: we're gonna take a look at the user agent string. Now, the original specification for this back in 1996 was pretty simple. It was basically browser version, library version, and so on. However, nearly a quarter of a century later, that string has grown a bit. If you want to successfully parse it, then you need a whole bunch of regular expressions or a library that hides them from you. On top of that, the information about your browser and device in that string can be pretty identifying. For my string here, which is being sent on every single request, you can see the exact build of Chrome, the phone I'm using, and the version of Android it's running. Now, especially with less common devices, this could be enough to track someone. While we would eventually like to reduce the amount of data exposed there, those 25 years of growth have meant that there are a lot of genuine and valuable use cases out there for the user agent string. And maintaining backwards compatibility on the web is what means that your modern browser can still show you those 25-year-old sites successfully. Right, let's run the questions then and see if it helps us. Number one, do you need it?
If you're using the string to try and determine feature support or capabilities, then it's far better to use feature detection, progressive enhancement, or responsive design to achieve those goals. That means you don't need to maintain a mapping between browser version and whatever feature you're checking for. It also means you're less likely to exclude less common browsers just because you couldn't match the string. Question two, is there an alternative? When I mentioned valid use cases before, some of those examples might be matching the user's OS so you can show appropriate downloads or match the UI conventions. You may also need to work around browser bugs, where the specific browser version is the key bit of information you need to know whether it's been fixed or not. For this, I want to introduce you to client hints, and specifically the new set of user agent client hints. Essentially, the idea is we could go from broadcasting all of this information by default to a model where the site asks the browser for each piece of info and the browser decides what it wants to return. With user agent client hints, the default data shared on each request is the browser name, significant version, and the mobile indicator, which come in on these Sec-CH-UA headers. If you want something extra, like the OS platform, then your site needs to ask for it. It can do this by sending an Accept-CH header in its response, asking for platform. Then on subsequent requests, the browser will send a Sec-CH-UA-Platform header back. You can also do this in JavaScript by calling getHighEntropyValues() on navigator.userAgentData and passing platform as a parameter. There's one of these user agent hints to cover each of the bits of data that you can get from the string: full browser version, platform version, device model, and CPU architecture, on top of the significant version and that mobile indicator. But wait, there's more. There are also existing client hints that can give you things like device memory or viewport width. Because even if you're getting the data in the right way, it's easier to ask directly for the thing you want instead of using the user agent as a proxy. Okay, third and final question, have you secured it correctly? The user agent client hints are by default restricted to the same origin. If you have other cross-origin or cross-site requests on that page that need the client hints, let's say your downloads are hosted on a separate origin, so you want to send the platform hint there, you need to specify each of the hint and origin pairs in a feature policy to allow them through. Now, eventually, we'd like to reduce the amount of information exposed by default in the user agent string. So ideally, if you can migrate to user agent client hints, then that should absolutely be your route. However, if you cannot, then try to be flexible in your detection. This kind of string parsing is inherently fragile. So feel free to warn me that I might get issues with my browser, but don't block me just because my string didn't match your regex. And there we go: user agent, one, two, three, done. We've got a lot more info on web.dev, but let's keep this rolling and do another round. Maud, over to you. Thanks, Rowan. Now let's look at our second test drive of these questions to see how they work, using an example that's also an HTTP header. It's called the referrer. You'll see it written with either one R or two; don't be surprised, this is due to an original misspelling in the spec.
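Before we follow Maud into the referrer, here's a quick, hedged sketch of the user agent client hints Rowan just described. It assumes the server has already asked for the extra hint by sending an Accept-CH: Sec-CH-UA-Platform response header, and that navigator.userAgentData is only present in browsers shipping UA client hints.

```js
// Hedged client-side sketch of user agent client hints.
if (navigator.userAgentData) {
  // The low-entropy defaults: brand/version list and the mobile indicator.
  console.log(navigator.userAgentData.brands, navigator.userAgentData.mobile);

  // High-entropy values have to be requested explicitly;
  // the browser decides what it is willing to return.
  navigator.userAgentData
    .getHighEntropyValues(['platform', 'platformVersion', 'model'])
    .then((ua) => {
      console.log(ua.platform, ua.platformVersion, ua.model);
    });
} else {
  // Fall back gracefully rather than blocking on a user agent regex.
  console.log('UA client hints not supported here.');
}
```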
Imagine that a user is visiting a page on site one, and to load an image, a request to site two is sent. In some cases, site two can see the full URL of the site one page the user is on in the request's Referer header, which might be present in all types of requests, navigation or sub-resource. Oh, and this info is not just present in the header; it may also be accessed on the destination via JavaScript. The referrer can be insightful, for example, for analytics, to know that 50 percent of visitors to site two were coming from socialnetwork.com. But full URLs can contain private data and even sensitive or identifying details, especially in the path and query string, and requests sent from your website might include these details. So on to question one, do I need this data? First, incoming requests. Sometimes, like on this diagram, the referrer is used to extract the origin, to see where the request came from or whether it's same-origin. But the referrer contains way more data than you need to answer these questions. If you're doing this, you're using it as a proxy to find answers, and this is more work for you. So we'll look at alternatives in a minute. Second, outgoing requests. If there's no compelling reason for your website to share full URLs cross-origin, then you shouldn't. We'll see how. Okay, on to question two, is there an alternative? Yes. First, incoming requests. Back to what we said: if what you need is just the origin, the Origin header gives you exactly this, and it's available in POST and CORS requests. And if what you need to know is whether the request is same-origin, you can use the Sec-Fetch-Site header. Also, a side note: if you're using the referrer as extra protection against CSRF, then replacing it with this or with the Origin header is great, but make sure that you're using CSRF tokens and maybe SameSite cookies as a primary protection. Now let's talk about outgoing requests. Is there an alternative to sending the full URL in all requests? Again, yes. Luckily, websites can control how much data is sent in the referrer by setting a specific referrer policy. Depending on the policy, either no referrer at all, the origin only, or the full URL will be sent. Back to your websites. Suppose you want the full URL, for example, to understand how users are navigating within your website. This doesn't mean that you have to send the full URL in all requests. What you can do instead is set your Referrer-Policy to strict-origin-when-cross-origin. It shares the full URL for same-origin requests, like here on this diagram, but only the origin in cross-origin requests. Also, it's a safe policy, because if a request is sent from your HTTPS website to an HTTP origin, the referrer will be stripped, which is good, because you don't want to leak your URLs over an unencrypted connection, since anyone on the network can see them, right? And a quick segue: if you don't set a policy, the browser's default policy will be used. And to have your back, browsers have switched to, or are experimenting with, more privacy-preserving default policies, for example strict-origin-when-cross-origin, the policy we just talked about. You can already try this out, so check the article in the video notes for details. Okay, on to question three, have I secured it correctly? strict-origin-when-cross-origin is good, but maybe you can't set this for your whole website because of some specific use case you might have. In that case, don't fall back to an unsafe policy altogether. What you can do instead is take a case-by-case approach.
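Here's a minimal sketch of what that could look like, assuming a plain Node.js server for the site-wide header; the referrerpolicy attribute on an individual element is the per-element escape hatch Maud describes next, and the analytics partner URL is a hypothetical placeholder.

```js
// Hedged sketch: a strict site-wide referrer policy with a per-element relaxation.
const http = require('http');

http.createServer((req, res) => {
  // Full URL same-origin, origin only cross-origin, nothing from HTTPS to HTTP.
  res.setHeader('Referrer-Policy', 'strict-origin-when-cross-origin');
  res.setHeader('Content-Type', 'text/html');
  // A single link that genuinely needs more can opt into a laxer policy.
  res.end(`<a href="https://analytics.example/report"
              referrerpolicy="no-referrer-when-downgrade">Report</a>`);
}).listen(8080);
```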
For example, set strict-origin-when-cross-origin as a general policy for your website, and a more lax policy on specific elements or requests if need be. You need to pick a policy that fits the level of sensitivity of the data: maybe you have a cookie recipes website, but maybe you have a healthcare web app where the topic in the path is sensitive, or some of the URLs contain user-identifying information. On the whole, don't share the referrer with a third party unless it's strictly necessary and transparent to your users. And one last thought: data that might not look sensitive or identifying can bring one more piece to the puzzle and be more revealing than you think. So that's it for the referrer, and I think we are through. So, Rowan, you wanna wrap up? Thanks, Maud. Yeah, okay, wow, that was a lot of topics in one session. As always, we've got the links included, so you can read more on all of this. To close then, when you're reviewing data usage, whether that's cookies, user agent, referrer, or anything else, what are you gonna ask yourself? One, do I need this data? Two, is there an alternative? And three, have I secured it correctly? Speaking of needing data though, what have we got next? Oh, smooth transition. So you definitely need data at sign-up and sign-in, so you've gotta do these forms right. And Sam's going to explain how to improve sign-in and sign-up forms on your websites. And we'll see you around. Thank you. If users ever need to sign in to your site, good sign-in form design is critical. This is especially true for people on poor connections, on mobile, in a hurry, or under stress. Now, more than ever, you need to ensure that sign-in isn't a barrier to using your site. Poorly designed sign-in forms get high bounce rates, and each bounce means a disgruntled user who's likely to exit your site, not just a missed sign-in opportunity. But first, a disclaimer: this video is about best practice for simple sign-in forms that use an email address and password. It doesn't explain how to use federated login with identity providers like Google or Facebook, or how to integrate with backend services. You can find out more about those topics from the links at the end of the article that goes with this video. So in this video, I'm going to talk about straightforward front-end coding best practice: mostly HTML, with a little bit of CSS and some JavaScript. I asked the identity team at Chrome what I should talk about, and they said, get the basics of sign-in forms right. So be warned, this is simple stuff; it's not rocket science. And before we start, a quick reality check: do you actually really need users to sign in to your site? People don't like being forced to hand over identity information, and all that personal data and the code that goes with it becomes a cost and a liability for you. If people can use your site without signing in, all the better. The best sign-in form is no sign-in form at all. Anyway, with all that out of the way, let's get on with it. First up, well-formed HTML is the backbone of a good sign-in experience. Use the elements built for the job; they've been around for years: form, input, label, and button. And as I'll show you, using these elements as intended enables built-in browser functionality, improves accessibility, and adds meaning to your markup. Your basic HTML might start out something like this. So let's break that down. Now, you might be tempted to just wrap inputs in a div and handle input data submission purely with JavaScript. Well, don't do it.
It's just generally better to use a plain old form element. This makes your site more accessible to screen readers and other assistive devices, helps browsers understand the intention of your code, and enables a whole lot of cross-platform, standardized, built-in form features that I'll show you later. An HTML form also makes it simpler to build basic functional sign-in for older browsers and to enable sign-in even if JavaScript fails. So first things first: to label an input, use a label. There are two reasons for that. First reason, a tap or a click on a label moves focus to the input it's associated with. Second reason, screen readers announce the label text when the label or the label's input gets focus: 'Sign in, heading one. Email,' with a hint. You associate a label with an input by giving the label's for attribute the same value as the input's id. Now, placeholders can be useful, but don't use them as input labels. People are liable to forget what the input was for once they've started entering text, especially if they get distracted. You know, was I entering an email address, a phone number, or an account ID? I can't remember. There are other potential problems with placeholders, and you can see the article that goes with this video if you're unconvinced. Now, it's probably best to put your labels above your inputs. This enables consistent design across mobile and desktop and, according to Google AI research, enables quicker scanning by users. You get full-width labels and inputs, and you don't need to adjust label and input width to fit the label text. Some sites force users to enter emails or passwords twice. That might reduce errors for a few users, but it causes extra work for all users and can increase abandonment rates. I think it's better to enable users to confirm their email address, you'll need to do that anyway, and make it easy for them to reset their password. Next, use buttons for buttons. Button elements provide accessible behavior and built-in form submission functionality, and they can easily be styled. There's no point in using a div or some other element pretending to be a button. Give the submit button in a sign-up or sign-in form a value that says what it does, such as Create account or Sign in, not Submit or Start or whatever. Consider disabling the sign-in button once the user has tapped or clicked it. Many users click buttons multiple times, even on sites that are fast and responsive. That slows down interactions and can add to server load. Conversely, don't disable form submission while awaiting user input. For example, don't just disable the sign-in button if something's missing or in the wrong format; explain to the user what they need to do. Now, this is a real example: I was urgently trying to sign in to a site and there was no way of knowing what I was doing wrong. Now, that HTML code I showed you before is all valid and correct, but the default browser styling means it looks terrible and it's hard to use, especially on mobile. The default browser size for just about everything to do with forms is too small, especially on mobile. This may seem obvious, but it's a common problem with sign-in forms on lots of sites. In particular, the default size and padding for inputs and buttons is too small on desktop and even worse on mobile. Here you can see the various minimum guidelines for target sizes.
On that basis, I reckon you should add at least about 15 pixels of padding to input elements and buttons for mobile, and around 10 pixels on desktop. But don't take my word for it: try this out with real devices and real humans, and also make sure to provide enough space between inputs. Add enough margin to make inputs work well as touch targets. As a rough guide, that's about a finger's width of margin. You should comfortably be able to tap each of your inputs and buttons without bumping into something else. You also need to make sure your inputs are clearly visible. The default border styling for inputs makes them hard to see; they're almost invisible on some platforms such as Chrome for Android. So add a border: on a white background, a good general rule is to use #CCC or darker, or change the background color instead. I mean, whatever you do, make it blindingly obvious where to tap or click. And remember, design for thumbs. If you search for touch target, you'll see lots of pictures of forefingers. However, in the real world, many people use their thumbs to interact with phones. Thumbs are bigger than forefingers and control is less precise. All the more reason for adequately sized touch targets. Now, as with form control dimensions and padding, the default browser font size for input elements and buttons is too small, particularly on mobile. Browsers on different platforms size fonts somewhat differently, so it's difficult to specify a particular font size that works well everywhere. A quick survey of popular websites shows sizes of around 13 to 16 pixels on desktop. Matching that physical size is a good minimum for text on mobile, and that means you need to use larger pixel sizes on mobile generally: 16 pixels on Chrome on desktop is quite legible, but even if you have pretty good vision, it's difficult to read 16-pixel text on Chrome on Android. Lighthouse can help you automate the process of detecting text that's too small. Now, let's talk about visual indicators for validation. Browsers have built-in features to do basic form validation for inputs with a type attribute. Browsers warn when you submit a form with an invalid input value and set focus on the problematic input. You don't need to use JavaScript. Use the :invalid CSS selector to highlight invalid data; this is really widely supported by browsers. And for more recent browsers, you can use :not(:placeholder-shown) to avoid selecting inputs with no content. Okay, we've touched on elements and a bit of CSS. Now I want to talk about attributes. You know, this is where the magic really happens. Browsers have multiple helpful built-in features that use input element attributes, so let's take a look. Add an autofocus attribute to the first input in your sign-in form. That makes it clear where to start and, on desktop at least, means users don't have to select the input to start typing. Password inputs should, of course, have type="password" to hide password text and help browsers understand the meaning of the input. Using input type password also means that browsers, such as Firefox, offer to save your password when a form is submitted. As I'll show you, browsers also use the name and id attributes to work out the role of form inputs. Use input type="email" to give mobile users an appropriate keyboard and enable basic built-in email address validation by the browser. Again, no JavaScript required.
If you need to use a telephone number instead of an email address, input type="tel" enables a telephone keypad on mobile. You can also use the inputmode attribute where necessary: inputmode="numeric" is ideal for PIN numbers. But watch out: using type="number" adds an up-down arrow to increment numbers, so don't use it for numbers that aren't meant to be incremented, such as telephone or PIN numbers. And while we're talking about keyboards, unfortunately, if you're not careful, mobile keyboards may cover your form or, worse, partially obscure the sign-in button. Users may get confused and give up before realizing what has happened. Avoid this where you can by displaying only the email, phone, and password inputs and the sign-in button at the top of your sign-in page, and put other content below. Now, I know this won't be possible for every site, but whatever you do, test on a range of devices for your target audience and adjust accordingly. Some sites, including Amazon and eBay, avoid the problem by asking for email, phone, and password in two stages. This approach also simplifies the experience: the user is only tasked with one thing at a time. So next up, the name and autocomplete attributes. These are a really powerful way for you to help browsers help users, by storing data and auto-filling inputs. There are two parts to this. The input name attribute enables browsers to work out the role of various inputs so that they can store email addresses and other data for use with autocomplete. So don't make the browser guess. Some browsers, including Firefox, also take note of the id and type attributes. And when the user later accesses a sign-in form on the same site, the autocomplete attribute enables the browser to auto-fill inputs using the data it stored against the name attribute. Now, you need different behavior for password inputs in sign-up and sign-in forms. Don't let the browser auto-fill the password input in a sign-up form: the browser may already have a password stored for the site, and auto-filling a password just doesn't make sense on sign-up, for example, if two people share the same device and one wants to create a new account. Use the appropriate password input autocomplete value to help the browser differentiate between new and current passwords. Use autocomplete="new-password" for the password input in a sign-up form, and also for the new password in a change-password form. This tells the browser that you want it to store a new password for the site. Use autocomplete="current-password" for the password input in a sign-in form, or for the input for the user's old password in a change-password form. This tells the browser that you want it to use the current password that it has stored for the site. Different browsers handle email auto-fill and password suggestions somewhat differently, but the effect is much the same. On Safari 11 and above on desktop, for example, the password manager is displayed, and then biometric authentication, fingerprint or facial recognition, is used if available. Chrome on desktop displays email suggestions depending on what you type, shows the password manager, and then auto-fills the password. Now here's another reason to use autocomplete="new-password": modern browsers suggest a strong password if that's included for the password input in a sign-up form. Use built-in browser password generators. That means users and developers don't need to work out what a strong password is.
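Here's a short, hedged sketch that pulls those attribute values together via the DOM; in practice you'd normally write them straight into your HTML, and the element IDs used here are hypothetical.

```js
// Hedged DOM sketch of the attribute values just described.
const signUpPassword = document.getElementById('signup-password');
signUpPassword.type = 'password';
signUpPassword.autocomplete = 'new-password';      // sign-up / "new password" field
signUpPassword.required = true;

const signInPassword = document.getElementById('signin-password');
signInPassword.type = 'password';
signInPassword.autocomplete = 'current-password';  // sign-in / "old password" field

const pinInput = document.getElementById('pin');
pinInput.inputMode = 'numeric';                    // numeric keypad, no spinner arrows
```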
Since browsers can securely store passwords and auto-fill them as necessary, there's no need for users to remember or enter passwords and leave them on sticky notes attached to their computer. Add the required attribute to both email and password fields. Modern browsers automatically prompt and set focus for missing data, and I'll say it again, no JavaScript required. So I've talked about the basics of getting HTML and CSS right, but you're also gonna need some JavaScript. Make sure to add a show-password icon or text button to enable users to check the text they've entered, and don't forget to add a forgot-password link. Here's how Gmail does it. It's really straightforward: you add a listener to your button and, in the handler, toggle the password input type to text or password. Make sure to include an ARIA label to warn that the password will be displayed; otherwise, users may inadvertently reveal passwords. Speaking of accessibility, use aria-describedby to explain password constraints, referencing the element you use to describe your password requirements. Screen readers read the label text, the input type, and then the description. Now, you'll also want to validate data entry in real time and before submission. HTML form elements and attributes have built-in features for basic validation, but you should also use JavaScript to do more robust validation while users are entering data and when they attempt to submit the form. Just bear in mind that this does not obviate the need to validate and sanitize data on the back end. The sign-in form codelab that goes with this video uses the Constraint Validation API, which is widely supported, to add custom validation using built-in browser UI to set focus and display prompts. Okay, one really important extra thing: what you cannot measure, you cannot improve, and that's particularly true for sign-up and sign-in forms. You need to set goals, measure success, improve your site, and repeat. Usability and lab testing are really helpful for trying out changes, but you'll also need real-world data to really understand how your users experience your sign-up and sign-in forms, with analytics and real user measurement or monitoring. You'll need to monitor page analytics, including sign-up and sign-in page views, bounce rates, and exits. Make sure to add interaction analytics, such as goal funnels, where do users abandon your sign-up or sign-in flow, and events, what actions do users take when interacting with your forms? And lastly, track website performance. Use user-centric field metrics to understand the real experience of real users. Are your sign-up and sign-in forms slow to load, and if so, what is the cause? And finally, some general guidelines to help reduce sign-in form abandonment. Number one, don't make users hunt for the sign-in: put a link to the sign-in form at the top of the page, using well-understood wording like Sign in, Create account, or Register. And keep it focused: sign-in forms are not the place to distract people with offers and features. Minimize complexity: ask for other user data, such as addresses or credit card details, only when users see a clear benefit from providing that data. Before users start on your sign-up form, make clear what the value proposition is. You know, how do they benefit from signing in? Give users concrete incentives to complete sign-up. If possible, allow users to identify themselves with a mobile phone number instead of an email address since, you know, that's the way some users want to do it.
They may not want to use their email. Make it easy for users to reset their password, and make the forgot-your-password link obvious. Make sure to link to your terms of service and privacy policy documents: make it clear to users from the start how you safeguard their data. And finally, finally, include branding for your company or organization on your sign-up and sign-in pages. Make sure your fonts, styles, and tone of voice match the rest of your site. Some forms just, you know, they just feel like they don't belong to the same site as other content, especially if they have a significantly different URL. So there you go. That's the basics of sign-in form best practice. You can find out more from the web.dev article that goes with this video and the codelab that goes with that. I hope that's given you a few items to add to your next sprint to improve your website's forms. Of course, sign-up and sign-in, you know, it's not the only place that involves a lot of form filling that could be improved. So stay tuned for Eiji, who's going to talk through some of the new options for payments on the web. Thanks for watching. Hi, everyone. This is Eiji. Today, I'm going to talk about web payments. Web Payments is the name of a W3C Working Group and a set of standard APIs that bring dedicated payment functionality to browsers for the first time in the history of the web. There are several specifications under web payments, but the two most important ones are the Payment Request API and the Payment Handler API. The Payment Request API provides a standardized way to invoke a browser-mediated, low-friction payment flow on the web, similar to what users might already be familiar with in many native apps. The Payment Handler API allows payment apps to plug into the Payment Request API to enable form-free payments on the web. Here's how a web payment flow starts: a website invokes the Payment Request API; the customer's preferred payment app is launched inside a special modal window; the customer interacts with the payment app to confirm and authorize the transaction; and the payment app returns a payment credential that can be verified and processed. There has been some confusion between the web payments APIs and wallet-specific JavaScript APIs. To clarify, the web payments APIs are low-level web platform primitives that are W3C standard proposals, whereas wallets such as Google Pay provide JavaScript APIs that can be built on top of the web payments APIs. Support for the web payments APIs across browsers is progressing slowly but steadily. Today, outside of Chrome, limited support is available in Safari, Samsung Internet, Edge, Opera, and Brave, and Mozilla is working on adding support to Firefox as well. There are two ways an existing payment app can integrate with the Payment Request API. The best option for a payment app with an existing web-based flow is to implement the Payment Handler API by adding a service worker to its existing payment experience. A payment app that is primarily a native app can integrate with Chrome on Android using the payment intent. It's been a few years since web payments was announced, and some of you might be curious what's been going on in the meantime. The short answer is that we've been actively working on it. Recently, our focus has moved from trying to figure out how the Payment Request API can be directly valuable to merchants, to how the APIs can enable better payment app experiences on the web. Let me tell you a bit more about it.
We started by focusing on making the Payment Request API directly valuable to merchants by including a payment method called basic-card, which was designed to give merchants a direct alternative to form-based credit card payments. With basic-card, customers can just select a credit card stored in the browser to make payments. We've learned that building a compelling payment flow requires much more than just returning a credit card number. That's why we are switching gears to focus on enabling payment apps through the web payments APIs. This means freezing future development on Chrome's built-in basic-card support for now, except bug fixes, and eventually deprecating the support for basic-card. Today, to complete a payment on the web, a user often has to fill in a long form and follow multiple steps through pop-ups and redirects. The web payments APIs enable a much more streamlined flow, where a user can complete all payment steps without leaving the context of the checkout page. For developers, building an equivalent payment flow today requires intricate cross-site coordination using iframes, cookies, and postMessage. Some of these mechanisms are being phased out by browser vendors, as they are also easily abused by trackers that invade users' privacy. The web payments APIs provide a consistent and robust alternative for managing such coordination, and we are working hard to ensure compatibility with the evolving web privacy landscape. Until browser interoperability is improved, we recommend that most merchants integrate with payment apps using their recommended approach. The payment apps will take care of leveraging the web payments APIs where available and appropriate, and gracefully falling back to alternatives elsewhere. Now, what's new in web payments? Let me cover four exciting new functionalities today. Skip-the-sheet is a UX optimization that allows the user to skip directly to a payment app if there is only one eligible choice. This provides a more streamlined flow for payment apps that are launched from branded buttons. Delegation is a new feature in the Payment Handler API that allows a payment app to provide all the information requested by the merchant, including shipping and contact information. Previously, this information always came from the browser. This enhances the ability of a payment app to handle the entirety of a payment flow. Together, delegation and skip-the-sheet enable payment apps to more easily transition their existing flows to the Payment Handler API. Another experimental feature that I'm excited about is the Digital Goods API. It is designed to be used together with the Payment Request API to allow web apps to invoke billing flows provided by native app stores, to provide in-app purchase experiences that are difficult to achieve on the web today. For example, the Digital Goods API can be used to enable payments for apps in the Play Store that use Trusted Web Activities. Last but not least, we have made WebAuthn available in the payment handler UI, so that payment apps can use a biometric sensor to let users sign in or authorize payments. We also believe WebAuthn is a critical technology for enabling low-friction payment authorization on the web and eventually replacing today's browser-fingerprinting-based solutions. We are actively exploring a tighter integration between WebAuthn and web payments. In this video, we have looked back at what web payments is, why the focus has shifted to payment apps, and what's coming next.
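For reference, here's a hedged sketch of how a checkout page might kick off the Payment Request flow Eiji described earlier. The payment method identifier and the amounts are hypothetical placeholders, and a real integration should follow your payment app's recommended approach.

```js
// Hedged sketch: invoking the Payment Request API from a checkout button.
async function checkout() {
  const request = new PaymentRequest(
    // Which payment apps are acceptable (hypothetical method identifier).
    [{ supportedMethods: 'https://example-pay.example/pay' }],
    // What is being paid for.
    { total: { label: 'Total', amount: { currency: 'USD', value: '19.99' } } }
  );

  if (await request.canMakePayment()) {
    // Opens the browser-mediated sheet, or skips straight to the payment app
    // when only one is eligible ("skip the sheet").
    const response = await request.show();
    // Send response.details to your server for verification, then close the UI.
    await response.complete('success');
  }
}

document.querySelector('#buy')?.addEventListener('click', checkout);
```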
If you're interested in learning more, we have recently published a new set of documents at web.dev/payments. Okay, that's everything today from our team. I hope it's been helpful for all of you, learning a bit about cookies, COOP and COEP, DevTools, user agent, referrer, sign-in, and payments. That's a lot, and we will continue to share more with you. Thank you for watching.