Welcome back to Web Dev Live. As a Brit, I'm tickled to be kicking off day two from an EMEA-friendly time zone. Yesterday, we spoke about governments doing great work getting the word out on the web. And one of the most inspirational digital services is from the UK, where I saw a government convene online for the first time. Now, we've all been learning how best to work and learn from home. And at Google, we're trying to get a deeper understanding of the needs of web developers, partnering with the community. As one of our close partners, I wanted to invite our friends at Mozilla to chat more about that work and also get information on what's new in their world. Welcome, Kadir and Victoria. Hi, Dion. Great to be here. Hey, so Kadir, web developers know MDN really well and many may have participated in the last DNA report, but can you get us up to speed a little bit and tell us about its history and what we're trying to do? Yeah, of course. So this really started in late 2017, shortly after CSS Grid was shipped. And CSS Grid was a massive success, but it was also years in the making, and layout had been an issue for web developers at least since the 90s, when we abused tables to implement our UI designs. So we asked ourselves, can we have more of these wins? And we talked to people who worked on the web platform at Mozilla about how they prioritize things. And one thing really stood out, because almost everyone said the same thing. They said, we need to hear more from developers. And it makes so much sense, because none of us can be successful without that input. It's hard to prioritize the right thing without knowing about developer pain points, it's hard to find the right solution, and it's hard to get people to use something if it's not solving that problem, or not solving it in the right way.
So for all of those reasons, we proposed a developer needs assessment, and the DNA in short is meant to be a single and simple tool for hard prioritization, representing very diverse populations and a huge feature space. And it's published on MDN, and that's important, because it's not owned by a single browser vendor. We initially proposed this under the umbrella of the MDN Product Advisory Board, where we have representation from browser vendors like Google and Microsoft and Samsung, but also the W3C and industry stakeholders. And as a community, we need to have at least a common understanding of the facts when it comes to needs, even if we draw different conclusions from them. And for this iteration in 2019, more than 28,000 developers and designers took the 20 minutes necessary to fully complete the survey. And that's from 173 countries total. And that's about 10,000 hours of time contributed by developers and designers to help us understand what their pain points and needs are. And we believe that makes the MDN Web DNA the biggest web developer and designer focused survey ever conducted so far. Yeah, well, when you put it that way, I just really want to say thank you to the community and anyone that took the time to go through those, you know, 20 minutes and get us the feedback. It's incredibly useful to us, so thank you so much. So when you were going through the results, Kadir, the 2019 report, what really stood out to you? You know, one thing really did, and that's web compatibility and interoperability. Four of the top five issues were focused on exactly that topic. And one of the biggest strengths of the web is that there is no single entity controlling the platform, but that doesn't come for free.
Web developers and designers are frustrated by not being able to use features, by having to find workarounds, fiddling with browser differences, and also by the difficulty of verifying that something that works in one browser will not break in another browser. And related to that, it was a bit of a surprise that the top five issues were extremely stable between very different markets. So whether it's China, India, Japan, the US, or France, the top issues for web developers revolve around web compatibility and interoperability. Got it. So when you take this feedback in, how does it actually change the roadmap at Mozilla? Like, what are you now looking to focus on based on this feedback? Yeah, so web compatibility was already a focus at Mozilla, even before this, but we have now doubled down on it. So recently we made our browser compatibility data machine-readable on MDN, and that's now starting to pay off. So if you use VS Code, a very popular code editor, the tooltips have compat data information when you write CSS. And we also recently started a collaboration with caniuse.com to share the data that we have, so we are all looking at the same browser compat information. And I'm sure Victoria can say more about it in a moment, but the Firefox DevTools now come with compat data information built in. Got it. Yeah, the feedback has been really helpful for us. And the web platform team is really working even deeper on stability, the compat issues that you talk about, helping with testing, layout, really understanding what the developer needs are and bringing that into our prioritization too. So it's actually almost time for the next version of the DNA report. So what do we have in store for developers this time around? Yeah, so one thing we're super excited about this year is to see how things have changed year over year. So we want to see whether developer satisfaction has gone up or down, and how the top pain points have changed.
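As an aside, the machine-readable compat data Kadir describes is published as nested JSON in the mdn/browser-compat-data project. As a rough sketch of how tools consume it, here is a lookup against a tiny hand-written stand-in object; the shape mirrors BCD, but the version numbers here are illustrative, not authoritative:

```javascript
// Hand-written miniature of the mdn/browser-compat-data JSON shape.
// Real BCD nests features under e.g. css.properties.<name>.__compat.
const bcd = {
  css: {
    properties: {
      gap: {
        __compat: {
          support: {
            chrome: { version_added: "84" },
            firefox: { version_added: "63" },
          },
        },
      },
    },
  },
};

// Look up when a browser first supported a given CSS property,
// returning null if the feature or browser isn't in the data.
function versionAdded(data, property, browser) {
  const feature = data.css.properties[property];
  if (!feature) return null;
  return feature.__compat.support[browser]?.version_added ?? null;
}

console.log(versionAdded(bcd, "gap", "firefox")); // "63"
console.log(versionAdded(bcd, "aspect-ratio", "chrome")); // null
```

This same data is what powers the VS Code tooltips and the caniuse.com collaboration mentioned above.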
So this year's survey is really the first opportunity to see those year-over-year trends. Got it. Well, I can't wait to get that out to developers, and I hope that everyone who's watching will do us a favor and take some time to get that feedback in, so we can really know what to prioritize in the future for our roadmaps. Now, Victoria, I actually used to work on developer tools at Mozilla. And not only was it the birthplace of great tooling in the browser, from Joe Hewitt's Firebug onwards, but it really continues to push the bar. So I'd love for you to catch us up on what's the latest, what's new in Firefox DevTools. Hi, Dion. As Kadir explained, we know that differing browser support for CSS features is a top issue for web devs. So our team built the Compatibility panel to make it easier to stay on top of this. It lists all the CSS on your website that's unsupported in certain browsers, as well as deprecated styles. You can try it now in Firefox Dev Edition. I really love the UX touches there, especially the turtle and the like, that's great. And in general, a lot of great features have landed there. I'm curious if there's anything on the upcoming roadmap that you're excited about sharing. We've also been working on the Firefox Profiler. It's our performance tool, which features shareable links for collaboration. We've been integrating the recording UI into Firefox so it's easy to get started. This tool is also currently in Dev Edition. As far as our debugger, I want to highlight two unique features in Firefox 77. We added a type of breakpoint that's new to browser tools, the watchpoint. It lets you pause when an object property is accessed or changed. Also, we made source-mapped variables work. When you pause in an original file, we now reverse engineer the scope chain so that variables look correct in the Scopes pane and work in the console.
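Conceptually, a watchpoint behaves like an accessor wrapped around the property that traps every read and write. The sketch below just records accesses where a real debugger would pause; it is illustrative only, not how Firefox implements it:

```javascript
// Conceptual "watchpoint": trap reads and writes of obj[prop].
// A real debugger pauses execution here; we just log each access.
function watch(obj, prop, log) {
  let value = obj[prop];
  Object.defineProperty(obj, prop, {
    get() { log.push(`get ${prop} -> ${value}`); return value; },
    set(v) { log.push(`set ${prop} = ${v}`); value = v; },
  });
}

const log = [];
const player = { score: 0 };
watch(player, "score", log);
player.score = 10;      // triggers the set trap
const s = player.score; // triggers the get trap
console.log(log); // ["set score = 10", "get score -> 10"]
```

The win over a plain breakpoint is that you don't need to know which line of code mutates the property; the trap fires no matter where the access comes from.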
That source map work was six months of incredibly challenging work done by our teammate Logan, who had deep knowledge from being tech lead on Babel the year before. We joked that he's the only person in the world who could have written this. So recently, we really embraced the open design process for a network panel redesign. We sent out a survey and posted early mockups to Twitter and got amazing input. I originally used bold to indicate large files, and someone suggested it'd be more clear to have mouse and elephant icons for small and big. That's how we ended up with the turtle for slow responses. People also told us that when it comes to the domain column, they mainly want to know if it's third party or not. So in this condensed view with the sidebar, we hid the domain column and added an icon that indicates third-party requests. Originally, I made these brightly colored file type icons, and some people loved them and others said it was too much, they looked like candy. So I iterated, toned down the colors, and got to the result you see here. Most of this has landed in the latest Nightly. We hope everyone will try it out and give us more feedback. Awesome. Great, so Victoria, Kadir, thank you so much for taking the time to join us today. We really appreciate the great work that Mozilla continues to do for the web. Now, you spend a lot of time both in your developer tools and also on the core task of building your app UI. So to chat more about modern web UI, let's welcome Una. Hi, Dion. Hi, Una. Now, we were just talking about the developer needs survey with Mozilla, and there was plenty of feedback from developers on layout. So I was just curious, are there any recent additions to the platform that you think target these needs? Oh yes, we've definitely been listening, and CSS has been evolving so rapidly in the past few years, and really months.
So tomorrow I'll be going over a ton of cool aspects of modern layout with CSS Grid and Flexbox, including how to harness the power of CSS functions like clamp(), fractional units, auto-placement, the minmax() function, justification, place-items, the repeat() function and a lot more to create robust layouts, breaking down how powerful a single line of CSS can be. There are also some CSS properties coming down the pipeline that will help with a lot of user needs that haven't yet been met. aspect-ratio is one of them. This just landed in Chrome Canary, and it lets you define width-to-height ratios for media items like images and video. Previously, the way to do this was a hack using padding and calculating a percentage, but now you can set your ratios in a much more readable way. I'm looking forward to this landing in browsers and making a lot of developers' lives easier, because I know this is something that I run into a lot. We're also getting the gap property in Flexbox. This one is exciting because of how many times we've been styling a series of items and wanting there to be space between those items but not around those items. gap enables the parent element to control spacing, not the children, making it easier to style these items uniformly within that parent. Currently you can use gap to create tracks with CSS Grid, but you'll be able to use it with Flexbox layout too, meaning you can leverage all of the benefits of gap with a greater choice of layout mechanism. The Web Animations API is also getting a lot more robust in Chromium 84. Now we have promises, replaceable animations, composite modes, partial keyframes and a way to access CSS animations from JavaScript. Check out the blog post on web.dev for more information about these updates and try them out in Chrome Canary yourself. The @property rule is also available behind a flag in Canary, and it's something that I am particularly excited about, because this allows for semantic variables in CSS.
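To make the layout features Una mentions concrete, here is a small hedged sketch: a one-line responsive grid, aspect-ratio, and gap in Flexbox. At the time of this talk the latter two were still rolling out, so support varies by browser, and the class names here are invented for illustration:

```css
/* Responsive grid in one line: as many 200px-minimum columns as fit. */
.cards {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
  gap: 1rem; /* spacing controlled by the parent, not the children */
}

/* Reserve a 16:9 box for media instead of the old padding-percentage hack. */
.video-thumb {
  aspect-ratio: 16 / 9;
  width: 100%;
}

/* gap also coming to Flexbox, so rows of items get uniform spacing too. */
.toolbar {
  display: flex;
  gap: 0.5rem;
}
```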
With @property, you can declare CSS custom properties that have semantic, typed values and fallbacks. This is part of the CSS Houdini effort, specifically the Properties and Values API. It was previously possible in JavaScript with CSS.registerProperty as a part of Houdini, but the @property declaration brings this into our CSS files, meaning a nice co-location of superpowered styles with the rest of your CSS. The other Houdini APIs to keep an eye out for are the Typed Object Model, the Paint Worklet, the Animation Worklet and the Layout Worklet. Great, so JavaScript seems to have had its time with the addition of async/await, ES modules and the like. It was great to see that evolution. It's really feeling like this is a big time for CSS. I love being able to get features like gap that you mentioned and the like to just make things a lot easier for us, and I'm super excited at how deep you can go on the Houdini side too. But if I use these and I'm building these super rich designs and the like, with this great power comes great responsibility. So how do you think about the role of accessibility here? I love that you bring up accessibility, because accessibility should always come first. I think you're absolutely right that that needs to be top of mind. Your users need to be able to access your content and navigate your product. It is not an enhancement. I think of accessibility as a core feature. And Chrome 83 actually just launched with some new accessibility testing features, which are pretty neat because they allow for visual accessibility testing. So now, through DevTools, you can examine if your UI works for users with various vision deficiencies like blurred vision and four different types of color blindness. Yeah, it's great. We're actually gonna have Paul Lewis come on and walk through that a little bit. So thanks so much.
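A hedged sketch of what an @property declaration looks like, following the syntax that shipped behind the Canary flag; the property name and colors here are made up for illustration:

```css
/* Registered custom property: typed, non-inheriting, with an initial
   value, so the browser knows it's a <color> and can animate it.
   Part of Houdini's Properties and Values API. */
@property --glow {
  syntax: "<color>";
  inherits: false;
  initial-value: rebeccapurple;
}

.button {
  background: var(--glow);
  transition: --glow 0.3s ease; /* typed value means it can interpolate */
}
.button:hover {
  --glow: gold;
}
```

The typed registration is what makes the transition possible; an unregistered custom property is just an opaque string to the browser, so it can't be smoothly interpolated.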
I also have really been enjoying your new CSS podcast with Adam Argyle. Not only are you both really fun to listen to, it's actually been really interesting to watch you go through step by step and teach us the fundamentals. There was a lot that I didn't really know about. Yeah, honestly, we are learning so much as we are going through the fundamentals, and we're having a lot of fun making these episodes. So if you haven't seen it yet, check out the CSS Podcast. Absolutely. So thanks so much for joining us, Una. And we'll see you later on the stream. Bye. Now, there are a few more DevTools features I'm really keen to show you. And no one's better to show them all off than the ever-supercharged Paul Lewis. Hey, Dion, how are you doing? Not too bad, how are you doing, mate? Yeah, pretty good, thanks. All right, so one of the things that we've noticed is that DevTools puts out these console warnings, as you can see on screen. And if you're anything like me, after a while, you start to ignore them. And the reason is that there just can be quite a lot of them. So we've been thinking about that, and what we decided to do is to bring in the Issues tab. Now, if we detect issues on your page, you'll see this bar across the top with the button in the top right-hand corner there that says Go to Issues. If you click on that, it'll take you through to the Issues tab. Now, it might offer you the opportunity to reload the page to get more information. If you click on one of these items, it'll expand, and you can see more information there, as well as potentially some links to content for you to read up on what you could do to fix the issue. So that's the Issues tab. The other thing we've been looking at is Web Vitals. So if you go to web.dev/metrics, you'll see a whole list of metrics here that affect the UX and things that we would like to optimize as web developers.
And we've been looking at ways of exposing this information to you inside of the DevTools UI. So things like first contentful paint or largest contentful paint, for example. So if you go to the Performance tab in DevTools and you take a recording in the Performance tab, you'll see something that looks like this. Now, there is a Timings tab there, or Timings row, sorry I should say, across which you'll see these blocks, and these relate to some of those metrics. So you can see FCP and LCP, first contentful paint, largest contentful paint and so on. So you can start to get information there on some of your metrics. The other thing we've started doing is to add candy striping to your long-running tasks. And you can see that here, I have one task on my main thread that is 70 milliseconds long. And what we're looking for is for tasks to remain under 50 milliseconds. This means that the main thread stays responsive and hopefully we can respond to user interactions quickly. So as you look around your performance recording, if you see this candy striping effect and the red triangle in the corner, you know that you've got a task that's running longer than 50 milliseconds. What we've also added is a total blocking time footer at the bottom. What this tells you is, if you like, the amount of candy striping that you would see across the whole recording. So as you're looking around, if you see that that number is going up, you might want to take a look and see if you have a lot of long-running tasks on your main thread. Bringing that down should hopefully help your user experience. Another thing that we've added is this Experience track. And what's contained within this is layout shift information. So for example, when you've got buttons and so on on your page and they're perhaps moving around, this can cause UX discomfort. So what we want to do is minimize the amount of moving elements on the page.
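Both of those readouts reduce to simple arithmetic. Total blocking time sums the portion of each main-thread task beyond the 50 millisecond budget, and cumulative layout shift sums the shift scores that weren't caused by recent user input. A minimal sketch of both calculations, illustrative only and not DevTools' actual code:

```javascript
const LONG_TASK_BUDGET_MS = 50;

// Total Blocking Time: the over-budget portion of each task, summed.
// A 70 ms task contributes 20 ms; tasks under 50 ms contribute nothing.
function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce(
    (sum, d) => sum + Math.max(0, d - LONG_TASK_BUDGET_MS),
    0
  );
}

// Cumulative Layout Shift: sum shift scores, skipping shifts that follow
// recent user input (those are expected, so they don't count).
function cumulativeLayoutShift(shiftEntries) {
  return shiftEntries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

console.log(totalBlockingTime([70, 30, 120])); // 20 + 0 + 70 = 90

const shifts = [
  { value: 0.12, hadRecentInput: false }, // ad pushed content down
  { value: 0.3, hadRecentInput: true },   // user tapped; ignored
  { value: 0.05, hadRecentInput: false },
];
console.log(cumulativeLayoutShift(shifts)); // about 0.17
```

The shift entries here mirror the shape of the browser's LayoutShift performance entries, simplified for the sketch.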
And so the layout shift here is going to tell you what elements are moving on the page, and where and so on, and the size that they were when they did it. So if I look at this, I have a warning here which tells me that cumulative layout shifts can result in poor user experiences, and that's a link to more information. As well as information on where it's moved from and to, and if I roll over that, I get an overlay on my screen which shows me exactly where on my page the shift took place. You can also get live information about layout shifts by going to the Rendering tab and choosing Layout Shift Regions here in the options. Now, I should say that for people prone to photosensitive epilepsy, this might be a less suitable option, because it can cause flashing of overlays on the screen, but it is there as an option if it's suitable for you. The next thing I want to talk about is WebAssembly debugging. It's an experiment. So if you go into your DevTools settings, go to the Experiments tab and click on WebAssembly debugging, you can switch it on there. What this allows you to do is things like setting breakpoints in your WebAssembly code. So here I've compiled a C program. It's just a Hello World program. But what I've done is I've added a breakpoint on the line that says Hello World. So when this code executes and it hits that line inside of the WebAssembly, it pauses execution just like it would inside the JavaScript. And you can see here in the call stack that I can actually take a look at what's going on in that particular frame, and I can go between my JavaScript and the C and so on and so forth. So that's something that's coming down the line, and it's currently in Canary. So take a look at that. Now, the last thing I want to show you is color vision deficiency emulation inside of DevTools. And there's no better way to do that than to actually give you a demo.
Okay, so here I am in Chrome Canary, and I have a video here running of me and Surma doing Supercharged yesteryear. But you see I have the Rendering tab open in DevTools, and I can emulate various vision deficiencies such as blurred vision, or I can do protanopia. I can do deuteranopia, protanopia. Oh, sorry, tritanopia. And achromatopsia as well. And you see the live effect that it has on the page. So these are physiologically accurate emulations of various vision deficiencies. Now, a vision deficiency isn't an on-off thing like we see here, but rather it's a spectrum. So a person could have a milder form of a vision deficiency or a more acute form. What we've chosen to implement inside of the DevTools UI is the most acute form, the theory being that as you're optimizing your app for accessibility in terms of color and contrast, if you make it work for the most acute form, then you'll include everything up to and including that as well. So that's color vision deficiency emulation inside of DevTools. Thanks, Paul. I'm really looking forward to seeing more later today. Now, one thing I've noticed as we work from home is how seriously people are taking their home setups, whether it be playing with mics and cameras or virtual backgrounds. And recently I saw a demo that would make you invisible on your video feed using TensorFlow.js. And so I really wanted to learn more. So please welcome Jason Mayes from the TensorFlow.js team. Hi there. I'm Jason, and thank you for the introduction there. It's a pleasure to be invited to the show. And yes, I have created an invisibility cloak. So maybe you want to learn more about that. Yeah, Jason, invisibility cloaks are pretty cool. And so maybe you can show us how web developers can create superpowers like that with TensorFlow. Sure, definitely. So if I switch to my slides for just a second, you can see the invisibility cloak that Dion was referring to.
And in this demonstration on the right-hand side, you can see as I get on the bed, the bed is deforming in real time and I'm being removed from the video frame at the same time. And this is running all in the web browser. Now, this is pretty cool, because privacy is preserved, as none of these images are being sent to the server side. And that's super powerful, especially in today's climate where privacy is top of mind. Now, this was created in just under a day, in fact. So it's quite easy to get started with machine learning in the web browser, and we'll see some more demos in just a second. So on that note, I also created a Chrome extension that allows me to use the same stuff we saw before. I was actually using BodyPix to create that, which gives me this image segmentation of my body in real time. And I can now join a Google Meet meeting, as you can see shown on the slide right now. And this can be combined with my previous demo, so I can remove that second person who comes into frame halfway through the GIF, and then it would appear as if it never actually happened. Cool, so can you give us a few more details on how this all works? So essentially all this is using body segmentation, and this is running in TensorFlow.js in the web browser, and it can distinguish 24 unique body areas across multiple bodies in real time. You can see on the right-hand side that this works pretty well when all the settings are bumped up to high, and you can even get the pose estimation showing on the bodies too, which estimates where the skeleton is. These can be used in delightful ways, such as clothing size estimation, which you can see here. This is another prototype I created, and I don't know about you, Dion, but I am terrible at knowing what size clothing to buy in my once-a-year clothing purchase. Totally.
And here you can see I just enter my height, and in less than 15 seconds I can get an estimate of my inner leg, my chest and waist measurements, which the clothing site can then use to estimate what size I am, a small, medium or large. Now, this can even give you superpowers, as you see on this next slide, and one of our community members from the USA has combined this with WebGL shaders to turn himself into Iron Man of sorts, and he can shoot lasers from his eyes and mouth using our face mesh model. So this is pretty cool, and it runs buttery smooth at 60 frames per second in the web browser. And you can even go further, of course. There are many web technologies out there that you might want to combine with machine learning, such as WebXR and WebGL. And if you do that, you can get an example like this from another one of our community members in Paris, France, who can essentially scan a magazine and, if there's a person in it, bring that person into the living room full size, so you can walk up to them and inspect them in more detail. Pretty cool technology, but of course, after seeing this, I thought to go one step further, and if I add WebRTC, I can then teleport myself anywhere in the world in real time. And this is using a complete rewrite, using WebRTC, A-Frame, Three.js and TensorFlow.js together to create this demo. And it really does make a big difference. When I'm seeing someone in my room, who I can walk up to and move around, it's a massive difference compared to a solid rectangle on the screen. So this could be the future of video conferencing, who knows, but it's great to play with these technologies and push the boundaries of the web. That's really cool, from invisibility cloaks to teleportation, that's pretty cool stuff. Yeah, exactly. This changes everything, essentially. Yeah, totally.
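Stripped of the polish, the invisibility-cloak demos above hinge on one per-pixel decision: wherever the segmentation mask marks a person, paint the pixel from a previously captured clean background instead of the live frame. A toy grayscale sketch, one number per pixel; a real BodyPix mask drives the same decision for each RGBA pixel:

```javascript
// "Invisibility cloak" core idea: wherever the segmentation mask marks
// a person (1), replace the live pixel with the clean background pixel.
function composite(liveFrame, background, personMask) {
  return liveFrame.map((pixel, i) =>
    personMask[i] === 1 ? background[i] : pixel
  );
}

const live  = [10, 20, 30, 40]; // current webcam frame (toy pixels)
const clean = [1, 2, 3, 4];     // background captured with nobody in shot
const mask  = [0, 1, 1, 0];     // BodyPix-style person mask
console.log(composite(live, clean, mask)); // [10, 2, 3, 40]
```

In the browser, the mask comes from running BodyPix on each video frame, and the blend happens on a canvas, but the per-pixel logic is this simple.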
So how should web developers, if we zoom out a second, how do web developers generically think about the role of ML and TensorFlow.js and how it could fit into their web applications? Yeah, that's a great question. And obviously, right now, machine learning in JavaScript is still a pretty new thing at the very early stages, but that's super exciting too, because there's so much potential to be unraveled at this time as well. So on that note, I would ask web developers to consider how machine learning might fit into their existing pipelines. Maybe you're developing a content management system. In that case, you could potentially use something like automatic image cropping to detect where a human face is in that image, so then you can make sure that it's cropped nicely when you're resizing with your CSS. Or maybe you want to summarize a blog post article, so you have one paragraph of text that shows in the search results. That's now possible with machine learning too, and that can be done automatically. So I would encourage people to experiment and go outside of the regular box of thinking. And of course, on that note, on this slide, you can see all the different areas JavaScript can run: the browser, server-side, mobile-native, desktop-native, and even internet of things. And TensorFlow.js supports all of these environments too. So maybe you want to combine it with hardware. If you can recognize an object, maybe you can trigger something to happen in the physical world, or something on the server-side that talks to a third-party service. And on that note, TensorFlow.js can essentially run existing models, retrain them via transfer learning, or even allow you to write your own models from scratch if you so desire.
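A typical first pattern with these pretrained models is load, detect, then keep only confident results. The commented calls below follow the coco-ssd model's published API as best I recall (treat them as a sketch); the filtering helper and sample data are plain, hand-written JavaScript:

```javascript
// In the browser you would load the model and run it on an image element:
//   const model = await cocoSsd.load();
//   const predictions = await model.detect(imageElement);
// Each prediction has a class, a score, and a bounding box. Keeping
// only confident detections is then ordinary JavaScript:
function confidentDetections(predictions, minScore = 0.5) {
  return predictions
    .filter((p) => p.score >= minScore)
    .map((p) => p.class);
}

// Sample output shaped like coco-ssd predictions (hand-written here).
const predictions = [
  { class: "dog", score: 0.92, bbox: [12, 30, 200, 180] },
  { class: "person", score: 0.41, bbox: [0, 0, 50, 120] },
  { class: "dog", score: 0.88, bbox: [220, 40, 190, 175] },
];
console.log(confidentDetections(predictions)); // ["dog", "dog"]
```

The bbox values are [x, y, width, height] in pixels, which is what you would use to draw overlay rectangles on a canvas above the image.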
Now, on that note, we have a ton of pretrained models you can use to get started, such as the body segmentation you saw just a little bit ago, but also things like pose estimation, speech commands, face mesh, hand pose, and some cool natural language processing. And just to dive into that a little bit more, you can see how these models work here. So here's the object recognition in action. This model allows you to recognize 90 classes of objects, like these dogs you can see here, and you get the bounding boxes that come back at the same time, which is pretty neat. Or what about this? Face mesh, just three megabytes in size, and it can understand 468 unique landmarks on the face. And this could be cool for making face masks or some kind of AR experience, such as the one you see on the right. ModiFace, which is part of the L'Oréal group, has actually used this for AR makeup try-on, and the lady on the right-hand side is not actually wearing any makeup at all. In fact, she's selecting the color of makeup she wants to try on, and she can do that all in real time in the web browser in a much more hygienic way, which is pretty cool. And then finally, I just want to talk about some of the client-side superpowers you get if we think about running machine learning in the web browser. The first one is privacy, as we hinted at before. Essentially, because we're running in the web browser, none of that data is ever being sent to a server for classification. So that allows you to access sensor data in a way that is great for privacy. Linked to that, of course, is lower latency. As there's no server involved, there's no 100 millisecond or so round-trip time from the mobile device to the server. You cut that out completely by running on the edge. And of course, lower cost. You might spend tens of thousands of dollars renting beefy GPUs and CPUs to run the machine learning models in your server-side environment.
By running on the client side, all of that goes away, because you're using the hardware of the client to run instead. Got it. So how can people get started? You've definitely piqued my interest. Now I want to start playing. Sounds good. So if there's one slide you want to screenshot from today's talk, it would be this one here. On this slide, you can find our websites, our models, GitHub code. We are open source, so feel free to contribute. There's a Google group for asking more technical questions and, of course, some great boilerplates to get started on CodePen and Glitch. And for those of you who want to go really deep, we recommend the Deep Learning with JavaScript book. That's very comprehensive, and even if you have no background in machine learning, as long as you know some JavaScript, it will walk you through everything step by step. So I'd recommend checking that out. And on that note, I would also suggest checking out Teachable Machine right after this show, maybe. In just two minutes, you can use this website to train a model to recognize any object in your room. In just 30 seconds, you take a few images of that thing, hit train, and you'll get a classifier that can then classify that object. If you like it, you can then export this model to a JSON format and use it on any website you like to be more creative and make your own creations using it. And then finally, I'd ask you to come join the community. We've got this #MadeWithTFJS hashtag that you can use, which allows you to share what you've made so we can find it and invite you to our future show and tells, and of course allow others to get inspired by your great work too. So finally, I just want to leave you with one more inspirational example. This guy from Tokyo in Japan is actually a dancer, but he's used TensorFlow.js to make this cool hip hop video.
So machine learning really is now for everyone, and I'm super excited to see how creatives will start using this, and not just the academics: musicians, artists and much, much more. So if you do make something, do use the #MadeWithTFJS hashtag so we can find it and share it for you, and I look forward to seeing what you make. And with that, feel free to stay in touch using the following links, if you so desire, on Twitter and LinkedIn. Great, thanks so much for joining us, Jason. Yeah, and thanks to everyone who joined me for the day two kickoff. So now let's get to today's sessions, where we focus on updates across our tools and the web platform to make developers more productive, as well as the latest in the world of PWAs. Please enjoy the show, and remember the team is here to chat with you on web.dev/live and via YouTube. I'll see you there today, and we'll be back tomorrow morning for the day three kickoff. Hey everybody, Paul Lewis here, just gonna show you some features that have been landing inside of DevTools recently, and I have, as always, Surma with me, and we're just gonna talk those through, navigate around. I've got a few demos and things to show you. How you doing, Surma? I'm good. Good, good. I thought we could start with, I know you've been working on the Performance tab. Yes, in some ways. And that's probably something that people are interested in. Why don't we start with that? Let's start with that then. Okay, so if you've not seen it, in fact, let's just go there right now: web.dev/metrics. Helps if I can type, as always, my typing is woeful, but let's have a look. Nothing has changed. Nothing has changed. web.dev/metrics. Okay, so here you can see we've got the important metrics to measure: first contentful paint, largest contentful paint, first input delay, time to interactive, total blocking time and cumulative layout shift.
Now some of these metrics, it's worth saying from the off, these are all designed to help you improve the user experience in your sites and your apps. But not all of them are suitable for what we'd call a lab setting. Some of them are what we call field metrics. So for example, first input delay: it's not really something you typically test in a lab setting. It's something you do more in the field, with RUM, real user monitoring, and live data. So not all of these make sense in the lab, but the ones that do, we've been trying to get into the performance panel inside of DevTools. Because once you open DevTools, you are in a lab setting. Exactly, you can never ask a real user, can you please open DevTools and quickly do the measurement? The second you need DevTools to measure something, you're in a lab. Yeah, and so our assumption is, predominantly, if you're on localhost on your machine, we tend to refer to that as the lab setting. So let me just run you through very quickly the kinds of things that we've got. So if I go to the web.dev homepage and I've got the performance tab open here in Chrome, this is currently in Canary, so some of these features are very hot off the press. They're subject to change. So just bear that in mind if these things look a little bit different over the coming weeks and months; it's because we're working on them. We just wanted to share them with you because we're excited about some of them, all of them, and we want to show them to you. So if I take a recording: I'm going to go to record, I'm going to hit, in my case, command shift R, which will do a reload without any caching. Or service worker. Yeah, exactly, it's literally as if you hit the network. So I'm going to stop that profile. I left it running for quite some time there, and it's going to process that. Okay, so let me bring that down.
So this, I mean, this is actually a really good trace, a really good performance recording. If you've got something like this in the lab setting, you'll be super happy. And the reason is, if I just zoom in a little bit here, you can see that most of these top level tasks here are all really, really short. And that's really good. That's super short. Yeah, it means that the main thread is remaining responsive. So let me just zoom back out. But let's talk about some of these metrics. So for example, DOM content loaded: that's been around since forever, really, and that continues to be shown here. But you see this timings row contains some new ones like first paint, first contentful paint, first meaningful paint and largest contentful paint. And you can actually see on my screen here, when I roll over largest contentful paint, the element that is the largest contentful paint highlights; we put an overlay on it so that you can see. So immediately, some of these metrics that you're gonna see on web.dev slash metrics are showing up inside of the DevTools timeline. So that's the high level metrics. What we can also do, though, is we can show long tasks. Now, as I said, all these tasks are pretty quick, but I'm running on decent hardware here. This one here is 18 milliseconds. And I think we could probably do another recording and actually show it slowed down. So what I'm gonna do is I'm gonna go to the settings here and I'm gonna switch on CPU throttling. I'm gonna switch it to a six times slowdown. The reason I'm gonna do that is just so that we can see what it would look like on slower hardware. So we have a sixth of a MacBook now. Yeah, exactly. So I'm gonna record, I'm gonna refresh as I did before. As I say, I'm gonna give it time to actually settle down; some of our metrics do need to wait until there's only a couple of requests running on the network and so on. So I'm gonna stop that recording.
And what we start to see is these red triangles. So again, let me zoom in on what's going on here. So there's a couple of things to notice. One is that the task, six times slowed down in this particular case, is 168.54 milliseconds. What we're trying to do is keep all our tasks under 50 milliseconds. And the reason for that is that when we do that, we keep the main thread responsive to user interactions. Because most of the JavaScript that people are running, like touch handlers and mouse click handlers and all those kinds of things, all runs on the main thread. And so what we wanna make sure is that we're not blocking the main thread with work. And so keeping tasks under 50 milliseconds is the goal. So the blocking time. And I mean, let's be honest here, 50 milliseconds is somewhat lenient. If we went to the good old goal of 60 frames a second, it would be even less. But during the first load, we can be a bit more lenient, because nobody expects to interact with the website during the first second of loading. So we can have a little bit of a higher threshold there, but above 50 milliseconds, it can become very noticeable if the main thread is blocked. Yeah, absolutely, great point. In the context of page load, absolutely, this is more about the 50 millisecond threshold. If you're animating, to just repeat that, you'd be going for 60 frames a second and your tasks would need to be a lot, lot shorter. So if you're wondering how much of your recording is actually blocking, so how much candy striping overall do you have going on in here? Well, we actually have this new metric at the bottom, which says total blocking time, which you can think of as the amount of candy striping that's in this recording as a whole. In this particular case, it's 230.05 milliseconds. And we have a link to explain what's going on.
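To keep tasks under that 50 millisecond budget, a long loop can be broken into chunks that yield back to the main thread between them. This is a minimal sketch of the idea, not a DevTools feature; the function name and the 40 ms per-chunk budget are illustrative choices:

```javascript
// Process an array in sub-budget chunks, yielding between chunks so
// input handlers and other main-thread work can run in the gaps.
function processChunked(items, handleItem, budgetMs = 40) {
  return new Promise((resolve) => {
    let i = 0;
    function runChunk() {
      const start = Date.now();
      // Work until the per-chunk time budget is spent, then yield.
      while (i < items.length && Date.now() - start < budgetMs) {
        handleItem(items[i++]);
      }
      if (i < items.length) {
        setTimeout(runChunk, 0); // yield the main thread
      } else {
        resolve();
      }
    }
    runChunk();
  });
}

const results = [];
processChunked([1, 2, 3, 4], (n) => results.push(n * n)).then(() => {
  console.log(results); // [1, 4, 9, 16]
});
```

In a page you might yield with `requestIdleCallback` or a similar scheduling primitive instead of `setTimeout`; the point is simply that no single task monopolizes the thread.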
And you can see that that would take you over to the total blocking time metric information. So I find that quite interesting, because the total blocking time is actually not necessarily the amount of time that the main thread has been blocked: you have a 50 millisecond budget per task that you're allowed to block, but everything that's over counts towards that total blocking time. So that is the amount of time that you went over the 50 millisecond budget. Exactly. I should say that the total blocking time, we tend to stop it at one metric that is not shown here, which is time to interactive. So when we hit time to interactive, and the back end in Blink actually notifies us that we've hit it if it can, we'll stop tracking total blocking time for that value at the bottom. But the candy striping here would tally pretty much entirely with that total blocking time. At the bottom, you've got this number, 230.05 in this case. If that number starts to creep up, that's something you're gonna wanna take a look at, because it means that you're spending quite a lot of time with long running tasks on the main thread, and the chances are it's negatively impacting the user experience. In this case, web.dev is actually really good; it's just that I had to introduce some slowdown, which is a good feature to know about: you can introduce slowdown both in terms of the CPU but also the network as well. You can do that if you so desire. So that's the total blocking time. That's long running tasks as well; we mark those at 50 milliseconds plus with the candy striping and the little red triangle in the corner. Now, the other thing we've also introduced is layout shift tracking. So if I just go to the Google homepage and I do a recording here and I just refresh like so and I hit stop.
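The arithmetic described above can be sketched directly: each task has a 50 millisecond budget, and only the time beyond that budget counts toward total blocking time. The function and the sample durations are illustrative, not the trace format DevTools actually reads:

```javascript
// Sum only the over-budget portion of each task duration.
function totalBlockingTime(taskDurationsMs, budgetMs = 50) {
  return taskDurationsMs
    .filter((d) => d > budgetMs)
    .reduce((sum, d) => sum + (d - budgetMs), 0);
}

// The 168.54 ms task from the recording contributes about 118.54 ms of
// blocking time on its own; the sub-50 ms tasks contribute nothing.
console.log(totalBlockingTime([168.54, 30, 12]).toFixed(2)); // "118.54"
```

This is also why a trace full of 49 ms tasks reports zero total blocking time even though the main thread was busy most of the time.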
Okay, what you're gonna see is a new track if we find any layout shifts; the Experience track, it's called, is the new one, and inside here we've got a layout shift. And if we click on this, let me just bring this up so you can see a bit more, you get information on the layout shift. So the idea of a layout shift is, I think most people have experienced this: you're browsing the web and you're about to tap on a button and it moves somewhere else on the screen, and it's really frustrating. And so we're starting to give you the ways and the means of tracking those layout shifts in the course of interacting with your page and so on. And we put them into this Experience track. So there's some stuff about scoring, which the documentation on cumulative layout shift will explain a little bit more to you, and you can also see whether or not it was in response to some recent user input. We also include some location information, where the element moved from and to, and you can see this here. So it's gone from here; this element has shifted to here, and you can see that overlay. So I'm pretty confident I know what it is. It's probably this privacy reminder here that is coming in. Now, one of the things that we probably need to add is which element it is specifically, and there is that information in the trace; it's just a case of plumbing it through. So that's actually something that I'm gonna hopefully look at. So maybe by the time you see this video, go and have a look in Chrome Canary, possibly we've got the elements there. If not, wouldn't doing the recording with screenshots help? Then you can go through the filmstrip and actually see something shift in. That might be a good way to work around it until DevTools can pinpoint it for you. Yeah, absolutely. That's a great way of doing it. Often, I think when it's your own code, you have a fair sense of what's likely to be moving around.
And the thing to bear in mind is layout shifts do cause problematic UX, and so it's a good thing to try and get those down and ideally removed if at all possible. And the way to do that, normally, is to reserve space for your content ahead of time in your styles, so that you're not just leaving content to move other content out of the way. Sometimes it's not possible, but where you can, if you can mark certain areas as being the correct size ahead of time, that should help an awful lot. So that's the layout shift information. So that's added in. So those are the main bits of performance. So in summary, we had the timings, which are here; we've got the layout shifts; and we also had long running tasks, which we show on the main thread with the candy striping and the red triangle. So in DevTools, we have also been working on WebAssembly and the debugging experience that it gives you, because you write your code in whatever language, then you compile it to this VM bytecode. And you kind of don't wanna step through the bytecode; you wanna step through the original code that you wrote. And for the longest time, WebAssembly and DevTools used source maps to define that mapping from the compiled WebAssembly binary code to the original source code. And source maps were built for JavaScript and minifiers and transpilers, and so that worked quite well, but it's also lacking some capabilities that are quite important. So I have a little example. This C code is actually a sample from Emscripten that I just simplified a little bit. It just draws a little gradient. It doesn't really matter what it does, but this is C code. And I have compiled this to WebAssembly with our new debugging experience, because in native land, in binary land, there has been a debugging format called DWARF for a very long time that has all these capabilities, but until recently, DevTools didn't understand it. So now we have compiled this to WebAssembly. This is what the output looks like.
It just draws this nice little gradient that you see on screen here. But what's cool about this is that now, if you go to the sources panel in DevTools over here, we have our usual JavaScript files, we have our WebAssembly bytecode, which, you know, that's not the kind of code you wanna step through, because that is basically assembly. And while that can sometimes be useful, it doesn't read that well, let's be honest. What you can now find, with DWARF support in DevTools, is that we actually have a mapping to the original C file. Again, this also worked with source maps, but this has some more capabilities. So I can now set a breakpoint here in the C code. Actually, let's set two breakpoints, why not? And so if I reload, the program will be halted once we reach that breakpoint. And then we can continue going through it as we're used to. And you can see the UI updating. So if I now step over this SDL unlock surface, whoop, the gradient shows up, because we're actually stopping the program and updating the UI as we go along. And I think that's pretty cool. So now the... The question that I have for you there, Surma, is: are you able to inspect, say, the values of i or j or alpha and those kinds of things in your loop? So for example, if I hover over this, nothing shows up. And that's exactly the kind of capability that, for example, was missing with source maps, because source maps can't really handle renaming of variables very well. DWARF, however, can. So while we are using DWARF to have a more efficient debugging experience, that capability is now at least possible to build; we haven't quite built that part yet. So we are using DWARF to give you the old experience of stepping through and breakpointing and all these things, but you can't quite inspect variables yet. We are working on that, and only now is it even possible to build. So that's really the exciting part here.
The other part is that we are no longer using a WebAssembly interpreter during debugging, but actually our baseline compiler. So usually, in the olden days, when you would start debugging your WebAssembly with source maps, you would actually experience much, much slower WebAssembly execution. So if you had a long loop and you ran to a breakpoint, you would actually have to wait longer until it was done. Now we use the baseline compiler, which generates much more performant code under the hood. So you can debug with higher performance. And if that isn't a great tagline, I don't know what is, to be honest. So you've kind of pointed at this, I think, a little bit: this is experimental, it's still a work in progress. If somebody wanted to actually try this out for themselves in Canary, what would they do? Where would they go? So it is currently in Canary. It is still being worked on. So in the last couple of days, it kind of oscillated between working and not working. Sometimes I could set breakpoints, sometimes I couldn't. But we are on track for getting this ready for stable. I don't know if we've set a milestone yet; if so, I will update the description of this talk accordingly. But if you want to play around with this, please do. If you just Google for Emscripten and DWARF, the most recent release now has support by default for these DWARF debugging symbols. So if you just update your Emscripten to the most recent release, this should just work out of the box. Just to be clear, there is no source map file among the files that I'm serving, so source maps cannot have provided this debugging experience. This is definitely using the DWARF symbols that are in the WebAssembly binary. Cool. Well, with that, I think we have covered what is important and new in DevTools, didn't we? No, I've got more stuff to show you. I've got the issues. You have more stuff to show? Yeah, switch back to me. Well. All right, so the next thing I want to show you is the issues tab.
Now, you're similar to me, I think: when you see lots of warnings and messages in the console over here, you kind of go, sure, and you start to mentally filter those out. And maybe, like me, you clear the console and you think, okay, it's just too busy, just too noisy, I'll deal with those at some point. Well, if you are like that, I completely understand where you're coming from. So we've added this new thing, which is the issues tab. It says issues have been detected. So if you see something like this, we've detected some issues during the execution of your page, and you can go to the issues tab, which opens a new tab here. Now, sometimes, if the page is already loaded and you bring up the console, it might say, well, there's possibly more information, would you like to reload? So sometimes you'll get an option to reload the page, and you might see more issues coming through that way. So you can see here that on this particular page, some SameSite cookie issues are starting to show up. So you can see I've got the issues listed here, and I can spin this down, and it gives me an explanation of what's going on and how I can resolve this issue directly. It also tells me which cookies, in this case, are affected, and it also links me off to some information on web.dev about how this all works. It's also true of something like mixed content as well, so if I go to something with mixed content, which is HTTPS and HTTP mixed together: similarly, I can go here and it says that there are eight cases where I have mixed content, where all resources should have been loaded via HTTPS, but they haven't been, and it lists the requests and the resources in question. So hopefully that's going to make the console less noisy and make it much clearer to you how you can track down any issues that DevTools has noticed in your site and your app. Okay, so the last thing I want to talk about is color vision deficiency emulation.
So let me fire up this brilliant HTTP 203 video for just a second. Yeah, talking about imposter syndrome, something I personally suffer from. So if you've not seen that, go ahead and watch it, it's a really good video. So what I've got on my screen is the rendering tools inside of DevTools. Now, if you've not seen this, you can go to More tools and then Rendering down here. If you click on that, this tab will pop up, and there's a load of tools just for rendering inside of here. And since we're passing through here: the layout shifts that I talked about earlier in the performance area, layout shift regions. This is a live-updating version of layout shifts. So if you just want to see very quickly what layout shifts you have during the life cycle of your page, you can switch this on, and if you see blue flashes, then that's a layout shift that would have been caught if you'd taken a performance recording. Now I'm not going to show this today because, as it says here, it may not be suitable for people prone to photosensitive epilepsy, and I want to be mindful of that. But if you think this is the right tool for you, definitely check that out when you're looking in the rendering tools. So if I scroll to the bottom here of this list, you'll see that you can emulate vision deficiencies. And if I start playing this video back, you can see that if we work our way through this list, we have blurred vision, which you can see applies a blur to everything. And I can still interact with the page, I can still click on buttons and so on, but you can see the content is completely blurred. We also have protanopia, and there is deuteranopia, tritanopia, and achromatopsia, assuming I'm pronouncing those correctly. If I'm not, then forgive me. I find it really cool that all these effects, so to speak, are applied to the page without disabling the interactivity or the animations or even the video.
So you can really check if all your animation and the entire experience holds up when someone has one of these vision deficiencies. So I think that's just a really cool thing. Yeah, absolutely. So you might have noticed, these seem fairly extreme when you look at this. To me, as somebody who doesn't have any of these vision deficiencies, these look like a very extreme form of visual change. So, I mean, I said this in the introduction with Dion earlier, but to say it again: these deficiencies that we're emulating are physiologically accurate, but the most acute form of that particular vision deficiency. So it's not on-off like we have emulated here, where it's sort of no vision deficiency and then protanopia. If you have one of these vision deficiencies, it's much more likely to be a spectrum, and you may have a certain amount of one of these vision deficiencies. So what we're doing is emulating the most extreme version of all of these vision deficiencies. And that way, if you can make sure your page holds up in the most extreme case, it will also hold up for any less severe case. You can make sure your website remains usable for anyone, really. That is exactly the idea behind it, exactly that. If you're optimizing for accessibility, you want to be as confident as you can be that you're capturing the colors and the contrast and all of that stuff in the most helpful way possible. And so by going for the most acute version of these vision deficiencies, exactly as you said, you can optimize for those, and then anything up to and including those will also be covered as well. So with that, we've talked about the issues tab, we've talked about long tasks, we've talked about layout shifts, color vision deficiency, and WebAssembly debugging with DWARF. I know, right? That is quite a lot of new things in DevTools, and you should definitely try those out. Absolutely, yeah.
So if you want to try any of these out, probably the easiest thing to do is to fire up Chrome Canary and give them a go. If you run into any issues, you can go over here to the help menu and report a DevTools issue, which will create a bug that you can fill in, and that'll make its way... AKA to your inbox. I really hope not. So with that, thanks for watching, and I guess we'll see you around. See you around, bye.

Hello, my name is Shu and I work on the JavaScript specification as well as the V8 project. My name's Leszek and I work as a performance engineer on V8. So Shu, what's new in JavaScript these days? Yeah, a whole bunch of stuff has happened since last year, and you might recognize some of the features we're going to talk about today from our colleagues' 2019 Google I/O talk, because language features take a while to be standardized and to be shipped in the browsers. The ones we're going to be talking about today have shipped. So let's start with the fun stuff. Like I said, we've either shipped or are about to ship quite a few syntax features that should make web devs' lives easier. For this talk, we'll be focusing on two features that'll make dealing with optional values easier. So Leszek, have you ever written code that dealt with configuration? Oh yeah, definitely. Always using a hash map for those things. Yeah, so I'm writing this new chat app, right? Something of a tradition for Google engineers. I made some network parameters configurable, which I keep in this map of configurations called config that you see on the screen. But the network configuration is optional, because it isn't always set by the user, and it has sub-configurations like the server and the port, and maybe those aren't set by the user either. Handling that kind of optionality is kind of a pain. Currently folks do this with logical AND, like you see on the screen. Oh, that's pretty noisy.
Yeah, and for those chains of property accesses, where at any point some property in the middle could turn out to be undefined, we added this feature called optional chaining. Easier to show you on the screen than to talk you through it. So the optional chaining feature is the question mark and the dot instead of a plain dot. Oh, I see. So if netConfig is undefined, then netConfig.server is undefined, and .addr is undefined, and so forth. Yeah, almost. It's a little bit more relaxed than that: it's if it's undefined or null, and specifically we call the set of things that are undefined or null "nullish". So in this case, if netConfig is nullish, the whole optional chain is undefined. If netConfig isn't nullish, but netConfig.server is nullish, then again, the whole thing is undefined. You get the idea. If nothing is nullish, then eventually you get the whole property, the most nested property access. Yeah, cool. That's a lot easier to read. Yeah, I think so too. Now another common pattern when dealing with configurations is default values. And sometimes folks use logical OR for this, like you see on the screen. Oh yeah, I've definitely read that before. Yeah, and it usually works fine, but sometimes it doesn't, and it's really surprising when it doesn't. Suppose I add a configuration for enabling compression to the server. Do you spot the bug? Oh yeah, right. How would you actually explicitly disable compression, right? If it's false, then false or true is still gonna be true. Yeah, exactly. If enableCompression is false, false || true, like you said, is true. So what we really want to test here is not whether something is truthy, which is what logical OR tests for, but whether something is absent or present, and we're already familiar with that concept: that's nullishness. So we introduced this new syntax feature called nullish coalescing, which is the two question marks. And that does exactly what you want here for default values.
It tests for nullishness on the left hand side. If the left hand side is nullish, then it evaluates to the right hand side. If the left hand side is not nullish, then the whole thing evaluates to the left hand side. So in this case, enableCompression is false, so false ?? true will still get you false, because false is not null or undefined. But if enableCompression wasn't present, if it's undefined, then you get the default value of true. That's pretty cool. Where can I use it? So you can use both optional chaining and nullish coalescing in Chrome stable today. Now, enough talking from me. That was just a taste of the new features. You can find more on our website later; we'll show you the link. But you know, we add all these new syntax features, and I'm worried that supporting them all will slow down the parser. V8 is known to be fast, and I don't want to do anything to make it slower. You know what? That's a fair concern to have. When we shipped ES6 back in 2015, we actually saw a big parsing performance drop. This is measured on the Octane CodeLoad benchmark. And we had this big drop during the implementation phase. But actually, nowadays, parsing speed doesn't matter as much as you might think. Oh really? I thought parsing was pretty expensive. Well, it's still not cheap. But in the past year, we've worked a lot to move a lot of parsing off of the main thread, and to be able to parse scripts while they're still loading. So imagine that Chrome sees a script like this. The HTML parser gets up to it, it sees the script and then has to pause the HTML parsing, has to download the script, parse it, execute it, and only then can it continue parsing HTML. I know that isn't strictly true because of optimizations like preloading. No, you're absolutely right. This isn't actually the whole truth, and the download of the script happens a lot earlier if there's a link preload or if the pre-parser finds the script earlier.
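The two features described above can be sketched with a hypothetical config shaped like the one in the talk; the object and its values are illustrative:

```javascript
// Hypothetical chat-app config; `port` is deliberately missing.
const config = { net: { server: { addr: 'example.test' } } };

// Optional chaining: the chain short-circuits to undefined as soon as
// any link is nullish (null or undefined), instead of throwing.
const addr = config.net?.server?.addr;   // 'example.test'
const port = config.net?.server?.port;   // undefined
const proxy = config.proxy?.host;        // undefined, no throw, even
                                         // though config.proxy is absent

// Nullish coalescing: falls back only when the left side is nullish,
// so an explicit `false` is preserved, unlike with `||`.
const enableCompression = false;
const withOr = enableCompression || true;       // true -- the bug
const withNullish = enableCompression ?? true;  // false -- as intended
```

The `||` line is exactly the surprise discussed in the dialogue: a user who explicitly set compression to `false` would silently get it turned back on.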
And if the download moves off of the main thread and earlier in time, then the parse and execute can move earlier too. But the thing is, the parsing itself can happen on a separate thread as well. It's only really the execute that has to happen on the main thread. In particular, if a script is marked as async, you can keep processing the HTML up until the parsing of the async script is actually finished and it needs to execute. And we've had support for this basically forever, but it's been very limited: we've only been able to concurrently parse one script at a time, and we've only been able to do this for async scripts. Yeah, how come it's been so limited? Honestly, just historical technical reasons which don't really hold anymore. So one of the first things that we did was move everything from this dedicated thread into our global thread pool, which meant that these parses could happen at the same time, in parallel. Another thing that we changed was to have synchronous scripts also use this off-thread parsing functionality. I'm kind of confused. You said synchronous scripts, but what's the point of parsing synchronous scripts on another thread? Isn't the whole point of non-async scripts that they block the main thread? Well, that is the point for the execute, but for the parsing: if we're parsing on a different thread, the main thread is free, which means it can do other things. It means that the user can scroll, it means that the user can type, it means that we can execute other JavaScript, like on-click handlers. So it's actually very useful to have this empty space here on the main thread. Ah, okay, I see. This is the difference between improving interactivity versus just improving the loading time. Right, but we can improve loading as well. Because the parsing is happening on a separate thread, we can actually move it earlier. We can start parsing when the download starts, and then, as data comes in from the network, we can feed it into the parser.
And then the actual parse time doesn't matter as much as you might think. All that we need is for the parser to be faster than the network. Really? But networks are already pretty fast. Not always, but usually they are, fair enough. And, you know, caches are even faster. So we can't completely ignore parser performance. So we have invested a lot into improving the single-threaded parser performance as well. Starting in 2018, we put this big effort in, put some of our best engineers on it, and we've had actually very good results in improving parser performance just through programming optimization. Yeah, up and to the right. That's the kind of graph I like to see. Really fascinating stuff. I learned quite a bit in just the past five minutes about making parsing and compiling faster, and web app performance in general. And you got me thinking about this other big chunk of web app performance, which is memory. I was doing this thing the other day with my chat app, and, you know, I got it basically working, and I was trying to measure the performance of the packets that I was getting from the server. I wrote this little moving average class to compute the latency moving average of the packets that I was getting from the WebSocket. You see there that I add a message listener, and basically all it does is accumulate events into the events array, and I use that later in this compute function, which I don't show, to actually compute the moving average. And the way I use that is I have this component: when I start measuring, and I want to see the live statistics of the moving average of the latency, I make a new instance of it, and then when I stop, I null it out, because I don't want to keep all the events I accumulated in memory. I know that V8 garbage collects memory that is no longer reachable.
And as long as the moving average is reachable through the this.movingAverage property on the moving average component, the garbage collector cannot collect it, which is why I null it out. That makes a lot of sense to me. Yeah, and I thought this would work fine. But then what happened was, you know, as a chat app, I kept it open for a while, and I opened the memory pane. I see every once in a while that a GC happens and it collects some memory; the memory goes down a little bit. But it's pretty clear that the trend is up and to the right. This is one of those graphs where up and to the right is actually bad. And what this was basically showing me is that it's a memory leak, right? Every time a GC happened, it wasn't able to collect all the memory, so I just kept accumulating more and more memory. And eventually, if I kept this chat app open for another day or so, my computer would have run out of memory. A memory leak? But the GC only keeps objects that you can actually reach, and you nulled out your moving average field, so the garbage collector should be able to reclaim its memory, shouldn't it? Yeah, so it's a common mistake, but it's still pretty subtle. I'm sure the more seasoned web developer would have spotted it right away. So what's going on is that the WebSocket is holding on to all the event listeners strongly, which means that until they are explicitly removed, everything that is reachable via an event listener is also considered reachable, and thus not collectible by the garbage collector. So you see that use of this.events.push inside the event listener? As long as that use is in there, the whole moving average instance is reachable from within the event listener, and thus not garbage collectible. So even when I nulled it out in the moving average component, it was still considered alive by the garbage collector.
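The leak pattern being described can be sketched with a plain EventTarget standing in for the WebSocket; the class shape and names below are illustrative reconstructions, not the actual code from the talk:

```javascript
// A MovingAverage whose listener closure captures `this`. As long as
// the "socket" holds the listener, it also holds the whole instance.
class MovingAverage {
  constructor(socket) {
    this.socket = socket;
    this.events = [];
    this.listener = (ev) => this.events.push(ev.latency);
    socket.addEventListener('message', this.listener);
  }
  compute() {
    if (this.events.length === 0) return 0;
    return this.events.reduce((a, b) => a + b, 0) / this.events.length;
  }
  // The "disposable pattern": callers must remember to call this
  // before dropping their reference, or the instance leaks.
  dispose() {
    this.socket.removeEventListener('message', this.listener);
  }
}

const socket = new EventTarget(); // stand-in for a WebSocket
const avg = new MovingAverage(socket);
for (const latency of [10, 20, 30]) {
  const ev = new Event('message');
  ev.latency = latency;
  socket.dispatchEvent(ev);
}
console.log(avg.compute()); // 20
avg.dispose(); // without this, nulling `avg` out would not free it
```

Setting `avg = null` without calling `dispose()` leaves the instance reachable through the socket's listener list, which is exactly the leak shown in the memory pane.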
To deal with this, folks often use what's called the disposable pattern, where I have a method, called dispose, that manually removes the event listener. And that's kind of annoying. To use it, before I null the instance out in the moving average component, I have to remember to manually call dispose. What is this, C++? You have to manually manage your memory? I thought the whole point of garbage collection was that you don't have to bother with that sort of stuff. Yeah, exactly. And it's so easy to forget, too. And this is all because the event listeners can't be garbage collected until you manually remove them. So what if there were a way to tell the engine: don't let me keep you from garbage collecting this thing, even though it's reachable? Then you wouldn't have to remember to manually call dispose, or even need the disposable pattern. And it turns out there's a new standard feature in JavaScript that lets you do exactly this: WeakRefs. And before we go into it, I have to give a quick disclaimer. WeakRefs are an advanced feature that's hard to use correctly, because garbage collection is unpredictable and very different from browser to browser, and even different from run to run of the same browser. Because of that unpredictability, we didn't add weak references to the web for many years, and you will hopefully never run into a memory leak or a bug that legitimately needs them. But on the rare occasion that you actually do legitimately need a WeakRef, finally you can use it and fix your problem at the root. All right, back to the main programming. So how am I using WeakRefs here to solve the previous problem? I still have this event listener, but now, instead of directly registering that event listener function with the socket, I wrap it in a WeakRef. The function is what's called the target of the WeakRef. And inside the actual event listener, I deref the WeakRef and call the function.
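A minimal sketch of the dispose pattern just described, reusing the hypothetical MovingAverage names from earlier; the point is the manual removeEventListener call that's so easy to forget:

```javascript
// Sketch: the disposable pattern — the listener must be removed
// manually before dropping the last reference to the instance.
class MovingAverage {
  constructor(socket) {
    this.socket = socket;
    this.events = [];
    this.listener = (ev) => this.events.push(ev);
    socket.addEventListener('message', this.listener);
  }
  dispose() {
    // Without this, the socket keeps the listener — and through it,
    // this whole instance — reachable forever.
    this.socket.removeEventListener('message', this.listener);
  }
}

const socket = new EventTarget();
let avg = new MovingAverage(socket);
// ... use avg ...
avg.dispose(); // easy to forget!
avg = null;    // only now is the instance really collectible
```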
And this indirection basically means that the function that is actually holding the moving average component alive via this.events.push is no longer kept from being garbage collected, because it is only weakly held, inside a WeakRef. Okay, and what does WeakRef.prototype.deref return? I see you're using optional chaining function call syntax here. Yeah, good eye. That was not an example we showed earlier, but like optional chaining for property access, you can also optionally chain a function call. So if it's undefined, you don't end up making the call, and the whole expression is undefined. But that also suggests that deref, once the target is actually collected, will return undefined. To recap: it basically means you have to manually call deref, because we're no longer preventing the garbage collector from collecting the event listener, since it's wrapped in a WeakRef. So every time you want to access it, you have to manually deref it, and if the garbage collector has collected it, deref will return undefined. Okay, so in this case the listener is reachable via this.listener, and once a particular moving average instance isn't reachable anymore, and the component nulls it out, the whole thing can be collected. Right, exactly. Because we're no longer accidentally keeping the moving average alive via the event listener, we can go back to what I naively thought would work in the first place: when I no longer need all the data in the moving average instance, I just null it out and let the GC do its thing. Okay. No wait, hold on. Now you've got this strongly held listener, the actual event listener that derefs the WeakRef. Yeah, that's a good point. You know, I thought you wouldn't spot that, but that's exactly right.
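The WeakRef version might look like the following sketch (again with the hypothetical MovingAverage names): the real listener is held strongly only via this.listener, and the wrapper registered on the socket reaches it only through a WeakRef:

```javascript
// Sketch: register the listener via a WeakRef so the socket no
// longer strongly holds the MovingAverage instance.
class MovingAverage {
  constructor(socket) {
    this.events = [];
    const listener = (ev) => this.events.push(ev);
    // Strong reference lives on the instance itself...
    this.listener = listener;
    // ...while the wrapper on the socket holds it only weakly.
    const weakListener = new WeakRef(listener);
    socket.addEventListener('message', (ev) => {
      // deref() returns the target, or undefined once it has been
      // garbage collected; ?.() skips the call in that case.
      weakListener.deref()?.(ev);
    });
  }
}
```

Once the component nulls out its MovingAverage field, nothing strongly references the inner listener anymore, so the GC is free to collect the whole instance.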
Even with this WeakRef indirection, I still have this event listener; remember, the socket still strongly holds on to all of its event listeners until I unregister them. So I still have this extra event listener. What do I do there? There is a companion feature to WeakRefs that lets me do exactly what's needed, which is: I want the garbage collector to tell me when it has collected something, so that I can perform some action at the point that an object has been collected, or in GC parlance, finalized. And that feature is called FinalizationRegistry. On this slide, what you see is that I make a FinalizationRegistry, and when I add the new event listener, I also register with the registry. When the inner listener, the thing that actually does the this.events.push, is collected, and remember, it's collectible now because it's only held via a WeakRef, the registry is going to run this function that I passed in, asynchronously, to remove the event listener, cleaning up all the excess memory. Now, again, this is an advanced feature, and hopefully you'll never need it. So it doesn't actually pass the object itself into the finalizer? Good observation. You see that the things that actually get passed to the finalizer are some other values. The object whose finalization you want to observe has already been collected, so you don't get that back. In this case, the things we need to perform the finalization action are the socket and the wrapper listener, and that's what we pass to the FinalizationRegistry when we register. All right, that makes sense. Yeah, and like I said, this is an advanced feature, and this example here is pretty dense. I recommend that you follow the link on the screen there to read our full explainer for the feature on the v8.dev website. So with all of that work in place, I opened up the memory panel.
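Putting the two features together might look like this sketch (the helper name addWeakListener is made up for illustration). Note that GC timing is unpredictable, so the cleanup callback may run late, or not at all if the program exits first:

```javascript
// Sketch: clean up the wrapper listener once the real listener is
// garbage collected.
const registry = new FinalizationRegistry(({ socket, wrapper }) => {
  // Runs asynchronously, some time after the target was collected.
  socket.removeEventListener('message', wrapper);
});

function addWeakListener(socket, listener) {
  const weak = new WeakRef(listener);
  const wrapper = (ev) => weak.deref()?.(ev);
  socket.addEventListener('message', wrapper);
  // The held value passed here is what the callback receives later;
  // the collected object itself is never passed back.
  registry.register(listener, { socket, wrapper });
}
```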
Again, I kept my chat app open for a while. I start and stop measuring the latency, and now I see that every time a GC does happen, it's able to reclaim basically all of the memory, and over a longer period of time, I'm no longer accumulating memory. Yeah, it looks like I fixed the leak. Sounds pretty tricky, though. I've taken out the garbage before, and I'm not particularly deterministic about it myself. Yeah, garbage collection is not predictable; it's non-deterministic. Don't depend on it always running, and that's why we keep saying that WeakRefs and FinalizationRegistry are advanced features. And that's a good point, too. Given the unpredictability of the garbage collector, are there other things the engine does to make apps slimmer? Actually, yeah. V8 is doing a lot of work to reduce its memory consumption. There have actually been two major projects that landed last year focused on this: pointer compression and V8 Lite. And I can talk about both of them very quickly. So, pointer compression. First of all, you've probably heard that machines are 32-bit or 64-bit. On 32-bit machines, we have 32-bit pointers; on 64-bit machines, we have 64-bit pointers. And the whole point of this, pun intended, is that 32-bit pointers can reference up to 4 gigabytes of memory, while 64-bit pointers can reference up to 18 exabytes of memory, which is quite a lot more. And Chrome wants to run as a 64-bit process so that it can access more than 4 gigabytes of memory. Yeah, Chrome definitely needs more than 4 gigs. Yeah, all right. We've all seen the same memes, and, you know, fair enough: if you've got a hundred tabs open with a thousand images, and they're playing games and music, that's going to use up memory. But not necessarily each individual tab, not necessarily each individual V8 instance.
And the key observation of pointer compression is that we can probably restrict each V8 instance to less than 4 gigabytes. And if we can restrict it to less than 4 gigabytes, that means we can pre-allocate a 4-gigabyte region for it and force all objects to be allocated inside that region. And now, instead of referencing those objects by a 64-bit pointer, we can reference them by an offset. Under pointer compression, you take your 64-bit pointer and split it in half: into a base and an offset. The base is the start of that 4-gigabyte allocation region, and the offset is the offset within it. And then you only have to store the offset on objects, which means your pointers are half the size; they're back to 32 bits. Ah, I'm guessing it wasn't just as easy as that. It definitely wasn't. It was a whole journey, and there's a whole blog post describing that journey, which was very exciting. But as a little spoiler, I can tell you that on typical websites, we reduced memory by about 40%. Yeah, those are some very impressive numbers, 40%. But what if a web app or a Node.js program really wants to use more than the 4 gigs? Are you restricting apps to only 4 gigs of memory? Well, kind of, but also not really. First of all, with pointer compression, those objects are a lot smaller, so you can fit a lot more of them into that 4-gigabyte allocation region. And this 4-gigabyte limit only applies to a single V8 instance's JavaScript object heap. So, for example, typed arrays have their own external backing memory, so they're not included. Wasm instances have their own 4-gigabyte allocation region, so those are separate. Even other V8 instances, inside web workers and in other tabs, have their own 4-gigabyte allocation regions. So you're only restricting each individual V8 instance, not all of them together. The other big project last year was V8 Lite.
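The base-plus-offset arithmetic described above can be illustrated in a few lines of JavaScript with BigInts. This is purely illustrative arithmetic, not V8's actual implementation, and the addresses are made-up values:

```javascript
// Illustration of the base + offset split behind pointer compression.
// A 64-bit address inside a pre-allocated 4 GiB region can be stored
// as a 32-bit offset from the region's base.
const GIB = 1n << 30n;
const base = 0x7f00_0000_0000n;      // hypothetical start of the region
const address = base + 0x1234_5678n; // some object inside it

const offset = address - base;       // fits in 32 bits
if (offset < 0n || offset >= 4n * GIB) throw new Error('outside region');

// To dereference, add the base back:
const restored = base + offset;
console.log(restored === address); // → true
```

Only the 32-bit offset needs to be stored on each object; the base is shared per V8 instance, which is where the memory savings come from.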
And this was a really interesting one, because we thought to ourselves: what would happen if we just gave up on performance and tried to improve only memory? How far could we actually get? This was for memory-constrained devices where V8 just couldn't get the memory it needed to run at all. Yeah, that's an interesting thought experiment. I guess if you run slowly, that's better than not being able to run at all because you're out of memory. Right, absolutely. The approach we took was to look at typical websites and see what kinds of things are actually taking up memory there. Forty-ish percent was user data. There's not much we can do about that with targeted optimizations; projects like pointer compression reduce it a lot, but we can't really reduce the amount of data that users create. And there was this big bucket of "other", because there's always a big bucket called "other", and we couldn't reduce that with targeted optimizations either. But we did look at some of the top users of memory, and we decided to try to target those. Right, so right off the bat, if you're not worried about performance at all, you don't need to optimize code. That makes sense to me. Absolutely. And if you don't optimize code, you don't need to collect type feedback either, because that's just storing the data we need for optimization; it's only used for performance. Even the bytecode that we generate, you don't have to store that. You can just compile it on the fly whenever you need it and get rid of it afterwards. That sounds like more than a little, though. Bytecode is the unoptimized code, and if you're even getting rid of that, it sounds like you're giving up more than just a little bit of performance. Yeah, the first prototypes of V8 Lite were pretty slow. But then we realized that we could get a lot of these gains without sacrificing performance at all, just by being a little bit lazier.
Yeah, I'm something of an expert at being lazy myself. I'm pretty good at being lazy too. But as an expert in laziness, you know that being lazy doesn't mean not doing anything at all. It means not doing anything until you're really required to. So we took the same approach with V8. Let's talk about those feedback vectors that I mentioned previously, the type feedback. You're not going to get much benefit from type feedback if you only run a function once or twice; it only starts benefiting you after you've run the function tens or hundreds of times. So we can delay creating that type feedback until a function has already run a couple of times, which gets rid of some of those feedback vectors, but not all of them. Same thing with source positions. We only need those for printing line numbers when we print exception stack traces, or for showing stack traces in DevTools. So if we can delay calculating those until later, we save a lot of space as well. Even with bytecode, we have the capability of getting rid of bytecode we don't need. So we can throw away old bytecode, keep around the bytecode we're still using, and save a little bit of memory there. And there were a bunch of tiny projects targeting these top memory users, which are described in this blog post in a lot more detail. But again, spoiler alert: they reduced memory by 10 to 30% on typical websites. Nice. So there's actually been a lot more going on in V8 in the last year; we only really had time to talk about a couple of projects. I recommend you visit our blog, where we post about new versions of V8 and about exciting new things we just like to talk about. It's a great read, and we look forward to seeing you there. Thank you very much to all the viewers who joined us for this whirlwind tour of what's new in the JavaScript language, and of the new developments in the engine itself that make running JavaScript both faster and less memory-hungry.
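The "be lazier" idea above is a general pattern: don't allocate metadata until it has proven useful. Here's a rough sketch of that idea in plain JavaScript — nothing to do with V8's internals, with made-up names and an arbitrary warm-up threshold:

```javascript
// Sketch: allocate per-function metadata only after enough calls
// to justify it (the laziness pattern, not V8's implementation).
const WARMUP_CALLS = 2;

function withLazyFeedback(fn) {
  let calls = 0;
  let feedback = null; // not allocated while the function is "cold"
  return function (...args) {
    calls++;
    if (feedback === null && calls > WARMUP_CALLS) {
      feedback = { callCount: 0 }; // stand-in for real metadata
    }
    if (feedback) feedback.callCount = calls;
    return fn(...args);
  };
}

const double = withLazyFeedback((x) => x * 2);
double(1); double(2);   // still cold: no metadata allocated
console.log(double(3)); // → 6, and now the metadata exists
```

Functions that run only once or twice never pay the memory cost at all, which is exactly the win described for feedback vectors.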
We definitely didn't have time to go into all the new features that were added to JavaScript, so please give our blog a read. Thank you very much. Thanks, everyone. Hi, everyone. My name is Mathias, and I'm here to tell you what's new in Puppeteer. But before we can do that, we should probably talk about what Puppeteer is in the first place. Puppeteer is a browser automation library for Node. It lets you control a browser using a simple and modern JavaScript API. After installing it with npm install puppeteer, you can require Puppeteer in your Node script and start automating. The first step to browser automation is to launch an actual browser, and with Puppeteer, that's just one line of code. Next, we open a new page; this is equivalent to opening a new tab in your browser. Now let's navigate to a URL. This line of code ensures that the page has finished loading before continuing with the rest of the script. Then we take a screenshot and save it to a file, before finally closing the browser. And that's it. That's the entire script. We did all of that with just a few lines of code. And Puppeteer can do much more. You can generate PDFs, evaluate JavaScript in pages, enter text into input fields, click on elements. Almost anything you would do manually when using a browser can be automated with Puppeteer. The Puppeteer project is fully open source and has received contributions from individual contributors all around the world, as well as from companies like Mozilla, Sauce Labs, and Microsoft. At Google, the Puppeteer team consists of Chrome engineers who also work on DevTools. That might sound a little strange at first, but it actually makes sense, because Puppeteer is built on top of the same underlying protocol that DevTools uses to communicate with the Chromium backend. Because of this, Puppeteer also gives you access to advanced browser functionality that is usually only available through DevTools.
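The script described above might look like this sketch. It assumes Puppeteer is installed (npm install puppeteer) and is wrapped in a function so the file can be loaded even where Puppeteer isn't present; the URL and file name are placeholders:

```javascript
// Sketch of the launch / navigate / screenshot script.
async function screenshot(url, path) {
  const puppeteer = require('puppeteer'); // assumes `npm i puppeteer`
  const browser = await puppeteer.launch();     // launch a browser
  const page = await browser.newPage();         // open a new tab
  await page.goto(url);                         // waits for the load
  await page.screenshot({ path });              // save to a file
  await browser.close();
}

// Example (run where Puppeteer is installed):
// screenshot('https://example.com', 'example.png');
```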
For example, you might know that DevTools lets you emulate print media so that you can easily debug print styles. Well, Puppeteer lets you do the same thing in an automated script. Here we call page.emulateMediaType to force print styles, and then we save the result as a PDF. Okay, now that you know what Puppeteer is, what it can do, and who is working on it, let's take a look at some recent feature additions. Similar to emulating print styles, we recently added DevTools support for emulating light and dark mode, as well as other so-called CSS media features. We then shipped a new Puppeteer API that lets you perform the same emulation programmatically. This Puppeteer script takes two screenshots of your web app: one in light mode and one in dark mode. It works independently of your operating system settings. One of my favorite features of web.dev/live is the schedule, which adapts to your local time zone. I live in Germany, so when I view the schedule, I see something like this: today's event started at 2 p.m. for me. But someone in Tokyo, for example, would see a different time; for them, the event started at 9 p.m. I love that the website just tells me what I need to know in my local time. Nobody likes doing time zone math. To make it easier to test this kind of time zone-aware functionality, we added DevTools support for emulating arbitrary time zones. Yesterday's event started on June 30th at 6 p.m. for me, but for someone in Tokyo, it was already 1 a.m. on July 1st. In addition to the new DevTools functionality, we also added a new API to Puppeteer to let you change time zones programmatically. This script emulates various time zones and then executes some time zone-dependent JavaScript in the page context. We're logging the same date, but in two different time zones, and that produces different output. Here's another example.
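The two emulation APIs mentioned above can be sketched like this. The method names page.emulateMediaFeatures and page.emulateTimezone are Puppeteer's real API; the helper name and its structure are illustrative, and it expects an already-opened Puppeteer page:

```javascript
// Sketch: emulate dark mode and an arbitrary time zone on a page.
async function emulateDarkTokyo(page) {
  // Force prefers-color-scheme: dark, independent of OS settings.
  await page.emulateMediaFeatures([
    { name: 'prefers-color-scheme', value: 'dark' },
  ]);
  // Emulate an IANA time zone; date code in the page now sees Tokyo time.
  await page.emulateTimezone('Asia/Tokyo');
  return page.evaluate(() => new Date().toString());
}
```

Calling the same page.evaluate under two different emulated time zones logs the same instant formatted differently, which is exactly the demo described in the talk.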
This Puppeteer script forces the Tokyo time zone, then loads the web.dev/live page, and finally takes a screenshot of just the schedule, similar to the side-by-side screenshots we saw earlier. DevTools recently gained support for simulating the effects of various vision deficiencies, including blurred vision and color vision deficiencies. This can help you identify accessibility issues related to color, such as bad contrast. And guess what? We added a corresponding Puppeteer API that lets you apply these simulations programmatically. This script takes a screenshot of the web app after simulating blurred vision, achromatopsia, or total color blindness, and deuteranopia, a form of red-green color blindness. One feature we're still experimenting with is the ability to register and use custom selector query handlers. Many Puppeteer APIs deal with selector strings, which by default use querySelector or querySelectorAll to find elements in the page. We've heard from users that they want to be able to provide their own selector query handlers with custom logic, and this new feature now makes that possible. You can imagine providing a custom text handler that looks for DOM nodes containing a string of text, or maybe you want to select elements across shadow DOM boundaries, which querySelector doesn't let you do. There's one more feature I want to talk about, and it's a little different from all the API additions we've been covering until now. Let's go back to our very first example: launching a browser, navigating to a URL, and taking a screenshot. Puppeteer was originally built for Chrome, so when you call puppeteer.launch, it launches a Chromium browser by default. You can now also specify this explicitly by using the product option. Okay, so we added a new product option. By itself, that's probably not very interesting, but here comes the exciting part: instead of Chrome, you can now specify Firefox, and then use the same Puppeteer API to test a real Firefox browser.
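The vision-deficiency simulation might be scripted like the following sketch. page.emulateVisionDeficiency is Puppeteer's real method name; the deficiency type strings match the ones discussed above, while the helper name and file naming are illustrative:

```javascript
// Sketch: screenshot the app under several simulated vision deficiencies.
async function simulateDeficiencies(page, prefix) {
  for (const type of ['blurredVision', 'achromatopsia', 'deuteranopia']) {
    await page.emulateVisionDeficiency(type);
    await page.screenshot({ path: `${prefix}-${type}.png` });
  }
  await page.emulateVisionDeficiency('none'); // reset when done
}
```

Comparing the resulting screenshots is a quick way to spot color-contrast problems that only show up for some users.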
By changing just this one line, we are now automating Firefox instead of Chrome. Firefox support for Puppeteer is the result of an ongoing collaboration with Mozilla. Part of this effort involves patching Puppeteer itself, but a big chunk of the work happens in the Firefox code base. The Puppeteer Firefox implementation is still experimental, and so not all of the Puppeteer APIs are compatible with Firefox yet. But Mozilla has been making great progress here. In fact, as of mid-May, exactly 319 out of the 638 tests in Puppeteer's test suite are passing on Firefox. That's exactly 50%. We're hoping to ship Puppeteer with more complete Firefox support soon. Longer term, we would love to support Safari as well, and we're actively working on making that happen in collaboration with other browser vendors. We believe the right way to get to a fully cross-browser Puppeteer is by standardizing a protocol that all browsers can implement, instead of building on top of the proprietary Chrome DevTools protocol. In addition to all those new features, a lot of work has been going on behind the scenes of Puppeteer. We recently finished migrating the code base to TypeScript, we simplified our test runner, we considerably improved the robustness of our continuous integration setup, and our documentation keeps getting better and better. This work is often less user-visible, but it's crucially important, because it enables us to iterate more quickly and more confidently. I hope you enjoyed this overview of what's new in Puppeteer. Thanks for listening, and see you next time. Hi, my name is Andre Bandarra, and in this video, I'm going to show you how to use your progressive web app inside an Android application without writing a single line of native code. Progressive web apps, or PWAs, combine the reach of the web with capabilities that were once only available to native apps. If you are new to PWAs, read more about them at web.dev/progressive-web-apps.
It is natural that developers building great PWAs want to reuse those experiences inside their Android applications. In the past, possible ways for a developer to use their progressive web app inside an Android application included using the Android WebView or embedding a browser engine. The WebView doesn't provide support for many of the newer capabilities of the web, like push notifications or Web Bluetooth, so the result can be a subpar experience compared to the browser the PWA was built for. Creating and maintaining an app with an embedded browser requires a considerable amount of engineering effort and produces an app that's larger than a native equivalent. At last year's Google I/O, we announced trusted web activities, which allow developers to use their progressive web app inside an Android app, in a full-screen tab that is powered by, and has the same features and capabilities as, the browser providing it. This leads to a low development cost and a small application size. Even though trusted web activities provide a better alternative for using a PWA inside an Android app, developers still need some knowledge of native application tooling and development. So, to create an easier path for developers who want to create an Android app with their PWA inside it, we have created Bubblewrap, a Node.js project that contains both a library and a command-line interface developers can use to generate and build their Android application. In the next few minutes, I'd like to guide you through how to configure Bubblewrap and use it to generate an application from an existing progressive web app. I'm going to use Rowan Merewood's persistence app as a starting point, but you can use any existing progressive web app. Check the video description for the link to the persistence app. We'll need to modify the application later, so I'll open the app, scroll down, and click on the code link. Then I'm going to remix the project so we can modify it.
We can get the link to the remixed app by clicking on Share, then Live App, and then copying the link. We are going to need that information soon. In order to use Bubblewrap, we need Node.js 10 or above installed on the development computer, and, optionally, an Android device set up in developer mode so we can test the application. Check the link in the video description for more information on how to set up an Android device for developer mode. Bubblewrap builds on top of the native SDK tooling, so we'll start by downloading the Android command line tools and the Java Development Kit, or JDK, version 8. To download the Android command line tools, we can use the shortcut in the Bubblewrap CLI documentation, which is linked in the video description. On the page, click on the link for your operating system, accept the license, and click on download. The Bubblewrap CLI documentation also links to the correct version of the Java Development Kit. On that page, choose your operating system, then the architecture, then download the compressed tar file for the JDK. In our terminal, we now create a directory where we can place both dependencies. Then we unzip the command line tools and then the Java Development Kit. Make sure to take note of the directories where those files were decompressed, as we're going to need them later. I like to rename the JDK folder to just "jdk", as it's easier to remember. With the dependencies now ready, we can install Bubblewrap using npm install. With Bubblewrap and its dependencies now installed, we can start the creation of the Android app itself. Let's start by creating a folder for it. And now we can initialize the Android project by calling bubblewrap init and passing it the URL of the web manifest. When Bubblewrap runs for the first time, it will ask for the locations of the JDK and the Android command line tools we downloaded previously, while also automatically installing other dependencies.
Then the CLI will ask you to confirm values read from the web manifest and fill in any missing required values needed to create the Android app. We can, for instance, change the start URL so that we can use Google Analytics to measure how often our users are opening the PWA from the Android app. Android applications need to be signed with a self-generated key in order to be uploaded to the Play Store. If Bubblewrap is unable to find an existing key, it will prompt the developer to create one. So let's go ahead and create it, and make sure to take note of the passwords you choose. Finally, we can invoke bubblewrap build to build the project. The command will output three important things: the quality criteria for the PWA, an assetlinks.json file used to validate the domain opened inside the trusted web activity, and a signed Android application that can be uploaded to the Play Store. Bubblewrap will check the quality criteria against the URL used to launch the trusted web activity. We strongly recommend that your PWA passes the quality criteria, which are measured using Lighthouse against the start URL and consist of a minimum performance score of 80 and passing the PWA check. In order to be shown in full screen, developers need to implement Digital Asset Links. Bubblewrap takes care of the configuration of the Android application, but there is one extra step that needs to be done in the web app: the content of the assetlinks.json file needs to be made available at .well-known/assetlinks.json at the root of the domain. In my remixed project, I'll create a .well-known/assetlinks.json file, then I'll place the content of the file generated by Bubblewrap into it. The application is now fully set up. If you have an Android device in developer mode, you can now connect it to the computer and run bubblewrap install to launch the app. Congratulations, you have built an Android app.
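For reference, a Digital Asset Links statement file has the shape below. The package name and certificate fingerprint here are placeholders; in practice you copy the exact content that Bubblewrap generates:

```json
[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example.pwa",
    "sha256_cert_fingerprints": [
      "AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99"
    ]
  }
}]
```

Serving this at https://your-domain/.well-known/assetlinks.json is what proves to Android that the app and the website belong together, which is what lets the trusted web activity run full screen.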
When uploading an application to the Play Store for the first time, it will ask whether the developer wants to use Play App Signing. If you opt in to app signing, the Play Store will manage the signing key for you, making sure it's not lost. This is important, as losing the key means it's no longer possible to update the application on the store. But it also means that the final key used to sign the application will be different from the one generated by Bubblewrap. To update the assetlinks.json file, we need information on the key used by the Play Store. This information can be found by clicking on App signing in the menu on the left, copying the details for the fingerprint, and using it to update the assetlinks.json file on the web app. It is possible to use both fingerprints in the application; check the video description for a link on how to add both keys. Bubblewrap removes friction for web developers who want to open their PWA in an Android app. I'm a fan of command-line tools, but if you are more of a graphical user interface person, check out PWA Builder. It uses Bubblewrap as a library to power its Android application generation. And that's all for Bubblewrap today. Make sure to check out the GitHub repo and drop us some feedback. And if you're watching this live, jump into our live chat and tell us what you think. Thanks for watching. Hi, I'm Demián, a web ecosystem consultant at Google. In this talk, you will learn how to define an install strategy across all your mobile experiences. Letting your users install your app is one of the best ways to keep them engaged, and today you can achieve that in different ways. Let's start with native app installs. If you have a native application, you might think that this is the best platform to promote to all your users. And for some of them, this might be true. But for some users, native apps can have disadvantages too. The most common one is storage constraints.
Making space for a new app might mean removing valuable content; freeing up storage is also the number one reason users remove apps from their devices. There's also the issue of available bandwidth, especially for users on slow connections and expensive data plans. Finally, moving users to a store creates additional friction and delays an action that could be performed directly on the web. A great alternative is allowing users to install your progressive web app from the browser, through an add to home screen prompt. You can also upload your PWA to the Play Store using a trusted web activity. In this example, QuintoAndar, a real estate company from Brazil, was able to reuse the same code base on the web and in the Play Store, while offering a great experience to users. Let's take a look at another example. OYO Rooms is one of the largest hospitality companies in India. They have a very large user base coming from a variety of devices and networks, and they have built different versions of their mobile experience to satisfy the needs of all their users. First, they created a native application for the Play Store. For the most sophisticated users, this could be the best choice. OYO Lite is a progressive web app uploaded to the Play Store via trusted web activity. It provides the same functionality as the native app while occupying only 7% of the space. Finally, for users who visit the site directly, by typing the URL or clicking on a link, OYO offers the chance to install the PWA directly via the add to home screen prompt. Having all these ways to achieve app installs is great, but how can you combine all these offerings to increase installation rates while avoiding making your apps compete with each other? Let's discuss some strategies for combining different install offerings. The first strategy is to show the different options on the same screen. This is a simple approach that might just work for many users.
The challenge is to communicate the value proposition in a way that clearly distinguishes one option from the other. But instead of delegating the choice completely to users, we can make their lives easier. The idea of the following strategies is to make some inferences, for example by tracking user behavior and device characteristics. We call these heuristic-based approaches. The first one is web install as fallback. In this strategy, you start by showing the native app install prompt. If the user doesn't install the app and keeps visiting your website, chances are that the web is their platform of choice. After a while, you can start promoting your PWA to these users. This strategy can be implemented very easily, for example by using cookies to track user behavior. The group of users who dismissed the app banner and kept coming back to the site several times might be good candidates for a web install offering. But before showing the web install call to action, there are two more things to take into account. The first one: make sure that the user hasn't already installed your native app or your PWA by other means. The getInstalledRelatedApps API can help you check that. The second is actually a UX best practice: to maximize the opt-in rate for your web install prompts, you might want to use the double permission pattern. In this example, OYO shows a web install icon after capturing the beforeinstallprompt event. When the user clicks on it, they trigger the standard add to home screen prompt. If you want to learn about UX patterns for web permissions like this one, check out the talk Safe Permissions for the Capable Web from Chrome Dev Summit 2019. Let's move now to the second strategy. Intuitively, users on slow networks or low-end devices might be more inclined to download light apps. Therefore, if it's possible to identify a user's device, one could prioritize the light app over the heavier native app install option.
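The double permission pattern with the install check might be wired up like this sketch. beforeinstallprompt and navigator.getInstalledRelatedApps are real browser APIs (Chrome-only at the time of this talk), so the code feature-detects them; the helper name and button handling are illustrative:

```javascript
// Sketch: double permission pattern for the web install prompt.
let deferredPrompt = null;

function setupInstallButton(button) {
  window.addEventListener('beforeinstallprompt', async (event) => {
    event.preventDefault(); // suppress the browser's default mini-infobar
    // Skip the offer if a related app is already installed.
    if (navigator.getInstalledRelatedApps) {
      const apps = await navigator.getInstalledRelatedApps();
      if (apps.length > 0) return;
    }
    deferredPrompt = event;
    button.hidden = false;  // show our own, in-context install UI first
  });
  button.addEventListener('click', async () => {
    if (!deferredPrompt) return;
    deferredPrompt.prompt(); // only now show the real install prompt
    await deferredPrompt.userChoice;
    deferredPrompt = null;
  });
}
```

Because the real prompt only appears after the user has already expressed intent by clicking, the opt-in rate tends to be much higher than prompting unprompted.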
You can implement this by writing a function that checks for device characteristics to decide which prompt to show. If it's a low-end device, offer the light app; and if it's a high-end device, you can offer the full app. Inside the function, device signals can be obtained in two ways. The first one is by using JavaScript APIs like device memory, hardware concurrency, or the Network Information API. The second one is by using client hints, which can be read from the headers of the HTTP request. To use them, you need to send an Accept-CH header in your response, indicating the type of hints you want to receive, for example, device memory. After that, you will start receiving these hints in the headers of the HTTP request. Finally, you can use this information to map to a device category and use that later to decide which prompt to show. If you want to learn techniques on how to map device signals to device categories, check out Adaptive Loading, a talk that was given at Chrome Dev Summit 2019. Wrapping up: today, you can offer different channels for users to install your mobile experiences. For example, you could offer a native app, a PWA available in the Play Store, and a web install from the browser. Then, you can define a heuristic to show the most suitable install offering to a particular user. You can create a very simple one based on the user's behavior on your site, for example, by tracking how often they come to it. Or you can go for a more sophisticated approach by mapping device signals to device categories and show different install offerings depending on whether the device is low, mid, or high-end. We encourage you to experiment with these techniques, for example, by running A/B tests, and to reach out to us on Twitter to share your experiences. I hope you continue enjoying Web Dev Live. Thanks for watching.
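As a rough sketch of that mapping step, here is one way to bucket devices into categories. The thresholds and category names are illustrative assumptions, not values from the talk; in a browser, the inputs would come from navigator.deviceMemory, navigator.hardwareConcurrency, and navigator.connection.effectiveType.

```javascript
// Map raw device signals to a coarse category (cutoffs are assumptions).
function deviceCategory({ deviceMemory = 4, hardwareConcurrency = 4, effectiveType = '4g' } = {}) {
  if (deviceMemory <= 1 || effectiveType === 'slow-2g' || effectiveType === '2g') return 'low';
  if (deviceMemory <= 4 || hardwareConcurrency <= 2 || effectiveType === '3g') return 'mid';
  return 'high';
}

// Decide which install offering to promote based on the category.
function chooseInstallOffering(category) {
  return category === 'low' ? 'lite-pwa' : 'native-app';
}
```

The same function could consume values parsed from client hint request headers on the server side instead of the JavaScript APIs.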
In March 2003, Nick Finck and Steve Champeon stunned the web design world with the concept of progressive enhancement, a strategy for web design that emphasizes core web page content first and then progressively adds more nuanced and technically rigorous layers of presentation and features on top of the content. While in 2003, progressive enhancement was about using, at the time, modern CSS features, unobtrusive JavaScript, or even Scalable Vector Graphics, progressive enhancement in 2020 is about using modern browser capabilities. My name is Thomas Steiner. I'm a developer advocate based out of the Google Hamburg office. Today, I want to talk about progressively enhancing like it's 2003: building for modern browsers. Since we all can't be here together in person due to the coronavirus, I've converted my talk into an online trip that I want to take you on with me. For this trip, you need a solid understanding of JavaScript. Speaking of JavaScript: the browser support for the latest core JavaScript features is great. Promises, modules, classes, template literals, arrow functions, you name them, all supported. Async functions work across the board in all modern browsers. And even super recent language additions like optional chaining and nullish coalescing reached support really quickly. When it comes to core JavaScript features, the grass couldn't be much greener than it is today. For the trip that we are going on, you likewise should have a good understanding of progressive web apps. For this talk, I work with a simple PWA called Fugu Greetings. The name of this app is a hat tip to Project Fugu, where we work on giving the web all the powers of native applications. You can read more about the project at web.dev/fugu-status. Fugu Greetings is a drawing app that allows you to create virtual greeting cards. Just imagine you actually had traveled to Google I/O and wanted to send a greeting card to your loved ones. Let me recall some of the PWA concepts.
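As a tiny illustration of those two recent language additions, optional chaining and nullish coalescing (the config object here is made up for the example):

```javascript
const config = { theme: { color: 'teal' }, retries: 0 };

const color = config.theme?.color ?? 'black'; // Path exists, so the fallback is not used.
const font = config.fonts?.body ?? 'serif';   // Missing path short-circuits to undefined, then falls back.
const tries = config.retries ?? 3;            // 0 is kept; with ||, this would wrongly become 3.
```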
Fugu Greetings is reliable and fully offline enabled, so even if you don't have network, you can still use it. It can be installed to the home screen of the device, and it integrates seamlessly into the operating system as a standalone application. With this out of the way, let's dive into the actual topic of this talk: progressive enhancement. Starting each greeting card from scratch can be really cumbersome, so why not have a feature that allows users to import an image and start from there? With a traditional approach, you'd have used an input type file element to make this happen. First, you'd create the element, set its type and the to-be-accepted MIME types, and then programmatically click it and listen for changes. And it works perfectly fine. The image is imported straight onto the canvas. When there is an import feature, there probably should also be an export feature, so users can save their greeting cards locally. Similar to before, the traditional way of saving files is to create an anchor link with the download attribute and with a blob URL as its href. You would then programmatically click it to trigger the download and, to prevent memory leaks from happening, hopefully make sure not to forget to revoke the blob URL. But wait a minute. Mentally, you haven't downloaded a greeting card, you have saved it. Rather than showing you a save dialog that lets you choose where to put the file, the browser instead has directly downloaded the greeting card without interaction and has put it straight into your downloads folder. This isn't great. What if there were a better way? What if you could just open a local file, edit it, and then save the modifications, either to a new file or back to the original file that you had initially opened? Turns out there is a better way. The Native File System API allows you to open and create files and directories, make modifications, and save them back. Let's see how I can feature-detect if the API exists.
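A minimal sketch of those two legacy approaches might look like this. This is browser-only illustration code; the function names and the callback shape are assumptions, not the talk's exact code.

```javascript
// Legacy import: a programmatically clicked <input type="file"> element.
function importImageLegacy(onFile) {
  const input = document.createElement('input');
  input.type = 'file';
  input.accept = 'image/*'; // Only accept image MIME types.
  input.addEventListener('change', () => onFile(input.files[0]));
  input.click();
}

// Legacy export: a programmatically clicked anchor with a download attribute.
function exportImageLegacy(blob, filename = 'greeting-card.png') {
  const a = document.createElement('a');
  a.download = filename;
  a.href = URL.createObjectURL(blob);
  a.click();
  URL.revokeObjectURL(a.href); // Revoke the blob URL to avoid a memory leak.
}
```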
The Native File System API exposes a new method, chooseFileSystemEntries. I can use this to conditionally load the import and export image scripts if the API exists, and if it isn't available, load the files with the legacy approaches from the earlier slides. But before I dive into the Native File System API, let me just quickly highlight the progressive enhancement pattern here. On browsers that don't support the Native File System API, I load the legacy scripts. You can see the network tabs of Firefox and Safari here. However, on Chrome, only the new scripts are loaded. This is made elegantly possible thanks to dynamic imports, which all modern browsers support. As I said earlier, the grass is pretty green these days. Let's look at the actual Native File System API-based implementation. For importing an image, I call window.chooseFileSystemEntries and pass it an accepts option parameter where I say I want image files. Both file extensions as well as MIME types are supported. This results in a file handle. From the file handle, I can obtain the actual file by calling its getFile method. Exporting an image is almost the same, but this time I need to pass a type parameter, save-file, to the chooseFileSystemEntries method, so I get a file save dialog. Before, this wasn't necessary, since open-file is the default. I set the accepts parameter similar to before, but this time limited to just PNG images. Again, I get back a file handle, but rather than getting the file, this time I'm creating a writable stream by calling createWritable. Next, I write the blob, which is my greeting card image, to the file. Finally, I close the writable stream. Everything can always fail: the disk could be out of space, there could be a write or read error, or maybe the user simply cancels the file dialog. This is why I always wrap the calls in a try...catch statement. I can now open the file as before. The imported file is drawn right onto the canvas.
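Putting the described calls together, a sketch of the import and export functions could look like this. Note that window.chooseFileSystemEntries was the 2020 origin-trial shape of the API (it was later replaced by showOpenFilePicker and showSaveFilePicker), and the exact accepts options here are assumptions rather than the talk's verbatim code.

```javascript
// Open an image file via a real file picker and return a File object.
async function openImageFile() {
  const handle = await window.chooseFileSystemEntries({
    accepts: [{ description: 'Image files', mimeTypes: ['image/*'] }],
  });
  return handle.getFile(); // Obtain the actual file from the handle.
}

// Save the greeting card blob via a real save dialog.
async function saveImageFile(blob) {
  const handle = await window.chooseFileSystemEntries({
    type: 'save-file', // Without this, open-file is the default.
    accepts: [{ description: 'PNG file', mimeTypes: ['image/png'], extensions: ['png'] }],
  });
  const writable = await handle.createWritable(); // A writable stream for the file.
  await writable.write(blob);
  await writable.close();
}
```

In the app, both calls would additionally be wrapped in try...catch, since the user can cancel the dialog at any time.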
I can make my edits and finally save them with a real save dialog, where I can choose the name and storage location of the file. Now the file is ready to be preserved for eternity. Apart from storing files for eternity, maybe I actually want to share my greeting card. This is something that the Web Share and Web Share Target APIs allow me to do. Mobile, and more recently also desktop operating systems, have gained native sharing mechanisms. For example, here's Safari's share sheet on macOS, triggered from an article on my site, blog.tomayac.com. When you click the share button, you can share a link to the article with a friend, for example, via the native Messages app. The code to make this happen is pretty straightforward. I call navigator.share and pass it an optional title, text, and URL. But what if I want to attach an image? Level 1 of the Web Share API that you can see on the screen doesn't support this yet. The good news is that Web Share Level 2 has added file sharing capabilities. Let me show you how to make this work with the Fugu Greetings application. First, I need to prepare a data object with a files array consisting of one blob, and then a title and a text. Next, as a best practice, I make use of the new navigator.canShare method that does what its name suggests: it tells me if the data object I'm trying to share can technically be shared by the browser. If navigator.canShare tells me the data can be shared, I'm in the final step ready to call navigator.share as before. Again, everything can fail, in the simplest case when the user cancels the sharing operation, so it's all wrapped in try...catch blocks. As before, I use a progressive enhancement loading strategy. If both share and canShare exist on the navigator object, only then do I go forward and load share.mjs via dynamic import. On browsers like mobile Safari that only fulfill one of the two conditions, I don't load the functionality.
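A sketch of the file-sharing flow described above, assuming a blob holding the greeting card image. The file name, title, and text here are made up; the navigator.canShare check and navigator.share call are the real API surface of Web Share Level 2.

```javascript
// Share the greeting card image via the native share sheet.
async function shareCard(blob) {
  const file = new File([blob], 'greeting.png', { type: 'image/png' });
  const data = { files: [file], title: 'Greetings', text: 'A card for you!' };

  // canShare tells us whether this data object can technically be shared.
  if (!navigator.canShare || !navigator.canShare(data)) return false;

  try {
    await navigator.share(data);
    return true;
  } catch (err) {
    // The user may simply have canceled the share sheet.
    return false;
  }
}
```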
If I tap the share button on a supporting browser, the native share sheet opens. I can, for example, choose Gmail, and the email composer widget pops up with the image attached. Up next, I want to talk about contacts. And when I say contacts, I mean contacts as in the device's address book. When you write a greeting card, it may not always be easy to correctly write someone's name. For example, I have a friend who prefers their name to be spelled in Cyrillic letters. I'm using a German QWERTZ keyboard and I have no idea how to type their name. This is a problem that the Contact Picker API solves. Since I have my friends stored in my phone's contacts app, with the Contact Picker API I can tap into my contacts from the web. First, I need to specify the list of properties I want to access. In this case, I only want the names, but for other use cases I might be interested in telephone numbers, emails, avatar icons, or physical addresses. Next, I configure an options object and set multiple to true, so I can select more than one contact. Finally, I can call navigator.contacts.select, which results in the desired properties once the user selects one or multiple of their contacts. In Fugu Greetings, when I tap the contacts button and select my two best pals, Sergey Mikhailovich Brin and Lawrence Edward Larry Page, you can see how the contact picker is limited to only show their names, but not their email addresses or other information like their phone numbers. Their names are then drawn onto my greeting card. And by now, you've probably learned the pattern: I only load the script when the API is actually supported. Up next is copying and pasting. One of our favorite operations as software developers is copy and paste. As a greeting card author, at times I might want to do the same: either paste an image into a greeting card I'm working on, or the other way around, copy my greeting card so I can continue editing it from somewhere else.
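The Contact Picker call described above might be sketched like this. The error handling and the exact property list are illustrative assumptions; navigator.contacts.select with a properties array and a multiple option is the real API shape.

```javascript
// Let the user pick one or more contacts and return their names.
async function pickContactNames() {
  const props = ['name']; // Could also request 'email', 'tel', 'icon', or 'address'.
  const opts = { multiple: true }; // Allow selecting more than one contact.
  try {
    const contacts = await navigator.contacts.select(props, opts);
    // Each contact's name property is itself an array of strings.
    return contacts.flatMap((contact) => contact.name);
  } catch (err) {
    return []; // Picker canceled or API unsupported.
  }
}
```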
The Async Clipboard API, apart from text, also supports images. Let me walk you through how I have added copy and paste to the Fugu Greetings app. In order to copy something onto the system's clipboard, I need to write to it. The navigator.clipboard.write method takes an array of clipboard items as a parameter. Each clipboard item essentially is an object with a blob as a value and the blob's type as the key. To paste, I need to loop over the clipboard items that are obtained by calling navigator.clipboard.read. The reason for this is that multiple clipboard items might be on the clipboard in different representations. Each clipboard item has a types field that tells me in which MIME types the resource is available. I simply take the first one and call the clipboard item's getType method, passing the MIME type I obtained before. And, almost needless to say by now, I only do this on supporting browsers. So how does this work? Here, I have an image open in the macOS Preview app and copy it to the clipboard. When I click paste, the Fugu Greetings app then asks me whether I want to allow the app to see text and images on the clipboard. Finally, after accepting the permission, the image is then pasted into the application. The other way around works, too. Let me copy a greeting card to the clipboard. When I then open Preview and click File and then New from Clipboard, the greeting card gets pasted into a new untitled image. Another useful API is the Badging API. As an installable PWA, Fugu Greetings of course does have an app icon that users can place on the app dock or the home screen. Something fun to do with it in the context of Fugu Greetings is to use it as a pen stroke counter. With the Badging API, this is a straightforward task. I've added an event listener that on pointerdown increments the pen stroke counter and sets the icon badge. Whenever the canvas gets cleared, the counter resets and the badge is removed.
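The pen stroke counter just described can be sketched like this. The counter wiring is an assumption for illustration; navigator.setAppBadge and navigator.clearAppBadge are the Badging API calls themselves.

```javascript
// Count pen strokes and mirror the count on the installed app's icon badge.
let strokes = 0;

// Wire this to canvas.addEventListener('pointerdown', onPointerDown).
function onPointerDown() {
  strokes += 1;
  if (typeof navigator !== 'undefined' && 'setAppBadge' in navigator) {
    navigator.setAppBadge(strokes); // Show the current count on the app icon.
  }
}

// Call this whenever the canvas gets cleared.
function onCanvasCleared() {
  strokes = 0;
  if (typeof navigator !== 'undefined' && 'clearAppBadge' in navigator) {
    navigator.clearAppBadge(); // Remove the badge entirely.
  }
}
```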
In this example, I've drawn the numbers from one to seven, using one pen stroke for each number. The badge counter on the icon is now at seven. This feature is a progressive enhancement, so the loading logic is as usual. Want to start each day fresh with something new? A neat feature of the Fugu Greetings app is that it can inspire you each morning with a new background image to start your greeting card. The app uses the Periodic Background Sync API to achieve this. The first step is to register a periodic sync event in the service worker registration. It listens for a sync tag called image-of-the-day and has a minimum interval of one day, so the user can get a new background image every 24 hours. The second step is to listen for the periodicsync event in the service worker. If the event tag is the one that was registered a slide ago, the image of the day is retrieved via the getImageOfTheDay function, and the result is propagated to all clients so they can update their canvases and caches. Again, this is truly a progressive enhancement, so the code is only loaded when the API is supported by the browser. This applies to both the client code and the service worker code. On non-supporting browsers, neither of them is loaded. Note how in the service worker, instead of a dynamic import, I use the classic importScripts function to the same effect. Sometimes, even with a lot of inspiration, you need a nudge to finish a greeting card you have started. This is a feature that is enabled by the Notification Triggers API. As a user, I can enter a time when I want to be nudged to finish my greeting card. When that time has come, I will get a notification that my greeting card is waiting. After prompting for the target time, the application schedules the notification with a showTrigger. This can be a TimestampTrigger with the previously selected target date. The reminder notification will be triggered locally; no network connection or server side is necessary.
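The two scheduling APIs described above could be sketched roughly like this. Both were experimental at the time of the talk; the tag names and notification text are assumptions, and getImageOfTheDay is the app's own helper, shown only as a comment.

```javascript
// Register a periodic sync so the service worker can fetch a fresh background daily.
async function registerDailyImageSync() {
  const registration = await navigator.serviceWorker.ready;
  await registration.periodicSync.register('image-of-the-day', {
    minInterval: 24 * 60 * 60 * 1000, // At most one sync per day.
  });
}

// In the service worker, the matching listener would look like:
// self.addEventListener('periodicsync', (event) => {
//   if (event.tag === 'image-of-the-day') {
//     event.waitUntil(getImageOfTheDay()); // App helper; propagate result to clients.
//   }
// });

// Schedule a purely local reminder notification via a timestamp trigger.
async function scheduleReminder(targetDate) {
  const registration = await navigator.serviceWorker.ready;
  await registration.showNotification('Your greeting card is waiting!', {
    tag: 'greeting-reminder',
    showTrigger: new TimestampTrigger(targetDate.getTime()), // Fires locally at that time.
  });
}
```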
Like everything else I've shown so far, this is a progressive enhancement, so the code is only conditionally loaded. I also want to talk about the Wake Lock API. Sometimes you need to just stare long enough at the screen until the inspiration kisses you. The worst that can happen then is for the screen to turn off. The Wake Lock API can prevent this from happening. In Fugu Greetings, there's an insomnia checkbox that, when checked, keeps your screen awake. In a first step, I obtain a wake lock with the navigator.wakeLock.request method. I pass it the string screen to obtain a screen wake lock. I then add an event listener to be informed when the wake lock is released. This can happen, for example, when the tab visibility changes. If this happens, I can, when the tab becomes visible again, re-obtain the wake lock. Yes, this is a progressive enhancement, so I only need to load it when the browser supports the API. At times, even if you stare at the screen for hours, it's just useless. The Idle Detection API allows the app to detect user idle time. If the user is detected to be idle for too long, the app resets to the initial state and clears the canvas. This API is currently gated behind the notifications permission, since a lot of production use cases of idle detection are notification related, for example, to only send a notification to a device the user is currently actively using. After making sure that the notifications permission is granted, I then instantiate the idle detector. I register an event listener that listens for idle changes, which includes the user and the screen state. The user can be active or idle, and the screen can be unlocked or locked. If the user is detected to be idle, the canvas clears. I give the idle detector a threshold of 60 seconds. And as always, I only load this code when the browser supports it. Phew, what a ride. So many APIs in just one sample app.
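The wake lock and idle detection patterns just described might be sketched like this. The clearCanvas callback and the re-acquire comment are assumptions; the navigator.wakeLock.request('screen') call, the IdleDetector change event with userState, and the 60-second threshold match what the talk describes.

```javascript
let wakeLock = null;

// Obtain a screen wake lock so the display stays on.
async function keepScreenAwake() {
  wakeLock = await navigator.wakeLock.request('screen');
  wakeLock.addEventListener('release', () => {
    // Released, e.g. when the tab becomes hidden; re-obtain it on the next
    // visibilitychange event when document.visibilityState is 'visible' again.
  });
}

// Clear the canvas when the user has been idle past the threshold.
async function watchIdle(clearCanvas) {
  const detector = new IdleDetector(); // Requires the notifications permission.
  detector.addEventListener('change', () => {
    // userState is 'active' or 'idle'; screenState is 'unlocked' or 'locked'.
    if (detector.userState === 'idle') clearCanvas();
  });
  await detector.start({ threshold: 60000 }); // 60-second idle threshold.
}
```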
And remember: we never make the user pay the download cost for a feature that their browser doesn't support, by using progressive enhancement and making sure only the relevant code gets loaded. And since with HTTP/2, requests are cheap, this pattern should work well for a lot of applications, although at times you might still want to consider a bundler for really large apps. This has been a short overview of many of the APIs we're working on in the context of Project Fugu. Definitely check out our landing page, where you can find links to detailed articles for each API that I've talked about. If you're interested in Fugu Greetings, go find it and fork it on GitHub. And with that, thank you very much for watching my talk. You can find me as tomayac on GitHub, Twitter, and the web in general. I'm looking forward to answering your questions, and I hope you enjoy the rest of web.dev LIVE. Hi, I'm Demián, a web ecosystem consultant at Google. In this talk, we'll explore how different companies are building fast and resilient experiences on the web. We'll use the Workbox libraries to show how to implement four different patterns in your site, but all of these features can also be implemented by manually writing the service worker code. Our first pattern is called resilient search experiences and can be applied to any site that offers some type of search functionality. When a user searches for a topic in Google Search on Chrome on Android devices and loses connection, instead of the standard network error page, they are presented with a custom offline page asking if they want to opt in for notifications. If the user accepts the permission, once the connection is back, they will receive a web push notification informing them that the search result is ready. Clicking on the notification will take the user to the results screen. This is a great way of keeping the user engaged while letting them complete the task they were looking for.
At the heart of this implementation is the Background Sync API, which lets you defer actions until the user has stable connectivity. In Workbox, this can be implemented very easily. First, you can define a network-only caching strategy for the search endpoint, so these requests always go to the network. Then, you can pass a Background Sync plugin to take care of the offline scenarios. Let's see what the plugin looks like. The Workbox Background Sync plugin receives the name of a queue to store failed requests to be retried later. The plugin also receives an onSync callback, which will be called once the connection is recovered. Inside the callback, you can retrieve any failed requests, process them, and inform the user of the result, for example, with a notification. Before moving to the next pattern, let's take a look at an important detail of this implementation. You might have noticed that the notification permission is requested when the user loses connection. At that point, the user understands the value of the service and knows that the notification will deliver timely and relevant updates. This is an example of a good implementation of the web push permission. Our next pattern is adaptive loading with service workers, and it will allow you to provide a fast experience regardless of the network and the device. Terra is one of the biggest media sites in Brazil. They have a large user base coming from slow and fast connections. To provide a more reliable experience to all their users, they are combining service workers and the Network Information API to deliver lower-quality images to users on 2G or 3G connections. Terra took this strategy to the next level: when users are navigating on slow connections, they deliver the AMP version of the articles, which are more lightweight and tend to perform better under these conditions. To implement this functionality in Workbox, you can first apply a cache-first strategy to images.
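The background sync setup described above could be sketched like this, written against the workbox.* global namespace as loaded via importScripts in a service worker. The queue name, route match, and notification text are assumptions for illustration; BackgroundSyncPlugin with an onSync callback, registerRoute, and NetworkOnly are the real Workbox (v5-era) pieces.

```javascript
// Service-worker-side sketch: network-only search requests with offline retry.
function setupResilientSearch(workbox) {
  const plugin = new workbox.backgroundSync.BackgroundSyncPlugin('search-queue', {
    onSync: async ({ queue }) => {
      // Called once connectivity is recovered: replay every queued request.
      let entry;
      while ((entry = await queue.shiftRequest())) {
        await fetch(entry.request);
      }
      // Then inform the user that their result is ready.
      await self.registration.showNotification('Your search results are ready!');
    },
  });

  // Search requests always go to the network; failures land in the queue.
  workbox.routing.registerRoute(
    ({ url }) => url.pathname.startsWith('/search'),
    new workbox.strategies.NetworkOnly({ plugins: [plugin] })
  );
}
```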
Then you can pass an expiration plugin to limit the number of entries in the cache. You can extend this strategy by creating a custom plugin that we will call the adaptive loading plugin. Inside the plugin, you can listen for the requestWillFetch callback, which will be called before the request is made, so you can apply a transformation to it. Inside the callback, you can check the connection type. If it's a slow connection, you can create a new URL for a lower image quality. Finally, you can create a new request based on that URL and fetch the most appropriate image file according to these conditions. If you are using Cloudinary, there's a Workbox Cloudinary plugin making this feature even easier to implement. Check it out. As you might have noticed, the first two patterns have some things in common: we have combined the functionality of runtime caching strategies with plugins. This shows one of the benefits of using Workbox, allowing you to extend the standard features in a very easy way. Let's move now to the second part of the talk. Our third pattern is called instant navigation experiences, and it's useful for any type of site. Performing a task in a website might involve several steps, each of them meaning a navigation request. Navigation requests, like requests for HTML pages, are normally satisfied via the network. This means using a cache-control header of no-cache or a max-age of zero to ensure that the response is reasonably fresh. But having to go against the network means that each navigation might be slow, or at least not reliably fast. To speed up these navigations, you can apply a technique called prefetching. In this example, Mercado Libre, the largest e-commerce site in Latin America, dynamically injects link prefetch tags in listing pages to accelerate parts of the flow. But prefetching is not only useful for e-commerce sites.
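The custom adaptive loading plugin described above might be sketched as follows. The ?quality=low URL rewrite is an assumption (a real image CDN would have its own parameter scheme); the requestWillFetch lifecycle callback is the actual Workbox plugin hook.

```javascript
// Workbox plugin sketch: swap in a lighter image URL on slow connections.
const adaptiveLoadingPlugin = {
  requestWillFetch: async ({ request }) => {
    // effectiveType comes from the Network Information API.
    const type = navigator.connection ? navigator.connection.effectiveType : '4g';
    if (type === 'slow-2g' || type === '2g' || type === '3g') {
      const url = new URL(request.url);
      url.searchParams.set('quality', 'low'); // Ask the image endpoint for a lighter file.
      return new Request(url.href, { headers: request.headers });
    }
    return request; // Fast connection: leave the request untouched.
  },
};
```

The plugin would then be passed alongside the expiration plugin in the plugins array of the cache-first strategy for images.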
The Italian sports portal Virgilio Sport uses service workers to prefetch the most popular posts that appear on the homepage before the user even clicks on them. As a result, load time for navigations to articles has improved by 78%, and the number of article impressions has increased by 45%. Prefetching is commonly implemented by using a resource hint in your pages: link rel=prefetch. The tag tells the browser to fetch a resource at the lowest priority and keep it in the HTTP cache for five minutes. On the service worker side, you can intercept requests for HTML pages so that you can extend the lifetime of the prefetched resource beyond the five-minute window. For HTML pages, a stale-while-revalidate strategy is a good option to respond quickly from the cache while simultaneously keeping it up to date. Before moving to the final pattern, there's a slight variation of this technique. Instead of using resource hints in the page, some developers prefer to delegate prefetching completely to the service worker. For that, you need to implement a page-to-service-worker communication technique. The workbox-window package allows you to do that, so if you are interested in following that route, you can check that out. We have reached the end of our talk. Our final pattern is app shell UX with service workers, and it's useful if you want to make multi-page apps feel like single-page applications. DEV has become one of the favorite platforms for software developers. The architecture of their site is a multi-page app. Their team was interested in the benefits of the app shell model, but didn't want to incur a major architectural change. So let's see what they did. First, they created partials for the header and the footer of the home page. These assets are added to the cache at the service worker install event, what's commonly referred to as precaching. The content of the page is the only part that's actually being fetched from the network when navigating.
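The stale-while-revalidate handling of navigations described above could be sketched like this, again against the workbox.* global namespace of a service worker. The cache name is an assumption; registerRoute, the navigate request mode check, and the StaleWhileRevalidate strategy are real Workbox pieces.

```javascript
// Service-worker-side sketch: keep prefetched pages fast beyond the
// HTTP cache's five-minute window by serving navigations stale-while-revalidate.
function setupInstantNavigations(workbox) {
  workbox.routing.registerRoute(
    ({ request }) => request.mode === 'navigate', // Only HTML navigation requests.
    new workbox.strategies.StaleWhileRevalidate({
      cacheName: 'pages', // Respond from this cache, then refresh it in the background.
    })
  );
}
```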
But the key ingredient of this solution is the usage of streaming. Thanks to that, bytes can start being painted on the screen before the full response is ready. In Workbox, you can start by creating a regular expression to match requests for pages. Then you can pass an array of streaming sources to compose. For the header and the footer, you can use a cache-first strategy. For the content, you can use a network-first one. All the streaming sources will be composed by Workbox and sent to the client. Thanks to streams, the header can start being painted as soon as it's picked up from the cache, without having to wait for the full response. We have seen four advanced patterns for speed and resilience. As a complement to this talk, we'll be uploading guides and codelabs so you can see them in more detail. Please check web.dev/progressive-web-apps and web.dev/reliable. Thanks for watching. Hi, I'm Andre. Today, we're going to answer some frequent questions about the new features and capabilities for installed PWAs that were previously reserved for native apps. And to tell us more about what is new and help us with some hard questions, we have a guest. Hi, PJ, tell us about your role at Google. Thanks so much, Andre. I'm PJ. I'm a product manager on the Chrome web platform team. I work on progressive web apps, usually called PWAs. And basically, progressive web apps are modern applications built using web technologies that are making users happier. PWAs have a lot of capabilities, one of which is that they can be installed onto a user's computer, just the same as any other application. Oh, cool. So that means we have exactly the right person in the room. For people in the audience who are not yet familiar with installable PWAs, can you tell us a bit more about what they are and where they are available?
Being installable is really a standout feature for PWAs, because it gives web developers the ability to make applications that can be started from the Start menu on Windows, from the application folder on Mac, and from the home screen on Android and iOS. And these can really look and feel like any other application on the device. So for applications that users are using repeatedly, being installed means that that app is a little bit more top of mind for the user, because that launching surface is immediately accessible to the user. They don't have to navigate anywhere in the browser to get back to the application. It also means that the application shows up in the activity switcher as a separate app, and that makes install quite attractive to developers. But I want to be clear that a PWA doesn't have to be installed to be a PWA. Being installable is just one property of a PWA. You asked this question about distribution, so let's talk for a moment about where PWAs are available. First, PWAs can be installed directly from the web browser on both desktop and Android. On desktop, PWAs can also be listed in the Windows Store. On Android, you can find PWA-powered Android apps in the Play Store. This is a technology called a Trusted Web Activity. You'll also see PWAs in the Samsung store. You might have heard that PWAs are showing up in the Play Store on Chrome OS, and that's an early access feature I'm really excited about. So let's save a little time at the end of the session to talk more about that. Oh, I'm really looking forward to learning more about the Play Store on Chrome OS. But before that, can you tell us about the recent features that you think are the most exciting for developers? I really wish that we had time to go into everything that's shipping, because there's a lot happening right now. But I'm going to have to just pick a few favorites for today.
So the features that I'm most excited about are some of the things that web developers could previously only do using a hybrid technology like an Electron app or a Cordova app. And to begin here, let me just mention that the ability to install PWAs on desktop is still really new. So for those of you in the audience who have been paying close attention, you might have seen the announcement at I/O last year, and this might seem like old news, but I think for a lot of the web development community, this is still a very new feature and people are still getting used to the superpower that install can happen everywhere. You can now write one app and have it be installable on desktop, on tablet, on smartphones, and users can discover that app on your website, in search results, in Play, in the Windows Store, in the Samsung store. And this is giving web developers a really unprecedented reach for distribution of their applications. So I'm just super excited that it's now possible to have install across all of these different screens and through all of these different channels. The other features that I'm most excited about are all of the capabilities that were previously only possible with Cordova or Electron. So for example, registering a file type handler for an app, offering an immersive mode (so creating better web games through an immersive mode), adding context menus for shortcuts, and more. So the file type handler would allow a user to start a web-based image editor by double-clicking on an image in their OS file explorer? That's exciting. Exactly that. So with file type handling, you could register a file extension or MIME type. Let's say that you've written a new type of image editor and you can edit JPEG and PNG files: you could register a file type extension for those file types, and then those file types will automatically open in your editor if the user double-clicks on them. A word of warning: file type handling isn't quite here yet.
We expect it to go into origin trial in Chrome 85 in August and to be available generally sometime in the fall or winter. Looking forward to this one going stable. Tell me more about immersive mode. What does that mean? Immersive mode is a term that's just been borrowed from native. It's a full-screen mode, and basically it removes all of the operating system decoration, so no status bar, no navigation bar, and this is great for games or other media, basically when you want to be able to address every single pixel on the screen. So I could start a video player in full screen from the icon on the home screen, nice. And what about app shortcuts? Sure, app shortcuts are a way to provide quick access to important functions in the app directly from the app icon. So for example, on Android you might be familiar with a long press on the app icon on the launch screen or on the home screen. So if you were to long-press on a home screen icon for, say, a mail application, you might see compose functionality directly in the menu that appears when you long-press on the mail client. App shortcuts also work on desktop operating systems, and that support will be arriving in Chrome 85. Interesting, so that's like deep linking to parts of my PWA directly from the icon that's on the home screen. Exactly. Switching gears a little bit: we launched Trusted Web Activity at last year's I/O, and since then we have had many feature requests and questions from developers, and I wanted to go through some of those with you today. First, developers have pointed out that the way permissions work in the browser and in native is different, and that makes their users a little bit confused. As an example, native apps get the notification permission by default, while web apps need the users to accept the permission first. How are we planning to solve those inconsistencies?
So let me just start by sharing my perspective on the philosophy here: I don't think that users should need to think about what technology was used to build the application that they're using. Users really just have a job to get done, and we want to help developers help users as easily as possible. So wherever it makes sense, I think an installed web application should use the operating system's typical UI for things like managing permissions, launching and switching between applications, just to match the user's expectations for how things should behave on the device that they're using. So we've introduced this concept of notification delegation into installed PWAs, and that means that when a PWA is installed, it will delegate the notification permission into the native settings area on Android. So another difference between an app installed from Play and a web app, for example, is that an app installed from Play automatically receives notification permission. So we want that experience to be the same for Android applications that were built using a PWA. And that's why we've delegated the web notification permission to the notification settings panel in Android, and these apps can be configured to auto-enroll users in notifications so that they just look and feel and behave exactly like any other Android app, and the user doesn't even need to know that this application was built using web technology. In the future, we're going to be adding location to the settings panel too. Of course, there won't be any auto-enrollment for location, because users are not auto-enrolled in the location permission in Android native apps either. So we're just going to match the behavior of a native application, for apps that are installed from, say, the Play Store or from any other distribution channel where the user may or may not be aware that this application is a web application.
We're going to continue as well to delegate more permissions and match OS preferences over the next few releases. Got it, so this will make the experience more seamless to users regardless of the technology that the developer used to build the app. Another frequent request: developers sometimes feel that when they make an Android app using a PWA and Trusted Web Activity, they should have a communication channel between the native application and the web app. This would allow them to use native platform capabilities where an equivalent on the web doesn't exist already. Is this something that is being considered? I'm really glad you asked this question, because this is exactly the kind of product question that I really love. And I'd like to hear from our audience on this one. So today you can pass parameters into a Trusted Web Activity when you launch it, and you can use intents to leave the Trusted Web Activity and pass some information into another activity inside of your app. And we're considering adding support for a message bus. For example, we could extend postMessage to enable a bus with this functionality. However, I don't have use cases from developers on exactly what they need here. Most capabilities are already in the web platform or are planned as part of the Fugu effort to add capabilities to the web platform. So if the web platform has missing capabilities, I'd really rather add that capability to the web than create a message bus to native, because if a capability is part of the web platform, then it's gonna work everywhere and developers only need to have one code base, which is simpler, and that code base will work in multiple browsers, whether the app is installed or not installed, et cetera. So I'd like to turn this into a question for our audience.
What do you need to do in native code that you can't do today in the web platform, and are there things we could do to improve the web platform so you wouldn't need to do that in native? Or perhaps it's something that could only be done in native and you really need that message bus. I'd really like to hear from you about this. Cool, I'm also looking forward to hearing from folks on Twitter, or if you're watching this live, on the live chat. Many developers have a native application, and prompting users to install a PWA in the browser when they already have the native app installed can lead to some confusion. Is there a way to prevent the prompts from showing when the native app is already installed? So that's a great question. It's also probably one of the top concerns that I hear from teams that are implementing a progressive web app. It's called channel conflict, and it arises where you're not sure which experience is gonna be best for the user. So I think the most important thing for developers to know is that you do have full control over the promotion of PWA install. So you don't need to worry about the browser promoting your PWA if you don't want it to. There are ways you can prevent the browser from promoting the installation of your PWA if, for example, you have a native app. So let's talk about how that works. First, in the web app manifest, there's a couple of fields you should know about. One is called related_applications, and this is where you can list native apps for Android and iOS. And then there's another Boolean field called prefer_related_applications. And if this is set to true, the browser is not gonna promote the install of your PWA. Secondly, there's an event that fires when a PWA passes the installability check in the browser, and when this happens, developers can call a preventDefault method, and that's gonna block any promotion of the PWA install in browser UI.
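Putting those two manifest fields together might look like this (the package name is a placeholder):

```json
{
  "related_applications": [
    { "platform": "play", "id": "com.example.app" }
  ],
  "prefer_related_applications": true
}
```

For the event-based control, a listener on the `beforeinstallprompt` event can call `event.preventDefault()` to stop the browser from promoting the install in its own UI.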
Finally, there's a new JavaScript API that just landed earlier this year called getInstalledRelatedApps, and this is gonna let developers inspect if the user has native apps installed on the device that are associated with the origin that the user is currently on. And just to be clear, this isn't gonna let you see any app that the user happens to have installed on the device. The app has to be associated with the origin that the user is on. This API does allow for a lot of programmatic flexibility for developers. So it means you have full control. Which experience do you wanna promote to the user? You could use, for example, user behavior. You could provide the user with preferences in your HTML. It's really up to you how you wanna use this control, but it does give you, as the developer, a lot of control over what you promote to the user and when it happens. So this means I can use this API to check if my native application is installed and make decisions like showing the install prompt or not. What if my native app is using Trusted Web Activity? Will this API return it? So yes, absolutely. This does mean that you can use the API to check if your native app is installed. A Trusted Web Activity is really just an Android app, so it will show up just like any other Android app, and this API will return it. It will also return apps that have been installed from the web browser, so a PWA installed from the web browser will also get returned by getInstalledRelatedApps. Cool. On the developer experience side, we've been working on tools like Bubblewrap to help developers build their project using Trusted Web Activity. But many developers wonder if it wouldn't be easier if they could just copy-paste a URL into the Play Store. That's a great question.
So the reason why we've been focusing our efforts on making this easier for developers with Bubblewrap is that we can create a much more powerful, much more flexible system for developers building Android apps using a command-line utility like Bubblewrap than what we could do in a web interface. App stores are really a different ecosystem, and they have different requirements and policies. And we believe that developers should have the powerful tools that they need to rethink the experience of their applications for this environment. And we also want to avoid giving developers the perception that they can just drop a website into the store without thinking about that experience. So there are all kinds of design decisions that should go into building an Android app, whether or not it's a PWA powering that app. For example, what's the splash screen gonna look like? When are you gonna hide it? When are you gonna show content? Do you need notifications? There wouldn't be this kind of flexibility for configuring these options with a web interface. That being said, we want it to be as easy as possible. So we've worked to streamline it to the maximum extent that we can with Bubblewrap and make it a powerful but flexible and easy-to-use developer tool. What about Play Store policies? Do Android apps that use a PWA inside a Trusted Web Activity still need to comply with those policies? I think you said the magic word there, which is Android app. It doesn't matter how your Android app is built, whether it's built using PWAs and web technology or whether it's built using Java or Kotlin. Play policies apply to all Android apps distributed in the Play Store, and therefore they also apply to Android apps built using progressive web apps. Gotcha, so the same store policies apply. What about applications designed for children? Can developers use web technology in those apps?
If the target audience for your application is children, you need to comply with the Play Family Policy requirements, and these requirements are intended to help keep minors safe from inappropriate content. Unfortunately, that's really difficult for the review teams to evaluate with web apps, where the content can change, and not only can the first-party content change, but it's really easy to include third-party content inside of a web application, which can be unintentionally inappropriate. Let's imagine that you are using an advertising network or something else where you're loading content in from a third-party site. You might not be able to verify yourself for sure whether or not that content is always going to be appropriate. So for the time being, it's not possible to build Android apps using Trusted Web Activity that comply with Play Family Policy. We are working on ways to make this possible in the future. Got it, so it seems PWAs are crossing over ecosystems and developers need to adjust some of their expectations for that. Developers using Trusted Web Activities are also expected to meet quality criteria. Why do those criteria exist? Yeah, nobody wants an app store that's going to be cluttered with low-quality apps. And we also want developers to succeed with their apps in the Play Store. Keep in mind, something that's really different about distributing through the Play Store compared to just building a PWA on your own site is that these apps have user ratings and reviews. And we want to make sure that developers are set up for success to get good ratings and reviews for their app. So at a minimum, users expect apps that they install from the Play Store to look and feel app-like, to be fast, to work offline. And that's why we have quality criteria for PWAs in the app store. And this is exactly what the progressive web app criteria have been intended to do from the very beginning.
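The "work offline" piece of those criteria usually comes down to a service worker fallback. Here's a minimal sketch; the routing decision is written as a pure function so it can be exercised outside a service worker, and the file and cache names are illustrative:

```javascript
// Offline fallback sketch: serve a cached page when a navigation fails.
const OFFLINE_URL = '/offline.html';

function offlineFallback(request, fetchFn, cacheMatchFn) {
  // Only intercept top-level navigations; let other requests fall through.
  if (request.mode !== 'navigate') return null;
  return fetchFn(request).catch(() => cacheMatchFn(OFFLINE_URL));
}

// In sw.js this would be wired up roughly like:
//   self.addEventListener('install', (e) => {
//     e.waitUntil(caches.open('offline-v1').then((c) => c.add(OFFLINE_URL)));
//   });
//   self.addEventListener('fetch', (e) => {
//     const r = offlineFallback(e.request, fetch, (u) => caches.match(u));
//     if (r) e.respondWith(r);
//   });
```

With something like this in place, users who go offline get your fallback page rather than the browser's error page.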
So first I want to share that there are certain types of events that can happen to a web application that are effectively a crash. And these are things like a 404 happening. These are things like failing an offline check when the user goes offline and showing the Chrome Dino. This is effectively a crash. Failing the Digital Asset Links verification, which is something you need to do with a Trusted Web Activity to verify that you are the owner of the content and the owner of the application in the app store. So if any of these things happen, starting in Chrome 86 in October, we are going to be mapping those crashes into Android vitals crash events. And you will see Android apps begin to crash if users are running into 404s in your application, et cetera. So that's just something that developers should be aware of, and we'll be making another announcement with more details about this. The second thing is on the performance side. Right now, we don't have a date to announce with respect to enforcement of performance, but developers should be aware that the criteria is a full Lighthouse PWA badge and a Lighthouse performance score of 80 or better. And you can use webpagetest.org/easy or PageSpeed Insights against your start URL as the fastest way to check whether or not you're meeting this criteria, because having the app launch quickly is an essential part of having a great application experience for your users. We will be providing a lot of notice as to when enforcement will begin. It will be in 2021, it won't be in 2020, for the performance and PWA badge scores. So expect to hear more about this later in the year. So, last one. Twitter has recently launched their PWA on the ChromeOS Play Store, which was quite exciting. Can you tell us more about how they did it and when this is becoming generally available? I'm really excited about this. Right now, this is an early access program and it requires manual intervention from our team members.
So it's not something that we can extend to everyone in the community, but we're working on getting it to general availability, and when that rolls out, it'll be possible for everyone to just do this themselves. I'll give you a hint: it uses Trusted Web Activity. So if you're building a great Trusted Web Activity, it'll be really easy to make your progressive web app available in the ChromeOS Play Store in the future. And I hope we can share more about this in the second half of the year. So wow, Trusted Web Activities are coming to ChromeOS, nice. Well, I think we covered a lot. I wish we had more time to discuss. I do too. So for those of you who are watching, please do reach out to us on Twitter if you wanna give feedback. And if you're watching this live, please join us on the live chat. We'll see you there, and thanks so much for watching. Yes, thanks for watching. Hi everyone, thanks for joining. I'm PJ, I'm a product manager on the Chrome web platform team responsible for progressive web apps, notifications and permissions. Today's talk is about quieter notification permission prompts and how recent changes to how Chrome handles notification permission requests can make browsing the web a little better for everyone. There's also important information in here for developers who use notifications, to help you improve your user experience, improve your notification accept rates, and to tell you about upcoming changes that will detect and flag abuse of notification prompts or content. If you're not familiar with them, web notifications are a channel for communicating timely and contextually relevant information to the user. Mostly these work just like push notifications in mobile apps, except they can also work on desktop, on Windows and Mac, as well as smartphones. On Android, for example, web notifications appear in the notification drawer, and on desktop, they typically appear in the top right corner of the screen.
In some cases, notifications aren't just helpful. They are almost essential to the app's functionality. For example, if you had an incoming call from a communication app like Google Duo or Chat, that's not something you want to know about later. You need to know about it right away. Of course, not everyone uses apps that require notifications, and not all websites are putting the needs of their users first. That means that we are seeing a lot of websites out there that are misusing notifications in ways that are annoying or could be abusive. Before we get into that though, I want to talk about how users get enrolled in notifications. To receive web notifications, a user needs to accept a notification permission request. When websites prompt users out of context for a notification, such as when a user first arrives on the website, it can be a pretty annoying distraction both from the browsing experience itself as well as from the website's content. Worse, some abusive websites look for ways to trick users into accepting notifications that are then used to promote malware or undesired content. I want to cover why we have notifications in the web platform in the first place in a little bit more depth. The web platform is there to enable web developers to create powerful applications, and web notifications are part of that toolkit. Without notification support, there would be entire types of apps that would be simply impossible to build using web technology. So for example, messaging apps, calendars, e-commerce or food delivery notifiers, taxi or ride-sharing apps all depend on notifications to provide a timely tap on the shoulder to the user. And while you can imagine that some of these apps might be usable without notifications, you can see that most of the time you're probably gonna want them. We've also all equally experienced some of the misuse of notifications though.
That includes things like unwanted marketing, promotions or content that just isn't very important or relevant to us at a given moment. To address this problem, starting in Chrome 80, which was released in January of 2020, we started making changes to how these notification requests work to help make browsing the web safer and less interruptive. We're gonna get into that new UI in the next slide. In Chrome 80, we added quiet notifications UI. Quiet notification UI is less interruptive, but it still lets the user know that the request has been made. There's a little bit of animation to catch the eye, but on desktop, the dialogue is in the omnibox, so it doesn't actually cover any part of the web content. On mobile, the notification prompt used to be a modal in normal UI, but in quiet UI, it's an easily dismissed info bar at the bottom of the screen. Quiet notification UI aims to reduce the visual priority and interruptiveness of notification requests. On desktop, which you're seeing in this example on the left, you'll notice the bell icon initially animates with text, indicating that notifications are blocked on the site. On mobile, the quiet UI is now an info bar. In both of these cases, in-product help explains to the user why notifications were blocked on the site. Quiet notifications UI was created specifically to address the concerns I mentioned earlier in this talk. We receive frequent complaints from users in Chrome feedback about disruptive notification permission prompts or unwanted notifications. That being said, there are services with tens or hundreds of millions of users, like messaging apps and calendars, that are depending on timely web notifications every single day. Let's talk for a moment about how users get enrolled in quiet notification UI. There are several ways that this can happen. First, users can just enroll themselves manually by changing their preferences in Chrome settings.
Second, sites that have very low accept rates for notification permission requests will be automatically enrolled in quiet notifications UI. Currently this is sites in the lowest few percentiles of notification accept rates. So the absolute rate needed for quiet notification UI does change over time, because we are using percentiles. We'll also periodically increase the accept rate percentile that's needed to preserve normal notification UI. We always keep a control group of users that are in normal notification permission UI so that if a site's accept rates improve, we can remove it from quiet UI enforcement. Third, there are some users who almost never accept any notification permission request. These users simply don't want notifications. And for these users, we adaptively enable quiet notification mode on their behalf in Chrome settings if they repeatedly block notification requests. As sites improve their behavior and use of notification permission requests, we expect that there will be fewer and fewer users who are adaptively placed in quiet notification UI mode. Finally, and this is starting in Chrome 84, which is coming up soon, we're gonna begin enforcing against abusive notification prompts that try to mislead users, phish for private information, or promote malware. In this case, in addition to quiet notifications UI, the user is going to be advised in the notification prompt that the site may be trying to trick them. So what should you do to make sure your website is not enrolled in quiet notification UI? Well, first and foremost, if you're prompting users to enroll in notifications as soon as they arrive on your website, please stop doing that. This is the easiest way to improve your notification accept rate. Very few users will accept a notification from a site they're visiting for the first time. And if you think about it, why would they? We're all experiencing information overload.
Wait until you know your user better and you know you can add value for your user before you prompt them. You can and should prompt your user to accept notifications when there's a clear user benefit, and in the context of the user's journey in your application. Websites that ask for notification permission in a context where the benefit is clear to the user have 80% accept rates or higher. That should be your goal. Even if you do the best possible job with your notification prompt UX, it's possible that some of your users may be in quiet notification UI mode. The first thing you wanna check here is to make sure that the accept rates on your site are what you really expect them to be. Notification accept rate data is in the Chrome User Experience Report, which is a public database containing important information about real-world Chrome metrics for popular destinations on the web. There's a minimum number of users and decisions that are required for data to be available in the Chrome User Experience Report, and that's to help with preserving visitor privacy. So if your site doesn't have data in the Chrome User Experience Report, you may need to get that information from somewhere else. For example, most notification service providers will have this instrumented so that you can check your accept rates, or if you've rolled your own notification implementation, you may need to add your own instrumentation. It's also a good idea to look at the notification accept rates of sites that are like yours in the Chrome User Experience Report so you can get a sense for what the benchmark accept rates are, and aim to be better than those. The article linked here in the slide will give you more details about how to use the Chrome User Experience data set to learn more about users' notification permission prompt decisions on your website. So the second thing you need to think about is: how can you make your notification requests more in context?
I know I mentioned this before, but it bears repeating. You wanna make absolutely sure that you're asking for notification permission at a moment in the user's journey that makes sense to them. In this example, we're showing a notification request the first time the user receives a response to their first chat. This is a perfect moment. Even with quiet notification UI, it should be obvious to the user why they would want to accept notifications. The activity and motion in the web app, combined with the motion of the browser prompt, should be sufficient cues for the user to enroll in notifications if they want them. If your user doesn't accept notifications with this in-context pattern, there's a pretty good chance that they just don't want notifications, and you should respect that decision. Before I finish, I wanna share a little bit about what's coming next for notifications. First, we're planning to increase the accept rate percentile that's needed to have normal notification prompts. Since this is a percentile, something to keep in mind is that if other sites are improving their notification UX and you're not, your site may be slipping into a lower percentile, and quiet notifications may be activated for your site. If your site has a notification accept rate of over 50%, you're in safe territory, but we recommend aiming for 80% or better. Second, Chrome places a high priority on user privacy and security, and we intend to take more steps to protect users from abusive notifications in the future. That includes more protections from abusive notification content as well as retroactive action to help users who may have already enrolled in notifications from abusive sites prior to the release of Chrome 84. Most important, as we improve the signal-to-noise ratio of the web notification ecosystem, we hope users will come to view notifications as being more helpful.
If this happens, it means we're doing a good job protecting users from unwanted notification prompts and unwanted notification content. Ultimately, this will help developers who use notifications for key functionality in their apps, as users are more likely to accept notifications when they have less reason to be worried about spammy or abusive notifications. Thanks again for joining today, everyone. Have a great day. True or false: IndexedDB is limited to 25 megs. False, gone are the days of tiny storage quotas. True or false: local storage should be avoided. True, it's synchronous and may cause performance issues by blocking the main thread. All right, here's another one. True or false: cookies are a great way to store data. False, they've got their uses, but should never be used for storage. How about this one? AppCache is a great way to make your app work offline. Yeah, trick question, absolutely false. AppCache is awful and it's going away soon, thankfully. So how should we be storing data and caching our critical app resources on the client? How much can we store? How does the browser deal with eviction? And be sure to stick around to the end, and I'll tell you how you can start Chrome with only a tiny storage limit so you can test what happens when you exceed your storage quota. I'm Pete LePage. Let's dive into storage on the web. Modern storage makes it possible to store more than just small chunks of data on the user's device. Even in perfect wireless environments, caching and other storage techniques can substantially improve performance, reliability and, most importantly, the user experience. With the Cache Storage API, you can cache your static app resources like HTML, JavaScript and CSS, ensuring that they're always instantly available. And with IndexedDB, you can store all kinds of data: article content, users' documents, settings and more. IndexedDB and the Cache Storage API are supported in every modern browser.
They're both asynchronous and will not block the main thread. They're accessible from the window object, web workers and service workers, making it easy to use them anywhere in your code. There are several other storage mechanisms that are available in the browser, but they've got limited use and may cause significant performance issues. If you're concerned about storing large amounts of data on the client, don't be. Unless you're trying to store several gigs, modern browsers typically won't even bat an eye. And even then, it really comes down to the amount of disk space available on the device. Of course, implementations vary by browser. Firefox allows an origin to store up to two gigs. Safari allows an origin to use up to one gig, and when that limit is reached, Safari is currently the only browser that'll prompt the user to increase that limit. And Chrome, well, look, it's a little complex, but stick with me here. Chrome and most other Chromium-based browsers limit storage to 80% of the total disk space, and each origin can only use 75% of that. For example, if you had a 10-gig hard disk, Chrome would limit its storage to eight gigs. Then each origin would be limited to six gigs. Essentially, each origin is allowed to use up to 60% of the total disk space. It sounds complex, but there's an easy way to see what's available. In many browsers, you can use the Storage Manager API to determine the amount of storage that's available to the origin and how much storage you're already using. It reports the total number of bytes used and makes it possible to calculate the approximate bytes remaining. Unfortunately, the Storage Manager API isn't implemented in all browsers yet, so you must use feature detection before using it. But even when it is available, you still need to catch over-quota errors. In some cases, and I'm looking at you, Chrome, it's possible for the available quota to exceed the actual amount of storage available.
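The quota arithmetic and a feature-detected call to the Storage Manager API might look like this (`chromePerOriginQuota` just illustrates the numbers described in this talk; it is not a real API):

```javascript
// Chrome's documented limits: the browser pool is 80% of the disk,
// and a single origin may use up to 75% of that pool (60% of the disk).
function chromePerOriginQuota(diskBytes) {
  const browserPool = diskBytes * 0.8; // Chrome's pool: 80% of the disk
  return browserPool * 0.75;           // each origin: up to 75% of that pool
}

// Feature-detect before using navigator.storage.estimate().
async function storageSummary() {
  const nav = globalThis.navigator;
  if (!nav || !nav.storage || !nav.storage.estimate) return null;
  const { usage, quota } = await nav.storage.estimate();
  return { usage, quota, remaining: quota - usage };
}
```

For the 10-gig disk in the example, `chromePerOriginQuota(10e9)` works out to six gigs, matching the 60% figure above.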
Most Chromium-based browsers factor in the amount of free space when reporting the available quota. Chrome does not, though, and it will always report 60% of the actual disk size. This helps to reduce the ability to determine the size of stored cross-origin resources. So what should you do when you go over quota? Most importantly, you should always catch and handle write errors, whether it's a quota exceeded error or something else. Then, depending on your app design, decide how to handle it. For example, delete content that hasn't been accessed in a long time, or remove data based on its size, or provide a way for users to choose what they want to delete. Both IndexedDB and the Cache API throw a DOM error named QuotaExceededError when you've exceeded the quota available. For IndexedDB, the transaction's onabort handler will be called, passing an event. That event will include a DOMException in the error property, and if you check the name of the error, it'll return QuotaExceededError. For the Cache API, writes will reject with a QuotaExceededError DOMException. Data stored in the browser can be cleared in a couple of ways. It's most commonly initiated by the user choosing to clear data in the browser's site settings panel. But it can also happen when faced with storage pressure, like low disk space. In this case, browsers typically automatically delete data from the least recently used origins and continue deleting until the storage pressure has been relieved. If the app hasn't synced data with the server, eviction can cause data loss, and it means that the app won't have the resources needed to run, both of which can lead to a negative user experience. Thankfully, research by the Chrome team shows that this doesn't happen very often, and it's far more common for users to manually clear storage. Thus, if a user visits your site often, the chances are small that data will be deleted.
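Catching those write errors might look like the following sketch (the cache name is made up, and `isQuotaError` is just a tiny illustrative helper):

```javascript
// Recognize a quota failure by the DOMException's name.
function isQuotaError(err) {
  return !!err && err.name === 'QuotaExceededError';
}

// Cache API example: try to cache a page, and recover if we're out of space.
async function cachePage(url) {
  try {
    const cache = await caches.open('pages-v1');
    await cache.add(url);
    return true;
  } catch (err) {
    if (isQuotaError(err)) {
      // Out of space: e.g. evict least-recently-used entries, or ask the user.
      return false;
    }
    throw err; // some other failure: surface it
  }
}

// For IndexedDB, the same check goes in the transaction's onabort handler:
//   tx.onabort = (event) => {
//     if (isQuotaError(event.target.error)) { /* free up space */ }
//   };
```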
Let's take a look at a specific example of how automatic eviction might happen in Chrome. Origin A is the least recently visited site, Origin B is the next least recently visited site, and so on. Origin E and Origin K are getting close to their quota limits, but they haven't reached them yet, and the overall usage is less than the total quota, so nothing is gonna be evicted. Origin B has a star next to it because it was granted persistent storage, meaning that it can only be deleted by the user. Check out my article on web.dev for more info about persistent storage, when you should be using it, and how to request it. Now, let's say the user visits Origin N again, which happens to be a music-playing site. The user saves a few more songs for offline listening. Now, each origin is still within its quota limit, but Chrome has exceeded the overall limit. To get back under the overall limit, Chrome will start evicting stored data from the least recently used origin first, and continue until it's back under the total limit. Firefox and other Chromium-based browsers work in essentially the same way. Safari is a little different. When it's out of storage, it will prevent anything new from being written, but Safari also recently implemented a new seven-day cap on all writable storage, including IndexedDB, service worker registrations, and the Cache API. This means that after using Safari for seven days and not interacting with a site, it will evict all content for that site. This eviction policy does not apply to progressive web apps that have been added to the home screen, essentially installed PWAs. Check out the WebKit blog linked in the description for complete details. Modern computers typically have large hard drives, which makes it hard to test the over-quota failures. So here's a little pro tip. Create a small RAM disk. Here, I've created a 500-meg RAM disk on my Mac. Then, start Chrome using the --user-data-dir flag.
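On a Mac, those two steps might look like this (the sector count gives roughly 500 megs; adjust to taste):

```shell
# Create and mount a 500 MB RAM disk (1024000 sectors x 512 bytes)
diskutil erasevolume HFS+ RAMDisk $(hdiutil attach -nomount ram://1024000)

# Launch Chrome with its profile, and therefore its storage, on the RAM disk
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
  --user-data-dir=/Volumes/RAMDisk
```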
This tells Chrome to store the user profile and user data on the RAM disk. Chrome now thinks my disk is only 500 MB, and thus limits my storage quota to only 300 MB, which I can quickly fill. This makes it much easier to verify that my code behaves properly when it hits those quota exceeded errors. Chrome DevTools also has helpful features for understanding what's going on with the data that you've stored. In the Application panel, the Clear Storage view shows you how much storage you're using for the origin and makes it easy to clear some or all of the data that you've got stored. The Storage section lets you see what's in local and session storage, as well as what's in IndexedDB, including the actual databases and even the individual entries. And the Cache Storage view shows you what's stored in cache storage. Gone are the days of limited storage and prompting the user to store more and more data. Using the Cache Storage API and IndexedDB, you can effectively store all the resources that your app needs to run. Be sure to check out my article, Storage for the Web, where I've got additional details and info on some of the not-so-good storage mechanisms. Then, check out my article on persistent storage to learn how you can protect your data from being blown away, even when the device is facing storage pressure. See you soon. Hi, everyone. My name is Thomas, and today I want to talk to you about the explorations that we've been doing with the Zoom team over the past few months and some of the specific advanced APIs that we've been exploring together. As you've all probably seen, Zoom has become a staple in many homes, and so it's critical that we're able to provide a good experience directly through the browser. Zoom does have a web version today, but compared to its native client, it's missing some features and sometimes misses the mark on performance.
We wanted to change this, and so even before COVID hit, we started working with the Zoom team to understand exactly what changes they would need and what new things in Chrome they would want to create a truly great experience. Now, Zoom is, of course, a video conferencing application, and you can't talk about video conferencing on the web without talking about WebRTC. WebRTC is a really great full-stack solution that provides a complete package for achieving video conferencing on the web. WebRTC was built and standardized about 10 years ago and now ships in all major browsers. This makes it the best choice if you want a complete solution with broad support across browsers. WebRTC's strength of being that complete solution can, however, also be a challenge for someone like Zoom, who have their own custom protocols and their own architecture. Zoom would rather have a set of simpler, low-level APIs that they can then build their own architecture and system on top of themselves. And the three specific ones that we've been exploring, and that I want to talk to you a little bit about today, are WebAssembly SIMD, WebTransport, and WebCodecs. I'll mention from the start that all of these are fairly cutting edge and most of them are in active development. So while they're all in a place where you can start to play with them, these aren't shipping APIs just yet. But hopefully this presentation will cover some of the early parts of them, and by the time you're watching this, they might have already shipped and you'll be able to actually use them directly. So first of all, I want to talk about WebAssembly SIMD and how it can provide really high-performance code. Most of you have probably heard about WebAssembly already, but as a recap, WebAssembly is a new low-level binary format for the web that is compiled from other languages and offers maximum performance.
This means that you can take something like C++ or Rust and then compile it into WebAssembly before shipping it to the client. WebAssembly has been out for a while and has been shipping in all major browsers for a while, but we're continuing to expand it with functionality such as SIMD, which stands for Single Instruction, Multiple Data. To explain SIMD, let's look at this incredibly simple loop that just adds two arrays together. Without SIMD, the CPU would go through this loop and add the elements together one by one, requiring four full steps. But with SIMD, the CPU is able to vectorize these elements and then take just a single CPU operation to add them. The best part is that because compilers are so smart, they can automatically detect these optimizations and do them for you. In Emscripten, you just need to pass the -msimd128 flag to emcc, and for Rust, you can pass -C target-feature=+simd128. This will cause the compilers to automatically find and use SIMD where possible. Sometimes you also wanna have more explicit control, and this is where you'll wanna use SIMD intrinsics, which let you use the SIMD instructions directly. This is more detailed than I can cover here, but if you're interested, I highly encourage you to go and check out these links. SIMD can be used for a huge variety of things, including highly performant ML models such as this hand tracking, a real-life invisibility cloak, and real-time automated background removal. And this last use case is just one of the things that Zoom is excited about using SIMD for. They have an awesome feature where you're able to automatically remove the background, so that people in conferences can't see all the random stuff that you have in your background, and then replace it with fun videos or animations. If you're interested in diving into WebAssembly and SIMD, here are some of the links that should help you get started.
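To make the vectorization idea concrete, here's the array-addition loop modeled in plain JavaScript. This is only an illustration of the lanes concept; real SIMD runs inside compiled WebAssembly, not in JS like this.

```javascript
// Scalar version: one add per element, so four elements take four steps.
function addScalar(a, b) {
  const out = new Array(a.length);
  for (let i = 0; i < a.length; i++) {
    out[i] = a[i] + b[i];
  }
  return out;
}

// SIMD-style version: a 128-bit vector holds four 32-bit values, so each
// "instruction" (one loop iteration here) covers four lanes at once.
function addLanes4(a, b) {
  const out = new Array(a.length);
  for (let i = 0; i < a.length; i += 4) {
    out[i] = a[i] + b[i];
    out[i + 1] = a[i + 1] + b[i + 1];
    out[i + 2] = a[i + 2] + b[i + 2];
    out[i + 3] = a[i + 3] + b[i + 3];
  }
  return out;
}
```

Both produce the same result; the SIMD version simply makes a quarter of the trips through the loop, which is what the compiler-generated Wasm exploits on real vector hardware.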
WebAssembly SIMD is doing an origin trial in Chrome M84, which will start rolling out to users on July 14th. If you aren't familiar with origin trials, they're basically a mechanism for you to test out features with production users while we may still be making some changes to the API. You can read more about origin trials at this link as well. So the next API I wanna get into is WebTransport, which is a next-generation networking API for client-to-server communication. Let's look at the definition of WebTransport. WebTransport provides bidirectional transport through both unreliable datagrams and reliable stream-based mechanisms. That's a mouthful, but let's see if we can't break it down and understand it a bit better. First, bidirectional means that it enables easy two-way communication. With something like HTTP, the connection has to be initiated by the client, and you have to send all of the requests at once and then wait for a response. With WebTransport, you don't have these limitations, and so you can enable a much more interactive session. Looking at the two different mechanisms, unreliable datagrams are one of the mechanisms for sending data through WebTransport. These datagrams are similar to UDP datagrams in that they are packets of information that get sent, but with no guarantees about delivery or ordering. Reliable streams, in contrast, are similar to TCP streams and provide reliable and ordered data communication. So now that we have an understanding of the definition of WebTransport, let's understand what you might actually use it for. Firstly, WebTransport will be the only mechanism to do unreliable data communication without leveraging WebRTC. And this is exactly why Zoom is interested in looking into WebTransport, because it'll allow them to simplify their deployment and bring it a little more in line with the other platforms that they support.
It's important to note, though, that WebTransport won't be just a pure UDP sockets API, since it does have some requirements around encryption and congestion control. It does offer an alternative to WebSockets, and to understand exactly how it compares to WebSockets and WebRTC, let's look at this chart. To understand the differences, let's dig into each of these pieces. First, WebTransport and WebRTC both offer reliable and unreliable delivery, while WebSockets only offers reliable delivery. WebTransport is an in-development API, while both WebSockets and WebRTC are widely available. While WebRTC provides a fairly high-level, complete solution to the problem of video conferencing, WebTransport and WebSockets are both much lower-level APIs that don't solve everything for you but give you more of that basic access. WebTransport also enables multiple cancelable streams, whereas WebSockets can only do a single stream, and WebRTC can also do multiple streams, but they aren't cancelable. So here is a quick example setup for how you can actually use WebTransport. In this part of the code, we really just set up our new QuicTransport, which is a specific subtype of WebTransport, and create that object, passing in the URL that we wanna connect to. Then we just set up some simple logging functions and await the transport being ready for us to use. Then we can simply grab the writer from the sendDatagrams function of our transport object, which we can then use to send data at any point. Remember that this data that we send does not have any guarantees of delivery or the order that it will be delivered in. Next, let's look at how you can actually read data from the server. Here you see a simple example where we get the reader from the getReader function and then, in a classic while (true) loop, we just read things from that reader, detect when we're done, and console.log out the actual values that we're able to read.
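Put together, the flow just described might look something like this. It's a sketch only: the QuicTransport name reflects the early origin-trial shape of the API (which may change), and the URL is made up.

```javascript
// Datagrams carry raw bytes, so text needs encoding first.
function encodeDatagram(text) {
  return new TextEncoder().encode(text);
}

// Browser-only sketch of the QuicTransport send/receive flow.
async function runTransport() {
  // Create the transport, passing in the URL we want to connect to.
  const transport = new QuicTransport('quic-transport://example.com:4999/demo');

  // Simple logging, then wait until the transport is ready to use.
  transport.closed
    .then(() => console.log('Connection closed normally'))
    .catch((e) => console.error('Connection closed abruptly', e));
  await transport.ready;

  // Sending: grab a writer from sendDatagrams(). There are no guarantees
  // about delivery or ordering for this data.
  const writer = transport.sendDatagrams().getWriter();
  await writer.write(encodeDatagram('hello'));

  // Receiving: get a reader and read in a classic while (true) loop,
  // stopping when done and logging the values we receive.
  const reader = transport.receiveDatagrams().getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    console.log(new TextDecoder().decode(value));
  }
}
```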
WebTransport is still very much in development, but we do have a blog post already published about how you can use it, and you can find that and more information at these various links. So now it's time for us to jump in to our last and exciting API, the WebCodecs API, which aims to offer direct codec access on the web. But first, let's back up and remind ourselves what exactly a codec is. A codec is a device or computer program which encodes and decodes a digital stream or signal. While many of us have not worked directly with codecs, we've all seen common examples like MP3, VP9, H.264, and many others. Codecs are actually used in tons of places throughout Chrome, such as the audio and video tags, Web Audio, WebRTC, and the MediaRecorder API. However, in all of these places where they're used, you can't really configure them and get pure access to just the codec part. For example, Web Audio allows for decoding a media file but needs to work on the complete file and doesn't support a streaming-based approach. MediaRecorder has some controls, but they're very high level, and you can't really configure it to support extremely low-latency use cases. As mentioned previously, WebRTC does give you a lot of this control, but it needs you to bring the whole package of WebRTC along, and without doing that, it's hard to get access to just the encoding and decoding parts that you want. As a result of this lack of configuration, some apps have started compiling these codecs to JavaScript and WebAssembly. Some of you may remember that this is how the awesome application Squoosh lets you resize and re-encode images. This approach is really cool and workable today, but it has some specific drawbacks. Specifically, it increases your bundle size, lowers the performance, causes slower startup time, and reduces power efficiency. Really, what you want is to avoid shipping these codecs altogether and just get the direct access that you need to the codecs that are already shipping as part of the browser.
And that's exactly the goal of WebCodecs. In their own words, the goal of WebCodecs is to provide web apps with efficient access to built-in (software and hardware) media encoders and decoders for encoding and decoding media. WebCodecs' main advantage is that it lets you get the direct access that you need to, again, build your own systems on top of the basic codec access. This completely unlocks some use cases like video editing, since you really need that frame-by-frame access and faster-than-real-time encoding and decoding to do this properly, something that's currently completely impossible on the web platform, except for maybe shipping codecs with WebAssembly. Additionally, many existing things that are possible on the web today, but only if you use WebRTC, things like cloud gaming, live streaming, and video conferencing, will get more flexibility about how they can interact with these codecs. Zoom, for example, is looking into using this API in conjunction with the WebTransport API. They're hoping that they'll be able to take encoded video frames and send them up to the server using WebTransport at the same time that they'll be fetching down encoded frames and then decoding them to show to the client, providing a really smooth, integrated experience. Next, let's look at some simple examples of how you can use the decoder part of this. Here, in this canvas setup part, we're really just grabbing a canvas's context and then making a very simple function to paint a video frame to that canvas by converting it to an ImageBitmap. Now, when you want to set up the decoder, you just call the new VideoDecoder constructor, and you set it up with the output function that we defined previously, as well as simply console-logging any errors.
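The decoder example described here might look something like the sketch below. It's browser-only and hedged: WebCodecs was still in active development at this point, so names like createImageBitmap on the frame reflect the early API shape, and the 'vp8' codec string is just an example.

```javascript
// Sketch of the early WebCodecs decoder flow (browser only; API may change).
function setUpDecoder(canvas) {
  // Canvas setup: grab the context so we can paint decoded frames.
  const ctx = canvas.getContext('2d');

  // Output callback: paint a decoded video frame to the canvas by first
  // converting it to an ImageBitmap.
  async function paintFrame(frame) {
    const bitmap = await frame.createImageBitmap();
    ctx.drawImage(bitmap, 0, 0);
  }

  // Decoder setup: wire up the output function and log any errors.
  const decoder = new VideoDecoder({
    output: paintFrame,
    error: (e) => console.error(e),
  });

  // Configure with the codec you want to use (example codec string).
  decoder.configure({ codec: 'vp8' });
  return decoder;
}

// Feeding it is then a one-liner per encoded chunk:
// decoder.decode(encodedChunk);
```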
Then you configure it with the codec that you want to use, and then you have this incredibly simple function where you just pass in your encoded chunk and call the decode function on your VideoDecoder, and it does the rest of the work for you. WebCodecs is still extremely new, but for those of you who are curious, you can go and check out the explainer to see what the team is currently working on. We will also be doing a web.dev post about WebCodecs, so if you're seeing this in the future, be sure to go and check that out. And that brings us to the end of our overview of these three new, exciting APIs that we've been exploring with Zoom. You've hopefully gotten a better understanding of some of these new and advanced APIs, and hopefully an understanding of how they will be bringing all of us closer together in the future. Thank you so much for your time, and I hope you enjoy the rest of the sessions. Goodbye.