Welcome back to Web Dev Live APAC Edition. It's been great to be together over the last two days, and now let's finish strong on day three. We've spoken about the role the web has played with you building websites that share COVID data online, allowing work and learning to happen from home, providing much-needed entertainment, and keeping up with world news. One of the big shifts we've seen is around retail and commerce. Online retail has nearly doubled its share of credit card spending, and e-commerce is now 30% of retail, growing 15% in just six weeks. There's been a flood of activity as businesses rushed to get online. Pedro Freitas, head of Loja Integrada, which is part of the VTEX platform, shared how they saw the number of sites being created double every single day. When they saw this, they kindly offered unlimited plans for health care clients. When it comes to food, we're back to 1992 levels of share between groceries and restaurant orders, and my omelets are certainly looking a lot better. Then there's the story of Foodler, where a couple of university students quickly built a site to allow people to order directly from the local street hawkers in Singapore, bridging the online-offline gap. And there's Sebastian Cabanero, a high school student from San Marcos, California, who jumped into action to create Food Banks, a website that helps you find, well, local food banks. We continue to be impressed with the ingenuity that some of you are displaying as you help people in need right now. And speaking of being impressed, we're really fortunate to have a really healthy framework ecosystem on the web, where different approaches can be explored in different ways. Vue is a popular and well-loved framework with particularly large usage in the APAC region, so we wanted to invite Evan You, the creator of Vue, for a chat. Hi, Dion. Nice to be here. Hey, Evan. Thanks so much for joining us.
Now, I'd kind of love to start at the beginning with the history of Vue and the story as you've gone through different evolutions, from version one to two and now what you're working on with version three. Sure. Vue started out as a personal experimental project back in 2013 and was first made public in February 2014. The initial goal was really just to create something that I would enjoy using myself. It was a very small library combining data binding inspired by Angular 1 with an ES5 getter/setter-based reactivity system. Because we used getter/setters, it was not IE8-compatible when it came out, which a lot of people took issue with, but I'm kind of glad we made that decision in the early days. Later on, we released version two and started adding more and more supporting libraries to it. For example, we added a router and a CLI. As we added these parts, it started to become more like a framework, but we still loosely followed the concept that it should be incrementally adoptable. So it's not as monolithic as some other solutions, but it's also no longer just a single library that does only one thing. The major change in version two was the introduction of the virtual DOM as the underlying rendering layer, which opened up a few interesting capabilities at the time, for example, server-side rendering or rendering to other platforms. Right now, we are hard at work finishing up version three, which is a major rewrite, and it brings a lot of interesting new features and changes. There are so many things, so I'll just mention a few highlights here. We added the Composition API, which exposes the lower-level reactivity APIs inside Vue for advanced logic composition and reuse in large applications. We rewrote the reactivity system using ES2015 Proxies, which greatly improved performance. The rendering layer rewrite has also seen great performance improvements.
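To make the Proxy-based rewrite Evan mentions concrete, here is a toy sketch of the idea, not Vue's actual implementation: property reads are trapped to record which effect depends on them, and property writes re-run those effects. The names `createReactivity`, `reactive`, and `watchEffect` are illustrative (the last two echo Vue 3's public API, but this code is a simplification).

```javascript
// Toy Proxy-based reactivity, in the spirit of Vue 3's rewrite.
// Simplification: the dependency map is keyed only by property name
// and shared across all objects created by this factory.
function createReactivity() {
  const subscribers = new Map(); // property key -> Set of effect fns
  let activeEffect = null;

  function reactive(target) {
    return new Proxy(target, {
      get(obj, key) {
        // Track: remember that the currently running effect read this key.
        if (activeEffect) {
          if (!subscribers.has(key)) subscribers.set(key, new Set());
          subscribers.get(key).add(activeEffect);
        }
        return obj[key];
      },
      set(obj, key, value) {
        obj[key] = value;
        // Trigger: re-run every effect that previously read this key.
        const effects = subscribers.get(key);
        if (effects) effects.forEach((fn) => fn());
        return true;
      },
    });
  }

  function watchEffect(fn) {
    activeEffect = fn;
    fn(); // the first run records dependencies via the get trap
    activeEffect = null;
  }

  return { reactive, watchEffect };
}

const { reactive, watchEffect } = createReactivity();
const state = reactive({ count: 0 });
let rendered = '';
watchEffect(() => { rendered = `count is ${state.count}`; });
state.count = 1; // the set trap re-runs the effect
// rendered is now 'count is 1'
```

One reason this approach improved performance over the ES5 getter/setter system is that a Proxy can intercept property additions and deletions too, with no need to walk the object and define getters up front.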
The library itself is now fully tree-shakable, so you will see smaller bundles. We added first-class TypeScript support and a more modular internal architecture for better maintenance and tooling integration. There's a lot more we're shipping in Vue 3. We're still hard at work ironing out some of the rough edges, but it's going to be ready very soon. Got it. That's great. So you've been working on the web for quite a while, with 2013 being the birth of Vue, and obviously you worked on the web before then. I'm kind of curious, as you've seen the evolutions and seen what's going on right now, how do you feel about web development in 2020? What are the most pressing issues? What are you focusing on? What are the problems that you see web developers have, and how do you feel like you and Vue can help there? Sure. I think the ecosystem right now is at a transition period where a lot of these new language standard features and new platform capabilities are finally stable, consistent, and shipped in all these evergreen browsers. All the mainstream browsers in their latest versions now have very consistent support for these latest features. And IE11 is finally on the way to phasing out, so that presents an opportunity for new stacks or new tooling to break free of the shackles of these legacy problems and rethink how we can best take advantage of these new features that we can finally use for the large majority of our users. For example, most of these major browsers now support native ES module imports, which presents some interesting technical possibilities that we can leverage to rethink our development workflow. Got it, yeah. I've actually been watching your Twitter feed as you explore some of these things. I've seen you working on Vite and VitePress. I was wondering if you could explain a little more about what these are doing.
Sure. Vite is, I would say, a web development build tool that combines a dev server with a build step. The interesting part is that in the dev server we are leveraging the browser's native ES module import handling to provide a bundle-free development experience. So instead of bundling your whole app, Vite lets the browser import your modules as needed using native imports, and only processes them on demand when the browser actually requests them. This has a few advantages over traditional bundling-based dev servers. The first is that when you have a large app, you may have a lot of modules in your application, but when you are working on a specific part of it, you may only need to import a subset of those modules. On-demand import compilation using Vite's approach allows you to only compile the modules for the part you're actually working on, which results in much faster server startup time in large applications. The second part is that Vite also supports hot module replacement on top of native ES module imports. With native ES module imports, because we don't have to crawl the whole bundle's scope when handling hot module replacement, the implementation is actually much simpler and much more efficient, so it will stay blazing fast even as your app grows, which keeps the development feedback loop fast no matter how big your app is. I believe this really presents an interesting option, because the development experience is so close to the old days when we first got into web development, where you just have an index.html page, you import some scripts, and you get things going. Vite tries to simplify the whole development setup and present a development experience as close to the original vanilla web development experience as possible, but without giving up all the modern tooling capabilities that we are used to.
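One small piece of a native-ESM dev server like the one Evan describes is rewriting "bare" import specifiers, which browsers cannot resolve, into server-handled URLs. The sketch below is a hypothetical helper, not Vite's actual implementation; the `/@modules/` prefix mirrors the convention early Vite versions used, and the regex is a deliberately naive illustration.

```javascript
// Browsers can only `import` URLs, so a bare specifier like 'vue' must
// be rewritten to a path the dev server can handle, e.g. /@modules/vue.
// Relative imports like './App.vue' are already valid URLs and are left alone.
function rewriteImports(source) {
  // Matches the specifier in static import statements whose first
  // character is not '.', '/', or a quote (i.e. bare specifiers only).
  return source.replace(
    /(\bimport\s+[^'"]*from\s*|\bimport\s*)(['"])([^'".\/][^'"]*)\2/g,
    (match, prefix, quote, specifier) =>
      `${prefix}${quote}/@modules/${specifier}${quote}`
  );
}

const input = `import { createApp } from 'vue'
import App from './App.vue'
createApp(App).mount('#app')`;

console.log(rewriteImports(input));
// Bare 'vue' becomes '/@modules/vue'; the relative './App.vue' is untouched.
```

When the browser then requests `/@modules/vue`, the server resolves it out of `node_modules` and serves it, transforming each file only when it is actually requested, which is the on-demand property Evan highlights.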
So it's still kind of experimental at this stage, but we're getting a lot of good feedback from users already. Got it. That's really exciting. I love this new trend toward not having to deal with bundling at dev time. In production we can still obviously do a lot of that and push out all of the optimizations we can; obviously, everything we can do for users makes sense. So that's Vite. What about VitePress? How does that tie into this? Oh yeah. If you don't know VitePress: VitePress is a remake of VuePress, and VuePress is a static site generator built on top of Vue which allows you to write markdown, use Vue components in your markdown, and write custom themes using Vue, as a Vue application. Now, VuePress was based on webpack, so VitePress is essentially a remake of VuePress using Vite as the underlying build tool. And we took the opportunity to bake in some improvements. First of all, that obviously provides a much faster development experience: faster server startup, faster updates when you edit a page. Another aspect is that we are baking a lot of performance best practices into VitePress. For example, we're seeing a lot of static site generators based on universal JavaScript frameworks, right? We have Next.js, and Nuxt for Vue, which are both excellent projects, but when we are using server-side rendering to send static content to the client, we often face the double payload issue, where you're sending the static content as HTML but you're also sending a lot of JavaScript which was used to render the same content and which is kind of useless on the client. And then we're spending time on the client to hydrate that content using JavaScript, right? So there's a lot of room for improvement here, and VitePress tries to tackle that.
It takes advantage of Vue 3's compiler to do static analysis, and we automatically detect all the static parts that won't change in your page. We then concatenate them, stringify them into static strings, and safely remove them during the production build. That decouples the JavaScript payload of your page from the content: you're only paying for the dynamic bits inside your page, and all the static content has no impact on the JavaScript payload size and no impact on client-side hydration performance. So I think that's a pretty significant improvement to how VuePress was handling things, and when we finish it, we're excited to see how much of a performance improvement this can bring to our users across the board. Got it. I have a feeling we're going to be hitting some Core Web Vitals thresholds here. That sounds great, thank you. Now, we're live-streaming right now in an APAC-friendly time zone, and we talked about how Vue is particularly popular in APAC. I was just curious what you think caused that and what we can learn from it. I think Vue's popularity in Asia, in APAC, definitely has a lot to do with me being Asian. Personally, I am very active in the Chinese developer community as well, and a lot of people probably heard about Vue through me. But at the same time, I think a more important aspect is good localization of our documentation. When we first worked on the documentation, because I am a native Chinese speaker, I knew a lot of Chinese developers struggle when they see really dense technical text written in English. It's not that they can't attempt to read it; it's just that when you are reading something that's not in your native tongue, it takes so much longer and is so much less efficient to learn. It takes much longer for things to click when you're learning something new in a language you're not used to.
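The static/dynamic split Evan describes can be illustrated with a toy, which is not Vue's actual compiler: static nodes are pre-stringified into an HTML string at build time and dropped from the JavaScript payload, while dynamic nodes must remain as render code for hydration. The `splitStaticContent` helper and node shape here are made up for illustration.

```javascript
// Toy build-time pass: static template nodes become a plain HTML
// string (zero JS payload, zero hydration cost); dynamic nodes stay
// behind so client-side JavaScript can hydrate them.
function splitStaticContent(nodes) {
  const staticHtml = [];
  const dynamicNodes = [];
  for (const node of nodes) {
    if (node.static) {
      // Static: serialize now; nothing ships in the JS bundle for this.
      staticHtml.push(`<${node.tag}>${node.text}</${node.tag}>`);
    } else {
      // Dynamic: must remain as a render expression for hydration.
      dynamicNodes.push(node);
    }
  }
  return { staticHtml: staticHtml.join(''), dynamicNodes };
}

const page = [
  { tag: 'h1', text: 'Getting Started', static: true },
  { tag: 'p', text: 'Install with npm.', static: true },
  { tag: 'span', text: '{{ downloads }}', static: false },
];

const { staticHtml, dynamicNodes } = splitStaticContent(page);
// Two of three nodes became an HTML string; only one needs client JS.
```

On a documentation page, which is mostly static prose, this kind of split is why the JavaScript payload can stay small no matter how long the content gets.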
I wrote the first draft, the first version of Vue's documentation, in Chinese, and later on when we had a bigger community, community members started to contribute more and more to these translations. They took over the maintenance of the Chinese docs, and we saw a lot of contributions to translate the Vue docs into other languages as well. So I think this kind of good internationalization and localization effort is definitely critical in helping Vue's adoption in these areas. And from my personal experience, what I've seen, particularly in a lot of areas where English is not the first language, is that the channel for local developers to keep up to date with the latest information, with the new things happening in the front-end world, is a few key community leaders who are proficient in English and translate the content for them. It's a great effort by these community leaders, but they are not obliged to do this, and when there aren't enough of them, it creates a bottleneck for content and information to reach developers in these areas. So if a framework or a tool or a community takes internationalization and localization as a first-class concern, it will definitely help a lot in reaching these developers at a much bigger scale. That makes a ton of sense. I've seen that on a few developer sites too. I remember one that tried to add translations just through machine translation, and they noticed that a lot of the developers were switching to English. They thought it was just that those developers were familiar with English and the like, but it was only when the community actually took the time to build really high-quality translations, like you're talking about, that everyone flipped back. So that makes a ton of sense.
Now, before we go, I happened to notice that you're a karaoke fan, as am I, and I just wanted to ask if you have a go-to song. Yeah, Don't Stop Me Now by Queen. Nice, nice. Yeah, mine is Sweet Child O' Mine. So on that note, before we start singing, I'm already getting ready to jump to the mic here. We'll have to have a karaoke session sometime, but thank you so much for joining us, Evan. Thanks for all that you do and all that the Vue community does for the web. Thanks for having me. Evan just spoke about his experimentation with a bundle-free developer experience, and on day one, we spoke about the work we've been doing to understand bundlers with tooling.report. Now, understanding what's going on in these bundles is really important, and it's one area that we're looking to explore in Lighthouse. So please welcome Paul Irish to tell us more. Thanks, Dion. So we all spend a good amount of time configuring our bundles and our approach to bundling strategies, but ultimately there's a lack of transparency when it comes to the JavaScript getting all bundled up and shipped into production. They're just these big files, and it's easy for us to treat them as black boxes. Some of us on the Lighthouse team have worked on community tools like source-map-explorer and Source Map Visualization to help understand what's happening inside of these files, and we've long wanted to bring some of that inspection into Lighthouse itself. So I'm going to show you a little sneak peek of some upcoming Lighthouse features that we hope are going to help a little bit. The first one is a new audit called Remove unused JavaScript. This is actually in Lighthouse 6.0, and it lists, file by file, the top JavaScript files with unused code, giving you an idea of what percentage, how many bytes, of each is actually unused. This is great, and it's just like what you see in the Coverage panel in Chrome DevTools.
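The report Paul describes boils down to simple arithmetic over coverage data: per file, unused bytes are total bytes minus executed bytes. The sketch below uses a made-up data shape and helper name, not Lighthouse's internal code, just to show the shape of the audit's output.

```javascript
// Summarize per-file unused JavaScript, largest offender first.
// Input shape is hypothetical: { url, totalBytes, usedBytes } per file.
function unusedJavaScriptSummary(files) {
  return files
    .map(({ url, totalBytes, usedBytes }) => ({
      url,
      unusedBytes: totalBytes - usedBytes,
      unusedPercent: Math.round(((totalBytes - usedBytes) / totalBytes) * 100),
    }))
    .sort((a, b) => b.unusedBytes - a.unusedBytes);
}

const rows = unusedJavaScriptSummary([
  { url: '/js/vendor.js', totalBytes: 400000, usedBytes: 120000 },
  { url: '/js/main.js', totalBytes: 150000, usedBytes: 110000 },
]);
// vendor.js tops the list: 280,000 unused bytes, 70% of the file.
```

The audit's real value, as Paul notes next, comes from going one level deeper: attributing those unused bytes to original source modules via source maps rather than stopping at the bundle file.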
But we can improve on this if we know a little bit more about the actual JavaScript bundles. We can explode them and see, for each bundle file, all of the original on-disk source modules, and understand how much of each was used and unused. This really helps us understand a bit more about the kind of code that is not actually being run. Another new audit that we're introducing is one that identifies duplicate modules across JavaScript bundles. In this case, you can see that Lodash is in two separate JavaScript bundles. We're really happy that this helps you see exactly the cases where you could probably reconsider your chunking strategy to make for something a little more optimized. Another new audit is called Legacy JavaScript. What we're doing here is trying to find all the cases where you're shipping JavaScript to production, specifically to modern browsers, where you're including things like polyfills or compiling down to a level that modern browsers don't really need. Modern browsers understand a lot of modern JavaScript, so we want to make sure that you're not over-compiling. Legacy JavaScript identifies polyfills and transforms and tells you exactly what it found in each file so that you can re-optimize your deployment strategy. Got it, and is this all powered by source maps? Yeah, exactly. Source maps are super useful, and they enable this really powerful analysis. And actually, if I can, we're working on an interactive UI for exploring this stuff in more detail. I want to show this; these are early mocks, but we wanted to provide that kind of rich, interactive treemap experience inside the tool as well.
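The duplicate-module audit Paul mentions is essentially an inverted index over the per-bundle module lists recovered from source maps. Here is a toy version with a made-up input shape, not Lighthouse's actual implementation:

```javascript
// Given per-bundle module lists (as recovered from source maps),
// report every module that appears in more than one bundle.
function findDuplicateModules(bundles) {
  const seenIn = new Map(); // module path -> list of bundle names
  for (const [bundleName, modules] of Object.entries(bundles)) {
    for (const mod of modules) {
      if (!seenIn.has(mod)) seenIn.set(mod, []);
      seenIn.get(mod).push(bundleName);
    }
  }
  // Keep only modules present in two or more bundles.
  return [...seenIn].filter(([, names]) => names.length > 1);
}

const report = findDuplicateModules({
  'main.js': ['node_modules/lodash/lodash.js', 'src/app.js'],
  'vendor.js': ['node_modules/lodash/lodash.js', 'node_modules/vue/vue.js'],
});
// Lodash shows up in both main.js and vendor.js: a chunking candidate.
```

A hit in this report usually means the bundler's chunk-splitting configuration let a shared dependency be inlined twice instead of extracted into a common chunk.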
So this UI might look a little familiar to you if you've used some of these tools before. It gives you a view across all your JavaScript files, and if there's a bundle and we can understand what's inside of it, we'll show it, and you can explore it in more detail. We're really excited to provide a bunch more information here and augment it with data around code coverage, so that you can see exactly what's happening in a table view to help prioritize the major things you should be paying attention to, and then explore in detail so you can really find out what's happening and what you could change. So again, this is early and will all change quite a bit, but we're really excited about bringing this experience to you. It's all coming soon to a Lighthouse near you. Got it. I always love opening up some of my old bundles and seeing all of the mistakes I've made in the past, so I'm excited to play around with this. Now, a few weeks ago we also shipped Lighthouse 6, and I believe that's the version where we started to include Core Web Vitals. Is that right? Yeah, that's right. All right, so for Core Web Vitals we've got these three metrics: largest contentful paint, first input delay, and cumulative layout shift. Though it's worth pointing out that FID, first input delay, is a field metric, and since Lighthouse is a lab tool, in its place we have another metric, TBT, total blocking time. We like to think of this as FID's lab companion. They're not measuring exactly the same thing, but they're both about interactivity; they're both about long tasks on the main thread and heavily influenced by them. So it really works well. In Lighthouse 6, you'll see these three metrics up top, and if you're looking at it and your report looks like this one, then you're like, wow, okay, these metrics are not great. It looks like I can make some improvements here. All right, but what's the next step?
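The relationship between TBT and long tasks that Paul describes can be written down in a few lines. Total blocking time sums, over every main-thread task longer than 50 ms, the portion of the task beyond 50 ms. (Lighthouse additionally restricts this to the window between first contentful paint and time to interactive; that windowing is omitted here for brevity.)

```javascript
// Total blocking time from a list of main-thread task durations:
// each task over 50 ms contributes its excess beyond 50 ms.
function totalBlockingTime(taskDurationsMs) {
  const BLOCKING_THRESHOLD_MS = 50;
  return taskDurationsMs
    .filter((d) => d > BLOCKING_THRESHOLD_MS)
    .reduce((sum, d) => sum + (d - BLOCKING_THRESHOLD_MS), 0);
}

// Three tasks: 30 ms (not blocking), 120 ms and 80 ms (both blocking).
const tbt = totalBlockingTime([30, 120, 80]);
// (120 - 50) + (80 - 50) = 100 ms of total blocking time.
```

This is why TBT works as FID's lab companion: a page full of long main-thread tasks in the lab is exactly the kind of page where a real user's first input is likely to be delayed.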
We've added some new audits to help point you in the right direction, so I'll go right through them now. The first one I'll show is avoiding long main thread tasks. Here we're listing the longest tasks you have on the main thread, how long they took, and what URL we can attribute them to. Then two more: we have the largest contentful paint element. The metric itself tells you at what time that paint happened, but that paint was associated with a particular DOM element, and this audit will just tell you which. Similarly for cumulative layout shift, you want to know what the actual shifts were. Well, these were the DOM elements that shifted, and these are the ones that contributed most to your total CLS value. So you could read this and see a content container and think, okay, yeah, I know what that is. But if you're looking at these other two items and wondering, okay, but which actual DOM element is this? If you run Lighthouse through DevTools, we have a nice little surprise for you. There we upgrade these element references, so if you hover over them, we'll apply the element-inspection hover tooltip that you normally get in DevTools, and if you click through, you'll see it in the DOM tree in the Elements panel. A nice bit of integration to help you understand exactly what you're looking at. Anyway, this is just a small slice of the new stuff in 6.0. Please, everyone, check out our blog posts for all the rest of the fun stuff. Got it. I'm always a sucker for these audits. It's great to be able to go from the metrics themselves and the scores and really go deeper to understand what's affecting them. Thanks so much, Paul. I love how we're making sure to open up as much data as we can with the Chrome User Experience Report.
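For context on the CLS audit above, each layout shift has a score defined by the Layout Instability spec as impact fraction (the share of the viewport affected by shifting elements) times distance fraction (how far they moved, relative to the larger viewport dimension), and CLS as originally defined is the plain sum of those scores. The sketch below is a minimal illustration of that arithmetic, not browser code; the shift values are made up.

```javascript
// Per-shift score: impact fraction x distance fraction,
// per the Layout Instability spec.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// CLS as originally defined: the sum of all shift scores on the page.
function cumulativeLayoutShift(shifts) {
  return shifts.reduce(
    (sum, s) => sum + layoutShiftScore(s.impactFraction, s.distanceFraction),
    0
  );
}

// Two hypothetical shifts: a banner affecting half the viewport that
// pushes content down by 10% of it, plus a small late-loading widget.
const cls = cumulativeLayoutShift([
  { impactFraction: 0.5, distanceFraction: 0.1 },
  { impactFraction: 0.1, distanceFraction: 0.05 },
]);
// cls is approximately 0.055
```

Knowing which DOM elements produced the largest individual scores, which is exactly what the new audit surfaces, tells you where reserving space (for example with explicit image dimensions) will pay off most.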
And I keep seeing great mashups and analysis from the community, such as the Onely map, which lets you zoom around the world and explore the performance characteristics for different device types and, obviously, different locations. Now, Lighthouse is a great tool to help you audit your web apps, and it helps you not only with performance, of course, but really across the board. We want you to be free to create the highest-quality web apps that give you the biggest reach to users across whatever device they're using. To hear more about the latest in PWAs and advanced capabilities, let's welcome Pete LePage. Awesome. Thanks, Dion. We believe that you should be able to build and deliver any kind of app on the web. Web apps should be able to deliver the same kinds of experiences, with the same capabilities, as native apps. Combining the installability and reliability of progressive web apps with our capabilities project, we're working to close the gap and help you build and deliver great experiences. To do that, we've been focusing on three things. First, we've been working hard to give you and users more control over the install experience: removing the mini-infobar, adding an install promotion to the omnibox, and more. One of my favorite things about the web is how ubiquitous it is, but we know that for some businesses, it's really important to have your app in the store. At Chrome Dev Summit, we previewed a library and CLI called Bubblewrap, then called Llama Pack, which makes it trivial to get your PWA into the Play Store. In fact, PWABuilder.com now uses Bubblewrap under the hood. In just a few mouse clicks, I can generate an APK that I can upload to the Play Store. Second, we're providing tighter integration with the operating system, like the ability to share a photo, song, or whatever else by invoking the system-level share service, or, the other way around, being able to receive content shared from other installed apps.
You can keep users up to date or subtly notify them of new activity with app badging, and make it easy for users to quickly start an action using app shortcuts, which will land in Chrome 84. And finally, we're working to enable new capabilities for scenarios that weren't possible before, like editors that read and write files on the user's local file system, or getting a list of locally installed fonts so that users can use them in their designs. Of course, there's plenty more, so stay tuned. But I hear, Dion, you've been playing with the TikTok PWA. That's right. My 10-year-old son, Josh, started to create some really fun TikTok content, even though he was pretty restricted these days to the house and the pets and the like. He kept bugging me to join in and play myself, and I was a little bit gun-shy, so to speak, on that side, but it actually gave me a great excuse to play with the really well-built PWA that the TikTok web team put together, which I didn't even know about. It was really a nice, rich experience. That's really cool. All of these things that we've been working on, things like the app shortcuts, are going to be a great addition for them: the idea that you can push and hold on the app icon and quickly jump in and create a new TikTok, or push and hold and say, hey, I want to see my friends' TikToks, or the ability to show app badges when a friend has posted something new. So there's plenty of great stuff coming that they can take advantage of, and so can you. That's excellent. Thanks again, Pete. Thank you. Now, with that, it's time to close out the last opener. It's been a great pleasure being with you over the last three days, but we're not quite done yet. We've got a great set of content coming up today.
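The app shortcuts Pete and Dion discuss are declared in the web app manifest via the `shortcuts` member (shipped in Chrome 84). Below is a minimal manifest sketch; the app name, URLs, and icon path are made-up examples, not TikTok's actual manifest.

```json
{
  "name": "Example Video App",
  "start_url": "/",
  "display": "standalone",
  "shortcuts": [
    {
      "name": "Create a new video",
      "url": "/create",
      "icons": [{ "src": "/icons/create-96.png", "sizes": "96x96" }]
    },
    {
      "name": "Friends' feed",
      "url": "/friends"
    }
  ]
}
```

Each entry's `url` is resolved within the app's scope, and long-pressing the installed app's icon surfaces these entries, which is exactly the "push and hold and jump straight in" flow described above.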
Next, you're going to hear from Google engineers who will help you improve the reliability of your experience with advanced patterns for building PWAs, then how you can get them into the Play Store, how to increase your conversion rate for notifications, and so much more. Now, if you're watching live, we'll be on the chat to answer your questions at web.dev/live and, of course, on YouTube. But the fun isn't going to end today. Thanks to our amazing Google Developer Groups all across the world, we've got a set of follow-up events in the weeks to come, where Googlers and Google Developer Experts, as well as experts from the local communities, will join you to share more insights and guidance. Just check out the regional events section on web.dev/live starting tomorrow to find the event in your time zone. Thank you so much.