So they asked me if I would do an advert for the Chrome Developers YouTube channel. And we thought that's too easy. So we're going to try and distract him using the ingredients we have available to us. Take it away, Paul. OK, so you can go to youtube.com/ChromeDevelopers and you can watch content on PWAs, performance, DevTools, find out about the content from Chrome Dev Summit, all the talks that have been going on. You can watch What's New in Chrome with Pete LePage. There's new content every week. So don't forget to go to youtube.com/ChromeDevelopers and subscribe. So, yeah, subscribe, because that's a good idea and there's good content with good people. All right, all right. Is it good cake? Pretty good. I don't remember that. It was such a strange moment in my life. Anyway, welcome to day two of Chrome Dev Summit. Yes, welcome, welcome. There's some energy in the room. There is some energy. I'm wondering if they've had enough sleep and so on. Well, there's plenty of people in the room and plenty of people on the live stream. People in the room, give us a cheer, a proper one. Go on. People watching via the live stream, give us a cheer. That doesn't work. You can't do that. I like the thought of somebody in their quiet office watching and going "woo". So today has a slightly different theme to yesterday. What is today's theme? Well, yesterday was about today, whereas today is about tomorrow. What that means is that yesterday was all about what you can do today with the features that are on the web, and today is more future-looking. So, you know, we're going to talk about some things from emerging and developing proposals to some new ideas. Right? You're going to talk about new ideas. That is basically what I said. But it's going to be a little bit like a round in the big web quiz, because as you're watching this stuff, you get to decide whether it's, like, future of the web or vaporware. Right? Does that work? I think so. 
But we really want your feedback on this sort of stuff as well, because some of it is just sort of far-out-there ideas and we want to know if we're going in the right direction or not. That's one of the great things about being here, right? Absolutely. So, to get us started, we need our day two keynotes, and for that, we'll be inviting Nicole Sullivan and Malte Ubl to the stage. On Twitter, they are stubbornella and cramforce. Please welcome to the stage Nicole and Malte Ubl. As we like to call them, Stubbornforce. Hi, and welcome to the day two keynote. My name is Nicole. I'm a PM on the Chrome Web Platform team. And my name is Malte. I work on AMP and JavaScript infrastructure at Google. So, actually, this is a little bit awkward today because we're the second keynote and Ben and Dion already talked about effectively everything yesterday. So, we thought we'd do something slightly different. So, normally at a Chrome Dev Summit keynote, you know, we'd walk through all the new exciting APIs that the web platform makes available for you. And we actually are going to do that, but first, we want to talk about something slightly different. We want to talk about web frameworks. Developers who build for the web often choose to use a framework. In the past, changes to the web platform didn't necessarily take frameworks into account, and changes to frameworks didn't necessarily take the web platform into account. We think both the web platform and frameworks benefit from a close collaboration, and so we're out to make that happen. Obviously, if you're building something simple, it makes sense to choose simple technologies. But as soon as an app is sufficiently complex, in most cases, developers choose to use a framework. A few months ago, I put out a very, very methodologically correct Twitter poll asking people why they choose to use frameworks and got back a whole bunch of answers. And this one in particular sort of resonated with a lot of people. 
You can't not use a framework. Your only real choice is to either use one that's open source, documented, tested, supported, maintained, mature, and proven, and I think we'd all throw "you can Stack Overflow stuff about it" into that one. Or option B is you cobble together some garbage, unmaintainable stuff yourself. I've made a few of those, actually. Yeah, I might have made one or two of those in my day before. And given that, we just want to start by recognizing that frameworks are part of the web, and that if you're using a framework, you are, in fact, using the platform. So let's go from there and kind of extrapolate to the full web stack that's based on that insight. So on the bottom, obviously, you have the web primitives. So that would be the DOM, the Fetch API, Service Worker, stuff like that. Then above that, we have built-in modules. Built-in modules are pretty exciting. They're a new thing. It's the idea that we can build in a layered way for the web. They're high-level APIs that solve something like virtual scrolling or carousels. I think we all love to hate the carousel, but if we're truly honest, they're on almost every site in the known universe. So we can build a high-level API like a virtual scroller, and then that drives out the low-level APIs that we need to make those experiences accessible, searchable, truly fast. Those low-level APIs can then be used by the entire ecosystem of virtual scrollers. That makes a big difference because it allows everyone to level up together. Right. Then one level up, we get the frameworks. So that would be like React, Angular, Polymer, or higher-level stuff like Next.js and Sapper that kind of give you everything that you need to build an application. So obviously, these aren't standardized. They're not web standards. But most applications use one of these, and so we think of them as part of the platform. 
The next part is web components, and you're probably thinking, wait, that doesn't make any sense. Web components are a web primitive or a web standard. I'm not talking about the web components standards. I'm talking about your web components, your date picker or, I don't know, tabs. Maybe the carousel. Maybe your carousel, sorry. Yeah, these are the web components that layer on top of your stack. Why it's important to put them there is because they're an important measure of interoperability between different parts of your system. Right, to kind of make a point, let's do a quick poll here. Who here in this room has built a React application? All right, that's most hands. So, no, keep it up, keep it up, keep it up. If anywhere in your company you also have an Angular, jQuery, Backbone, Polymer app, anything like it, put up both hands. Yeah. So if that's the case. I love your honesty. Wouldn't it be nice if you could use the date picker in both of those? If you could have a design system that spans your entire application suite, no matter what framework it's built in. So that's where we think your leaf components, web components, are really the right technology to build that. There is, however, a real problem with web development today. We're struggling to make stuff that's really fast and responsive and has buttery smooth animations. And I think we're struggling because in a bunch of ways, it's actually really hard. Not everybody struggles. There are examples of sites that achieve this. But at scale, what we observe is that our performance goals aren't being met. Right. And so what we're kind of observing is that in web dev today, we often have to make a choice between developer experience, how we feel as developers, and user experience, how users feel. And that shouldn't really be how things are. In the vast majority of cases, great developer experience and great user experience can go together. That's just not how it works today. 
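To make that interoperability point concrete, here is a hedged sketch: a hypothetical `<fancy-date-picker>` custom element (the tag name is made up for illustration) used from plain HTML and from an Angular template. Because it's a standard custom element, each framework just sees another tag:

```html
<!-- Plain HTML page -->
<fancy-date-picker value="2018-11-13"></fancy-date-picker>

<!-- Angular template: same tag; Angular binds to it like any other element -->
<fancy-date-picker [attr.value]="selectedDate"></fancy-date-picker>
```

The same tag could appear in a React render too, since React 16 passes unrecognized attributes through to custom elements, which is what makes leaf components a workable cross-framework layer.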
But if we can achieve those things coming together, the web would be better for everyone. This is why we're so excited about frameworks. Yes, frameworks sometimes make web apps slower. That's a reality. But they're also our best hope to make them faster. Right, so that's a bold statement. It is a bold statement. To prove that this is actually happening, we thought, wouldn't it be nice if we celebrated all the great improvements that frameworks have made this year? Let's start with React. They have done a bunch of foundational work. For example, they're working towards making code splitting something that's first-class supported in the framework, which is really nice. And then they've done a lot of work to break up the rendering of huge DOM trees into tiny chunks, so that if you have a big update, it doesn't lock down your browser for many seconds. Everything's kind of done in small breaks. This is a theme you're going to hear us talking about a few times. Yeah. Angular made improvements too. The Angular CLI enabled performance budgets. This is great because how often do you not realize that you actually alienated a bunch of users by adding that one more library, or npm installing something that pulled in a whole bunch of other things? Angular also did a bunch of work to remove unnecessary polyfills. That's fantastic. We don't want polyfills for the most modern browsers. Right. Speaking of which, Vue basically did the same thing with a thing called modern mode. So you only ship the modern code to modern browsers, right? And that's effectively the same idea. And that's why we're so excited about frameworks, kind of bringing this best practice to all the users. Similarly with another framework, making preloading and prefetching something the framework does by default. Polymer did some good work this year too. They're transitioning to LitElement for super small components. And they also got faster because, yay, Firefox shipped native web components support. 
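For a sense of what the Angular CLI performance budgets mentioned above look like, here is a sketch of the relevant `angular.json` fragment (the size thresholds are made-up examples, not recommendations). The build warns when a bundle crosses the warning limit and fails when it crosses the error limit:

```json
"configurations": {
  "production": {
    "budgets": [
      { "type": "initial", "maximumWarning": "300kb", "maximumError": "500kb" }
    ]
  }
}
```

Because this runs on every build, an accidental dependency that balloons the bundle gets caught in CI rather than by your users.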
Cool. Let's talk about Svelte. They got a bullet point for already being super fast. Yeah, it's kind of hard to... With Svelte, it was like, well, they're so fast. What do we even say about this? Right. But I think this was a great example. So they built a Hacker News app. And they did it in an idiomatic way. So not using super ridiculous hacks, right? So everything in that app, HTML, CSS, JavaScript, together is under 20 kilobytes, which is just amazing. AMP did some good stuff too. So this apparently is the only feature he shipped this year because now he's a manager. Yeah, I'm sorry. So he shipped a feature policy against synchronous XHR for all ads. If you're going to ship one feature, this is a pretty darn good one. Right. If you imagine, you get to put your own feature on the slide. Nice. He also reduced... Sorry, they also reduced... Yeah, reduced the JS size on the wire by 20% by enabling the Brotli compression algorithm. We love this kind of thing, right? Because how great is it that all you have to do is turn on a different kind of compression and you get a 20% reduction in size on the wire. Right. Moving on to Ember, which removed jQuery from the default bundle, by the way. I have a jQuery t-shirt. Just leaving that out there. Making their bundle size 20% smaller, which is great. And they did it in a way that's backward compatible so that people can slowly migrate to this. Yeah, it's pretty neat. They actually, as far as I understand, made it so that anyone can turn old code on and off using the same functionality. So that's pretty great. And then another theme here, implementing incremental progressive rendering with batched rehydration, which, again, comes down to that chunking-of-work theme which we're seeing here all the time. Great. So kind of to summarize this, we really want to get to a state where not only the super experts can make great web experiences. Everyone should be able to do it. 
And frameworks are an integral part of making that happen. Today we feel that we, and by we, I mean browsers, frameworks, and tools, have under-invested in tools that focus on combining great developer experience with a focus on user experience. We can make this happen as a community, because by integrating performance best practices into frameworks, all users of those frameworks automatically get all the benefits. And that's how we think we can achieve great outcomes for users at the scale of the web. To make this happen, we're announcing three things today. The first one is we are including frameworks in the Chrome intent-to-implement process. How many folks know about our intent-to-implement, intent-to-ship stuff? A few. Great. That's awesome. So we have basically two important checkpoints. We have more than that, but two very important checkpoints when we're going to ship a new feature. One is our intent-to-implement, and the other is our intent-to-ship. At both points, when we're about to build something and when we're about to ship something, we want to get a lot of feedback. And so we go through this intent process in order to intentionally go and draw in that feedback. Previously, we had listed web developers as folks that we wanted to get feedback from, but now we're explicitly adding frameworks to that intent-to-implement process. Right. And secondly, we want to put real dollars behind this. So we're starting with a budget of $200,000 to kickstart developing performance-related features in frameworks. In particular, we'll make available a list of performance features that we'd love all frameworks to provide to their users by default. Folks who are working on a framework can ask for funding to do the actual work. We're still working on the exact details, but if you're interested, check out this bit.ly link for more information. The third thing that we want to do is increase collaboration between frameworks and the Chrome team. 
It's funny to announce this today because it's actually something that we started in the summer, and we've been working with a bunch of frameworks for the past several months. But we're excited today to talk through some of what we've started already. Right. That brings us to our next section. It's very much under construction. You heard that in the intro today. What we're hearing about here, these aren't things that are shipping tomorrow. Some of them you can try out a little bit. Some of them are... Some are behind a flag. Yeah. Some we're really just thinking about. So it's still very much time to give us your feedback, your use cases, your examples, to kind of work out how they should really work. The first one we want to talk about is display locking. We basically don't want the browser to update the DOM inadvertently, and so this gives us a way to lock it. Before I joined Google... well, many times, I think this is one of the most requested features. But before I joined Google, the Polymer team and the paint team had a bunch of conversations about a new primitive called display locking. The idea of display locking is that you can basically lock down a section of your DOM, and we won't trigger render and other things on that bit of DOM until you unlock it and say, okay, it's ready to go. It's super subtle, and it may not be something you interact with directly, but it's definitely something that frameworks need in order to eliminate unnecessary browser work. Now we're collaborating with the React team to nail down the API, and we had an intent-to-implement out about a month ago. So if you have comments, we'd love to hear them. I have a question, though. Is it related to that bgcolor attribute? Because I'm using that every day. Really? Absolutely. Nice. Well, this is kind of awkward. Is it working? I don't know. It's kind of strange, right, to see a blank white screen in the middle of a presentation. Why do we think that's okay when web pages are loading? Sort of, ah, what's going on? 
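The core idea behind display locking can be illustrated without the real API (which was still being designed at the time). Here is a plain-JavaScript simulation, not the proposed browser interface: while a section is "locked", updates are queued instead of applied, then flushed as a single batch on unlock.

```javascript
// NOT the real display locking API: a toy simulation of the idea that
// updates to a locked subtree are deferred and then applied in one batch.
class LockableSection {
  constructor() {
    this.locked = false;
    this.pending = [];
    this.renderCount = 0; // how many times we actually "rendered"
  }
  lock() { this.locked = true; }
  update(fn) {
    if (this.locked) {
      this.pending.push(fn); // defer work while locked
    } else {
      fn();
      this.renderCount++;    // unlocked updates render immediately
    }
  }
  unlock() {
    this.locked = false;
    for (const fn of this.pending) fn();
    this.pending = [];
    this.renderCount++;      // one batched render for everything queued
  }
}

const section = new LockableSection();
section.lock();
for (let i = 0; i < 100; i++) section.update(() => {});
section.unlock();
console.log(section.renderCount); // 1
```

A hundred updates, one render. That batching is exactly the kind of unnecessary browser work a framework wants to eliminate, which is why frameworks are the natural consumers of the real primitive.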
Right, every time, every single time you load a new web page, the browser says, hmm, what's a good thing to do? Oh, yeah, I'm going to paint white, and then it's going to take a while, right? Why is that the case? It shouldn't be the case. We should tell you that we totally scared the tech check people when we had a blank white screen during our tech check. All right, we really need page transitions on the web. You're probably familiar with stuff like this from the material design specification, right, where one page morphs into the other, and there are other ideas for how to do this. Now, this is something you would see in your application going from one state to another. But we also want this for normal navigations, right? Like, here's a great example. So this is something the image search team at Google was working on. It's kind of subtle, so... Yeah, we'll walk you through it. So this is their image search result page, and when you click one of the images, you go into this light box, right, just to see the larger version of the image. So their goal was that more people click through to the underlying web page, and so they wanted to make that really easy by just literally putting that web page at the bottom of the page, right? So that, as a user, all you have to do is, you know, put your finger on it and kind of draw it up, right? We're hoping that this greatly creates... increases the number of navigations actually happening, but this is something that you really just can't build on the web today. We'd love for you to be thinking, like, if you didn't have to have that blank white screen, what would you build? What would you design? Because I think it's actually a pretty exciting space. Right, today, you really have to choose between either a fancy transition or doing a real navigation with a browser load on your page. And there isn't really a solution for the cross-origin case where you navigate to a different domain at all. 
And so that's why we're so excited about portals. What portals give you are advanced transitions between websites that are real navigations. Single-page apps, right? That's the way you'd do it today. But here you can, and this is what this example GIF shows, this is a navigation from one web page to another, and you can morph between them any way you like, which seems like something we should have in 2018. It does, doesn't it? Yeah, I think sometimes we will choose a single-page app with all of its complexity when actually what we wanted was fancy transitions. Right, and it can be really hard to do a single-page app. Cool. So I got a sporty new car, and within a month, I got a ticket. I was driving too fast. In Germany, hopefully. No, on the San Mateo Bridge. Oops. I won't tell you how fast I was going, but I found out that my car has this really cool feature. I can turn something on so that it beeps any time I go over 80 miles per hour. That seems really good, because there should be limits, and this car does not feel like it's going fast when it's actually going pretty fast. There's also something you can turn on for a car, which is called a governor, which actually will not allow the car to go over a certain speed. Feature policies are sort of like that. You have a couple of options. You have enforce mode and you have report-only mode. You can turn a feature policy on for something like synchronous XHR and you can say, nope, I don't want to allow that, and it simply will not allow that to happen on your site at all anymore. Or you can turn on report-only mode and you get that beep, you know, hey, something's wrong, you should check it out. We're pretty excited about feature policies because they run in CI, you can run them in development, and you can run a different set of feature policies on your third-party content, your ad content, than you do on your regular page. 
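As a sketch of what enforce mode looks like on the wire (this is the header syntax as it stood at the time; report-only delivery was still being worked out), disallowing synchronous XHR is a single response header:

```
Feature-Policy: sync-xhr 'none'
```

With that header set, an attempt at a synchronous XMLHttpRequest on the page or in its iframes fails outright instead of blocking the main thread, which is the "governor" behavior described above.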
So you've got a lot of flexibility around how you use it, and we're excited to see what you want to do with it. What you can use today is the synchronous XHR feature policy. We collaborated with the AMP team and they've got it turned on for all ads in AMP, which is pretty exciting. It's basically, by the way, the worst feature on the web. You want to turn this off for your website, right? There's no good reason to have sync XHR. There's basically no conversation in which Malte does not bring up that you should turn off synchronous XHR, even at your barbecues. There are other policies that are available as well. So there are some exciting ones, but they're behind a flag. So you have to go and turn them on behind the flag in order to check them out. There are three I'd love for you to look at: unoptimized images, oversized images, and unsized media. Go see how they improve your performance, see whether you like it with enforce mode on or report-only mode. We would love more feedback. And don't upload pictures from a DSLR directly to the web. Yes. And you'll hear Jason talk more about this a little later today. Right. All right. Next, we want to talk about instant loading. So, how we get from super fast to zero milliseconds, to kind of show you what exactly we mean by that, because instant is an overloaded word, right? There's instant apps which don't do instant stuff. So let's talk about this. Here's a film strip you might be familiar with from, like, WebPageTest, of a web page load. This one loads in eight seconds, right? And obviously you could make that faster. And we've spent years trying to eke out every little second that we can out of the film strip, right? Right. But we want this. We want basically everything to go away and only that last frame to render right away. So we have a solution for that, which is actually relatively obvious. We have to load that web page before the user clicks. But now you might wonder why I put a bathroom stall on the screen, right? 
So a US bathroom stall has this little gap which introduces a privacy problem, right? Does anyone know why that's the case? Anyway, so when you load some other web page before the user said they wanted to go there, that web page might be able to read your cookies, set cookies, stuff like that. And that's not something the user expects, right? So we need a solution for that. And the solution that has been under development for a while is called web packaging. What it provides, as one of its features, is privacy-preserving instant loading for the web. I want to explain really quickly how that works, but we'll have a talk later today that goes into actual detail from the people who are working on it, which is great. So the way web packaging works is that you're a document author and you have a TLS certificate that you're using for HTTPS, something like that. And you sign that content as being created by you, and then anyone can deliver it on your behalf, but the browser says, okay, this was originally signed with that TLS key, so I can say it came from, like, example.com, because that was the original party. And I want to kind of drill down on this: anyone can deliver it. So you could have a different CDN, or you can deliver it over, like, BitTorrent or IPFS. Doesn't really matter. You can do basically HTTP over anything, which is really cool. So we collaborated with the AMP team on this because they're one of the frameworks that we're working with. And we're pretty excited about the instant loading and how fast we're going to be able to bring content to users. But Malte, what about the URLs? Yeah, AMP doesn't have very good URLs. They start with google.com. And with web packaging, that problem goes away, which is really cool. So that's why we're excited about it. But, and this is the important part, it's a web standard. And so with web packaging, we can bring instant loading to all these frameworks. And this is not an exhaustive list. 
It doesn't really matter. This technology is completely technology-agnostic; it doesn't care what you build with. So really, really cool. We can't wait for this to land in browsers. Web packaging also addresses one more problem that I think is really subtle. Eric Meyer tweeted about this and wrote a blog post, which is really cool, a while ago. Traveling to Africa and noticing that, you know, HTTPS, which is the greatest thing to happen to the web in a while, among many other greatest things, actually introduced a few problems. So let's imagine you're a cell tower, and you're not actually connected well to the internet, but you have LTE to everyone connected to you, right? It's great if you have an edge cache at that cell tower. Now with HTTP, that's something you can build. With HTTPS, it's something you can't build. So you always have to go to the origin, to someone actually able to serve with that TLS certificate, right? And the great thing about web packaging is that it can bring back that feature, right? Because it's untangling delivery from TLS security, you can get all that great benefit of having edge caching for HTTP, but have it together with the security benefits of TLS. I also wanted to just really quickly say that web packaging is not related to webpack. However, there is a bundling spec coming up, right? And so web packaging is actually going to bring bundling as a first-class feature to the web platform, which I think is also way overdue, and so bundlers like webpack can eventually use web packaging as the output format. You can actually try this today. There's a sub-spec called Signed Exchanges, which is available as an origin trial. You can go to this URL and try it out. Let us know what you think. Yeah. Later today you'll hear Kinuko and others speak more about this. So the next bit that we'd like to talk to you about is scheduling. 
So any time developers build something sufficiently complex, they end up needing to manage a task queue, prioritize work, and be sure that everything is getting done on time. So we don't run out of time on our talk? That too. A typical app has competing deadlines: keeping the user input responsive while rendering smoothly at 60 frames per second, and doing the actual work of fetching, preparing, and rendering the UI. In talking to the React team a few months ago, we realized that a framework-based scheduler has serious downsides. It can't schedule third-party code. It has a lot of difficulty interleaving tasks with browser functions like rendering and garbage collection. In short, it has to fight other systems for resources. With help from React, Google Maps, AMP, Polymer, and Ember, as well as the web standards community, Shubhie and Jason, who you're going to hear from later today, are designing a scheduler that will run in the browser. It's meant to have high- and low-level APIs like Grand Central Dispatch, so borrowing some ideas there, and be able to interleave code from all different sources. They're working on things like, how do you interleave garbage collection, or code that's in a promise? What's exciting about this in particular is that when you break things down into small tasks, it's much more useful if you have a great way to schedule them. And when you can schedule things, they're much more valuable if you've broken things down into small tasks. This is where frameworks and the browser can work well together to provide a much better user experience than we're capable of providing today. This is one of the things I'm most excited to PM right now, and I'm really excited for you to hear from Shubhie and Jason later. It's really a good example of how frameworks help, because actually taking your code and breaking it down into small chunks, that's really, really difficult. 
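The browser scheduler itself wasn't something you could run at the time, but the core idea, tasks posted with priorities and drained in priority order, can be sketched in a few lines of plain JavaScript. This is a toy model, not the proposed API, and the priority names are borrowed loosely from the discussions around it:

```javascript
// Toy model of a prioritized task queue. A real browser scheduler would
// interleave these tasks with rendering, input handling, and GC; here we
// only show the priority ordering.
class TinyScheduler {
  constructor() {
    this.queues = { 'user-blocking': [], 'default': [], 'background': [] };
  }
  postTask(fn, priority = 'default') {
    this.queues[priority].push(fn);
  }
  run() {
    const results = [];
    for (const p of ['user-blocking', 'default', 'background']) {
      for (const fn of this.queues[p]) results.push(fn());
      this.queues[p] = [];
    }
    return results;
  }
}

const s = new TinyScheduler();
s.postTask(() => 'analytics', 'background');
s.postTask(() => 'render row');                  // 'default' priority
s.postTask(() => 'handle click', 'user-blocking');
console.log(s.run()); // ['handle click', 'render row', 'analytics']
```

Note that the posting order and the execution order differ: the user-blocking task jumps the queue. That reordering is exactly what frameworks currently fight the browser to achieve in userland.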
But at the framework layer, this is something the framework authors can spend a long time getting right, and then everyone gets it. I mean, I think you saw in the framework awards section earlier that at least three frameworks are working on breaking things down into really tiny rendering chunks. So this is pretty exciting. Right. Next, we're talking about Animation Worklet and jank-free parallax. By the way, ironically, in front of a janky animated GIF. So, Animation Worklet. First, we need to talk about other animation APIs, because there are a few. There's the Web Animations API, which is great, and you should use it. And also, more importantly, it just landed in Safari Technology Preview. So relatively soon, we should have it in most browsers. CSS animations are also awesome, and they're basically everywhere, so you can use them. And you should use these APIs. Animation Worklet doesn't provide additional value except in some very important circumstances, because these APIs are inherently time-based, which makes sense, right? Animations go from A to B over time. That's kind of what they do. However, there are some animations that aren't time-based. For example, this one, which animates Pac-Man based on scrolling left and right, right? And this is something that's really difficult to do on the web today in a jank-free fashion. And the reason for this is that with Animation Worklet, on the other hand, you don't get jank, because that worklet runs super close to the actual software that does the scrolling. And so you can do it even when the main thread is busy doing anything, right, which is really cool. My team actually has been working on getting this into what we call scroll-bound animations, and it was just a drop-in replacement, and it makes everything so much smoother. Surma is talking about this later today in his talk about Houdini. And he'll have some really cool demos. Yeah. Almost every web app eventually needs something like an infinite list. 
It just happens, right? On other systems, on Android, on iOS, there are things like UITableView. On the web, you're sort of left to figure it out yourself. Now, many frameworks have really good implementations of a virtual list, an infinite-scroller kind of situation. But the web platform primitives make certain behaviors impossible. For example, have you tried to use find-in-page if you're using an infinite list? I often find myself going back to search for a tweet that I made, or someone else made, some time ago. You can't search for things that aren't actually in the DOM right then. This obviously has accessibility implications as well. With new low-level APIs that we're putting into place, like searchable invisible DOM, all of a sudden these things become solvable and get addressed. You'll hear more about our collaborations with react-virtualized, Angular, and Twitter in Gray's talk a little later today. All right, so that kind of brings us to the end of our talk, just kind of summarizing. We have been talking about our vision for a framework-inclusive web platform future. Instant loading and page transitions are coming to the web, so especially get your designers thinking about what you're going to build with that, because it's going to be kind of cool. Right, and then we talked about a set of new low-level APIs making it easier to build reliably fast web apps. And that's especially true if frameworks help web developers take advantage of them by default. Thank you very much. Thank you. Day two means more big web quiz, if I can get those words out. Yeah, good luck with your talk later if that's how it's going. Very awkward. Yeah, so let's get the big web quiz on the screen so we can play another round. Actually, before we do that, can I play the intro video again? You mean my intro that you then ruined? Yeah. Yeah, go on then. Yeah, can I play it? Yeah, go on. Yeah, yeah. Here we go. It wasn't supposed to actually break. 
I'm going to stop that. That wasn't actually intentional. It just actually broke. I don't think I know why. Just before we came out here, Surma ran up to us and went, ah, I've just done a deploy, I've changed the presentation view, and I think that's exactly what it's done. I think it broke that. Perfect. Marvelous. Ah, because one of the things he added was polling. Polling every second. This is live debugging. And that's what's happening right now. That's exciting, isn't it? Well, at least we know that the polling is working. Brilliant, excellent. This is going to go really, really badly. So because it's day two, and just to remind you of the prize that is up for grabs. Use it to wow your colleagues and scare your pets. Ooh. But that means the questions are a little trickier. You know, just saying that. So just so you know. I don't want to press this button because I don't know what's going to happen. Let's find out. Let's find out. I hope you've signed into bigwebquiz.com and you're ready to go. Let's go. Let's find out. OK. Let's see how it works. The question. Here it comes. Maybe. Please. Come on. Something changed on the screen. Or not. Shall we roll back the deploy that we did earlier? I think that might be wise. Yeah. This feels like two years ago when we had a total failure of the login system, doesn't it? At least it's just me and you. Right. You know, we didn't take anybody else down with us. Let's abandon that and figure it out. Do you want to introduce the next person? I do. It's Jason Chase. Ladies and gentlemen, Jason Chase. Round of applause. Good morning, everyone. I'm one of the Jasons that you'll be hearing from. I'm not doing as many talks as they said. So I lead a Chrome team whose mission is to help developers better understand and control the web platform. So we've been working on feature policy and the Reporting API, which we think can really make your lives easier as web developers. 
Maybe we won't save you from speeding tickets like Nicole said, but we'll see what we can do. So we know there's many benefits to being on the web, but as a development platform, it's far from perfect. On the Chrome team, we regularly provide web consults to help people address issues with their site. Now, these consults bring together developer relations, engineering, and other folks to help people deep dive into questions or issues with their site. Now, the goal is to provide actionable recommendations to improve things. Now, as a result, we've identified common mistakes that happen over and over again. And spoiler alert, performance is a common theme here. And the two big performance problems we see are too much script and too many image bytes being sent over the network. So you know there's a right way to do things, but it's an adventure to find it sometimes. And even if you find the right way, there's bumps, or you might slip off the path. So as we've seen in other talks, there's a long list of things to keep track of and get right. Now, this is hard enough if you're an expert developer, but imagine you have a dev team with junior developers and you need to keep everyone on the same page. So we need help. What if we could make web development easier with guardrails? So if you imagine guardrails along a path, if you hit a rail, you know you're doing something wrong. And the rails stop you from leaving the path. Now, this isn't necessarily a new idea, but looking at repeated mistakes in our web consults, we're thinking about different ways to apply this. Concretely, how could we help you put guardrails in place for development? So AMP gives us an example of guardrails in web development. AMP guarantees you're not going to go off course by making it nearly impossible to do things wrong. Now, AMP started by providing building blocks, tightly constrained to ensure great performance and UX.
So this is an opinionated approach to carve off good parts of the web platform. Now, not everyone can or wants to use AMP. So learning from AMP, we thought about a more configurable approach. And that's feature policy. So we're really excited about it. We think it's going to be great to help guide you towards the well-lit path of web development. Now, with feature policy, you opt into a set of policies for the browser to enforce throughout your site. These policies restrict which features the site can access or modify the browser's default behavior for certain features. So let's talk through some of the common problems we've seen. So images often have extra quality or metadata that takes up space but isn't required. Now, as we've seen in other talks, there are many tools to optimize images. You have to remember to use them, or you have to integrate them into your build pipeline. Now, ideally, your page should never serve images that are larger than the version that's rendered on screen. Anything larger just results in wasted bytes and slows down page load time. So a common example might be sending a desktop-sized hero image to a mobile device. We've designed policies to catch common mistakes of unoptimized or oversized images. So we researched the impact that we think these policies can have on the 10,000 most popular sites. Our analysis shows that 10% of sites can save over 500 kilobytes on average. Now, that can reduce load time by up to three seconds, sorry, 10 seconds, on a 3G network. So let's take a quick look. So this shows the policies being applied to a site. Now, on the left, the policy for unoptimized images is turned off and you can see the images load normally. On the right, we've turned on the policy and it's blocked one of the images. So we compute a compression ratio of bytes to pixels and if that ratio is greater than 0.5, we'll block the image and we'll show a placeholder. So the idea is to point out the images that need to be corrected.
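As a rough sketch of the byte-per-pixel heuristic just described (the real check lives inside the browser; the function name and the sample numbers below are illustrative, while the 0.5 threshold is the one quoted in the talk):

```javascript
// Illustrative sketch of the unoptimized-images heuristic described above:
// compare an image's transfer size to its pixel count, and flag it when it
// spends more than ~0.5 bytes per pixel.
function isUnoptimized(transferSizeBytes, widthPx, heightPx, threshold = 0.5) {
  const pixels = widthPx * heightPx;
  return transferSizeBytes / pixels > threshold;
}

// A 1000x1000 image weighing 800 KB is 0.8 bytes per pixel: flagged.
console.log(isUnoptimized(800_000, 1000, 1000)); // true
// The same image compressed to 300 KB is 0.3 bytes per pixel: allowed.
console.log(isUnoptimized(300_000, 1000, 1000)); // false
```

In Chrome the flagged image is swapped for a placeholder, so the check surfaces offenders rather than silently hiding them.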
So this site might look familiar. You know, at Google, we're big believers in dogfooding so we went and applied all the policies we could to the site. Lo and behold, we found out we made a few mistakes. So the unoptimized images, sorry, we applied a bunch of policies, but it was actually the oversized images policy that caught that we were sending some images that were really too big, so oops. Just shows that even the experts, like our developer relations team, can make mistakes and so we really need better tooling to help catch these kind of things. So another common problem is images without explicit sizes. Now as we see on the left, this can cause the user experience to jump around as the browser loads in new images and resizes the page. Now on the right, we've applied a policy to catch this. So the browser can set these images to a fixed size and it'll keep your user experience stable. So how does this all work? Policies, like I said before, are a contract between developer and browser. We use them to inform the browser about our intent for our site. It's a set of rules for how each page should behave. Now the browser helps keep us honest when our page tries to veer off the path by validating that it's behaving according to its stated rules. So if a page or embedded third-party content attempts to violate any of the rules, the browser will identify or block the behavior and in some cases, it may override the behavior to provide a better user experience. All right, let's take a look at how you configure policies. Now each policy refers to a single feature and then it defines a list of origins that are allowed to use the feature. Now with each page response, you can add this feature policy HTTP header. So in this example here, this will catch all oversized images and this is the one that saved our bacon on the CDS site. Now here, the 'none' keyword means that no origin is allowed to use the feature of oversized images, disabling it entirely for your site.
Now looking at that example, you might wonder why we would say oversized images are a feature. Now we have policies for helping you enforce best practices to ensure good performance and user experience, and the way these often work is we'll define a known bad practice as a feature and then you use the policy to prevent it. Now we do this for web-compatibility reasons. We don't want to go ahead and break the internet by all of a sudden turning off oversized images for everyone. So we'll allow that behavior by default and then you opt in to turn it off. Now this header applies a policy that catches any images that are not optimized. Now here we're making an explicit exception for our photo CDN. Now in this case it might be because our users expect really high detail glossy pictures and those, excuse me, those don't compress well. Now in this case there's only one origin listed, and all other origins will not be allowed. So here's an example that puts them all together. So first we have a policy that ensures every image has explicit dimensions. So we saw this earlier preventing your user experience from jumping around. Now second, we have a policy to selectively enable the geolocation feature. So now we're applying this to our own origin and to our trusted maps provider. The self keyword here means the origin of the top level page. So by combining this with explicit origins you have full control over who's allowed to use a feature. And then finally we're allowing any origin to use autoplay. And here you see an asterisk keyword. That means essentially everyone. So by default Chrome will only allow autoplay on same origin iframes. With this policy we can allow cross origin iframes to play as well. So in that last example we saw features in the more traditional sense. You know, well-known APIs that are exposed to the web like geolocation, camera, full screen, autoplay. So before we saw policies about enforcing best practices.
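Putting that combined example into concrete header form might look like this (the policy names follow the talk; the maps origin and the Express-style usage in the comment are illustrative assumptions):

```javascript
// Build the combined Feature-Policy header value described above.
function featurePolicyHeader() {
  return [
    "unsized-media 'none'",                        // every image must have explicit dimensions
    "geolocation 'self' https://maps.example.com", // our origin plus a trusted maps provider
    "autoplay *",                                  // any origin, including cross-origin iframes
  ].join("; ");
}

// Sent with each page response, e.g. in an Express handler:
//   res.set("Feature-Policy", featurePolicyHeader());
console.log(featurePolicyHeader());
```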
So now we have policies that give you granular control over the features you use. So we've probably all seen this example. You go to a site and before you interact with the site at all you get a pop up to use location or microphone or something like that. So with feature policy you can lock this down to either prevent the use of the feature at all or really just dole it out to specific origins you trust. So we've talked through some examples of an HTTP header. You can also use an allow attribute to control feature policy on iframes. So now why might you want to do this? Well as we talked about you can be really selective. So maybe you have one frame on your site that shows a map. You could use the allow attribute to grant the geolocation usage to that frame and on no other frames. Now in the example we see here this allow attribute is changing the default for the autoplay feature. Well you'll notice if you go to generate an embed on YouTube they're already using this in production today. So we've seen a lot of flexibility in how you can configure feature policy. So we've built in some rules to make sure they're being applied correctly. So Malte in the keynote talked rather enthusiastically about not using sync XHR. And yes, we do have a policy for that and AMP is using that in production. So the way the rules apply is first, policies are inherited. So subframes will inherit the policy of their containing frame. So this means that your top level frames will inherit the policies of the main document. So this means you can apply the policies at the top and they cascade down to all nested subframes. So in AMP's case they can set the policy to disable sync XHR and know that it will be applied to all subframes regardless of how far they might be nested. And this applies to the allow attribute as well. So if you have an allow attribute and you have a header, the stricter of the two policies wins. Now on the other hand we have this one way toggle.
So disabling a feature turns it off permanently. So that means again in AMP's case they've turned off the feature with the sync XHR policy and that means no frame nested or otherwise can turn it back on. Now for policies supporting best practices, it might not be feasible for the browser to actually block the bad practice before it occurs. It can only detect the problem in some cases and then break your page in an obvious way or notify you. The goal in that case is really to let you know there's a problem to fix. Now we saw this in the policies for unoptimized images and oversized images. Now in other cases the browser can actually block the bad practice so your site continues to behave well. The policy for unsized media works this way. So it'll apply a fixed size to images to prevent the user experience from jumping around. So we have DevTools and Lighthouse to give you great insight into the behavior of your site. So where does feature policy fit into your development workflow? Feature policy lets you set the policies up front and then catch problems early during development before anyone writes a lot of code. Now you can build policies as defaults for new sites or new pages and this way you can enforce standards across your Dev team. Policies are served with each page and validated at runtime so you can choose to turn them on for some users and not others. You can also turn them on for some pages in your site and not others as you incrementally improve your site, for example. So at a minimum we recommend enabling the policies on your staging server. This can let you catch problems that you didn't see during development. For example, you might have your images come from a content management system and they're not available as you're developing a site. So this can catch problems with that. In addition to staging, you can also apply policies in production.
You can have your Dev team, your managers, all the way up to your CEO use your site with policies enabled so that you can find real issues in the wild. For regular production users, you likely won't enable the policies if violations will break the user experience. So for example, with the image policies, loading unoptimized images more slowly is probably a better experience than having them missing altogether. You still want to know about all these violations so you can correct them. So we've designed report-only mode to give you the best of both worlds. So you can configure policies similar to the examples we saw before, but you mark them as report-only. So you see here, internal users, they have the policies enabled, right? We'll break things, we'll see problems with images that need to get corrected. For production users, we have them in report-only mode so the production users get the normal experience, but then you can get reports about what's gone wrong. So speaking of reporting, feature policy violations are really just the start. There's lots of other info from the wild that would be really valuable. So for example, browser interventions. Sometimes a browser needs to intervene to improve the experience for the end user. So an example of such an intervention is blocking document.write on 2G connections. So there we've discovered that doing a document.write on a really slow connection can really impair the user experience. So to protect against that, we've just blocked it. But that happens in the wild, you don't know about it. You really want to find out about it so you can correct it. Now what else? So just taking a look at this, there's lots of things going on in the wild that you'd really love to know about. Now a problem: some of these are just console messages, which you can't collect. Now there's other things that have a bespoke API like window.onerror, so you kind of have to remember to go hook up something there to get some errors.
And then there's things like crashes and network errors that just aren't possible to capture from script. So the solution here is the reporting API. This gives you one place to collect all sorts of information from the wild. So, one-stop shop. It exposes information that wasn't available before. We've talked about things like crashes; that's coming soon. Feature policy violations, deprecations, interventions and network errors, you can get those today with the reporting API. There's two ways to use the reporting API. We have the reporting observer which lets you collect reports client side with a JS API. Now you can filter by report type. Here we've said we just care about deprecations and feature policy violations. And then you can use the callback to funnel these reports to your own analytics provider or however you'd like to capture them. Now because this is client side, the available types are limited, but the key feature here is it buffers results so you can set up your reporting observer later and still get reports that happened earlier in page load. So every report has some common fields, so we can see here the type of the report and the URL where the report happened. And then for each report type, there's a specific body. Now for feature policy violations, we see it's telling you the feature that you're using and it tells you where in your code the violation occurred. Now the second way to use reporting is with the Report-To response header. So this lets you configure out-of-band delivery of reports. The browser will queue up reports and send them to the location you choose, separate from the execution of your page. And so here you can get all of the report types that we saw earlier. So you'll see a couple fields here. So first there's group and that just lets you name this particular set of endpoints. And then you can refer to that group name in other parts of your configuration. Now max age says how long this configuration is valid for.
After that, we'll throw away this configuration. So this lets you, you know, change endpoints over time and gradually switch to new ones. And finally, endpoints is an array of URLs. So you can configure multiple URLs for fallback. So if the browser can't reach one endpoint, it'll try another. And so, talking about one place to configure reporting, CSP, content security policy, now integrates with the reporting API. So whereas before, you'd specify a report-uri and you'd give it a URL to where you wanted your reports to go, now you can use the report-to directive and you just point it at one of the endpoints you'd previously configured. So if we go back to the previous example, you'd send a Report-To header and you define a group called csp-endpoint. You can have all the URLs you need there, the max age, and then you just refer back to it here. So where can you use this stuff? So both the feature policy framework and the reporting API are shipped in Chrome. Firefox is implementing feature policy and Safari has support for the iframe allow attribute. Now, what's really great is you can benefit from using these policies now, even though there isn't broad support across browsers. If you do some of your local testing in Chrome with flags enabled, you can apply all these policies and catch problems before they reach other environments. So I've talked a lot about a few policies, you know, walked through the examples and why you might want to use them, but that's really just the start. So here's a list of all the ones we have available today. Most of the ones around granular control, like turning on and off features like geolocation, microphone, those kind of things, those are already shipped, so you can use those now. And we saw that with the YouTube example and with AMP. A lot of the best practice policies, those are behind a flag, and we're also working on a bunch more. Now, seeing that long list, how am I going to set up policies? How am I going to try them all out?
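Before moving on, both delivery paths described above can be sketched together. This is a hedged illustration: the endpoint URLs, group name, and analytics function are assumptions, while ReportingObserver and the Report-To header are the APIs named in the talk.

```javascript
// Client-side collection: flatten each report's shared fields plus its
// type-specific body, ready to hand to an analytics endpoint.
function summarizeReports(reports) {
  return reports.map((r) => ({
    type: r.type, // e.g. "deprecation" or "feature-policy-violation"
    url: r.url,   // the page where the report was generated
    body: r.body, // type-specific details (feature name, source location, ...)
  }));
}

// In the browser this would be wired up roughly as:
//   const observer = new ReportingObserver(
//     (reports) => sendToAnalytics(summarizeReports(reports)),
//     { types: ["deprecation", "feature-policy-violation"], buffered: true }
//   );
//   observer.observe();
// buffered: true is what lets a late-registered observer still receive
// reports generated earlier in page load.

// Out-of-band delivery: the Report-To header value is JSON naming a group,
// a lifetime in seconds, and fallback endpoints tried in order.
const reportTo = JSON.stringify({
  group: "csp-endpoint",
  max_age: 86400, // keep this configuration for one day
  endpoints: [
    { url: "https://reports.example.com/main" },
    { url: "https://backup.example.com/fallback" },
  ],
});

// Served as:   Report-To: <that JSON>
// CSP can then refer to the group by name instead of a raw report-uri:
//   Content-Security-Policy: default-src 'self'; report-to csp-endpoint
console.log(reportTo);
```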
Well, we have a handy DevTools extension. So this lets you really easily toggle a policy on and off and see the effect on your page. So you don't have to set up headers or modify your code to add allow attributes. You can just try it and see what will happen. So here we've seen an example of using the extension to turn off the geolocation feature. Now, this extension uses a JavaScript API. So the advantage there is you can feature detect which policies are even supported, and you can go one step further, and you can query to see which policies are enabled or disabled. So this can allow you to code defensively. So, for example, if you have content that's embedded, you can respond differently if you know the feature isn't available to you. So we'd really love to hear if these policies are useful for you. The ones we're really interested in are the ones around images and even sync XHR. We made it easy for you to hopefully copy and paste and get a header sent real quick. Now, if you have ideas for new policies or if you have feedback on the existing ones, we'd love to hear from you on our GitHub repo. And finally, you can head over to featurepolicy.rocks, and you'll get more information about feature policy, about code samples, and live demos. Thanks a lot. Why, Jake, would you like to try the quiz? I would, Paul. Shall we get the quiz on the screen? Wow, I wonder if that intro would be good to see again. I will press the intro button, and this time it will play all the way through. Thank you. Come on. Please. Oh. Two. Brilliant. OK, it feels like it's working again. Yes. OK, so get your phones out. Remember, laptops out. Yeah. The set of questions today are if it could be... Oh, I'll press the button first. There you go. TLDs? Yeah. So we're going to be asking whether the TLDs that you're shown are valid TLDs, top-level domains. That little bit at the end of the domain name, right? Like .rocks or .com. Yeah, and there's loads of them now.
There's loads. Now, pretty much whatever you want, except you can't, because otherwise we'd have no fake ones. That's a good point. Let's start a round. It is a finite list. We've got three seconds per question this time, hopefully to give you a bit more time, since some people said it was a little bit too quick yesterday. So what have we got here? Purple, brown, Washington, Vegas. Oh, some city names. OK. Yeah, some of those may be real. New York, black, white, Berlin. Oh. Confidence is just fluctuating wildly on these. It would be really unfair if some of the, like, cities and countries were OK and some of them weren't. That would be awkward, wouldn't it? They wouldn't have done that, would they? Let's find out. Oh, it's .San Francisco. Relevant. Topical. Very little confidence. That means people are voting roughly 50-50. Interesting. Ah, now that's how it's panned out. Let's have a look then. There we go. Back at the start there. Back at the start. There we go. Purple, brown. It's pretty evenly split, I would say that, Jake. Yeah, pretty confident Vegas is one. Shall we see what the answers are? Yeah. Here we go. Oh, so they were all fake except for Vegas. OK, so Washington fake, Vegas. Real. OK, fair enough. OK. Next set. What have we got here? Can I just remark on the fact that there's pretty much a 50-50 split? Yeah, I would have been exactly the same. Exactly, exactly. What are these ones? Oh, OK, so New York is fake. Berlin, all good to go. Oh, so we've got black but not white. And Berlin is one as well. OK, fair enough. Let's have a look at the next set. Teal, these are odd. Again, just 50-50. I want to see the San Francisco one. Oh, yeah. Is the home city represented? It is not. It is not. It's not. It's not. All that love in the room for San Francisco, right. And the last set we have here. Dot Jake. Yeah. Dot Jake. Yeah. Fake. No, it's fake. Yeah, that's a fake one. That's a relief, frankly.
Honestly, if Rome is not getting in there, Jake is not getting in there. All right. I'm calling it now. That is a fair comment. Well, isn't it nice that it worked? Slight relief. Right. It's time for a break. We'll be back in here at 11.30. See you then. We're going to do another one of our HTTP203 microtask speedruns, right? Yes. Oh, no, it's microtask speedruns. Oh, I don't know. We are not going to branding, mate. I think microtask is a good branding for it, right? Yeah. Because the idea is you're going to explain a thing in two minutes. Broadcast channel. Broadcast channel. Yeah. Okay. Oh, hang on. Broadcast is a WebRTC. No. No, no, no, no. We're in a different area. It's in the API. It also has been around for quite a time, similar to workers, but it doesn't have as wide support, sadly, because it's not been used a lot, but it's really cool. So it was a type of worker? No, but it has to do with workers in a way. So a broadcast channel is basically a post message channel associated with a name. So I can say new BroadcastChannel('Soma'). And now I get a post message endpoint for the name Soma. And every other site, every other realm from my origin, like my workers, my service worker, all the other tabs that are open from my site, can now communicate through this same message channel because they know the same name. So it's like pub/sub? Yes. Across tabs? Yes. And you don't have to worry about knowing which tabs are open. Do I have a service worker? How many workers are there? It just works. And that is brilliant because what it allows you to do is to write little pieces of code, little modules, that just hook up to this broadcast channel and start working. So previously you would have to use shared workers to be the sort of middleman for all of this? And have some kind of management unit. So this module is now here. Do you want to talk to it? And now you just have the central unit in the platform.
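As a sketch of the pattern just described: any same-origin tab, worker, or service worker that constructs a BroadcastChannel with the same name joins the same bus (the channel name and message shape here are illustrative).

```javascript
// Join (or implicitly create) the named channel; no registration needed.
const updates = new BroadcastChannel("app-updates");

// Every other context on the origin just listens...
updates.onmessage = (event) => {
  console.log("got update:", event.data);
};

// ...and any context can publish, for example after fetching fresh data.
// Note: a channel does not receive its own messages, only other contexts'.
updates.postMessage({ type: "cache-refresh", key: "articles" });

// Close the channel when a module is torn down so the context can exit.
updates.close();
```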
And whenever a module is finished loading, it just hooks itself up to the thing. And now you can talk to it. And that is absolutely amazing, I have recently discovered, for code splitting, because you just load things whenever you need and they just start working. You don't have to actually statically import. You just dynamically import. So what kind of messages are you sending through this then? So I have like a cache manager in there. I have things that do my color scheme adjustments, my animator. All these things are now just like little, tiny independent JavaScript modules. And they use broadcast channel to talk to a worker, to my main thread. So one page receives an update and it can just tell all the tabs, I've got this data. You don't need to go and refetch it. Yeah. Brilliant. And what browsers? No. Go to my Firefox. Brilliant. There we go. I like that you knew that. Let's do another round of quiz. Big web quiz again. Yeah. Yesterday we had CSS colors. Yes. I thought it was so successful and there were so many good ones. Yes. We should have like revenge of the CSS colors. So we're going to go again. You ready? Ready? Yeah. We're getting ready to go for it. Part two. Part two. On your marks. Get set. Go. Go. Lavender blush. I like these three-word ones, like medium spring green. Pale violet red. Marvelous. Fire brick. Honeydew. I feel like that should be a real one. Honeydew should be. Yeah. Yeah. I can see that. I mean, it's a color. Medium hot pink. I love a good hot pink. Yes. But I'm not sure if medium hot pink is a real one. This implies that there is a level of pinks. Right? Yeah. Transparent green. I'm not even sure how that works, honestly. But it could be there. Yeah. Or could it? Ooh. Ooh. Any closing? Here we go. Any second now. Ooh. Yeah. Not surprising, I think. I mean, again, looking at this list, I think I was very confused as to which one of these... Yes. ...which one of these lot was real or not. Yeah. And in fact, they're all...
Everything is real. They're all real in that block. But do you believe it? Yeah, you would. Oh. You probably would. Everyone is pretty confident about this. Yeah. Yeah. Do you really... I bet you really know your CSS colors, don't you? Yes, I do. You know your CSS colors. Yes. Yes. Interesting. Ooh. Ooh. All on the fake side now. Yeah. It's all fake. Yeah. Wow. This crowd is, again, getting wiser to our charm. Space gray. Mm. Confident. Space gray, real, interesting. It is not. It's all fake. I heard the sigh there before it came up. Ah! I'm so disappointed in myself. Don't be. These are so silly. Yep. The point is to be silly. Exactly. So it's time for our next speaker, who's called Gray. A valid CSS color and also an excellent human. Yeah. Ladies and gentlemen, Gray Norton. Good morning, everyone, and welcome back from the break. It's true, my name is Gray. And I don't think my parents could have known when they named me all those years ago that my hair would prematurely match my name or that I would be a valid CSS color. So I give them a lot of credit. So I'm the engineering manager for the Polymer project at Chrome. And my team focuses mainly on web components and libraries and tools to help you use them. But we're involved in some other stuff, too. And today I'm here to talk about something a little different. Or maybe we'll just stand here and look at my picture. So shockingly, we're going to be talking about performance. As you may have noticed, we're a wee bit obsessed with that topic. And by now, you've already heard about speed tooling and best practices for doing everything faster from loading to rendering to responding to user input. But for the next half hour, we're going to drill in on something a little more specific. And that's a proven pattern for improving performance that we'd like to see more of on the web. Ben and Dion mentioned in yesterday's keynote that today's web platform is really a high performance machine.
It's way faster in almost every way than it was even a few years ago. But there are still some surefire ways to slow it down. And one of those ways is just trying to render too much. Especially on low-end mobile devices, you can bring performance to a crawl by piling on too much DOM. And sadly, there are key parts of the rendering process, like styling and layout, that just take longer the more nodes you have in your page. So for a certain class of performance problems, the best thing you can do to speed things up is to lighten your load, minimize the number of DOM nodes you have in the page at any given time. And there are various ways to do this, of course. But one of the simplest and most effective is to adopt a pattern called virtual scrolling. So for the last several months, a couple of members of my team have teamed up with some of our Chrome colleagues and some other folks inside and outside of Google to explore what it might look like to add first-class virtual scrolling support to the web platform. Today, I'd like to share what we've come up with so far. It's early, but we're pretty excited about where the project is headed. We'll start with a quick introduction to what virtual scrolling is and how it's currently being used on and off the web. And then we'll take a closer look at the performance problems we're trying to address, seeing firsthand how too much DOM can impact the responsiveness of your pages. Next, we'll talk briefly about the approach we're taking. And we'll show you what we've come up with so far. And then finally, we'll look ahead to what's next and how you can get involved if you're interested. So without further ado, let's see what virtual scrolling is all about. For the sake of comparison, first we'll look at ordinary scrolling. So the blue box here is our viewport. And as you can see, as we scroll, the document moves behind the viewport, revealing content further up or down. But notice that all the content was present from the start. 
The only thing that changes is what we see. Now, here's the same page in a virtual scroller. As you'll see, only part of the content exists in the document. As we scroll, nodes are dynamically added and removed, keeping just enough to fill the viewport. And this is an obvious win at load time, since we only need to render one screenful. But as we'll see, virtual scrolling actually also improves performance in other ways that might not be as obvious. So when does it make sense to use virtual scrolling? It's clearly a natural fit for any time you need to display a lot of data in a list or table form, for example, like this address book. And it's also an obvious call for content feeds, in cases like social media or messaging. But feeds have started showing up in less traditional places, too. So some publishers have started stitching together articles in a way that takes advantage of the low friction of scrolling. And this may not look like your garden variety infinite list, but nonetheless, it's absolutely a case where virtual scrolling can help. And for that matter, we actually think there's a strong argument to be made for looking at virtual scrolling in cases beyond lists and feeds. A single piece of long form content, like this Wikipedia article, might easily contain tens of thousands of nodes. And the same techniques that we use for lists can also help us keep the weight of documents like this under control. So virtual scrolling isn't a new idea, of course. It's widely used on native mobile platforms. And in fact, both iOS and Android put virtualized views front and center in their SDKs, making this one of the first concepts you learn as a new developer on one of those platforms. And this is a big part of why native mobile apps tend to feel fast. A basic native view is actually heavier than the equivalent chunk of DOM, but the mainstream use of virtual scrolling on platforms like these helps keep performance in line.
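The core windowing math behind a virtual scroller like the ones described here can be sketched in a few lines (fixed item height is a simplifying assumption; real scrollers also handle variable heights and measure items as they go):

```javascript
// Given the scroll position, compute which items intersect the viewport
// (plus a little overscan) so only those need to exist in the DOM.
function visibleRange(scrollTop, viewportHeight, itemHeight, itemCount, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const last = Math.min(
    itemCount - 1,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) + overscan
  );
  return { first, last }; // render only items[first..last]
}

// 5,000 items, 80px tall, in a 600px viewport scrolled to 40,000px:
console.log(visibleRange(40_000, 600, 80, 5000)); // { first: 497, last: 511 }
// Only 15 of the 5,000 items need to be in the document at once.
```

A scroller then positions the rendered items with transforms or spacer padding so the scrollbar still reflects the full list height.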
Now, you might be thinking, virtual scrolling isn't new to the web either, and you'd be right. There are effective virtual scrolling solutions for all of the top frameworks, including, among others, React Virtualized and its younger sibling, React Window, the Virtual Scroll Viewport in the Angular Material CDK, and the Vue Virtual Scroller library. But these solutions are currently getting by without much help from the web platform. So in the age-old tradition of paving cow paths, we asked ourselves, what might it look like for the web to have first-class virtual scrolling support? How could we make things better? And we came up with a few ideas. First, virtualization generally means you can't link to locations within the same document, since the browser can't scroll to a node that's not currently in the DOM. Now, this may not be an issue for many use cases. You don't see a lot of links between address book entries, for example. But in other cases, it is a big deal, and it breaks a fundamental feature of the web. And there's a similar problem with the browser's find-in-page feature, which only sees text if it's present in the DOM. And then there's search friendliness. Since virtualized content is added on the fly, it's not generally visible to search engine crawlers. And again, this isn't relevant for every use case, but we'd love to get the performance benefits of virtual scrolling without sacrificing indexability. Finally, and maybe most importantly, we'd like to see if we can greatly increase the amount of virtualization that happens on the web by adding platform level support. While you can do it today, virtual scrolling on the web is basically a fringe feature. You have to discover you need it, and then you have to jump through some hoops to get it. We'd like to put it front and center, the way it is in mobile SDKs, making it easier to discover and easier to use. Okay, so we've talked about what virtual scrolling is and why we're interested in adding it to the web platform.
But to really understand the nature of the performance problems we're trying to solve, it'll help to look at some examples. Since the problems are worst on low-end devices, we'll use a typical Android Go phone for this exercise. So I actually recorded a bunch of real-world interactions for this talk, but I wasn't super happy with the quality of the recording, so I ended up creating dramatic re-enactments in DevTools with CPU throttling turned on. But what you'll see here is representative of the performance on the real device, and I'd be happy to show you on the device itself if you come find me after the talk. Okay, so first we'll look at how the size of the document impacts rendering and what that means for responding to user interactions and content updates. So I created a simple example to demonstrate. It's a bit contrived, but it exercises the browser engine in the same ways that real pages do. So this mocked-up content feed has just five items in it, and it performs pretty well. You'll see it has advanced features like a dark mode and a compact layout mode, and these are all made just using standard CSS techniques, changing classes. And then lastly, we have this slider here, which lets us increase or decrease the size of the items by changing the base font size in the document, which is a technique that many of you have probably used; the layouts of the items themselves are based on ems, so that the whole thing scales. As you can see, this performs pretty well, but again, it's only five items. So let's see what happens if we bump up to 50 items in our feed. So I'm scrolling here to show there's more content, but don't focus on the scrolling performance. We'll talk about that in a minute. For now, we want to focus on the other interactions. And notice I added a jank indicator, so the screen flashes red when rendering gets slow. 
And we're not too bad here at 50 items still, although there are noticeable delays for the mode switches, and then you'll see there's some sluggishness as we start moving the slider. But where we really start feeling the pain is when we get up to 500 items or more. So the lagginess in this example is going to be impossible to miss. So you'll notice that when we go to dark mode or compact mode, there's a very noticeable lag, and the slider is going to be virtually unusable, it's so slow to update. And if anything, it actually feels worse on the real phone than this. So again, this example is a bit contrived. You may not do exactly this on your page, but the effects it illustrates are very real. Rerendering content on your page, any content on your page, whether it's in response to user interactions or changing data, gets much slower as your document gets bigger. And on devices like this, it can kill your page. So let's jump ahead a bit and see how much virtual scrolling might help. So this is exactly the same page with our work-in-progress virtual scroller swapped in for the scrolling region. And we've bumped all the way up to 5,000 items, but of course only a small fraction are actually in the document at any given time. And as you can see, we're pretty much back to the same performance we had in our original five-item list, and this is essentially how it feels on the phone as well. And remember, this is a very low-powered Android Go device. Next, let's see how virtualizing might help us at load time. So a quick caveat about this. Load time performance, how quickly we get something on screen, how quickly we make it interactive, depends on a lot of factors, and virtual scrolling doesn't directly impact many of them. But to the extent that styling and layout are slowing things down at load time, virtual scrolling can definitely help. So being the big web nerds that we are, we'll use the single-page HTML spec as an example. 
So this thing is massive, somewhere on the order of a million words, I'm told, and it's notoriously slow to load. So on my Android Go device, with a good but not great network connection (because I'd gone over my data allowance for the month), it took about seven seconds to get something on screen. So for comparison, we've built a virtual scrolling version of this page out of duct tape and baling wire. Basically, it loads the original doc into a hidden iframe, and then it populates the virtual scroller as the nodes stream into the iframe. And you wouldn't build a real thing this way, but it's an interesting test case, and it illustrates the impact. So as this visualization shows, we got something on screen much faster, just over three seconds instead of seven, and it actually gets even better from there. The original version on this Android Go phone was suffering from scroll jank for well over a minute, whereas the virtual scrolling version is usable right away. As you scroll, there's occasionally a delay as it updates, but unlike on the original version, you're never sitting and waiting for seconds while the screen is locked up. So before we move on from performance, one quick word about scrolling performance, because it seems a little weird not to talk about that in a talk about scrolling. So because scrolling is driven by the GPU, even on a low-end Android Go device you actually get pretty decent scrolling. It scrolls at a reliably high frame rate, and it actually looks really good at normal scrolling speeds. But when you scroll quickly, something funny happens. The screen blanks out, goes completely white for long periods of time while the rendering pipeline catches up, and this actually gets worse as the size of the document increases, just like the other performance problems we've been looking at. So here, our virtual scroller, it turns out, significantly reduced the blanking problem. 
It happens much less frequently, and when it does happen, the renderer catches up much more quickly. The trade-off is that scrolling is mildly but noticeably less smooth overall. As you would expect, doing JavaScript layout on a CPU-challenged device like this does impact the frame rate. But as you can see from the video, it's still a very good experience, and this is a slow, low-powered device. Okay. So we know what virtual scrolling is. We know we want to add it to the platform, and we've seen firsthand why we need it. We've seen the performance issues we're trying to overcome. So I want to talk for a moment about the process that we're going through. Because it's actually not that common in recent years for the web to add new high-level features. We have a bunch of things that we really want to do, and we're treating this sort of as a guinea pig for how we might add higher-level features. We'll start with some basic principles. Anytime you're building something in the web platform itself, as opposed to in user space, it's important to get the very basic use cases right, because web APIs live forever. And it's much more important that we do the simple stuff right. So with that in mind, we wanted to make sure that we learned from prior art. There are obviously virtual scrollers on other platforms, and plenty of good work in the framework space. As I said, we want to make it simple. We want the feature to be easy to use. As much as possible, we want you to be able to create web content that you just happened to put in a virtual scroller, as opposed to picking up a brand new API and thinking fundamentally differently about the way you build stuff. Make it work. It sounds more trite than it is, but what I mean by this is everything needs to work. We talked about links. We talked about find-in-page. It's critical that accessibility works. We want tabbing from item to item to work. And, of course, it needs to be fast. After all, this is a performance problem we're trying to solve. 
The next thing we want to do is we want to embrace layering. We'll talk a little bit more about what we mean by that. Nicole actually mentioned this in the keynote. One of the important things we want to do out of this process is identify and implement any lower-level primitives that the browser may be missing in order to support this high-level use case. We also want to enable this thing to be used right out of the box, much like you use vanilla HTML and JavaScript. We don't want you to have to pick up a framework to do virtual scrolling. We'd like you to be able to use it on the web platform simply. But with that said, we also want to give frameworks and libraries a solid base to build on. And certainly the low-level primitives are things they can use, but ideally we'd like the virtual scroller itself to be something that they can take in whole or in part and layer features on top of for their own virtual scrolling solutions. And as I mentioned, this is all really with the idea of blazing a trail. We think that there's a lot of room to improve the developer experience and the user experience of the web platform by adding some high-level features in coming years. With that in mind, we want to use virtual scroller as sort of a lens or a testing ground for us to examine what it means to add new high-level features. And so we'll talk about some of these things as we look at the virtual scroller. So with those principles in mind, we sat down to do our homework. And the first thing we did was we set up what we called at that time the infinite list study group. And we basically had a bunch of people from across Chrome who would get together for a couple of weeks. And each time someone would take the homework of looking at a particular virtual scrolling solution and reporting back and we would discuss it and we'd learn about what was working and what wasn't. And we documented all this here on GitHub. 
Once we'd done that for a while, we came up with a set of requirements and it was time to start building. Now, I don't know what this is. I think it's molding or something, but it looks like something I'd really like to build. So I put it on the slide. And we have the same sort of love for virtual scroller. So once we started implementation, we did that on GitHub as well. And one of the things you'll notice here is that virtual scroller is being implemented in JavaScript. And this obviously is sort of unusual for something that's under consideration for a web platform API. It's not typically how we build platform APIs. But this gets to the layering concept. We wanted to make sure that we didn't reserve any special powers in adding these features. And one of the best ways to do that is to develop things in the same environment that framework and app developers develop in. So the next important thing about the process is we wanted to be in constant dialogue. We talked about the fact that we're doing these things out in the open in GitHub, but we also have been talking to people all along. So we've been talking to other browser vendors about the ideas behind virtual scroller and the principles that we're trying to look at for adding new high-level features. We've also talked to framework and library authors. So along the way, we've talked to members of the Angular team who are working on the Angular Material scroller. We talked to the AMP team. We've talked to members of the Ionic team who are working on their own virtual scroller. And we've talked with Brian Vaughn, who's the author of React Virtualized and React Window as well. And we also have very active discussions going on in GitHub. So it's important to us that this process be carried out in the open and with the input of the community, including frameworks. Okay, with that process as a backdrop, let's go ahead and look at what we've built. 
So this is that example we were looking at before, and I show it here in DevTools just so you can see, as we scroll, there are just a few items in the DOM at any given time. And when we scroll, you'll see them cycle in and out. And so even though we have a list of 5,000 items here, at any given time, we're only laying out, styling, and rendering a very small number of them. And as you can see, the virtual scroller itself is a custom element. And let's see how you would actually use that in vanilla style. So starting from a blank slate here, the first thing that we're going to do is we're going to use the browser's native module loader to import this. Now, this is also a new concept that we're trying out as part of the way we're thinking about higher-level APIs. Typically, browser APIs are sort of baked into every page. You may use them, you may not, but there's a memory impact, and the size of the browser is impacted. We'd like to explore a pay-as-you-go model. And we already have, in the form of the module loader, a way to dynamically load libraries and code at runtime. And so we're using that here. And you can see we have a proposed syntax for requesting something that would be part of a standard library provided by the browser. And I won't go into the details here, but you can read about it. So once we've imported it, we just put the virtual-scroller tag in our HTML document. And this is what you get without doing anything more. You can see the virtual scroller has a default size, much like an image or an iframe. Not an image, but rather an iframe. And so it's not doing anything yet. Let's make it do something. So first, an ordinary query selector to find the virtual scroller. We're just going to fetch some data. And then here's the first bit of virtual scroller API. It's an item source. You can assign this an ordinary array, as we've done here. And then we'll call our async function. And we get that, which probably isn't exactly what we want either. 
So we wanted the virtual scroller to do something by default. And so what it does is it just takes the item and tries to render it into the element. And in this case, it's a string, and it doesn't look very good. So let's fix that. So the next bit of API is this update element hook. So here we're basically just going to take the contact name and put that in the text content. And there we have something that looks much more useful. So just a few lines of code, vanilla HTML and JavaScript, and we have a virtual scroller. So let's take a look at one more example. You noticed the virtual scroller initially was empty, and by default, it will just create a div for each item. You can actually override that. In addition to the update element hook, there's a create element hook, and you can give it any kind of DOM you want. But you can also put a template inside the virtual scroller, and whatever is there will be what's instantiated for each item. So I'm going to go ahead and, in my update element function, I'm going to assign the image. And then lastly, I'm going to specify that I want a vertical grid layout. So out of the box, it supports vertical and horizontal, normal and grid layouts. Layouts are actually pluggable under the hood. It's TBD whether that gets exposed in the API that eventually ships. OK. So as I mentioned, we're taking a layered approach. We're not building fancy features like custom headers and things in. But it is important to us that these things work, because again, we want layering. So here's a case where we have headers interspersed with list items. These are all examples that you'll find in our GitHub repo. So the other thing is it's important that we support various loading patterns. So this is a common one where it's sort of an infinite scroll, and as you go, each item gets loaded, and you just keep scrolling. We also support the other common pattern where you scroll to the bottom and explicitly ask for more. 
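To make the hook pattern concrete, here's a toy model of the itemSource/createElement/updateElement contract just described. This is not the proposed API itself; plain objects stand in for DOM elements so the idea is runnable anywhere:

```javascript
// Toy model: render the items in a window using the two hooks. The real
// virtual-scroller hands updateElement live (possibly recycled) DOM nodes.
function renderRange(scroller, first, last) {
  const out = [];
  for (let i = first; i <= last && i < scroller.itemSource.length; i++) {
    const el = scroller.createElement();            // default: an empty "div"
    scroller.updateElement(el, scroller.itemSource[i]);
    out.push(el);
  }
  return out;
}

const scroller = {
  itemSource: [{ name: 'Ada' }, { name: 'Grace' }, { name: 'Alan' }],
  createElement: () => ({ textContent: '' }),
  // The hook from the demo: put the contact name into the element.
  updateElement: (el, item) => { el.textContent = item.name; },
};

const cells = renderRange(scroller, 0, 1); // two "elements": Ada and Grace
```

The key design point is that the scroller owns *when* and *which* items get rendered, while your hooks own *how* each item looks, which is why ordinary content drops in with so little ceremony.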
So again, we're validating that our basic constructs are working for these use cases. Fancier features. If you've ever built stuff like this and worked with a UX team, inevitably they're going to want fancy stuff. So here's a proof of concept that shows that things like swipe-to-dismiss can work with this virtual scroller as well. OK. So we mentioned that we thought there would be some missing primitives that we might discover. And we think we found one. Specifically, it's called invisible DOM. If you remember our earlier examples, in this one, you'll see it's a lot like the virtual scroller, except instead of actually getting rid of and adding nodes, it's basically just making nodes invisible. Invisible DOM is in the document much like a display: none element, but it's different from display: none in that it can be linked to and it can be found by the browser's find-in-page. And so here's an example where we have a non-traditional virtual scroll type of content. This is a document, much like you might find in Wikipedia or a long-form news article. And you can see, here it is. We get a certain way down the list and all of a sudden there's nothing highlighting over in the window, because some of these DOM nodes are invisible; they're just not rendered. And as we scroll, we'll see that they're basically being flipped to visible. And so when the virtual scroller is working with invisible DOM, this is how it works. So you'll see here we'll do some resizing, changing orientation, and you'll see that the layout is preserved. Again, this is just ordinary styling and layout on these items, and the virtual scroller works with them out of the box. This is what we were talking about when we said we wanted to make it simple. We don't want you to have to think hard about using a virtual scroller. So what we're looking at here, though, is the cool thing. It's links, and this is what invisible DOM is making possible for us. 
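As a rough sketch of the idea (the attribute name here is illustrative; the shape of invisible DOM was still being worked out at the time), content marked invisible would live in the document, findable and linkable, but skipped for rendering:

```html
<!-- Hypothetical sketch, not a shipped API: the section is parsed into the
     DOM, reachable by find-in-page and by #chapter-2 links, but costs no
     layout or paint until something flips it visible. -->
<section id="chapter-2" invisible>
  <h2>Chapter 2</h2>
  <p>Long-form content that the scroller reveals on demand.</p>
</section>
```

The contrast with `display: none` is exactly the one described above: both skip rendering, but only invisible DOM keeps the content participating in linking and find-in-page.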
So when I click on one of those links, the DOM nodes that it's linking to are not even rendered yet. Well, they're in the document, but they're not rendered. And so you get an event on them, and by default the browser will actually flip it visible and scroll to it. But in the case of virtual scroller, we're capturing that event. We are scrolling to it, but in the process of doing that, we're rendering all the context around it. So it's perfectly seamless. To the end user who's using invisible DOM with a virtual scroller, it's exactly as if the entire document had been rendered the whole time. And I am just about out of time, but I wanted to show you one of the exciting things about using invisible DOM with a virtual scroller: it means you can actually put invisible content directly in a document. And we're talking with our friends over in Search, and the idea is that you'd be able to effectively have your content be entirely indexable, but benefit from the performance wins of virtual scrolling by not having to render it all on first load. So very quickly, let's talk about the path forward. Again, very early days, but very encouraging results. What's next? More invisible DOM integration. We've really just started exploring how the two play together, because thanks to some great work from Rakina on our DOM team, we've just had a working version of it become available very recently. In fact, it's available behind a flag in Chrome Canary. We think there may be additional primitives as well. We've talked about some. I won't go into detail here, but invisible DOM is just the first. Framework collaborations. So from the start, we've actually had in the repo proof-of-concept integrations with lit-html and with Preact, and for this talk I did a basic React integration as well, but it ended up getting cut. It's very early there, and we haven't worked closely with the frameworks on that specifically yet. More advanced use cases. 
So we saw some of those examples, but there are others that we have not gotten to yet. Performance optimization. It's pretty darn fast right now, but there actually hasn't been intensive optimization done on it yet. And then importantly, down-the-stack explorations. So one of the first questions we get when we talk about this is, why do you need a virtual scroller? Can't the browser just do this? And these are discussions that we're having not only externally, but also internally. And it's a long, complicated story. Browser layout is complicated. In much the way that AMP simplifies building pages, virtual scroller can simplify layout by locking you into a much simpler model, and there are some wins there. But that being said, it's very possible that we can also pursue other types of optimizations further down the stack that get you some of the same types of benefits without even having to go to a virtual scroller, and those are certainly things that we're discussing. The standards process. So again, we've been talking to other browser vendors all along, but it's very early for this and other similar high-level features. And the list could go on. So if you're interested, I invite you to engage with us on GitHub. We have a very active set of discussions going on in the issues here. And here is the GitHub repo where you'll find us. And with that, I've been excited to show you virtual scroller. I hope you're as excited as I am to see it come into the web platform. It's time for another one. Yes, we're going to do another question, but also a few people have been asking about the Chromebook discount stuff. Yes, everyone here gets a 75% discount off one of these. Woo! The place to get that is the registration desk. It's free. We've been told that people are going to the forum space, but the forum space doesn't have that discount code. Please go to registration. I think they're quite good. You should buy one. Have you been paid to say this? Yeah, pretty much. 
Okay, let's do another round. Right. Now, this one is going to be about CSS properties and attributes. You're going to see a series of letters in a row. I think we call them words. You need to decide whether it's a CSS property or an HTML attribute. You have three seconds to get each one, so you need to be fast. Right. Let's have a look at the ones that come on the screen. Here we go. Okay, default, touch-action, unicode-bidi. Cellpadding there. That takes me back. Speak. Okay. Unicode-bidi. Yeah, exactly. Zoom, accesskey. I want you to have one of those. Oh, bgcolor. Remember that one? Yeah. Autocapitalize. That sounds useless. I wonder what that's for. Gap. Sounds like a style thing. Gap. Who knows? I remember that one. Order, dir. Okay. The round is closing. Let's have a look at which way you have all voted. When it appears on the screen at some point. I hope we don't need to do this round again because it's not ending. Right. Maybe I can do it manually. Here we go. Right. There we go. Excellent. What's the voting like? Default. We're saying that unicode-bidi is an HTML attribute. Okay. The answers. There we go. There's a small clue. CSS properties tend to have the hyphens in. Yes. That's not always true because we get a lot of SVG. So, let's have a look at the next set. I'm curious about this codebase thing. Yeah. So, we're saying HTML attribute for codebase, HTML attribute for speak and accesskey. The answer, of course. So, what's codebase? I don't know this. Codebase is from the applet tag. Is it still a valid tag? Is it still a valid tag? Let's say probably maybe. Okay. That works. Everything in HTML. Speak is to do with how a screen reader or something would pronounce that bit. So, it's essentially sort of display, although not visual display. Accesskey, yeah. That's HTML. Let's have a look at the next set. Autocapitalize, bgcolor, resize, gap. We're mostly thinking those are CSS properties. Bgcolor is an HTML attribute. 
Yeah, that's from the olden days. Yeah. Autocapitalize is for an input element, and it auto-capitalizes as you're typing. So, very much functionality. Yeah, resize and gap are CSS. So, let's have a look at the last block. Zoom, order, dir. Okay, everyone's saying zoom is CSS. Yeah, and it absolutely is. I built my career on top of this CSS property. Zoom. Yes, this is like when I was fresh out of university at the BBC, I built a career on knowing all of the IE6 bugs. And I would be called to different departments as like, we've got an IE6 bug. Let's get Jake in, and I'd walk in and it's like, I've got this, I just need to find the element to put zoom: 1 on, and then I'd walk away. Like, promotion, please. Amazing, amazing. Did they promote you? No. All right. Yeah, that's it. Okay, so yesterday, me and Jake came on stage to talk about the build app we built, which uses a web worker, right? But we were honest about the little rough edges of working with workers. However, Jason and Shubhie, coming on stage next, had some ideas. Oh. So please welcome to the stage Jason and Shubhie. Hi, everyone. My name is Shubhie Panicker. I'm a software engineer working on the web platform in Chrome. And I'm Jason Miller. I'm a DevRel for Chrome. Our talk today is about a key strategy for the development of web apps, and that is scheduling of JavaScript on the main thread, as well as approaches for moving script off the main thread. Jason and I have both been deep in this space, exploring gaps and APIs for what we are calling achieving responsiveness guarantees. We're excited about the opportunity here, both with existing primitives, as well as the new APIs we will show in our talk. Right. Let's illustrate this problem space using a demo. This is a simple application that searches through photos as you type. You can see here, with this JavaScript-controlled red spinner animation, it's doing a fair amount of blocking work on the main thread. 
While this is happening, the app can't respond to input, so typing gets queued. Looking closer, what we see if we pull up the profiler is something like this: a sequence of long tasks that we are executing. We can see this in a simplified view here. So if we receive input and we start doing some processing in response to that input, say searching photos, rendering some list items, we're skipping frames already. But in addition to that, if we receive additional input while that task is running, it will get queued and is only able to execute once that task has completed. So this is data captured from real users on real websites, and it shows a breakdown of where Chrome was spending its time while handling input. So there's a lot of interesting data here, but we don't really have time to get into it. The main thing to look at is the amount of time we're spending in this v8.execute task. That is Chrome running JavaScript during touch handling, and it's clearly the biggest contributor to touch input latency, both on average and also in the worst case. So a problem with our search-as-you-type is there are a lot of different types of work happening here. And all of these different types of work have different deadlines. So for example, the user is typing in that search box, their input has to be responsive; there's ongoing animations on the page, they have to render consistently and smoothly. And then there's the heavy lifting of fetching search results, post-processing, preparing, rendering these search results in time. The difficulty is that it's hard for apps to balance these competing needs, to reason about all these different deadlines, and to keep everything meeting these timelines. Right, so we have a bunch of different types of work, and each of those types of work has a different deadline. And what we need to be able to sort of work through this is priorities. So there's a couple of high-level approaches to try and achieve responsiveness. You can just be doing less work. 
And there are ways of doing this, such as in an infinite feed, you might only render what's visible. We just saw a strategy for that in the virtual scroller talk right now. Now, this is not always possible. Modern apps often just have a ton of work to do. So a second strategy here is chunking up work and prioritizing these chunks of work. In practice, though, it's very impractical to achieve this manually on your own as an app developer. And we think there's a real opportunity here for frameworks to step in and help their users. Frameworks are in a great position to ensure chunking and prioritizing of work. So stepping back a bit, what we need here is some way to provide our chunks of work, our tasks, to a system that can hold them, say, in a task queue. This system can make good decisions about when to take tasks out of the task queue and execute them at an appropriate time based on everything that's going on. And this is the definition of a scheduler. So Google Maps is a really great example of an application that uses a scheduler to keep its interactions smooth. This app has to manage multiple different types of interactions and give a much higher priority to input response tasks. We can see that here. So let's say I'm panning the map, and as I'm panning, additional tiles are coming into the viewport and need to be loaded. However, if I stop panning and I pull up the drawer at the bottom, all of a sudden that is far and away the most high-priority task to execute. So those map tiles being loaded need to be deprioritized. So a key aspect of a good scheduler is to understand what is the best time to run a task, and that's based on everything that's going on: various factors like what's the type of the task, what's important to the user right now, what's the overall state of the application, what's the internal state of the browser, et cetera. 
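The chunking strategy mentioned above can be sketched with a generator: one long job becomes a series of small, resumable tasks that a scheduler can interleave with higher-priority work. The names here are illustrative, and this runs anywhere; in a real page you would yield to the event loop between chunks rather than just handing slices back to the caller.

```javascript
// Slice one long job into schedulable units of work.
function* inChunks(items, chunkSize) {
  for (let i = 0; i < items.length; i += chunkSize) {
    yield items.slice(i, i + chunkSize); // one resumable unit
  }
}

// Example: "render" ten search results five at a time. Between chunks, a
// scheduler could run input handlers or other urgent tasks.
const rendered = [];
let chunks = 0;
for (const chunk of inChunks([...Array(10).keys()], 5)) {
  chunks += 1;
  for (const n of chunk) rendered.push(n); // stand-in for per-item work
}
```

The point of the generator is that progress lives in the iterator itself, so the scheduler decides when each next chunk runs without the work needing to track its own position.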
So to understand this notion of best time, we have to kind of step down a level and look at the browser's rendering pipeline. The browser is periodically pumping out frames, at up to a 60 frames per second display rate, and each frame has a set of things that happen in sequence. For instance, we have requestAnimationFrame callbacks followed by style, layout and paint. In Chrome, input handlers are aligned right before requestAnimationFrame callbacks. So the point here is that there is limited time to do the urgent work that needs to happen in the current frame. And then the app has to immediately start thinking about preparing for that next frame. And the third type of work here is idle work, which might be left over in the current frame, or there might be plenty of idle time if no frames are being rendered. So this is the terminology we are using for these three types of work. We have user-blocking tasks for the current frame. This is typically to provide the user an immediate acknowledgement of what they're doing. So in our example this might be keeping that typing interactive in the search box, keeping those animations going on the page, keeping the page responsive overall; buttons should be toggleable. Default work is the next category of work. This is typically user visible, and this is preparing for the next frame or a future frame. In our example, this would be the work of post-processing, preparing the search results, and rendering them in time. And finally, the third category, idle work. This is typically work that is not user visible. This can be at the end of the frame, or if no frames are being rendered. Things like analytics, backups, syncs or indexing. So on the right here, we've sort of listed some existing primitives, existing ways a developer might be able to submit work to the browser to target these priority levels. So for user-blocking work, input handlers and requestAnimationFrame are great for this. 
It's also worth noting that microtasks are sometimes used for user-blocking urgent work, but they do not yield to the event loop, and we've seen some bad cases where developers are accidentally doing non-urgent or large amounts of work without realizing it's blocking rendering. For the second level, default, we have things like setTimeout(0) and postMessage; these are really hacks, there isn't a real primitive here, and we are working to fill this gap. And finally for idle, requestIdleCallback is a great API. So JavaScript schedulers can be built today using these primitives. Now, while it's possible to build a scheduling system in JavaScript, they suffer from gaps, primarily because they don't have enough control and signals to properly control scheduling. So we'll go through some examples. For example, we've seen JavaScript schedulers trying to estimate the frame deadline. So they're doing a whole bunch of bookkeeping, trying to guess at it, but they're doing it poorly, because it's just not possible to do this well without knowing browser internals. So we're considering exposing an API for that. Similarly, whether input is pending is a really useful signal for schedulers, and we are looking at exposing that as an API as well. Beyond APIs, there's other coordination work. For example, handling fetch response priorities is pretty relevant: if you're doing urgent work for the current frame, you don't want your low-priority fetch responses to come in and interrupt that. In practice, though, there's a lot of other work that's happening in the browser. The browser's own work might come in, or a postMessage might come in from a worker. There's internal work, like GC, and it's just not possible to codify priorities for all of this and expose signals. So this got us thinking, how about moving the scheduler one level down and integrating it directly with the browser's event loop, where we already have most of these signals and a lot of great information. 
And this would solve an additional problem, which is the coordination problem between multiple parties in the app. If you have third-party content or embedded libraries or legacy code or even other frameworks, they can all coexist and use the same scheduling system with consistent priorities. So this is a very early sketch of what an API might look like. The key thing here is a set of global task queues targeting each priority level, and so it's really simple and straightforward compared to juggling several different APIs. The second thing is we think it would be useful to have a notion of user-defined task queues, or virtual task queues, and this would give developers more control over managing a group of tasks and doing bulk operations, like updating priority, cancelling all the tasks, or flushing the task queues if the app is going away. So here we can see a simplified version of that map scheduler that we looked at, using this task queue API. First we hook into the user-blocking and default task queues, just to give ourselves a high- and a low-priority queue. Then we start listening for pointermove events, and each time we receive one of these events we enqueue a pan task with the coordinates of that pointer move. The pan task translates the map tiles, obviously, and then it might, let's say, enqueue a low-priority task to detect any tiles that have moved into the viewport and potentially load those tiles. The thing to note here is, if we receive a new pointermove event before we've invoked this load-more-tiles task, it would be given a higher priority than loading more tiles, and that's exactly what we want: we give higher priority to input-driven tasks. And let's say the team behind maps needed to track analytics or do something in response to pan gestures; that would be a good case for an idle task. So here you can see most of the frames are in green and getting rendered in time, and this is what a well-scheduled system looks like.
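As a rough illustration, here is a user-land mock of what global task queues like the ones just described could feel like. The real proposal's shape was explicitly undecided at this point, so the queue names, `postTask`, `clear`, and the `drain` loop are all guesses for illustration, with no real browser integration.

```javascript
// Priority order from the talk: user-blocking, then default, then idle.
const PRIORITIES = ["user-blocking", "default", "idle"];

class TaskQueue {
  constructor(priority) {
    this.priority = priority;
    this.tasks = [];
  }
  postTask(fn, ...args) {
    this.tasks.push({ fn, args });
  }
  clear() {
    this.tasks = []; // the bulk-cancel operation the talk mentions
  }
}

// Drain all queues, always taking from the highest-priority non-empty one,
// the way a browser-integrated scheduler could.
function drain(queues) {
  const results = [];
  for (;;) {
    const q = PRIORITIES.map((p) => queues[p]).find(
      (queue) => queue && queue.tasks.length > 0
    );
    if (!q) return results;
    const { fn, args } = q.tasks.shift();
    results.push(fn(...args));
  }
}
```

In the map example, a `pan` task posted to the user-blocking queue would run before a `loadMoreTiles` task already sitting in the default queue, which is the behavior described above.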
The work is chunked up. There is high-priority work happening at the beginning of every frame, followed by style, layout and paint in purple and green. It is immediately followed by default-priority work in yellow to prepare for the next frame, as well as idle-priority work being properly interleaved, and idle time is adequately utilized. Next. So, all the API proposals we showed today are super early stage. We actually don't know what the end game here is going to be. This is a really great time to give us feedback and help us chart the course here. For web developers, we really think that there is an opportunity here with improved scheduling, even with just properly using existing primitives. For framework authors, we want to urge you to consider a scheduling system and collaborate with us now to develop the right set of APIs in this space. React's work on Concurrent Mode and time slicing has proven that frameworks can really play a big role in terms of helping apps improve responsiveness. And we are already working with React and actively looking to form partnerships with other frameworks and apps. This is a link to our GitHub repo. Filing issues on the repo is a great way to get that feedback dialogue going. So what about work that can't be chunked, though? What if we have a bunch of JavaScript we need to execute and it's really difficult, or even potentially impossible, to break that work up? Here's an example that illustrates what I mean. Let's say we have a text editor that is bundling as you type. If I load in a decent amount of code here, things start to get a little bit slow. Every time the bundling process kicks in in response to my input, it blocks the main thread, and this causes the cursor to freeze and queues up my text input until bundling is completed. This really disrupts the typing experience. You can see that in the CPU profile here on the right. So it would be really difficult to break that work up into small chunks, for two reasons.
First, I didn't write any of the bundler code, so modifying that would be a lot of work, particularly for me. Plus, there is a whole bunch of different libraries that are being used to actually make these things happen, and downloading, parsing and evaluating those dependencies on the main thread blocks it. So using background threads lets us offload that work. There's a few use cases that lend themselves extremely well to this approach. If you're building a computer-aided design tool, a game, or doing encoding, these are great places to just start with threads. Same thing for AI, machine learning, crypto. If these are the types of things you're doing, you should start here. In the browser, our primitive for threading is the worker. If you haven't used workers, here's the gist: you can send a message to the worker, and you can receive a message back. They have no DOM access whatsoever, and a very limited global scope, with kind of just fetch and module stuff. They shipped around 10 years ago, and they're available essentially everywhere. So the API for workers looks like this. You instantiate the Worker constructor and pass it the name of a script, and then we can listen for messages coming back from the worker. So we're sending in a message that's an object, and this describes that we would like to invoke a computeHash function, and we're going to pass it the contents of a file, expressed here as an array buffer. This second argument to postMessage is interesting. This tells the browser that, rather than structured cloning the array buffer, it should transfer it in. Finally, once computeHash has completed, the worker posts the result back. So under the covers, this postMessage of the data is incurring a serialization on the main thread, and it's getting queued up, hopping over to a worker thread, followed by deserialization; end to end, this is called a thread hop.
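The copy-versus-transfer distinction behind that second argument to postMessage is easy to see without spinning up a worker, because `structuredClone()` (available in modern browsers and in Node 17+, an assumption for running this outside a page) uses the same structured-clone algorithm and the same transfer-list option:

```javascript
// postMessage's transfer list moves an ArrayBuffer instead of copying it.
// structuredClone() takes the same transfer list, so the semantics of a
// "transfer in" are visible here directly.
const buf = new ArrayBuffer(16);
new Uint8Array(buf)[0] = 42;

const copied = structuredClone(buf); // structured clone: a full copy
const moved = structuredClone(buf, { transfer: [buf] }); // transfer: moved

console.log(copied.byteLength); // 16
console.log(moved.byteLength); // 16, and it carries the data
console.log(buf.byteLength); // 0: the original is now detached
```

That detaching is the point: a transfer is cheap because no bytes are copied, but the sender gives up its buffer, which is exactly the trade you make when posting a large file's contents to a worker.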
And this thread hop has a cost, primarily from the data being subject to what is called structured cloning, which is a copying behavior that recurses through the JavaScript object, and whose cost scales with the size of the data. So one downside of the postMessage API is that it doesn't have a notion of statefulness between the request and the response. If you make a whole bunch of requests, you'll get a whole bunch of responses back, and it's hard to correlate those responses to requests. Right. So we've seen how to communicate with a worker using postMessage. There's actually a number of ways you can do this. A second way would be the MessageChannel API, where you get back two ports. You can pass one port to some other context, like a tab or a frame or a worker, and you can message between the two. The ports have the same interface as we just saw. Another one would be BroadcastChannel. This is sort of like a message channel that's shared with all contexts associated with an origin, so all tabs, frames, workers and service workers, and all you do is instantiate a BroadcastChannel with a channel name. And then there's a fourth way to communicate, and that's transferable streams. It lends itself really well to things like audio and video, where the format you would want to use to express these things is streaming. The thing with all of these APIs is that they're message based. And based on some of the common usage patterns that we've seen and what we've heard from developers, we think there might be a case here for a higher-level API. We've seen solutions to this in user land through libraries like Comlink. These all help coordinate messaging across boundaries by abstracting away postMessage using something called proxying. So proxying certainly improves over raw postMessage, but it comes with a number of downsides. Every method call to a proxied object incurs the cost of a thread hop, and this can come as a surprise to developers.
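The request/response correlation problem mentioned above is usually solved by tagging each message with an id. Here is a minimal sketch of that; `makeRpc` is a made-up name, and the `port` argument just needs the postMessage-style shape that Worker, MessagePort and BroadcastChannel all share (`postMessage()` plus an `onmessage` hook).

```javascript
// Attach an id to every outgoing request and look it up when a response
// comes back, giving postMessage the statefulness it lacks on its own.
function makeRpc(port) {
  let nextId = 0;
  const pending = new Map();
  port.onmessage = (ev) => {
    const { id, result } = ev.data;
    const callback = pending.get(id);
    if (callback) {
      pending.delete(id);
      callback(result);
    }
  };
  // Real code would likely wrap this in a Promise; a callback keeps the
  // sketch minimal and synchronous-friendly.
  return (payload, callback) => {
    const id = nextId++;
    pending.set(id, callback);
    port.postMessage({ id, payload });
  };
}
```

Proxying libraries like Comlink do essentially this under the hood, which is also why every proxied method call costs a thread hop: each call is one of these tagged round trips.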
There are also platform gaps: these APIs don't really have a notion of a backing thread pool, or a concept of managing threads and resizing the pool, and different libraries are not able to share the same thread or thread pool. And for complex APIs, it can be impractical to recreate the API surface cross-thread. So this raises the question: is there an opportunity here for better integration with the browser? Is there an opportunity to provide a more compelling API? So we think there might be a case here for something that looks like this. Here we're passing the name of a function in some other context, and some arguments, to a theoretical postTask method. This postTask method would return a promise that eventually resolves to the return value of that function somewhere else. This abstract code helps us move from a thread-oriented model to a more task-oriented model. So in looking at the requirements for a better API, we considered other platforms. We looked at iOS and Android, which have plenty of precedent for usage of background threads. In particular, iOS has an API called Grand Central Dispatch, which is a very stable, well-proven API that's loved by iOS developers. Android, amongst other things, has something called AsyncTask, which is a long-established API. We talked to framework developers and experts in usage of these APIs who were deeply familiar with the pitfalls. And we learned things. Some key things we learned, in terms of the basic requirements we want for our model, are: number one, good ergonomics, a way for developers to offload work by just thinking in terms of submitting tasks versus coordinating over threads. Number two, something that's shareable with embedded libraries and other parties in the app. And finally, system-controlled thread management, where the system can be in control of making decisions on resizing the thread pool, or decisions on where to run which tasks.
So we set off on a path towards building a basic task-queue-based API inspired by Grand Central Dispatch. And a naive API might look like this. Let's say we have three tasks, A, B and C, and let's say each one depends on the results of the previous task. We can submit these tasks from the main thread over to worker threads, and then we'll start getting responses back. So here, for three tasks, we paid the cost of six thread hops. There's a few downsides and gotchas here. For one, these thread hops can be expensive on lower-end devices; depending on the data size, a hop can take up to 15 milliseconds, and this can add up. This means that if these hops are in the path of user interaction, they can add up to multiple frames' worth of latency. On Android, we've actually seen this in practice in the real world in their usage of AsyncTask. So one conclusion here is that this notion of posting results back to the main thread by default is not a good idea. Besides the latency issue, it can cause congestion and queue build-up on the main thread. And you might remember from the main thread scheduling part of this talk, we're doing all this work to carefully chunk up our work and execute our high-priority and our default-priority work; all these post messages coming in at random times mess with that main thread scheduling. So this brings us to a new proposal we have that incorporates some of our learnings from other platforms. It lets developers avoid sending data back to the main thread. It lets you chain tasks together without data transfer and pay the return cost only once. It also minimizes thread hops using a built-in thread pool. So let's dive into that.
If we revisit the code editor that we showed earlier, the one that bundles JavaScript as you type, and do this using task worklet, we can leverage some of these features to improve performance fairly considerably. Because task worklet avoids transferring data between threads, the bundling and minifying tasks in this demo can all reuse the same AST; only the minified code that gets generated, which is a relatively small string, actually gets sent back to the main thread. So the implementation looks something like this. First we create a worklet module, and that registers named task processors. These are just classes with a process method. Then, over on the main thread, we can coordinate that data flow using this postTask. So we're going to parse the code and then pass the resulting AST through the subsequent tasks. The key thing is that none of these variables are actually holding values. These are just pointers to data that exists in the thread pool. Data transfer back to the main thread only happens when we await the result property of that last task. Doing this in a typical workers implementation would normally take six hops, as we saw: we executed three tasks, and we'd need to pass a message down and back up for each of them. In this case, we can transfer data between tasks instead. Task worklet is also backed by a thread pool. So let's say we start off with a task that produces a large set of images. When we post a task with some of those images as its argument, it will attempt to run in the thread where that data is already available. So data is never transferred between threads in this case, and that leads to fewer thread hops. To take advantage of pooling, though, the scheduler may still sometimes transfer data between threads in order to get parallelization. And then finally, let's say the result that we're looking for here is actually just a comparison of the number of cat versus the number of dog photos, since that's what's important in the end.
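The "variables are pointers, not values" idea can be sketched in-process. Task worklet itself was an early Chromium experiment, so the names below (`postTask`, `getResult`, `TaskHandle`) follow the spirit of the talk's sketch rather than any shipped API, and this mock has no real threads; it only demonstrates how chaining handles avoids round-tripping data.

```javascript
// A handle stands in for data held in the thread pool: it records the
// computation but does not run it yet.
class TaskHandle {
  constructor(compute) {
    this.compute = compute;
  }
}

// Arguments may be plain values or handles from earlier tasks; handles are
// resolved lazily, which is how chained tasks avoid trips back to the
// main thread between steps.
function postTask(processor, ...args) {
  return new TaskHandle(() =>
    processor(...args.map((a) => (a instanceof TaskHandle ? a.compute() : a)))
  );
}

// The single transfer back: only here does a value "reach the main thread".
function getResult(handle) {
  return handle.compute();
}
```

So a parse → count pipeline pays the "return cost" exactly once, at `getResult`, instead of once per stage as in the naive six-hop version.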
In this case, the only thing we ever transfer back to the main thread is a single integer, and as you can imagine, that's extremely cheap. So we've been thinking a lot about what the future of web development is. We have libraries like Comlink that use reflection to emulate the interface of some code running in a worker so that it can be called from the main thread seamlessly. In the future, we think we might move towards a task worklet model, where developers approach multi-threaded web programming in a somewhat different way: you have a thread pool that's managed automatically, named tasks, and this concept of a task graph that optimizes execution and data flow. There's a lot we're still looking at in this early proposal. We are looking for feedback, and we are looking for real-world use cases. There's an implementation available in Chromium behind the experimental web platform features flag. And we also have a polyfill and some source code and demos available at this GitHub repo. There will be a link at the end of the presentation as well. So, there's been a lot of interest in moving work off the main thread in real apps, and we dug into this in the last few months to understand how far we can get with just using the worker API as a way to achieve threading. To set some context here, a new worker doesn't just spin up a raw OS thread. It actually creates its own JavaScript environment on top of it. And part of that is what's called a V8 isolate, which has its own heap, on top of the OS thread. A key implication here is that the worker, by creating its own JavaScript environment, is not able to share data or code with the main thread. This is fundamentally different from background threads on other platforms and in other languages. So this has implications in terms of using workers in a mainstream way, and by that I mean when the worker is in the path of user interaction. So here we looked at two app development models using workers.
The first one is doing state management in a worker. This is where you do sort of the heavy-lifting business logic stuff in a worker. The second model goes even further and does the bulk of rendering in the worker. Now, while the worker doesn't have access to the DOM, there are libraries like WorkerDOM that let you do virtual DOM updates from the worker, so this is where you can do most of the work in the worker environment. Real apps have been built using these models. However, there are some significant challenges that we want to highlight here if you're planning to go down this route. The first thing is that it's hard to have synchronous access from the main thread to a worker, but real apps need synchronous access to state, so you can't simply keep all the app state on the worker. This means you now have to maintain and replicate this app state in both places and synchronize it continuously, and this has a cost in terms of thread hops. The second thing here is that the worker has to be bootstrapped with all the script and modules that it needs, because, like we said, it cannot share code with the main thread. So we ran benchmarks to dig into this base cost of a worker, and these are some numbers from a medium Android device. Startup takes upwards of 10 milliseconds; this is, again, Chrome on Android. A thread hop varies anywhere from 1 to 15 milliseconds, depending on the device and the size and type of the data. Look out for a blog post that will follow this talk; we will have some detailed links to our benchmarks and data there. We also set up more realistic benchmarks. We built apps representing the app development models we mentioned, that is, state management in a worker and rendering in a worker. And we did a ton of runs on real mobile devices, both with and without a worker, and we looked at a variety of metrics, from memory metrics to rendering metrics such as frame rate and input latency.
And we approximated input latency using cycle time; again, the blog post will have more details on this. But I do want to highlight one bit of interesting data. This is basically showing runs of an app with and without a worker. What we are seeing here is that with the worker, we are seeing a higher, improved frame rate. But on the flip side, we are also seeing higher input latency. So there's a fundamental tradeoff here between improved smoothness versus input latency. Workers are able to free up the main thread by offloading work, so the main thread can focus on rendering, and less script on the main thread means fewer long-task hiccups. Again, on the flip side, input latency suffers from thread hops. And the worker environment is a limited environment: it's not just the DOM, there are many other APIs that are still not available, like various media and audio APIs. So workers can buy you smoothness, but they might do it at the expense of a bit of input delay. There are cases, though, where this is completely worth it. AMP script is a great example. AMP script renders using workers in order to sandbox potentially misbehaving JavaScript. Slow or problematic code that's running in the worker, in this emulated DOM, can't negatively impact the AMP document. And so for AMP, the sandboxing benefit they get out of this outweighs the latency that they get from transferring events. So we wanted to summarize when to use workers, but it turns out there's no perfect rubric for this. There are a couple of hints you can use, though. If you have code that blocks for a long time, code with small inputs and outputs, or something that follows a simple request-response model, you might be in a good position to use workers. If instead you have latency-sensitive code, or code that needs minimal overhead, you might want to start off with a different solution; you could approach workers later.
When adopting a threaded approach to state management, make sure that your state management and business logic outweighs the cost of creating a worker and sending and receiving messages. Make sure that your worker is pulling its own weight. So, we're at the beginning of a fairly major shift in how we do this. We're excited to explore new possibilities for effective scheduling and threading, and we hope that all of you are too. So we just want to leave you with some of the key messages from our talk today. It's hard to achieve responsiveness guarantees because there's so much work happening in modern apps, and we think scheduling is a compelling strategy for tackling this. There's an opportunity here, and framework authors in particular are in a good position to play a big role. In terms of offloading work from the main thread, you can think of using workers as an extension of better main thread scheduling. Some types of work are better suited to workers than others, and we think new APIs like task worklet are going to make it compelling to utilize workers for scheduling. So that's about it. We'll have a blog post coming up next week. Thank you. Again, the links to the GitHub repos: issues on the repos are very welcome and appreciated, and a great way to keep the feedback loop going. And do not hesitate to reach out to us on email or Twitter. Thank you. The usual pattern? Shall we do a quizzy question? I don't know if you've heard of JavaScript before, right? It's this programming language thing, whatever, scripting there. But there's a group that do all of the proposals and stuff. Yes, so TC39 is Technical Committee 39 at Ecma International, which is an international standards organization. Anyway, TC39 is the group that defines future JavaScript features, and the way it works is people submit a proposal and it gets discussed. So we're going to see the name of a proposal, and you need to say if it's a real proposal or a fake proposal.
Just something we made up, right? OK, you're going to get a few seconds per item that comes up, so guess quickly. Here they come. Right. OK, what have we got here? Object seal, that sounds good. Power ranges, that's probably definitely real. Logical hash pipe, that sounds delicious. I'm curious to see how to use it in my code. Tiny endian. Logical assignment. Exceptional seal, brilliant. Rolls and realms. I think I've seen that movie before. Knowledge correlation. Temporal permanence, that sounds like Optimus Prime. Spreadable mix-ins. These are just all words. No idea. Imagine going to a TC39 meeting. It's all words. We'll be saying everything in words. Optional chaining. I don't know. Membrane. Smooth operator. That sounds great. The question has closed. OK. People are very confident about object seal. Object seal, that sounds great. Power ranges, logical hash pipe. We're not convinced about power ranges. OK, fair enough. Wasn't sure about the logical hash pipe there. That sounds good. Let's see the answers. They're fake. Object seal is real. Blind ref is something you would complain about in soccer. Just to be sure, these are proposals; I'm not promising this is going to make it into JavaScript. Let's have a look at the next group. What have we got here? Tiny endian. Logical assignment. Let's have a look. I don't want to have an exception. Frozen realms is a real thing. Yes. A realm is a global, essentially. If you create an iframe, you've created another realm. And this is a frozen one of them. Brilliant. Excellent stuff. I like the way you just went, yeah, Jake doesn't know what he's talking about. Brilliant. I'm being careful. I don't want to start a Twitter fight somewhere. It's a good poll. People think something of spreadable mix-ins. Maybe I should make a proposal about it. People want it to be real. If that changes the winner, we're not going to change prizes. OK, fair enough.
Optimus Prime is, of course, a robot. Next one, optional chaining. Membrane. People were kind of spread on this one. It's fake. I like that. 55% of people thought smooth operator was real. So, yeah. Great. Is that it? Yeah, that's it. Who's speaking next? Well, up next is a talk about application architecture stuff, from a couple of people who've been on and off the stage over the past couple of days. It is Paul and Surma. Big round of applause. I was on the way to the office this morning and I realised it is very much like the web. Rush hour. All the traffic. Everybody is trying to get to the office at 9am. All these people in their cars and on trains, everyone is just rushing, and nobody can move for anybody else. And I think that's like the web because you've got... Please explain this a bit more. You've got the main thread. You've got this one thread, and on that main thread you've got all this work. Everything is running, and everybody is competing for this one resource: the road, or in this case the main thread. Oh, so like Mr Framework has a car, Mr Paint has a car, Mr Business Logic has a car, and they all just want to go on the road, but it's already full. Exactly. And everybody is in the same boat, where everybody gets the angry tweets and sees all this performance advice, and nobody knows what to do, because all this stuff is just crammed into this one place. Yeah. Look at that production value. I know, right? Now. Anyway, look, rush hour. As we described in that video, that's kind of how we feel when we look at the web at large. We look at it and we go, all this code should be here, but it just feels like the traffic is the problem. There's just too much going through the main thread. And traditionally the main thread is full. It's overworked and underpaid. On any other platform you would say, cool, I'd use threads. I'd do that. Just spin up a thread, put some code there, run it there, call a function. Hooray, everyone's happy.
But it turns out JavaScript and the web are special, and the web is inherently single-threaded, so you can't do that. Right, exactly. Every thread is kind of its own little universe, isn't it? We heard it's a V8 isolate, right? And so you can't just go, "just call this on another thread", with some shared state that you both work on. So that's a challenge. And then it gets more interesting, because we're trying to build, I don't know, a chess game, just for argument's sake. You've got a chess engine, and the chess engine takes a few hundred milliseconds to calculate a move. Or more; it gets exponentially difficult. And you build it with the DOM, and then some bright spark goes, you know what we should do? 3D. And you're like, I was already behind on my perf budget. If you could just not do the 60-frames-a-second thing, that would be great. And then some even brighter spark comes up with VR. Yeah, I want to stand on the chess board. Yeah, I want to be right in the game, and you're thinking, there was already rush hour. Turns out frame rate is quite important when it comes to VR. Yeah, and it could be voice. There are so many things that it could be. So it becomes increasingly unlikely that you'll be able to do this successfully on the web currently. Right. And so this is the question that we have been thinking through for the last little while. What did you think of to help? We have two birds; we're looking for a stone. Exactly. And... Oh, that went well. OK. Wow, I really need to think of what I was going to say next. OK, this, yeah. Actor model, yes. So we kind of stumbled over this. The actor model is, as it says right here, about 45 years old. There are languages that use the actor model to this day, and successfully so. And we kind of realised that it's actually a really good fit for the web. Yeah, because what it does is it makes a feature of that single-threadedness of JavaScript.
But we like to explain it in a very specific way. If you've come across the actor model before, great. If you've not, check this one out. So when we did Supercharged, you saw us on screen, but behind the cameras we had an entire crew. That means we had one person working the camera, we had one person worrying about whether our audio was good, we had a director, and each of these people was solely responsible for that specific device. And instead of going over and pressing one another's buttons or just messing with settings, those people actually have to communicate with one another to get the job done. If you like, they're actors in a system: they've got to send messages to one another and communicate and collaborate in order to get the final thing working. So that's kind of... More video, more production value. That's good, right? So that's where we see a mentality that fits the web really well. You start thinking about where you can draw lines around individual pieces of responsibility in your app. And instead of thinking about classes and how you call a method on another class, you can now think about these actors and how you can send the right message to request something to happen. Think about how, at a conceptual level, you would think this through. What would it look like? OK, so imagine you have an actor, and its job is to run your user interface. That's its area of ownership, that's what it does. That's its job, and only that. You might also have another actor whose job it is to handle state for your application, and yet another one who handles the storage. Now, imagine an interaction in your app, something like favoriting an item. The user interface, when the user taps on it, will send a message over to the state actor that says this item was favorited.
In turn, the actor handling the state will send a message to the storage to say, we need to remember that they favorited this item. Now, we could also, at this point, introduce a new actor into this story: something that can broadcast. Because when the state changes, typically what we'd want is for that to be reflected in both the UI and the storage, I suppose, really. Yes. And so you can kind of see here that that really is a separation of concerns. Ah, the click. There we go. Anvil transition. Do recommend. It really helps you to think about your app in a different way, helping you to figure out where new code goes, and which module you can switch out to fix a problem that you're having. It's a really good way to structure your app. There's a lot of talk of separation of concerns when we talk about HTML, JavaScript and CSS, or when we talk about components in a modern framework. So this is another version of that same story, I suppose. Another benefit that you get, and we have heard about this problem a couple of times yesterday and today, is that we often see big chunks of monolithic JavaScript just run: frameworks updating the virtual DOM and then the DOM, or something like that. With this pattern, you introduce a natural breaking point to ship a frame, because every time you send a message, that is a point where you can say, OK, the browser can intervene and ship a frame if we are out of frame budget. Exactly. Now, a little side effect of this, a positive one, is location independence. And we'll come back to this as a sort of repeating refrain that we're going to get into a little bit more. But think about actors: they're not all the same. They have different requirements. Some actors will not need to have access to the main thread, for example, because of the kind of work they do. And as such, we might be able to run them in different locations, not on the main thread. As I say, we'll come back to that.
But the idea here is, maybe we just bought ourselves a little capacity for rush hour. Because as long as the messages get delivered to the actor, the actor will do the same work as before and will respond with the same messages. So the entire app keeps behaving the same way no matter where it runs. And because of that location independence, we can lower the likelihood of long work impacting the main thread and making your app janky. Exactly. So, conceptually that's what it is, but I like seeing code. I think code helps. So, we're not launching a product. We're not launching a framework or even a library. We just wanted to have a chat with you about architectures. We've been using some actor-based stuff for the last little while with our colleague Tim, and the three of us have just been putting some code together. So what we're going to do is show you a little bit of the code that we've been using. We've been using it to build some of our apps, and we'll share that at the end. You're very welcome to try it out. You're also very welcome to write your own. We don't really care if you use our version or someone else's version; it's more about the concept, about the architecture. Absolutely. But with this in mind, let's talk about a particular app. Something like a stopwatch app, where you can pause, play, that kind of thing, and then you might reset the time if you're done. So in our code, we have this actor base class. And that top function up there, hookup, is the first thing that you'd need to know. The job of hookup is to register an actor in the system so that we can talk to it later, so we can send it messages later on. Because ultimately, we won't know where this actor is in the system. So we just need a registry, where we can say, I'm going to tell you there's an actor, and it's found under this name.
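A registry like the one just described can be sketched in a few lines. The `hookup` name comes from the talk; this stand-in delivers messages synchronously within one JavaScript realm, whereas the whole point of looking actors up by name is that a real implementation could route the message to an actor living on another thread entirely.

```javascript
// Name-to-actor registry: callers never hold a direct reference to an
// actor, only its name, which is what makes actors location-independent.
const registry = new Map();

class Actor {
  onMessage(message) {} // subclasses override this
}

function hookup(name, actor) {
  registry.set(name, actor);
}

function send(name, message) {
  const actor = registry.get(name);
  if (!actor) throw new Error(`no actor hooked up under "${name}"`);
  // A real system would queue this (and possibly cross a thread boundary);
  // synchronous delivery keeps the sketch small.
  actor.onMessage(message);
}
```

So `send("state", { type: "favorite", item })` works the same whether the state actor runs on the main thread or in a worker, which is the location independence described above.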
It's basically the equivalent of what you might know from custom elements. There's customElements.define, where you say, this custom element is now known under this name — and this does the exact same thing, but for actors. Cool. So then we have our two actors, a clock one and a UI one. And then in the bootstrap, we instantiate our UI and hook it up, so it's available under "ui" as a string name. And the same with the clock. Like so. So now we can talk about how you might implement something like the clock itself. And in our case, when you've got something like this, it's almost like a pure data actor. It doesn't have any need to go near the DOM. It just wants to tick and pause and all those kinds of things. Yeah, I mean, what do you need? You need a setInterval and that's pretty much it. A setTimeout? Sure. I have a thing against setInterval. Come find me after this and I'll explain why. Anyway, imagine this then. We're going to model this as a state machine. What an idea. We start with a paused state. Our clock is paused. We can transition to a running state. Every second we'll tick: we'll go to the tick state and that will take us back to the running state. And you can imagine being in this kind of tick-running-tick-running state, like a clock. Like a clock, indeed. We could pause, and we could reset, and when we reset we go back to where we started. And there's a really nice pattern here that plays along well with the message passing of the actor model, because all of these triggers, as they're called in the state machine world, could just be a message. You send a message to this state machine, it gets ingested, and then a transition happens. Absolutely. So we found, actually, that there are a lot of implementations of state machines out there in the wild, and that's not very unexpected. And we've been using XState, written by David from Microsoft, and it's working really well. 
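The clock's states can be written down as a plain transition table. This is a hand-rolled sketch in the spirit of XState's JSON-style machine config, not XState's actual API, and the state and trigger names are our reconstruction of the diagram:

```typescript
// States and triggers for the stopwatch clock, modeled as a lookup table.
// An illustrative hand-rolled machine, not the real XState API.

type State = "paused" | "running";
type Trigger = "START" | "PAUSE" | "TICK" | "RESET";

const transitions: Record<State, Partial<Record<Trigger, State>>> = {
  paused: { START: "running" },
  running: { PAUSE: "paused", TICK: "running", RESET: "paused" },
};

// Ingest a trigger (which, in the actor model, arrives as a message)
// and return the next state; unknown triggers leave the state unchanged.
function transition(state: State, trigger: Trigger): State {
  return transitions[state][trigger] ?? state;
}
```

The TICK self-loop on "running" is the tick-running-tick-running rhythm from the talk; because each trigger is just data, it maps one-to-one onto a message.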
It allows you to declare your state machine as a JSON object — you just declare your states and what the transitions are — and then you pass this to the Machine constructor and you get a state machine. Yeah, so our clock extends the actor base class, and what we do is instantiate our state machine. We say go to the initial state, which happens to be that paused state. And then later on, imagine that we receive a message. The base class has this onMessage callback, which is: I got a message, what do I do? In this case, we would assume that the state would be changing inside of our state machine. So we use the state machine's transition to get from wherever it was to wherever it needs to be. So the message is basically driving the clock. And we'll inspect what the new state value is. So if we find that our clock is running, we'll set a tick timeout for one second. If we tick, we increment our tick count, and then our clock will send itself a message. Now, it could call its own functions directly, but we like to keep it fair. So what we do is make sure the clock sends itself a message, just like every other actor would have to. You can imagine a message queue that buffers all the messages, and the actor takes one message, processes it, then goes to the next message. And so if you just called your own function, you would kind of be cutting in line. So to keep it fair, you send yourself a message like everyone else. Sure. Yeah. Okay. We cancel the tick if we're paused, if there's one pending. When we reset, we reset the tick count, and again the clock will send itself a message to pause. And now let's talk about — oh no, not yet — sending messages. That's next, yes. So the clock is going to have to send a state update to the user interface so that the UI can reflect it. We look up the UI actor, and then we can send it a message. 
And that's very important to note here: the handle, this ui variable that we have there, is not the actor instance. You can't go in and change a member variable of that class. It is just an object with a send method, and only that method, because that's the only way you're allowed to interact with any of the other actors. Exactly. And so in this case, we're going to send a message to the user interface, and the message is going to say what the time is and whether or not the clock is running. We found that TypeScript is really helpful at this point, because those messages need to be well-formed and well-understood — there needs to be a data contract — and we've found from practical experience that TypeScript is a really good way of saying: this object looks like this, this is a number, this is a string, this is another object, and so on and so forth. So just take that for what it is; we found it useful. Let's talk about the UI a little bit. Interestingly, you can bring your own framework. Yeah, in this model, we don't really care what kind of framework you use. You can use React, you can use Vue, you can use Lit, you can use Svelte — whatever you feel comfortable with, whatever makes sense in your scenario. The interesting shift here is that the UI framework is not your base platform, not your entry point anymore. The UI framework is no longer the center of the universe. The UI is just one participant of many in the system of actors. And if you find that it's not behaving well for whatever reason, you can swap it out. The only thing it has to do is listen to messages of a particular type, which is kind of cool. In our case, we're going to use Preact, which is great. So we import Preact, UI extends the actor class, and when a message comes in, we render. The UI also needs to send messages back. Absolutely. And to do that, on the UI side, we find our clock actor — we do a lookup on it — and we send it a message. 
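That data contract could be typed roughly like this. The message shapes are our guess at what a stopwatch needs, not the talk's actual definitions, and the send-only handle is the key idea — the actor instance never escapes:

```typescript
// A typed message contract plus a send-only handle. Illustrative shapes.

interface StateUpdateMessage {
  time: number;      // elapsed seconds
  running: boolean;  // is the clock currently running?
}

// The handle type: callers get a send method and nothing else,
// so they can never reach into the actor instance itself.
interface Handle<M> {
  send(msg: M): void;
}

class UIActor {
  lastUpdate: StateUpdateMessage | null = null;
  onMessage(msg: StateUpdateMessage) { this.lastUpdate = msg; }
}

// Wrap an actor so only send escapes; the instance stays private.
function handleFor<M>(actor: { onMessage(msg: M): void }): Handle<M> {
  return { send: (msg) => actor.onMessage(msg) };
}
```

A caller holding a `Handle<StateUpdateMessage>` can send `{ time: 3, running: true }`, but TypeScript rejects a malformed message such as `{ time: "3" }` at compile time — that's the data contract doing its job.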
In this case, for this particular example, sending it a message to start, but there would be one for stop or reset and so on. Now we get to talk about that location independence a teeny bit more. Because one of the questions that Surma, Tim and I ask is: why does this actor need access to the main thread? And this is kind of to do with the rush hour thing. Our general rule of thumb — and there are some caveats, mentioned in a moment — is that the UI actor is the one that really needs the DOM, and therefore it's the one that ought to be on the main thread wherever possible. There's an exception. The exception here is that certain APIs — those for media, security, device capabilities and identity — are only available on the main thread. So that's a bit of a bummer. We've been talking to some Chrome engineers about exposing these kinds of APIs in workers and elsewhere, but that's just not the world we live in today. So for now, that's a restriction. And tools, not rules. You might be thinking, ah, I should move all my actors away from the main thread. We'll get to that. But the thing is, if you've got an actor that's very chatty with the UI, there's a cost to the thread hop, and that might be more expensive than just keeping that actor alongside the UI actor on the main thread. So basically, if you want to do that, just measure and see what the impact is. Exactly. So, location independence. All that notwithstanding, imagine we were back here where we started, with our four actors all on the main thread, which is probably not ideal. So we're sort of saying you might want to look at it more like this. And you might be thinking, why did they say "not main thread"? Surely they just meant web workers. And we kind of did, because in most cases, when we build these apps, web workers do feature heavily. We do move quite a lot of our actors to web workers, especially if they're non-chatty. 
But not quite. So we actually think that it's possible to run an actor, for example, on the server side. And this is kind of an interesting jump to make, because it allows you to incorporate your back-end into the architecture of your entire app. It is just another actor in the system. And as a matter of fact, the game that you've been playing all day — which totally had no bugs at all whatsoever — is actually written in this model. So every player that is playing is an actor. The admin panel that the MCs are working is another actor, and then there's the Firebase storage and shared-state actor that's running on the server side. Interestingly, the mechanism by which they chat using hookup and lookup can be anything. It could be fetch, it could be a WebSocket. It really doesn't matter. As long as these actors have a way of sending messages to one another, they can talk. So that's one thing that we definitely did achieve: we are making it less likely to have big chunks of uninterruptible JavaScript, and more likely to have little chunks where the browser can stop in between and ship a frame. So that's definitely one advantage that we have. And the location independence: hopefully some of our actors can be run successfully away from the main thread. A lot of work that can take unexpectedly long — say, processing a big API response — is something that can happen in a worker and not push your main thread into jank mode. There are some other benefits that Tim and I have noticed as well, working in this particular pattern. One is better testing. To test an actor, all I have to do is send it a message and check that it does the right thing. So testing seems to become a little bit easier. 
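Because everything goes through messages, the transport underneath lookup can be swapped out. This sketch — our own illustration, not the speakers' code — shows the same actor-facing API working over a pluggable transport; an in-memory one here, where a real system might plug in postMessage, fetch, or a WebSocket:

```typescript
// Transport-agnostic message delivery: the actor code never knows whether
// messages travel in-memory, over postMessage, or a WebSocket.
// All names are illustrative.

interface Transport {
  deliver(actorName: string, msg: unknown): void;
}

class InMemoryTransport implements Transport {
  private actors = new Map<string, (msg: unknown) => void>();
  register(name: string, onMessage: (msg: unknown) => void) {
    this.actors.set(name, onMessage);
  }
  deliver(name: string, msg: unknown) {
    this.actors.get(name)?.(msg);
  }
}

// A WebSocket-backed transport would have the same shape: deliver()
// would do something like ws.send(JSON.stringify({ name, msg })) instead.

function lookupVia(transport: Transport, name: string) {
  return { send: (msg: unknown) => transport.deliver(name, msg) };
}
```

Sender code written against `lookupVia(...)` doesn't change when the actor on the other end moves to a worker or a server — which is the location-independence claim in concrete form.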
From the other side, you can mock another actor by just implementing the messages that that actor needs to receive — not doing the actual work, but just sending pre-recorded messages back. You get a clear separation of concerns, which, again, helps you work out which actor needs to be responsible for which part of the system. And you get code splitting. Because actors can be hooked up to the system at any point in time, you can split them up and load them lazily. You can import them when you need them. That's good. And bring your own framework: if you want to use a particular thing, use it. Great. Now, there are some considerations in this setup that we've described. One is actor performance challenges. Imagine your UI actor decides to run long and just not be very yieldy — you still have that problem. It's no different to a process or an application in an operating system deciding it's going to hog the CPU. This is not going to go away. But we do think that the work that Jason and Shubhie mentioned in the previous talk is a huge part of this story, because it's a great way for individual actors to start breaking their work up into smaller chunks. You may also be sitting there going, I'm not sure I could actorize my blog. And we would agree. It's not necessarily for all use cases. This works really well when you've got apps — in particular, apps where you can identify clear owners for the different pieces of state and logic. That's where it works really well. And there's definitely a different mental model to this. As I said, it kind of shifts the center of the universe away from the UI framework and into many smaller pieces, all the actors just communicating. So it definitely took us some time to build an intuition for: where do we draw the line? What becomes an actor? What is just part of an already existing actor? What kind of messages should we send? How do we do that? 
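As a sketch of that testing story — again with our own illustrative names — a collaborator can be mocked with something that merely records the messages it receives:

```typescript
// Testing an actor by mocking its collaborator: the mock implements the
// same message interface but only records what arrives. Illustrative names.

type Msg = { type: string; payload?: unknown };

class MockStorage {
  received: Msg[] = [];
  send(msg: Msg) { this.received.push(msg); }
}

class FavoritesActor {
  constructor(private storage: { send(msg: Msg): void }) {}
  onMessage(msg: Msg) {
    if (msg.type === "favorite") {
      this.storage.send({ type: "save", payload: msg.payload });
    }
  }
}
```

The test sends the actor one message and asserts on what came out the other side — no DOM, no framework, no real storage involved.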
It seems weird at first if you're playing around with this. It's a very different way of architecting a web app. So that was the rush hour bit. What about that future of VR, AR, and so on? We have some thoughts there too. Watch this. You were talking about actors before. I want to talk about cameras. The reason I want to talk about cameras is that the modern camera has two bits. The camera body, which is the thing that kind of holds the state — whether you're shooting in JPEG or RAW. It's like the business logic. It knows how to take the picture and how to store it and does all that. Whether you're shooting video, taking photos, whether you're on autofocus or manual focus — you get the idea. Similar to a web app, right? That's the state of what's going on. And then you've got the lens, which you can swap out — something for landscapes, say. As long as we make sure that the mounts are compatible, which I guess in actor world means that they speak the same messages to each other. Yes. So the messages that they send are really important. They're sort of standardized, right? And everybody kind of plays to that same data contract. Other than that, you can do what you like. Plug it in. Off you go. Last video, I promise. Yeah. So, camera lenses. Why camera lenses? When we talked about this earlier, I think very naturally we would have all thought of the DOM. In the chess game, we would have thought of this version. But there's a freedom that you get from a UI actor: as long as it can speak the right messages, it can be implemented in different technologies. So you could have a different UI actor that does 3D. And now you get that — it just has to send the same messages. Similarly, something like XR needs to be able to do the same thing: send the right messages. And perhaps something like voice as well. Similar kind of story. 
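The lens-swapping idea maps to code something like this — two UI actors that "mount" onto the same message contract. The types and the rendered strings are our illustration; real versions would drive the DOM and a speech API rather than returning strings:

```typescript
// Two swappable UI actors speaking the same message contract.
// As long as the "mount" (the message type) is compatible,
// either one can be plugged into the system. Illustrative names.

interface StateUpdate {
  time: number;
  running: boolean;
}

interface UIActor {
  onMessage(msg: StateUpdate): string; // returns what it would present
}

class DomUI implements UIActor {
  onMessage(msg: StateUpdate): string {
    // A real version would render with Preact/React/etc.
    return `<span class="clock">${msg.time}s ${msg.running ? "running" : "paused"}</span>`;
  }
}

class VoiceUI implements UIActor {
  onMessage(msg: StateUpdate): string {
    // A real version would hand this to a speech-synthesis API.
    return `The clock reads ${msg.time} seconds and is ${msg.running ? "running" : "paused"}.`;
  }
}
```

Nothing else in the system has to change when one is swapped for the other — the rest of the actors keep sending the same `StateUpdate` messages.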
So as a byproduct of the actor model, we've all of a sudden got these different swappable UI actors that we can take with us wherever we need to go. And you might say, well, actually all I need in this particular app is the XR and voice. Or maybe I want DOM and voice. Or a second DOM actor that implements the same app but with much lower memory consumption — like a low-end version of your website. And once you detect that the device you're running on is actually struggling to keep up, you can switch it out in the middle of the app and downgrade to the lighter visual version. Or one with reduced motion, or something like that. Technically this would be called multi-view or multimodal, something like that. And as a byproduct, we think the actor model enables you to do that pretty well. So if you're interested in looking at the hookup and lookup and the actor base class that we mentioned earlier, this would be the place to go if you want to take a snap of that. This is actually not only the actor base class; it's a boilerplate, so it gets you started. It has Rollup configured so you can just build it, it does the code splitting — it's the stomping ground that we've been using for the last little while. And we'd love you to take a look and have a chat with us and just tell us what you think. So in summary, we're actually quite excited that something from 45 years ago has kind of come full circle — it's been hiding in plain sight. From some other places in computing, we've kind of brought it over. Not in a purist way — we have our own take on these things. But it respects that single-threadedness, it seems to help with rush hour, and it also seems to enable us to go multimodal, which is really kind of exciting. And on that note, thanks. Thank you very much. Thank you, Surma. Paul, I've been told my co-MC from last year, Monica, is in the audience. Where is she? She might have just... 
Hi, Monica! Oh, good. Everybody say hi to Monica, and talk to her about machine learning and art, okay? I'm really jet-lagged. Do you want to just take over? We can mic you up. It's fine. How do you feel that it took not one but two white guys to replace one of you? I'm sure there's some social commentary there. All right. It is lunchtime. Excellent. Same as yesterday, lunch is out over by the forum. Any specific dietary requirements are all catered for there as well. And then come back here at 2.30. See you then. Pinterest's mission is to help people discover things, collect them, organize them, and then find ways to apply them in their lives. With more and more people on the go, the mobile web has become central to providing our discovery experience. But our mobile web experience in the past was basically an upsell for the native app. Which brought us to the realization that we needed to fix it. The technology was already in place for us to be able to do so. And so we brought a team together to rebuild the mobile web from scratch. Having a fast mobile web was crucial to the success of the project. We made sure that what was sent down to the user at the start was only the crucial things, and that everything else that wasn't immediately important was sent down later on. We made sure to test on average devices on 3G, just like our users would be using. And we could see the dramatic difference there was in that initial time to interactivity. By using modern caching best practices and service worker, even if you're on a bad connection or have no connection, we were able to preload the user interface for follow-up visits. Like the native app, our site was optimized for touch interactions. This immersive experience resulted in mobile web becoming our top platform for new signups. 
We wanted to make sure that users of our mobile web continued to use the product as time goes by, and one of the most important technologies that's been added to browsers was the ability to add the site to your home screen. Pinterest is not just about the content, but about what you do with it. The browser is a discovery platform, so making it easier for Pinterest users to use our own discovery service is really a perfect fit. And action! This is Designer vs. Developer, a show that explores the challenges we face in the industry, such as the cliché where the designer works on something, the developer takes it, and the designer screams. Creating software that's not frustrating is actually hard to do. If you put all the work in, no one should realize anything actually happened at all. How you go about doing that is a good question. You're basically trying to hack human perception as much as you can. If you make a developer guess how to design a site, we're going to guess really poorly. We stop at this data level and we don't go through this extra step to make some more art on top of it, to make it extraordinary. Does it feel like you're breaking the web? There's no point in implementing a feature if no one's using it. That's it. You can find these and more on the Chrome Developers YouTube channel. I feel like I'm being trolled by that music. No, it's going to happen until we all die. That's how I feel. I wake up in the middle of the night hearing it. No, no, no, no. The worst bit is when you're backstage and you can hear just a little bit of it. No, no, no, no. Yeah, and you think, is that actually playing somewhere or is it just in my head now? It's in my head. Anyway, lunch, awesome though it is, can make you feel a little bit sleepy. Yeah. So we thought we need a way to wake everyone up, and I've seen some conferences do the thing. The exercise thing. Get everyone to stand up and jump up and down. No. No, we wouldn't do that to you. But we would do a quiz. We'll do a big web quiz. 
Yeah, let's get the screen up. I am super fond of this question. This is our favourite, isn't it? So there are a couple of changes here. We're going to give you four seconds per question, a little bit longer than usual. And they're two points each, because it is quite exciting. And I will explain the rules to you now. True or false? Off you go. So what are we looking at here then? Low confidence, I think, is what we're looking at here, buddy. Yes, we are, aren't we? What is that? An array, with two in it, plus two, equals 22 as a string? I don't know. An empty array? I don't know. isFinite of zero as a string? Does NaN equal NaN? Sounds like it should. Is this array one again? Or is it different this time? The first one was not an array, wasn't it? Does NaN equal false? Now you see why you got four seconds. Number.isFinite — we had isFinite of zero as a string, but now we've got Number.isFinite of zero as a string. You're going to tell me there's a difference, aren't you? Coercion. Yes. Why are they different? We'll find out. Oh, okay, fair enough. We've reached the end of that round. Let's have a look. Honestly, if there were a programming driver's license, I would say certain people might need it revoked. We came out yesterday and we did say, hey, don't worry about these silly questions. Don't worry if you're getting them wrong. But that first one, really, is a bit of a worrying sign, if I'm honest. Okay, so they are all false except the object, I guess. An object is truthy. That is truthy. Hang on, what's going on with this array nonsense? What's this array plus two equals 22? I have no idea. Excellent, let's move on to the next screen. So, isFinite of zero as a string. I love it when it's almost 50-50. It will be true. That's true. Zero is a finite number, and isFinite is happy to coerce the string to a number. Okay. I guess we saved that piece of knowledge for the next screen. Okay. And NaN is not equal to NaN. Good to know. Okay. 
And yet typeof NaN is "number" — not-a-number is a number. What? Yes, it is. So this time, because of the coercion, a number equals a string — it's true. Hang on, what's going on here? Is that true? I don't know. Are we going to double check that? No. NaN equals false is false. It's false, excellent. That one is right. And then the final set. So we've got Number.isFinite here. So you're saying that should be false. It is false, because it won't coerce the string to a number — it'll look at it and go, that's not a number, so it's not finite. So no. And the not-not of "false" as a string: because it's a non-empty string, it's truthy, so it comes out true — even though it's saying the word false. Sure. I'm not feeling any more awake than I did before lunch. But are you feeling deep sadness? Well, I always do. Even more so now. Should we take a little look at the leaderboard? Oh, I think we ought to. I just like the animations you've done on this screen more than anything. Right, who have we got here. Also some CSS blend modes. We've got two people featuring from the leaderboard yesterday. Liz has stepped up from, I believe, third to second. Will has been in first since the start. Good luck catching up. And Arrives is in third place. Excellent stuff. Can they stay there for the finale? For the ultimate prizes? It's all up for grabs. Oh, okay. So we should get the next speakers on the stage. So, here to talk about web packaging and portals, let's have a big round of applause for Kinuko and Rudy. Hi, everyone. I'm Kinuko. I'm the tech lead of the loading project for the web platform in Chrome. And I'm Rudy. I'm the lead product manager for AMP at Google. This presentation is a developer preview of two upcoming APIs: web packaging and portals. With these APIs, we believe that you'll be able to take the web's low-friction nature to the next level and create zero-friction user experiences. Thank you. All right. So here's a slide showing a slide. 
To be more precise, this is one of those old-school projector slides. In their day, slides like these were a useful way to communicate, or held important memories for people. To view a slide's content, the projector needed to mechanically position it into place so that the light source would shine through it. And you'll probably remember how tedious it was to progress through each slide. This kind of reminds me of how the web feels today. The same feeling. Just like progressing through those old slides, when you browse the web today, you can feel all the navigations. Today, the web is used a lot on the go — say, in between meetings, in an elevator, or maybe on a poor connection. When I've got limited time focused on my phone before the next distraction comes along, spending five seconds or longer staring at a white screen, waiting for the content to come in, for the page to be interactive — that's incredibly noticeable. Many of you may have observed that we've used some exaggeratedly long transitions on our own slides so far in this talk. Don't worry. We're going to put an end to that soon. Those were just one-second fades, but it's that pattern of reading and then waiting, reading and then waiting, that we've kind of grown used to. We think it's time to do better. We have some ideas to help you create seamless, zero-friction user experiences. That's what this talk is all about. So, the desire for a seamless user experience on the web — it's not new. Back in 2011, we launched a feature called Instant Search, which provided an instant experience by pre-rendering the search results the user would most likely click on. However, the feature only worked in limited scenarios, for various reasons — in particular, for privacy reasons, it only worked for search results the user had already visited and for which we had high confidence of user interest. Then, in 2015, another example of such a user experience was launched. 
That was AMP, in 2015, and Rudy will now go through what it took and where things are headed. So, as Kinuko said, advancing the state of the art in page loading has always been of intense interest to us at Google, and for Google Search in particular. We point users to a whole lot of web pages, and thinking about the totality of the experience that the user gets, we really wanted it to feel as fast and as seamless as possible. Even using the full power of the web platform that was available, where we wanted to get was here, and we felt that what we could achieve in a scalable way on the web just got us up to here. So, we gave it a bunch of thought and came up with an architecture for instant loading that could work on today's web. It's still what we use today. First, there's the AMP JavaScript library. It helps to ensure that the experience is fast by default, and this is enforced by a validation step as the site gets updated. The next layer came from thinking about how server response times can vary a lot globally, and not every site is situated on great infrastructure. Also, as we've seen earlier, sticking huge images into pages intended for mobile viewing is still pretty common. So, for these reasons, we added a second layer of caching, where we can ensure that the content is pushed to the edges of the network for faster delivery, and we can do common-sense optimizations. Really, the best thing we can do to get load times down to zero milliseconds is to pre-render the content — use our psychic powers. As we just discussed, this was attempted in Search before with Instant Search; however, you need to think really carefully about the privacy implications of such a design, and we had trouble scaling it. The cache actually complements this pre-rendering very well, and Kinuko will explain more about that in a moment. Now, let's take a look at the URL of an AMP page. This is where we ended up. 
Most of what you're looking at here is what we call the AMP viewer. That's the page you're visiting. It's responsible for displaying the content, served through the AMP cache for speed and privacy reasons. However, you'll notice by looking at the address bar that the URL still says google.com in it. We did what we could to help the user understand where the content they're viewing came from, but the design constraints that we faced and the workarounds that we had to build for them ended up being put on full display in the product experience, and that wasn't great. We heard a bunch of feedback on this, maybe even from some of you. Earlier this year, we started down this path to make the URLs for AMP pages better, and after having AMP in the wild for two years, we decided it was time to take all that we had learned and develop the necessary primitives so that all the content across the web could benefit from this kind of technology. So this means that for the cases where you click on a link in search and it's just a simple navigation, we want the publisher's URL to be the one that shows up in the browser's address bar, while still having the instant or nearly instant loading experience. So, I already talked a lot about how AMP has been pushing the goal of a highly optimized user experience. There was a lot of special handling required because of a gap in the web platform. We are now taking inspiration from past efforts like AMP and Instant Search and trying to eliminate this gap by extending the web platform. And by doing so, we also want to enable this zero-friction user experience across all content on the web. We are working on multiple proposals to achieve this goal, but in this talk, we will introduce web packaging and portals. So let me start with web packaging. As the name implies, it's meant for packaging a piece of web content. 
We think we can enable various interesting use cases, but let me first explain how it can help instant navigation, for both AMP and non-AMP content. So, stepping back: we've long wanted to make web content load instantly and reliably. Here's why it's hard. When you publish something on the web, in the simple setting, you would have a server and your content would be published there. Then someone browses to your content, but the server might be overloaded, and your content will load slowly. That experience is not good. So what could we do to improve this? One way would be to prefetch your content. Suppose that your content is linked from a popular traffic-source site. When a user visits that site, it can trigger a prefetch when it thinks that the user is about to visit your content. Then, because the content is already in the user's cache, the navigation happens very fast. Unfortunately, the site being prefetched can learn about the user's interest, even if the user never visits it. One way to fix this would be for the referrer site to add a cache here. Then the referrer site could bootstrap your page load in a privacy-preserving manner, because it could let the browser prefetch your content from this cache. This fixes the privacy concern, and the content loads instantly. So is this the holy grail? No, not yet. As Rudy explained, this design allows the referrer to bootstrap your page load, but the workaround is on full display in the product: the browser address bar shows the URL of the referrer site instead of yours, because this is where the browser thinks the content is coming from. This is confusing to users. The issue is that the web platform doesn't provide a proper way to let others bootstrap your page load. But what if others could serve your page on your behalf, in a way the browser could trust? This would let traffic sources bootstrap your page load, and when the user navigates, it's just a regular page load from your servers, only much faster. So how can we achieve this? 
The browser needs a way to verify that resources served by a fast cache really are your origin's resources. This can be done by attaching a proof of origin that the browser can check. So let's see the actual standards proposals. Web packaging is not the name of a single proposal, but an umbrella concept for multiple spec proposals. The most important one is the signed exchange. It's basically a format that represents a single HTTP exchange — a pair of HTTP request and response. Very simple. It's digitally signed by the publisher so that the browser can verify it. There's another proposal building on top of signed exchange, called bundled exchange. This is a bundle of exchanges that can represent multiple resources in one package, like a whole web page. We think bundled exchange will enable other interesting use cases, but we're starting with signed exchange, since that's the key building block. And after a year of work, we're shipping an experiment of this feature in Chrome 71, which is in beta now. You can play with it locally by enabling a flag, or you can join the experiment to enable it on your site. Please visit bit.ly to find out how. We'd love to get your feedback on this. That will help us improve this feature in the future. To serve signed exchanges, you need to first acquire a certificate that can sign exchanges, and host it at a public URL. Such a certificate can be obtained from DigiCert today. Once you do that, you can generate signed exchanges for your resources by using an open source tool. This process is fairly manual today, but platforms are starting to make it easier. We'd now like to show you a demo of using signed exchanges for delivering AMP content from Google Search. To walk you through it, please join me in welcoming to the stage Sumo from 1-800-Flowers and Rustam from Cloudflare. Cool. Thanks, Rudy. AMP has provided a prominent pathway for user discovery on Google Search. 
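Conceptually, a signed exchange pairs a request, a response, and a signature the browser can check against the publisher's certificate. The toy sketch below models only that shape — it is not the real `application/signed-exchange` wire format, and it substitutes an HMAC with a shared key for the real certificate-based signature, purely for illustration:

```typescript
// Toy model of a signed exchange: a request/response pair plus a
// signature that lets a third-party cache serve it verifiably.
// NOT the real wire format — an HMAC stands in for the real
// certificate-based signature purely for illustration.

import { createHmac } from "crypto";

interface Exchange {
  requestUrl: string;                      // the publisher's URL
  responseHeaders: Record<string, string>;
  payload: string;                         // response body
}

interface SignedExchange extends Exchange {
  signature: string;
}

function signExchange(exchange: Exchange, publisherKey: string): SignedExchange {
  const material =
    exchange.requestUrl + JSON.stringify(exchange.responseHeaders) + exchange.payload;
  const signature = createHmac("sha256", publisherKey).update(material).digest("hex");
  return { ...exchange, signature };
}

// The browser-side check: does the content match the publisher's signature?
function verifyExchange(sxg: SignedExchange, publisherKey: string): boolean {
  const material =
    sxg.requestUrl + JSON.stringify(sxg.responseHeaders) + sxg.payload;
  const expected = createHmac("sha256", publisherKey).update(material).digest("hex");
  return sxg.signature === expected;
}
```

If anyone in the middle — a cache included — alters the payload, verification fails, which is what lets a browser safely show the publisher's URL for content it fetched from somewhere else.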
We've done a great job of delivering speed and engagement, thanks to an active developer community and the rollout of new web components. Web packaging further deepens the experience by providing a native URL while preserving the benefits of the AMP cache. Today, we're excited to demo an example of web packaging live. Are you ready? Check it out. We're going to show you a demo on Google Search. You can see that the result is prominently featured; you know instantly that this is an AMP search unit. Click on it, or tap on it. Instantly, as Rudy was mentioning, you see that there's no Google in the URL, which is pretty key here, because now you instantly feel that you're natively on the website rather than on a Google URL. You can see that in this case you're actually on 1800flowers.com. Furthermore, having attribution be this seamless makes this a much more comfortable proposition for a lot of brands, because now there is absolutely wonderful attribution going from the SERP to the native site. And a big shout out to the Google team and to the Google mobile consultants team. We've really been pushing the boundaries of UI and UX enhancement, and making sure that the web is taking all the strides possible to go to the next level. Rustam, you want to go through how this works? Sure. Thanks, Sumo. Let's look under the covers and talk about how you would actually deploy something to support signed exchange. At the top here, in green, we have the request flow from your application and proxy to your user's device. On the bottom, you have the request flow into the AMP cache. And in between, you have an AMP packager. This prepares the documents for the cache and signs them to support signed exchange. Now, at Cloudflare, we sat down and thought about how to use our global programmable network to make this all simpler. And this is what we ended up with. We took all the logic necessary and put it into a Cloudflare Worker. 
This sits at our edge and supports the cryptographic operations, the packaging operations, and the logic necessary to arbitrate between the user and AMP cache request flows. So you might be asking, what's a Worker? Simply put, it's V8 running on the edge. This allows you to write JavaScript targeting the service worker API, deploy it to our edge, and have it running instantly in 155 locations. Supporting signed exchange is a great example of what Workers are capable of. So in addition to releasing the code that supports this demo, so you can build your own Workers to try signed exchange, we also plan on building a full-fledged Cloudflare feature to support it at launch. Back to Rudy. Thank you, Sumo and Rustam. If you're publishing AMP content, we'd like to invite you to try out a developer preview of signed exchange AMP content in Search. You can learn more about creating packages and building an end-to-end flow, like you just saw, for your own AMP content. We've seen the benefits that signed exchanges bring to AMP publishers, but it's important not to forget that this will benefit all pages on the web too. Now, I want to show an additional example. On the left is a regular cross-site navigation: on a slow network, the content loads slowly. The right side shows how it can be done with a prefetch with web packaging. You can see that the user is navigating to a page on a different site, instantly. The prefetch is done from the cache of the referral site, and therefore done in a privacy-preserving manner. So, we've shown how to use web packaging to realize privacy-preserving instant navigation. But it still feels like we're progressing through pages as a disjoint experience, not a nice seamless one. And we've been wondering how we could improve this even further. So let me introduce our latest proposal: portals. First, let's see what we mean by navigation versus transition. Maybe it's not too surprising. 
Again, this shows a regular navigation. It loads slowly, depending on connectivity. And the right side shows an example of a transition. As you can see, when the user taps on the article, a nice and smooth animation is triggered, creating a sense of continuity. The navigation just happened without being felt. The navigation is so subtle that it's worth taking a closer look. As you can see from the address bar, the user starts their journey from a page on one site, and when the animation is finished, the user ends up on another website, news.toro.jp. So it's a cross-site transition. Combining portals and signed exchanges enables these types of user experiences while preserving the user's privacy. Note that portals are not limited to cross-site navigation. Let's take a look at how they can improve the user experience within a single site, too. I would like to thank Hatena and Young Jump webcomics, our partners in Japan, who we've worked with to create this early exploration for their reading-manga-on-the-go website, Tonari no Young Jump. So let's see the user experience without portals. When you reach the end of a chapter, as you can see, it takes time to load the next chapter. That's because the website is using a multi-page architecture, and it needs to load a new page for each chapter. Now, let's see how that could look with portals. At the end of the chapter, we can preload the next chapter and make the transition seamless. Pretty cool, right? The beauty of this is that you can achieve this smooth user experience without having to rewrite the site. So, what are portals? Portals are like iframes. You can create one as an embedded element of a page using the portal tag. At this point, it looks pretty much the same as an iframe. You can then navigate to it by calling the activate API. When that API is called, the element is detached from the page and becomes the new top-level browsing context. 
You can also add animation to smooth out the transition. So, what are the differences between portals and iframes? The biggest difference is that portals can be navigated into. Another interesting difference is that portals are always created as top-level browsing contexts, while they can still be embedded in a page like iframes. Let's recap the benefits. Portals enable seamless page transitions like what you get with single-page apps, but without having to re-architect your site, and even across different origins. So you can just build your website using multiple pages, and connect them with portals. So, here's an example code snippet for portals. You can create a portal as an HTML element, and then append it to the page to have it embedded. Then, when the user touches the embedded portal, you can add a nice animation and call the activate API to make the actual transition. That's it. Exciting, isn't it? And you probably want to know the current status. We have an explainer on GitHub; visit it to learn more. The Chrome implementation is in progress. We are aiming for an origin trial next year, and we're eagerly awaiting your feedback, which will help us refine this proposal. So, that's basically all from us about going from low friction to zero friction, but I have one more topic: bundled exchanges. Remember that we hinted at bundled exchanges earlier. They allow multiple resources to be bundled in one package, and you might be wondering about the current status of development. While the Chrome team has only just started building a prototype, we think this could enable interesting use cases like offline PWA installation and much more. Here's an example of a newsreader PWA. This is based on a toy demo built by one of our awesome developer folks, and it runs on a custom Chrome build to use bundled exchanges. 
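The snippet being described on stage can be sketched roughly as follows. This is based on the Portals explainer as proposed at the time; the element name, the `activate()` method, and the CSS class used for the animation are all subject to change as the spec evolves, and the URL is just a placeholder:

```js
// Create a portal and embed it in the page, much like an iframe.
const portal = document.createElement('portal');
portal.src = 'https://example.com/next-chapter.html';
document.body.appendChild(portal);

// When the user taps the embedded portal, animate it to full size,
// then promote it to become the new top-level browsing context.
portal.addEventListener('click', async () => {
  portal.classList.add('expand'); // a CSS transition supplies the animation
  await portal.activate();        // the embedded page becomes the page
});
```

The key difference from an iframe is that last call: after `activate()`, the address bar, history, and top-level context all belong to the previously embedded page.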
The app allows the user to read news articles in a reliable way by letting the service worker download and save the articles as bundles, if the news sites provide them, so that the user can later read the saved articles from multiple sites even while offline. Note that the articles are still shown as coming from the original news sites, and the sites maintain control over them. Here's another example. As you may know, loading a large number of resources is costly, and bundling them into one big JavaScript file is a very popular technique. We did an experiment to see if bundled exchanges could be used there instead, which could allow the browser to process and cache individual resources in the bundle without executing JavaScript. And one of the results looked like this. While still preliminary, it looks promising. We think there's real potential, and we want to know what you think. Let's go back to the main topic of this talk. We talked about two new proposals for zero-friction user experiences. First, web packaging enables privacy-preserving instant navigations, and second, portals enable seamless transitions between pages or sites. Combined, the two enable zero-friction page transitions on top of any web pages, even across origins. Here's a look at our roadmap. We plan to ship signed exchanges in the middle of 2019, and also to start an origin trial for partners sometime around then as well. For Google Search, we're really excited about both signed exchanges and portals as a path to building more zero-friction user experiences across the whole web. Following in the footsteps of the AMP demo you saw earlier, we'd like to launch support for AMP signed exchanges next year. We're also actively working on how non-AMP pages can adopt these same technologies for their user experiences. 
We believe that we'll eventually just have highly optimized content on the web, regardless of whether it's AMP or not, with all the standards work we're doing today. And we've been engaging various partners as well as other browser vendors, because we want to refine what we have, and we want to make sure that it will help them, and developers like you, achieve highly optimized user experiences: search engines such as Google, content publishers and web developers like 1-800 Flowers, CDNs and certificate authorities such as DigiCert and Cloudflare, as well as folks working on the decentralized web at Protocol Labs. We hugely rely on your feedback and are eager to hear what you think. Here are the important links that we referenced and that you can check out to learn more. You can also come to the Ask Chrome area in the next break, and we'll be there to answer your questions. We're really excited about the future of the web and about enabling the kinds of experiences that we showed today, and we'd appreciate your help in joining us to move these technologies forward. Thank you. Thanks. Big web quiz time! Yes. Get your phone or laptop out. Right. Who's heard of window? Yep. Document. You've heard of document? Cool. This round is called "window or document". Yeah. Yeah. You ready? Ready? Not yet. I'll give you a few seconds. Cue the little countdown, for your entertainment. All right. Starting the round. Ooh. Title. Window. Dot window. Dot what? Is window on window, or is window on document? Is document on window, or is document on document? Hmm. Implementation? Title. Does title belong to the window? Ooh. isSecureContext? Hmm. Well, the confidence is fluctuating a lot. It's changing a lot. Wow. webkitIsFullScreen. Sure. Sure. hasFocus. Does a window get hasFocus, or a document? A few seconds. And low confidence. I'm not surprised, because I don't think I know half of these either. I don't know. Right. 
There's a lot of chattering in the room. Yeah. How to make yourself unpopular as an MC: ask really horrible questions. Okay. Title: document. Window is on window. I think we did this yesterday, because you can have window.window.window... I could do that for hours. Probably not on request. Document? Of course it is. Why wouldn't it be? isSecureContext: everybody is very split. I'm sure of that one. Okay. Interesting. It's on window. Yes, that makes sense. Good. And devicePixelRatio. I'm on the side of the people who said window, having written a lot of canvas code. Yes. webkitIsFullScreen: it's on document. Isn't it the window that goes full screen, though? No, the window doesn't go full screen. The document does. Yes, that makes sense. Right. I always get this one wrong. I always have to search MDN. Maybe it's because I write terrible code. I know it's on window: window.getComputedStyle. document.all. Is that a live node list? I think it might be a live node list. Honestly, window or document? Who doesn't ask themselves that every single day? Everybody in this room. That's the answer to that question. Right, our next speaker. You've seen him quite a lot already. I know, right? Again. I want to be on stage. First name, Das. Ladies and gentlemen, Das Surma. Hello, everybody. Yes, it's me again. Prepare for more bugs. I'm trying to get the clicker working. But it's not. There we go. Apparently, my name is Surma. Good. I'm excited to be here. We've reached a point where, with Houdini, I can talk about actual APIs, because they're starting to land. And that's really exciting. But as with any talk, I have to start with what Houdini is. On Twitter, I often see confusion. There's a piece of software called Houdini. Apparently there was a magician called Houdini. So I want to clear up what Houdini really is about. So: every browser has more or less four major stages in its rendering pipeline. 
It starts with style, where the browser collects all the styles in the document and then figures out which element is affected by which of these styles. Now that we know the widths and heights, and whether it's flexbox or grid, and which style applies to which element, we can do layout. We basically calculate how big each element is, align them on the page, and get boxes on the page. The boxes are empty and transparent, because in the next stage we take that layout and paint it. We just draw it all on the page. Sometimes elements get their own piece of paper, which is called a layer. And once we're done painting, respecting things like background color or border color, we can give all of these pieces of paper to the compositor, which puts them together into the page that you see on your screen. And if something was on its own layer, we can move the pieces of paper around, and that's how animations are made. I mean, that's obviously a shortcut, but you can see where I'm coming from. So, bringing it back to the question: what really is Houdini? Well, Houdini is a standards effort in the CSS Working Group at the W3C to expose hooks into these major stages of the rendering pipeline to the developer, to you. So you have more control, not only over the visuals, but to write better polyfills and have more control over how your page appears to the user. It is hard, because these four stages are different in every browser. Sometimes they even run in parallel, or aren't that clearly separated, but we're working with all the browsers to make sure everybody can implement these APIs. Houdini can be super intimidating at first, because under the Houdini umbrella there are a lot of APIs and you don't immediately know what to do with which, but they actually form a hierarchy. 
So you have four high-level APIs, four major APIs that basically represent those four major stages of the rendering pipeline, and then you have a couple of lower-level APIs that form the underpinning, the basis for Houdini, and that make these higher-level APIs possible in the first place. The worklets are really interesting. Worklets are kind of the Swiss Army knife within Houdini for performance, so I want to take a second to explain them, because we're going to use them for the rest of the talk. More importantly, I want to distinguish them from workers, because lots of people confuse the two, and I can't blame them: they sound very similar, and they actually have a lot of overlap. But there are a couple of very important differences that I want to explain, and for that I have to talk about the event loop. This is an event loop, and if you don't know much about the event loop, that's absolutely fine. Everything that you need to know, I will explain today, but if you want to know more, I really recommend watching Jake's talk about the event loop, which you can find on YouTube if you just type in his name and "event loop"; I'm actually using a lot of his visuals in this talk. So this is an event loop. It's a loop, and it processes events, and so it's called an event loop. Whenever an event happens, the JavaScript engine checks if there's a handler for this event in your code base, takes the code for that handler, and queues it up. Every turn, the event loop takes something out of the queue and runs it, and in the next turn it takes the next thing out of the queue and runs it. That's obviously super simplified, and there's much more nuance to this, but that's roughly how an event loop works. And in this world, a worker would look like this: a completely separate event loop. It's an isolated scope, it has its own handlers, its own events, and they have nothing to do with each other. 
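To make the one-task-per-turn idea concrete, here is a tiny toy simulation of the queuing behavior just described. This is not a real browser event loop, just the mechanics in miniature; the function names are made up for illustration:

```javascript
// Toy event loop: handlers queue up as events arrive, and each "turn"
// of the loop runs exactly one queued handler, in arrival order.
function createEventLoop() {
  const queue = [];
  return {
    // An event happened and a handler exists for it: queue the handler.
    dispatch(handler) { queue.push(handler); },
    // One turn of the loop: take the oldest task out and run it.
    turn() {
      const task = queue.shift();
      if (task) task();
      return queue.length; // number of tasks still waiting
    },
  };
}

// Example: two events arrive before the loop turns.
const loop = createEventLoop();
const log = [];
loop.dispatch(() => log.push('click handler'));
loop.dispatch(() => log.push('message handler'));
loop.turn(); // runs the click handler
loop.turn(); // runs the message handler
// log is now ['click handler', 'message handler']
```

A worker, in this picture, is a second instance of such a loop with its own queue, and the only bridge between the two is posting a task into the other one's queue.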
They might be able to put a task into the other loop's queue with postMessage, but that's pretty much it. And there is considerable cost to spinning up and maintaining an event loop, and that's why you can't just spin up a thousand workers and call it a day: that's actually quite costly. Worklets are different. They're also isolated pieces of JavaScript code with their own scope, but worklets don't have an event loop. Instead, they attach to already existing event loops, and that makes them a lot cheaper to create and maintain. You can even attach multiple worklets to an already existing event loop, and because most worklets are specified to be stateless, we can even migrate them at runtime. So if it makes more sense for your code to run in sync with another event loop, we can just take the worklet off and attach it to the event loop where it makes more sense. This will come in really handy later on, but that's basically the big difference between the two. And now that we have worklets in our back pocket, we can finally talk about the first Houdini API, which uses the paint worklet; this is more an order of availability than anything else, if that makes sense. The API is called the CSS Paint API, and as I said, all elements have to be painted sooner or later to appear on screen. So far, you have been able to use CSS to customize how your elements appear on screen, but only in the ways that CSS exposes. So for example, if you want rounded corners, border-radius is kind of great, but it turns out that there are different ways to make a box seem like it has rounded corners, and if you want to use any of those other ways, you're kind of screwed nowadays, because what do you do? For example, there's the so-called squircle, which mathematically speaking is a cross between a circle and a square, and it doesn't change curvature abruptly at all, so it has a somewhat different aesthetic, which could be kind of nice. 
So if you wanted to have this squircle look, what do you do? You could maybe use an SVG background image, which is not really a border. You could maybe use a canvas, or a nine-slice image, I don't know. But with Houdini, you can actually teach CSS how to draw the exact look that you want on your page. So how does this work? Step one with all worklets is that you have to load a JavaScript file into the worklet that the browser gives you. There's the CSS namespace, and all the worklets that Houdini brings to the browser are going to be in the CSS namespace sooner or later. In this case, it's called paintWorklet, and every worklet has an addModule call with which you can load a JavaScript file into that worklet. Let's take a look inside this file. In that file, we want to teach CSS how to paint something new with JavaScript, but first it needs a name. So there is a registerPaint function, and it takes a class, and you basically associate a name with that class. And in every paint class, in this case one I'm calling my-paint, we need to define a paint method. This paint method gets a context, which is almost identical to the canvas context that you're hopefully familiar with; a geometry object, which tells you the width and height of the element that you're supposed to paint; and a properties object, which allows you to read the styles of the element that you're painting. What I'm doing here is basically setting my fillStyle to hotpink and drawing the biggest possible circle in the middle of the element. Not very useful, but useful for me to show you what is actually going on. So now that we have defined how to draw this appearance, how do we tell the browser to actually use it? We do that in CSS. In this case I'm just setting a new style, and you can use paint worklets everywhere that CSS expects an image. 
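Putting the pieces Surma describes together, the example looks roughly like this (the worklet name `my-paint` and the file name are from his description; the exact circle-drawing code is a reconstruction of what he narrates, not a transcript of his slides):

```js
// Main page: load the worklet module into the paint worklet.
CSS.paintWorklet.addModule('my-paint.js');
```

```js
// my-paint.js — runs inside the paint worklet scope.
registerPaint('my-paint', class {
  paint(ctx, geom, properties) {
    // ctx is a canvas-like 2D context; geom carries the element's size.
    ctx.fillStyle = 'hotpink';
    const radius = Math.min(geom.width, geom.height) / 2;
    ctx.beginPath();
    ctx.arc(geom.width / 2, geom.height / 2, radius, 0, 2 * Math.PI);
    ctx.fill();
  }
});
```

```css
/* Use the registered painter anywhere CSS expects an image. */
.my-element {
  background-image: paint(my-paint);
}
```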
So I'm going to set that background-image not to a URL, but to paint(), using the name I registered with registerPaint. So here I use paint(my-paint). And this actually... if it works. No. This is not good. I told you, prepare for my bugs. No, the entire laptop froze. No. Okay, maybe it's coming back to life. I'm going to try something. I achieved something today. All right. I'm going to try and get out of here. What do you do in this kind of situation? All right. I might be able to remedy this. So, just so you can follow along. Hey. We're trying to kill this. All right, give me two seconds. We are hopefully back up in a couple of minutes. This is like good old Supercharged-style live debugging on stage. I'm really hoping that this was just the browser having a hiccup, more than anything else. I am not sure this is going to work. I'm really sorry about this. That might be the only chance I have, honestly. Let's do this. Let's do a reboot. I mean, this entire thing is just not responsive, so there's not much I can do. I have to. This thing is not responsive at all, so I'm actually going to do a reboot. What we could try... I have it on here as well, right? Hmm. All right. Basically, give me a couple of minutes, okay? All right. We're doing it the hard way. I'm already thinking in my head which content I can cut to still stay in time. So... oh, you know what? The power was out. This is exactly the kind of experience I was hoping to give you at a Chrome Dev Summit. Now, the thing is, I can't turn it back on because it's telling me my battery is empty. All right. We're almost there. Now, I'm going to make really sure that this power plug stays in. This is like a speedrun. I always wanted to participate in a speedrun, but not without me knowing it. It's booting up. You just can't see it yet. So we're getting there. I know. I'm trying to set up my laptop. I'm setting up my laptop. 
You dance. Please boot up faster. Is it on? Did you make sure it's charged? We're never going to let you forget it. Never going to let you forget this. I know. This is not how I expected this afternoon to go. It's amazing. With friends like these... I mean... Almost. Keep going. Do you need more support? Do you ever dance to the music in your head? I actually dance at my desk. You do? I don't sit down at my desk. It helps my back to stand. But there's nothing quite like having your headphones on and coding and being like... because everyone around you is like, what is he doing? I'm debugging. Just so you know, that's how it's done. I was told there was a problem. Can you dance? This is your framework. That's why I'm worried. Your framework is actually doing quite well. Brilliant. That's all I care about. Is it up and running? My favourite thing that's happened in the last couple of days, in terms of things going wrong, was when the big web quiz went wrong for you two yesterday and the answers were coming up weird. Because I was watching that backstage on the live stream. But the live stream is about 30 seconds behind. So I was sat watching it, and the answers were coming up wrong, and you said, oh no, this doesn't look right. This doesn't look right. And then behind me a voice came and just said, what's going on? And I turned around, and it was you. And on the screen it was also you. And for one moment I thought, Paul's cloned himself. That seems natural. That's fine. It is weird working backstage and seeing yourself on the live stream. That's a really interesting question on the big web quiz. I wonder how they'll answer it. So what did happen? Is it running yet? Almost. Is he recompiling the kernel? What's good about this is, because we have an amazing team backstage, the editors of this will just cut it, either that or they'll cut to me dancing. And then they'll come back to you, and people will be like, I don't know what happened. 
I hope that's the edit we put up; that would be way more enjoyable than this. I have another story. So the prize that we printed, I ordered it from an online form and I couldn't check out. And I was just like, why won't you let me select the credit card? There was a JavaScript error. Really? They were using Backbone. There's a follow-up to this. They had a 1-800 toll-free number. So I called them. I needed it by Tuesday. This was last Friday. And they were like, that's no problem. We can get it to you. Please tell me your account numbers and all of that. And then there comes that visual. Yes. Some might think that we didn't plan the content of that poster very well. It looks like it's just, shall we say, badly designed. But actually, it is exquisitely designed to be awful. That was our intent. This poor lady: I'm saying the misspelling is outside of the printing area. Are you sure? Well, you are welcome, my friend. Thank you. Ladies and gentlemen, Das Surma. You've got seven minutes left. Thank you. You were roughly at the squircle. Welcome to the worst talk I've ever given. So I'm actually... What did you do? What did you do? Nothing. Absolutely nothing. Jake, that's your framework, so fix that. This is Houdini. This has nothing to do with me. Oh, just like the cake. Yeah. Oh, look, look, look. Is that a good thing? It's moving. Is there any chance you could switch out to, say, videos? I have a couple of videos. That's true. So what actually happened? Did you restart and then Chrome updated? I don't know. Maybe. Because remember when we were in the office and I said, here's what you should do, right, if you're doing a talk on an experimental build... I mean, what I could do... I'm just saying I told you so. That is precisely what he needs right now. Yeah, exactly. I'm going to try something. It's not going to be ideal, but hopefully it will do the trick. Well, I'm on the edge of my metaphorical seat. 
Yeah, this is good. How much time do I still have to fill with this, so that nobody notices I actually didn't prepare a talk? Hmm. I feel like at this point I'm just going to re-record the talk and put it out as a video. That's great. What's your plan? I'm interested. Stable Chrome. So the demos are not going to work, but I have most of them as videos. That seems like a good backup plan. You could have done that the first time around, really. I know, right? I only have five minutes left, so that's good. You're using my slide framework, right? I am. Enable experimental web platform features. That's a good one. Wow, amazing. The number of stories you'll get out of this one, mate. Yeah. Never going to let you forget it. Everybody warned you not to use Jake's framework. My framework is not to blame here. It's fine. I'm actually wondering if it's still the battery issue, because it discharged while the power wasn't quite plugged in. Oh, that's what it was. The drama. Unexpected things at the Chrome Dev Summit today. Okay. So what are we waiting on now? The beach ball of death. I'm going to put it on screen so everybody can see it. It's beautiful, isn't it? Hey. I'm going to get my laptop in case we need to do a big web quiz question. I have my phone. We can do it from here. Do you want to do a big... We'll have it as a backup. You've noticed I've had my notes every time. I know, you look like you've got a clipboard. I can't MC without my notes. I'm going to judge everybody. What questions have we got? What can we do that we've definitely checked the answers for? All of them. I mean, the CMS one. Oh, the CMS would be the next one. We've definitely checked that one. I think my laptop is gone, honestly. What? My laptop is gone. Oh, mate. This is, like, the worst... I told you. That's just the word of encouragement that Surma needs right now. I mean, at this point, the only thing I can legitimately do is go out and try to fix it. Should we do a quiz question? 
You've got until the end of that quiz question. Should we call it? Yeah, put it on. I'll try. Should we switch to the quiz? Yeah. Hey, look at the quiz. Hey! Yeah, you're all on our side. Oh, imagine. Hey, look at that. CMSs. Yeah. CMSs are known to have funny names. They are. And actually, researching this round was, frankly, hilarious. Yes. I mean, not quite as hilarious as this. Anyway, um, let's start the round. Well, should we introduce what the idea is? You're going to pick which ones are real and which are fake, right? That's what you're going to do. And since, like, every developer builds one of these at some point in their career and gives it a name, this is a really difficult round. I like Easy Peasy Content Squeezy. Yeah, that's probably one. Magnolia is getting a high score. Oh. FarCry. Oh, CMS. Who put that one in? Feeling Content. Oh, that's a good one. Ultimate Content Managerizer. I think that's a wrestler. Yeah. Oh, that was a quick round. Very confident about Magnolia. Reveal some answers. Let's have a look. Yeah. I'm really disappointed that Easy Peasy Content Squeezy is not a CMS. You know what? Now that you say that, somebody is publishing an NPM module right now. Brilliant. I will gladly use it if it's good... Hang on. Magnolia must be the popular one, right? I've not used a content management system in years. Is Magnolia a popular one? Is it a popular one? I think so. These are the well-known ones. That would be why it's got 92%. Yeah, I assume so. Okay. In the next batch, Jake CMS. Most people don't think you are a CMS, buddy. Am I a CMS? I don't know. Should we find out? I should press the button. Go on, I'll press the button. Go on then. Hang on. Is the failure spreading? No. Okay. I mean, there is a common factor. Wait, wait, wait. FarCry? FarCry is an actual CMS. What kind of content are we talking about? Because the Far Cry I'm familiar with is a different kind of content management system. 
You're basically managing people from being alive to dead. Right? Or is the name based on the game? Because, you know, writing a content management system is just like... Is that right? Is there a joint history between the two? It's hard to say. I think it's probably an homage. All right. And then finally, Alchemy. TiddlyWiki: 60% think it's real. Do you know what I think of that 61% who think it's real? I think they're right. Yeah. Despite its name, TiddlyWiki is real; I had a giggle about that one backstage. I would call it that. But Ultimate Content Managerizer, like Easy Peasy Content Squeezy: not real. In fact, fake. Someone get that on NPM. Was that one you made up? How is it? It's dead, Jim. My laptop is legit broken now. So you were saying to me, because at Google every few years we're able to get a new laptop, and you were boasting, I've only got to make this laptop last until April, and then I get a new one. Yeah, you've got no laptop for a few months, mate. That's it. Yeah. I'm laptop-less. Houdini broke my laptop. So if you want to try Houdini, the flags are in Chrome. Aw. They're already waiting. We might as well. I think we should. Our next speakers are Chris Wilson... First of all, can we give a round of applause to Surma, though? Absolutely. All of my worst talk failures have happened at Google events, and none of them have been that horrific. But you know now, if it ever happens to you, you can be like, at least it wasn't as bad as Surma's. You know, there's always somebody worse off, and that someone is Surma. Oh, thank God that wasn't live-streamed. Right. He'll be okay. Our next speakers are Chris Wilson and John Pallett. Ladies and gentlemen, Chris and John. He discharged my laptop. I've not even sent mine back yet. What was that? Stay away from my laptop, Surma. I think Surma's never been so happy to see me. So I am Chris Wilson. 
I'm here with my colleague John Pallett, who will be out in a minute, and we're here to talk about the next great platform: the immersive web. And this is actually not running off my laptop, so hopefully I won't have any problems. At least they won't be mine. Now, we talk about this word immersive a lot, and I kind of wanted to define what I meant by the immersive web. And at this point, I think pretty much everyone has heard of virtual reality at least. So I'm going to close this up. Who saw Ready Player One? It's actually a really small percentage. You should go see it. It's a good movie. It's good entertainment. So it's totally just like that. Well, not really. But virtual reality is all about immersing yourself in a completely alternate reality. Putting what I refer to as the reality blinders on, completely replacing everything you can see and usually hear, and immersing yourself in this totally different world. Like visualizing a data set. It can be a virtual workspace. My kids like to play this game where I put on a VR headset and they see how close to me they can dance before I notice that they're there, which usually is pretty close. But certainly when I'm at my desk at work, this is my favorite place to go into VR, because it kind of masks everything off around me. The sea of cubicles isn't there anymore. And you usually experience virtual reality through a tethered VR headset like a Vive or a Rift, standalone devices like the Oculus Go, or smartphone VR systems like Daydream View, Gear VR or Google Cardboard, my personal favorite. And any of these devices end up using a combination of head tracking, screen display, optics and controllers to make you feel like you're present in a totally different world. Now at Google, we've been working on exposing this to the web for really quite a long time, on all these devices, from high-end desktop headsets on Windows to a polyfill that supports WebVR on any smartphone in Cardboard. Yes, even Safari on iPhones.
And in fact, you may not even have a VR headset. You might not even have this 29 cents worth of Cardboard. WebXR and the XR polyfill can actually view VR worlds just on a mobile device, using the accelerometer and orientation API, so you can look around a 3D scene. And this lets users look around your 3D world even if they don't want to drop their phone into a headset. Now, in addition to bringing VR to the web, my team actually works on bringing the web into VR, at least on Daydream devices. Starting with Chrome 67, you can actually launch a VR version of Chrome inside the Daydream home screen, and we put a lot of work into making browsing the traditional 2D web a really great experience. But of course, the really cool part is when the browser in VR can be used to browse immersive worlds: you can hop back and forth between the 2D web and VR content that's hosted directly on the web. So this gives you a really easy and actually totally immersive experience. It's like you're just navigating that world. Now, having a browser inside the VR world is really useful. It turns out 83% of Daydream users also regularly use the browser in VR. This was kind of something we added on after the fact. Daydream shipped without a browser, and now this is a regular occurrence for most of them. This shows how important content from the web is, even when you're just living inside a VR world. But enough about virtual reality for a second. I want to talk about when you don't want those reality blinders on. I actually like to interact with my kids. I want to be able to see them and not have them dance in front of me. And the most exciting extension of the computing platform, to me, is the concept of augmented reality: not just overlaying AR stickers and animated characters, or being able to drop virtual objects into your reality.
The key to understanding AR's potential is that it's really about the concept of your computer getting to see the world around you, interpret parts of it, find surfaces and, in the future, recognize objects, and then augment that reality with virtual bits of user experience. Instead of trying to totally replace your reality, we really want computers to just convincingly blend virtual and real experiences. Now, for AR there are some headsets like the HoloLens and the Magic Leap One, and there are projection systems that display on real-world surfaces, but most users will probably first experience AR like I did: using a camera pass-through experience on a mobile device, showing things like AR stickers. And the cool thing is, if you think of the things the web is really, really good at, the long tail of software, the content, the products you expect from the web, these are experiences that users will happily click on but wouldn't necessarily install on their devices. The massive success of the web as a commerce platform is a huge benefit here. You can start to see how enabling developers to build immersive experiences that are delivered in this really ephemeral fashion is a fantastic idea. You don't have to install an app to see how that couch is going to look in your living room. You don't have to install an app to view an immersive video trailer. The ephemerality of the web makes it a fantastic match for these immersive experiences. And our mission, John's and mine, is really to enable web developers to break that plane, to break out of the flat design world that we've been living in for so long and enable these truly immersive experiences. And to enable that, we really need to start with the baseline. We need to be able to connect immersive displays and render to them. And that's where the WebXR device API comes in. This replaces the old WebVR API and evolves those concepts to expose not just VR, but also AR functionality.
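In a transition period like this, a page could find itself with native WebXR, only the older WebVR API, or neither. As a sketch of what that feature detection might look like (the helper name and return labels here are invented for illustration, not API from the talk):

```javascript
// Hypothetical helper: decide which immersive backend a page can use.
// Takes a navigator-like object so the logic is easy to exercise anywhere.
function chooseXRBackend(nav) {
  if (nav.xr) return 'webxr';            // native WebXR device API
  if (nav.getVRDisplays) return 'webvr'; // older WebVR API (the polyfill can bridge this)
  return 'polyfill';                     // pure-JS orientation-based fallback
}

// In a browser you would call chooseXRBackend(navigator); stand-ins shown here:
chooseXRBackend({ xr: {} });             // 'webxr'
chooseXRBackend({ getVRDisplays() {} }); // 'webvr'
chooseXRBackend({});                     // 'polyfill'
```

The point of funneling everything through one decision like this is that the rest of the page can stay backend-agnostic while the spec keeps evolving.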
And in true extensible web platform layering, this is really the underpinning only. This lets us connect to those devices, render to their displays, understand which way they're pointed, interact with controllers, that kind of stuff. And this is a really broad, multi-year effort by a bunch of different companies. Google, Mozilla, Microsoft, Samsung, Amazon, Oculus, and a whole bunch of others as well have been working on this for a while. And this has all been developed, by the way, in the W3C. In fact, we have a brand new Immersive Web Working Group. I personally co-chair this with my colleague Ada Rose Cannon, sitting right down here in the front. I just had to make her blush. She's from Samsung, and we're tasked with taking this spec to an actual final status. And this is why we created a working group: because we feel it's super important to actually land this now, and not just keep talking about how cool it could be in the future. This really shows the maturing of this API, because it's moving closer and closer to becoming a final standard. And of course, we also continue to incubate new ideas in the immersive web with a community group. Now, if you want to experiment with the WebXR API today, you can enable it in Chrome with about://flags. If you want to try out AR scenarios, you have to enable a second flag; that should be going away soon, because we've done some new mode work in the spec to make that work too. We have a currently running origin trial too, if you want to deploy this out to normal users, provided of course you're willing to accept the responsibility of making changes as the spec and our implementation change. Now finally, I've mentioned the WebXR polyfill a couple of times, and this is something I wanted to give a little more detail about. This is a polyfill JavaScript library that's maintained by the community group. It helps developers in a couple of different ways.
First, it offers a JavaScript-only implementation that works for VR scenarios in any mobile browser using orientation events. So even in mobile Safari, with Cardboard devices or flat displays, you can actually get a WebXR implementation just through JavaScript. Secondly, if a browser does implement the older WebVR API, as Firefox and Microsoft Edge did, it can actually build XR on top of that, and you get the hardware speed-up of their former WebVR implementation. So you can instantly make your WebXR content accessible to a much wider range of users with just one script. Now, with that, I want to bring out my colleague John, who's going to drill down into the augmented reality possibilities in a bit more detail. Thanks, John. Thanks, Chris. So let's talk a little bit more about augmented reality, or AR. As Chris mentioned, augmented reality is largely about being able to overlay information over top of the real world. And if you've tried out AR stickers, or if you've put masks on your face on a smartphone, you've already seen augmented reality. And the reason this matters is that there are hundreds of millions of phones and tablets out there right now that support augmented reality, and the number is growing. And most of those devices have web browsers, which means there is a big opportunity for web developers here. And the lowest-hanging fruit really is the ability to add a new experience to an existing 2D webpage. So it doesn't require an entirely new site; you can add AR capability to an existing 2D website. There have been a number of partners that have been experimenting with this, using the WebXR device API and turning on the hit test flag that Chris mentioned. They're doing this in Chrome Dev and in Canary. And there's one example, Plater, which is an augmented reality platform that lets businesses put virtual objects in the real world. They've done a couple of demos, and there are some interesting ideas here.
On the left, you can see that users can learn about a product by getting information in context, on the product itself, rather than having to go through data sheets. And it actually saves shipping demo units to businesses that are thinking about buying machinery or heavy equipment. On the right, you can see that looking at objects from different angles has a lot of value, in this case for fashion and generally for shopping, but also for education, where students can explore the object or artifact that they're learning about. Now, this doesn't all have to be in augmented reality. You could do both of these using a 3D model on your website, but if you can put it into the real world, you get a better sense of context as well as scale. Now, what's interesting is that from a user experience perspective, there are a lot of interesting things to learn about augmented reality, particularly how it fits on the web. And that's a good reason to start experimenting with it now if you're thinking about adding it. So by way of example, West Elm, who sell home decor and furniture, did some in-store testing. What they did was they went to one of their stores, picked four shoppers at random, and showed them a prototype shopping website that incorporated augmented reality. Now, this isn't a huge study, it's four shoppers, but they had some interesting findings, and they gave us permission to share them. So the first thing that they learned is that with these customers, the terms AR or augmented reality are not really common vocabulary. Basic terminology like "view in your room" is a better way of telling users what to do. But even then, without a visual showing what's going to happen, that text can really get lost with everything else that is visual on the site. So what they're looking at now is ways to add both an icon and text, so that the user has a call to action and they know what to do.
One approach might be to have a rotating 3D model, so that the user understands: hey, this is more than just another image on the page. The next thing they learned is that users get confused without clear directions. You can see here that there's a delay while the user is trying to figure out what to do. Do I tap? Do I move my phone around? What is this circle on the floor? I've never seen this before. You have to actually guide users to the path of success here. And West Elm is researching clear directions, as well as things like progress bars and loading indicators, so that users don't get lost and they can see how to get to the point where they're actually placing furniture or an object into the real world. After that, once participants successfully placed an object, the most common request was the ability to move it or spin it, or even remove it from the scene. The original placement wasn't always where they wanted it to be, and it wasn't always clear to users how they could do that in this case. Another thing that they heard from their test subjects was that getting validation that the size of the virtual object matched the real world would also be helpful, because if you're shopping for furniture, you want to make sure that it fits into the space that you have. So from that perspective, showing some real-world dimensions with the model could help. And finally, there was some feedback on how real the assets look. Now, this study was particularly unique, because West Elm was doing it in store, and so they had the opportunity to put the virtual object right next to the physical object and get real shopper feedback. Generally speaking, the feedback was that the realism wasn't quite there yet, and you can see clearly that there are differences between the two. So West Elm is looking at how to handle typical lighting scenarios, and also making sure that the shadows underneath the object are a little more pronounced and more detailed.
So the key message here really is that there's a whole lot you can learn. There's a lot of streamlining to do here, and fundamentally, augmented reality on the web has some differences from apps, where somebody deliberately installs an augmented reality app, so the challenges are quite different. A very telling finding from this study, I think, was that three of the four participants said they would absolutely, 10 out of 10, use AR for furniture shopping, once they knew what that term meant. Now, if you're like me and you grew up with commercials talking about how three out of four dentists recommend this toothpaste, you probably always wondered: what does that fourth dentist think? Does it rot teeth? And the answer here is actually maybe. So, three people absolutely, one person maybe. I personally visualize that as: these are people shopping for home decor, and they say, would you use this, and they're like, maybe, I'm here to buy a couch. I don't actually think that's how it went, but the point here is that three people, pulled at random, said absolutely. It's a strong signal, and it's also consistent with what we've heard from partners and users who see value in being able to visualize 3D objects, ideally in the real world. So if you're thinking about experimenting with this, and you're thinking about adding this to your website, let's talk a little bit about how you can build things. Chris mentioned the flags you can enable a little bit earlier in the presentation, but let's talk about how you actually add the immersive experience. You could write WebGL code directly, and if you're doing augmented reality that would be over top of the camera feed. That's one way you can do it, but generally we recommend using a library to help. For example, Three.js is a helper library for 3D graphics on the web. It takes care of a lot of the heavy lifting for 3D geometry management and rendering, and it makes it so you don't have to work with WebGL directly.
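Even with a library doing the heavy lifting, raw WebGL-style matrices still show up at the edges, for example in pose data. As a small sketch, with a helper name invented here for illustration, reading the translation out of a column-major 4x4 matrix needs no library at all, and because WebXR locks one unit to one meter, the numbers are directly real-world meters:

```javascript
// Sketch: WebGL-style matrices are column-major, so the translation of a
// 4x4 transform sits in elements 12, 13 and 14. With WebXR's one-unit-per-
// meter convention, those values are meters in the real world.
function translationFromColumnMajorMatrix(m) {
  if (m.length !== 16) throw new Error('expected a 16-element 4x4 matrix');
  return { x: m[12], y: m[13], z: m[14] };
}

// An identity transform translated half a meter right and two meters ahead:
const pose = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0.5, 0, -2, 1,
];
translationFromColumnMajorMatrix(pose); // { x: 0.5, y: 0, z: -2 }
```

In practice you would feed a matrix like this straight into your 3D library's matrix type rather than unpack it by hand; the sketch just shows where the position lives.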
So let's look at an example of how Three.js and WebXR can work together. We're not going to cover the whole process of creating a WebXR session and all of that; we're going to look at how a virtual object can be placed into a real-world scene, and we're going to do that with a narrow use case, in this case putting a reticle onto a real-world surface. For those who haven't heard that term before, a reticle is an indicator, in this case one that moves around, and it's typically a user interface construct so that the user knows what they can do. You saw some of this in the West Elm screenshots earlier. So what we're going to do first is take the mesh for the reticle, which in our case is a flat circle, and add it to the 3D scene. We'd like the reticle to be about half a meter wide, and it's worth noting that the WebXR device API locks the coordinate system of the real world to the virtual world. What that means is that in WebGL, or the virtual space, or in Three.js, one unit is one meter in the real world; 10 units is 10 meters. So in this case, if our mesh is half a unit wide, it means it will appear as half a meter in the real world. Next, what we're going to want to do is actually render the reticle over the camera feed, on real-world geometry. But to do that, we need to know where the real-world geometry is. The WebXR device API includes the capability to do a hit test, which is firing a ray into the real world and getting intersection points with real-world surfaces. So for example, taking a ray from my eye down onto the stage, and then getting the intersection point of where I'm looking on the stage, but also the normal facing up, so I know where the surface is and what direction it's facing. Now, in order to do that, you need a ray, and a good example of some of the things Three.js can help you with is its raycaster, which gives us the ray that we're going to fire into the scene from the camera. So we've already got that here, and what we're going to do is pass that into the hit test API with WebXR. Then we'll take the return value. We'll actually possibly get a list of things back, because there might be more than one surface behind the first hit; we'll take the first one, the closest, and we'll convert the position that we got into a Three.js matrix, and then we'll use that to set the position of the reticle. The reticle is now positioned so that it will render directly over the real-world object that was detected from that ray. Now, it's worth reiterating that this works because the virtual coordinate system matches the real-world one. And obviously I skipped over a lot of steps here, but the point really is that you can combine frameworks like Three.js with WebXR, and it's fairly straightforward if you know 3D programming basics. But some of you might not know 3D programming basics, and the fact is that if you've tried to add a 3D model to your site, you probably already know it's not super easy. 3D models can be pretty complex, both to read and to display. We saw, even in the West Elm example, that there are user experience considerations. If you're starting from scratch, you have to think about: how do I want to rotate objects, allow people to move them around, and so forth? And then there's also responsive design. If you want this to work on mobile and desktop, even if you're just doing a simple model, you have to know how to handle resizing. Do you need to display a poster image on mobile, to prevent the download of the 3D geometry until the user actually wants it? For newer technology like augmented reality, ideally you'd be able to progressively enhance and use some of the capabilities on different platforms, even if they're not available in all browsers. And as WebXR comes out and starts moving to stable, it's one more thing to learn, one more thing to add. Just know that if you've experimented with this before and you found it a little bit tricky, you're not alone. So, to that end, the team has been
looking at this problem, and we recently made public an early version of a 3D model viewer Web Component. This is really, really early, but it does some things today to make life a little bit easier, and the reason we released it early was to get your feedback. To give you some context for this Web Component, there are three things we're trying to do. The first is that we want you to be able to add a 3D model without having to learn 3D programming or write 3D code. The second is that we want it to work well and responsively across browsers and across different device form factors, with progressive enhancement to take advantage of capabilities on browsers where they're available. And the third goal is that as new APIs ship, such as the WebXR device API, we want the component to start taking advantage of them, so that you don't have to keep up to date with all of the changes that are coming out. It's going to be super early, but we've made some progress, and so I want to give you a sense of what the component can do today. Let's run through a few examples. This is the first one, the simplest use case, which is a static glTF model. For those who haven't heard of glTF before, it is a 3D file format, and it is a required part of this model viewer, because it's a format that works across all of the different browsers. Here you can see that if we add a couple of attributes, we can bring the model to life: we can add a background color, and we can set it to auto-rotate. With the controls attribute, we can also allow the user to spin it around, move it, and take a look at it from the back or the front. We've also added a poster image capability, and you can delay the loading of the model so you're not consuming data on mobile, if that's what you want to do. The attributes are also dynamic, so if you add a little script which switches the poster image back and forth, you can animate it a little bit, hinting to users that there's actually a 3D model they could click on. It works, in that way, similar to an image tag. The component also handles some forms of responsive design, so you can see here that it will scale up for desktop and scale down for mobile, and it will manage the staging and the lighting and the rendering of the model properly. It can also manage multiple instances on the same page, so it will take care of WebGL from that perspective, and it uses Intersection Observer to make sure it's not burning battery when you can't actually see the model. Finally, the team is experimenting right now with some of the more progressive enhancement capabilities. In this case, you can see they're experimenting with the WebXR API and incorporating that, so that you can add more attributes to turn on AR, and do that across different devices. Again, it is just really early. The team is still working on more features for user interface and responsive design, to make it as easy as possible to add a 3D model to a web page. Looking forward, there's a lot we can do on realism, augmented reality use cases, and interactivity. The whole reason we made this public is that we'd love you to try it out, and we'd like to get your feedback on the GitHub. So if this is something you're interested in, please do go to the GitHub, take a look, try it out, and then let us know what you think. So, to wrap up: we covered a little bit about the WebXR device API, we talked about Three.js, and we touched briefly on the model viewer Web Component. If you're interested in more, this is a slide to take a picture of; the links are all on the screen. If you're watching this at home, you can rewind, and you can check it out on YouTube later, I imagine, as well. And with that, thank you very much. Thank you for your time. So, we said at the start of day two that day two would be experimental. You're already laughing. It would be experimental, and sometimes experiments fail. What we'd like to do is give him one shot, just the one shot, because what we're going to do is
we're going to roll into the break a little bit. There'll still be a little bit of a break, and we'll pick it up and be back as we planned to, but we're just going to give him one chance to finish up that talk, because it's amazing. Houdini's amazing. I mean, you lot don't know this, but just literally before we came on stage, I went to Surma and said, is everything... is it looking like it's going to be okay? I'm not joking. No, I don't believe you are joking. Is that getting better? Well, that sounds good. I'm audible, so that's winning. Shall we? Shall we? I know you're all behind him. Give him a massive round of applause, ladies and gents, it's Surma. So, the squircle, right? As I was saying, what do you do if you want to draw a squircle? We're still on the same page. You totally know where I was going with this. All right, come on, let's do this. I've got 15 minutes. We load the squircle, we load the file, we had that, we had this one, we draw a circle in the middle, and I want to do the big reveal where we use the paint function for the background image instead of just a normal SVG image. I'm going to press the button. We have a pink circle! I'm so happy. By the way, everyone, thank you so much for the kind messages on Twitter and your support here in the room. It could have been so much worse. Thank you very much. What I'm doing here is, as I said, I set the background image on a text area. So this is a text area where I'm animating width and height. Don't do this at home, kids. Never animate width and height. But you can see the circle is pink and in the middle. So you might be asking, what is the advantage of using Houdini's Paint API, which is what this talk is about, over using a normal canvas? So there's a couple of things I want to talk about. The first one is auto repaint, meaning that the browser can figure out when it actually needs to run your code to do these painting operations. I'm going to talk about this a little bit later in the talk. It is auto-sized: if you've ever worked with the HTML canvas, you know that the number of pixels on an HTML canvas is completely independent from the number of pixels the canvas has on the screen, which is super painful to work with, and in this case you don't have to worry about it, because it is automatically set to the correct size. It is off main thread, meaning the code that you write to do the paint operation doesn't run on the main thread. And that's a lie: currently in Chrome we do run it on the main thread, but as I said, these worklets can be migrated, so the second we have the infrastructure in place to run this kind of code somewhere else, it will just happen. And that means you don't use any of your main thread budget, making sure your page is buttery smooth. No DOM overhead, and this one is actually very underrated. Often I see effects on the web that use a couple of DOM elements, or some kind of assembly of different styles, to achieve a certain visual effect. With this, you're just using a virtual canvas, so not even one DOM element to achieve the effect, and that can really add up on low-end phones. So for example, this is one of the... no, no, wait, this is actually not that bad. We got this, we got this. This is just an aw-snap. We can work around this. Is this full screen? I can look up here. All right, sure, why not. Hang on, we almost got this. I'm going to move this over here, I'm going to move back, and we're going to go to here. All right. So... I'm literally... I'm not giving up. For me, this just means I'm going to skip that slide. So what I'm going to do is go out of full screen, move this to a different slide, go back into full screen, and we're going to go to this one. The point that I was trying to make was that we found that on low-end devices, implementing these kinds of effects, like this wonderful effect that Una implemented, is actually more efficient with a custom paint worklet than using a million DOM elements. So this is actually a performance primitive, to make your app run buttery smooth even on low-end devices. This is another effect
that Una wrote, that I think is really nice, and it's a nice example to show how the browser can decide when to paint and when not to paint. In the paint class you can declare your dependencies. You can say, these are the CSS properties that I rely on, and only if any of these properties change will the code have to run; otherwise it won't. And so in this case you have a couple of custom properties saying, I want to have this number of stars, and this hue, and these different sizes, that kind of thing. And then in this case I just do a keyframe animation on the hue, and you end up with this effect. And it's actually efficient, in the sense that it doesn't just run a rAF loop every frame, but only repaints when the animation actually tells it that it is necessary. And so in this case you have a couple of things that you can do. You can do this with a simple clock, which many people might write with SVG or with a canvas; some people would again probably bend over backwards and try to make this happen with just a couple of DOM elements. But if you look closely, you can see that the hand actually has a trail, and that suddenly makes it a lot harder to do with SVG or with the DOM, and maybe canvas would be more appropriate. The background color, the color of the hand, the thickness of the hand, the thickness of the circle at the end, the length, whether you show or don't show the individual stops on the clock: you could change all of these things at once, but it gets really stressful to look at, so I wouldn't do that. But it's just a clock. And once again, this trail is just a CSS transition, so the browser knows that while the transition is going on, I need to repaint this every frame, but the second the trail is done, it stops, and you can see the performance win. So far we've been using CSS Paint for background images, and we've gotten pretty far, I think, if you still remember what I was talking about, a couple of hours ago it feels like. But you can use it anywhere CSS expects an image. So for example, you could also use it for a mask image or a border image. In the land of border image, you could make this kind of organic look, where the border looks kind of hand drawn, and if you want that kind of look, it makes it really easy to achieve that effect. This is probably very important: progressive enhancement. You can detect support for Paint Worklet in both JavaScript and in CSS. One note about the @supports syntax: it detects support for Paint Worklet, not for one specific paint worklet. So even if the name, like in this case "something", doesn't actually exist as a paint worklet, it will still get evaluated, which is kind of handy. For this talk, I want to introduce the three pigs stability index, which is supposed to be a little notion of how stable an API is, based on the story of the three pigs and the wolf who tries to blow their houses down. So in this case, for the CSS Paint API, the spec is a candidate recommendation, which is basically W3C speak for "it's stable". It's shipped to stable in Chrome, Safari announced that they have it in development right now, and with that I will call it brick stability. The thing at the bottom, by the way, is supposed to be a brick emoji, but the brick emoji only got standardized in June of this year, so none of the fonts have it yet. Luckily, it still kind of looks like a brick. If you want to know more about this API, I'm going to shamelessly self-plug the article that I wrote, which you can find here. If you have any questions, hit me up anytime. And with that out of the way, we have finally talked about the first Houdini API. I'm going to go on to the next one and see if the browser will handle this. Should I risk hiding the URL bar? I will do it. Good. The next one: compositing. As I said, the compositor's main job is to do animations with the layers that move around, and as such, the animation API is called the Animation Worklet API. So let's talk about that a bit. If you currently think about animations on the web, you have three choices. It's more like two and a half choices. You have CSS
transitions which allow you to transition the CSS property to a new value you have CSS keyframe animations which are a declarative timeline API which is more powerful and can do some kind of looping and you have the web animations API which is the imperative version that allows you to nest the timelines and all kinds of things but the problem is that it's really badly supported it's behind the flag in Safari Chrome has an implementation but it's missing a lot of features Edge doesn't have it at all so it really isn't usually a good choice but even if it was there are often scenarios where what web animations API offers is not enough and this is where animation workload API would come in so what you see here is a normal web animations API animation it would usually web animations API is thought about in the .animate call this is the same thing just a little bit more elaborate it's also part of the API and a workload animation is actually very very similar so for example I would just use a workload animation and because we are associating this animation with a workload we need to once again provide a workload name other than that it stays the same so we have a keyframe effect that we target an element with and we have two keyframes that we want to use within two seconds then now we have an animation workload on the CSS namespace and can call add module and now within our animation file we can use JavaScript we have an animate callback where we get the current time and the effect of the animation and now it's our job to set the local time of the effect depending on the current time if we do it like this where we don't think about it it's literally a pass-through and it will behave just like a normal web animation API but this is JavaScript so you can basically implement arbitrarily complex time mappings now what does this actually mean I'm not going to go into all the details of this API but just going to give you a little taste of it so if you want to know more about 
If you want to know more about this, you can check the description; there's also another article I wrote, which you can find here. Again, read it and give me feedback, I would very much welcome that. But what do you use Animation Worklet for? For example, a year or two ago Safari proposed the spring() timing function, and it's implemented in Safari; I don't think any other browser has implemented it so far. So I'm going to give you an example of where an Animation Worklet would come in handy. In this case we can, for example, write a bounce animator, because if you think about it, if you animate an element from A to B, you can just move it from A to B, but you could also move it from A to B and a little bit back, and a little bit less back, and it will look kind of like a bounce. And that's exactly what we're going to do. We have a constructor where we take some options; in this case there's going to be an option for bounciness, which makes sense: how bouncy is it supposed to look? We're going to use a bounce function, and we're going to determine where in between these two keyframes we want to end up. The bounce function I used I implemented with some really dodgy and hand-wavy physics, but in the end you can just think about it as implementing this kind of graph between two keyframes. If we do that and run it, you can see that it's now actually a bounce, and keep in mind that this is literally just two keyframes and we're just bending time, so to speak. And it doesn't run on the main thread; it actually runs on the compositor thread, meaning that even if the main thread is super busy, this animation runs exactly at the device's frame rate, frame-perfect, making sure that your animations look really, really smooth. So far we have done an animation, but if you look at this, I explicitly wrote out document.timeline, even though it's an optional argument.
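A hedged sketch of what such a bounce time mapping could look like as a pure function. The decay and frequency constants below are invented for illustration (in the spirit of the "dodgy and hand-wavy physics" the talk admits to), not the speaker's actual implementation.

```javascript
// Maps normalized time t in [0, 1] to a progress value that
// overshoots 1 and settles back, which reads as a bounce even
// though the animation only has two keyframes. The constants 8 and
// 4*PI (decay rate and wobble frequency) are made-up tuning values.
function bounceProgress(t, bounciness = 0.5) {
  if (t <= 0) return 0;
  if (t >= 1) return 1;
  const decay = Math.exp(-8 * (1 - bounciness) * t); // wobble dies out over time
  const wobble = Math.cos(4 * Math.PI * t);          // oscillation around the target
  return 1 - decay * wobble; // starts at 0, overshoots past 1, settles at 1
}

// Inside an animate(currentTime, effect) callback this would be:
//   effect.localTime = bounceProgress(currentTime / duration) * duration;
console.log(bounceProgress(0.25)); // > 1, i.e. mid-overshoot
```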
The point is that with Animation Worklet you could get time from somewhere else. You could, for example, think of something like a scroll timeline, or you could even conceive of something like an input timeline. We are talking about an input timeline, but I have nothing to show here; I can show you a scroll timeline, though. Here you see a Pac-Man that I've linked to a scroller at the bottom: I can scroll the scroller at the bottom and the animation will jump to the position where I am within the scroller, so it kind of gives you a scrollable animation. This is kind of fun but not that useful, but you can conceive of more useful uses for this, for example scroll-linked effects. Here I have three independent animations: the follow button, my name and my avatar are all scroll-linked effects that assume their position within the animation depending on how far I scroll. The parallax effect would be another very popular choice to use with this, and it's much easier to implement than it currently is on the web. Now, in combination with CSS scroll snap points, which we just implemented in Chrome, you can have a lot of synergy here, where you have really smooth transitions between different sections of your app, and you see an indicator at the top, and the images kind of zoom into view. For me the really interesting thing is that Animation Worklet is to this what Paint Worklet was for rounded corners: if you don't like how scroll snap points work, Animation Worklet is low-level enough for you to implement your own version, so it really future-proofs the web for whatever people will come up with. Let's talk about the stability index for this API. We have gone through many, many iterations on Animation Worklet, but always in collaboration with Apple, with Microsoft and with Mozilla, and we are now at a point where we feel fairly confident that all the browsers are at least on board, on a conceptual level, with what we have come up with.
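Conceptually, the scroll timeline described above just replaces wall-clock time with scroll progress. A minimal sketch of that mapping as a pure function; all names and the specific numbers are illustrative assumptions, not a real API.

```javascript
// Derives an animation's local time from how far a scroller has
// scrolled, which is the essence of driving an animation from a
// scroll timeline instead of document.timeline.
function scrollToLocalTime(scrollTop, scrollHeight, clientHeight, animationDuration) {
  const maxScroll = scrollHeight - clientHeight; // total scrollable distance
  const progress = maxScroll > 0 ? scrollTop / maxScroll : 0;
  return progress * animationDuration;
}

// 15% of the way through a scroller drives 15% of a 2s animation:
console.log(scrollToLocalTime(150, 1300, 300, 2000)); // 300
```

An input timeline would be the same idea with pointer position (or some other input signal) in place of `scrollTop`.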
I would give this a decent stability rating: as I said, we feel pretty confident about it, but now it's really time for us to see what you feel about it. It is in Canary, you can play with it now, and we are going to an origin trial in Chrome 71, so if you want to test it out in production, and we would love it if you do, please sign up here. As I said, the article should give you all the insight you need to get started; if not, please contact me, I'm very eager to help out so that we can get some real data.

I don't know how much time I have left at this point, so I'm just going to go for it, because I can talk a little bit about the Layout API. For the Layout API I'm going to start with the stability index, because it is completely in flux: we literally refactored this API two weeks ago at TPAC, so there is a half-finished implementation in Canary, and you should play with it, but don't expect the code to still run next week. But there is so much potential that I really want to give you a little insight into it. With the custom Layout API, or Layout Worklet, you can basically define your own display values. I'm just going to have a main element with a couple of divs in there, and that's it; all the other magic will happen in the worklet. Now we have a layoutWorklet on the CSS namespace, I'm going to call addModule, and in there we have a layout callback. I'm not going to explain all the parameters, because (a) I don't understand them all, layout is pretty complex, but I'm going to keep this one simple so we can get a feel for it. What I'm going to do is loop over all the children, the child nodes of my custom layout element, and lay them out with no constraints, basically asking: how big would you be if you had no constraints? Then I give each of them a random offset, basically a random position within the rectangle that I occupy, and then I return this list to the browser, saying: I did my layout, now please go forth and paint me.
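The random-placement logic that layout callback performs can be sketched as a pure function. This is a hedged reconstruction, not the actual worklet code: the input/output shapes and the injectable random source are assumptions for illustration.

```javascript
// Gives each child a random inline/block offset inside the
// rectangle the parent occupies, mirroring what the layout callback
// returns to the browser. `rand` is injectable so the placement can
// be made deterministic for testing.
function randomLayout(childSizes, parentWidth, parentHeight, rand = Math.random) {
  return childSizes.map(({ width, height }) => ({
    inlineOffset: rand() * Math.max(0, parentWidth - width),
    blockOffset: rand() * Math.max(0, parentHeight - height),
  }));
}

console.log(randomLayout([{ width: 10, height: 10 }], 100, 50, () => 0.5));
// [{ inlineOffset: 45, blockOffset: 20 }]
```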
And that's exactly what the browser will do. If you look at this, you can see a couple of rectangles, and every couple of seconds I add or remove a rectangle to force a re-layout, and they get a truly random position, which is also truly useless. But you can see that this actually runs during the layout phase of the browser, something that was completely closed off to developers so far. To give a slightly more useful example, I also implemented a masonry layout. Here I'm just loading random images from Unsplash with different aspect ratios, and the masonry algorithm, if you will, takes care of assembling them into three columns. The number of columns is once again just a custom property, which I can increase, and it scales up and gives you this really nice masonry look, which so far you would always have to do with position: absolute magic and main-thread JavaScript doing the actual layout. I think that is really exciting. If you're interested in the code: I don't have an article for Layout Worklet yet, mostly because by the time I was done with an article it would be out of date again, so please go to this repository that I maintain, where I have samples for all the worklets I talked about today. You can fork them and play around with them, and I would be so happy if you contributed to them; I kind of want to build a collection of popular effects, which I think would be really useful. If you want to keep up with the development of Houdini, I made ishoudinireadyyet.com, where you can see, for all the browsers, which Houdini APIs they support, which ones are in development, and which ones they at least intend to implement. There are links to the articles, to demos, to the specs; hopefully I can make this the one-stop shop that I need to keep up with Houdini. And I can't believe that I made it to my last slide. So now, come and take a bow, I think you should come and take a bow; I just gave my talk, they are the awesome ones. Thank you so much!
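The masonry algorithm mentioned above boils down to shortest-column placement. This is a minimal reconstruction under stated assumptions, not the speaker's worklet code: item heights stand in for the images' aspect ratios, and the return shape is invented for illustration.

```javascript
// Places each item into whichever column is currently shortest, so
// the columns stay roughly balanced; this is the core of a masonry
// layout. columnCount is the tunable knob (the custom property in
// the talk's demo).
function masonry(itemHeights, columnCount) {
  const columnHeights = new Array(columnCount).fill(0);
  return itemHeights.map((height) => {
    // Pick the shortest column for this item.
    const column = columnHeights.indexOf(Math.min(...columnHeights));
    const top = columnHeights[column]; // stack below what's already there
    columnHeights[column] += height;
    return { column, top, height };
  });
}

console.log(masonry([30, 10, 20, 15], 3));
// The 4th item (height 15) lands in column 1 at top 10, the
// shortest column after the first three placements.
```

In a real Layout Worklet the `column`/`top` pairs would become the child fragments' inline and block offsets returned from the layout callback.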
Claps all around! Now it's time for a break. Is it still...? Good, you didn't completely take all the time. We are back here at 4:30, so go and have a break. I think we could all take a break; I need one, absolutely. See you then.

When you are developing your web app with a local development server, a build system, caching headers and a service worker, it gets hard to tell if a stale version is being served from the server, is just lingering in one of the caches, or maybe the build system hasn't even kicked in yet. Here's one small trick to easily figure that out. cURL is a command-line tool that makes HTTP requests and shows you the response from the server, but you don't need to learn all the options that cURL has: DevTools has learned them for you. If you right-click on a request in the DevTools Network tab, you can select Copy > Copy as cURL. Your clipboard now contains a cURL command that exactly mimics the request Chrome sent off, including user agent, cookies, compression and all the other HTTP request headers. Paste it into your console and you will see the response the server sends without any caches or service workers in the way. An additional tip is to use the -i flag, so cURL prints out the response headers as well. If the response is not what you expected, your server or your build system is the culprit; if the response is correct, you need to check your caches or your service worker. Terminal tricks, gotta love them. See you next time!

I think so. Does it trigger layout? Well, before you start, do you want to very quickly do the what-is-style, what-is-layout bit? I'll explain. Style calculation is where the browser calculates the styles: it has to look at the whole cascade and see what styles apply to the individual elements. Layout is where it figures out the geometry of the elements: how wide, how high, whether one pushes another out of the way, all that kind of stuff.
Painting is where you actually fill in the pixels, and then compositing is where you put the various layers of your page together. We have a whole section on developers.google.com/web, I think it's under rendering performance, and you should go and read about it; I wrote it a while ago, but it's okay. But this question... oh sorry, yes, this question: it's about whether it does the geometry thing, does it trigger layout. Ready? Let's go. Box-shadow: if you change box-shadow, does that trigger layout? What about outline? Containment? I can never remember the difference between outline and border; I guess one goes round the other. There we are. Perspective: that's a 3D-transform one. Background-image: I talk about those quite a lot when I'm talking about performance. Height. Display: display: flex, display: block, display: inline, inline-block, display: table... all the display modes were just going through my head then. Did you ever use display: run-in? What? Yeah, display: run-in. No, I don't believe you. Right, shall we find out whether they trigger layout? Box-shadow: no, it doesn't; it just changes the shadow, it doesn't actually change the height of the element, so it doesn't trigger any layout. Outline is the same: it's like border, which we'll come to in a moment, I suppose, except that it's a paint thing; it adds an outline but doesn't actually change the size. Border actually changes the size of the element. Containment: yes, as soon as you put containment on, it will essentially trigger layout to figure out what it needs to contain. And the same with height: yes, well done everybody who got the one on height. Perspective... shall I reveal them? Perspective doesn't. It can change where the element appears, yes, but that's only done in the compositing phase; we do 3D transforms as a compositing step, and that's what perspective changes. Now, there's an interesting one about transform: it can sometimes trigger layout, but
most of the time it's a no, and that's why it's marked no. But it actually can: if you transform something off to the right or the bottom of the screen, we add scrollbars. And just so you know, that's a full-on no for cursor, isn't it? I'm going to say I agree with that one. Yep, and they are absolutely correct. So the difference between outline and border is that border actually changes the size of the element. Yes indeed. And display, you're quite right: if you do display: flex or display: inline, we have to figure out the geometry effect of that. So there you go. Okay, shall we introduce the next speakers? I feel we should introduce the next speakers.

I'm Thomas, and I'm the product manager for WebAssembly. My name is Alex, and I'm a software engineer on Chrome OS, and today we're going to talk to you about WebAssembly. We're going to start off by briefly describing what WebAssembly is and what you can use it for. Then I want to show off some of the amazing new features that the WebAssembly team has been working on to deliver to you this last year. Then I'll showcase some of the amazing applications that people have managed to build with WebAssembly and that are shipping in production.

All right, so first off: what is WebAssembly, actually? WebAssembly is a new language for the web, and it offers an alternative to JavaScript for expressing logic on the web platform. It's important to note, though, that WebAssembly is in no way a replacement for JavaScript; rather, it's designed to fill the gaps that exist in JavaScript today. Specifically, WebAssembly is designed as a compilation target, meaning that you write in higher-level languages such as C++ and then compile to WebAssembly. WebAssembly is also designed to deliver reliable, maximized performance, which is something that can be difficult to get out of JavaScript. Most exciting, though, is the fact that WebAssembly is now shipping in all four major browsers, making it the first new language to be fully supported in every major browser since
JavaScript was created, more than 20 years ago. All right, so what can you actually do with this? Well, as I already mentioned, because WebAssembly offers maximized and reliable performance, you can expand the set of things that you can feasibly do in the browser: things like video editing, complex applications, codecs, digital signal processing and many, many more performance-demanding use cases can now be supported on the web. Secondly, WebAssembly offers amazing portability: you can port not only your own applications and libraries, but also the wealth of open-source C++ libraries and applications that have been written. Lastly, and potentially most exciting to many of you, is the promise of more flexibility when writing for the web. Since the web's inception, JavaScript has been the only fully supported way to execute code on the web, and now, with WebAssembly, you have more choice.

All right, now that we all know what WebAssembly is, I want to jump in and show off some of the new features we've been adding in just the last year. First off is source maps. You likely all know how important source maps are when you're working with something like TypeScript or Babel, but they're even more important when you're trying to debug your WebAssembly code. Source maps let you turn something that looks like this into something just slightly more friendly, like this. With source maps you can see the specific line of code where an error has occurred; you can also set breakpoints and have the program actually pause at the appropriate moment. One of the big feature requests we've heard from users is for better performance when starting up your application, so that your module can get going faster, and for that we've created streaming compilation. In the past, when you wanted to compile a module, you had to wait for the entire module to be loaded off the network, and only then could you move on and actually compile it. Now, with streaming compilation, you can start compiling each piece
of your module immediately, even before the other parts have finished downloading. To show you what that looks like, here's a simple example where we call fetch for a WebAssembly Fibonacci module and then just pass that fetch promise directly into WebAssembly.instantiateStreaming, which takes care of all of the underlying bits and pieces for you. To measure the impact of this, we did some profiling at different network speeds. We found that all the way up to a 50 megabits per second network speed, the network was actually the primary bottleneck, and compilation was done as soon as the module was loaded; it wasn't until you hit 100 megabits per second that you actually needed additional time, past the time it took to download the module, to fully compile it and get it going. To make start-up time even faster, the team built and launched an entirely new compiler that we call the Liftoff compiler. Liftoff takes the WebAssembly bytecode that comes down off the wire and starts executing it immediately. The WebAssembly module is then taken off the main thread and optimized further by the TurboFan optimizing compiler; when TurboFan is done optimizing the WebAssembly code, it's hot-swapped in directly, without any need for explicit developer action. Unity did some benchmarking on the effect Liftoff had on loading a very large game: they found that it went from taking seven seconds to load the game to less than one second, which makes all the difference when you're waiting to get into a game experience. Probably the biggest feature the team has been working on this year is WebAssembly threads. WebAssembly threads let you run fast, highly parallelized code for the first time across multiple browsers, including bringing existing libraries and applications that use threaded code, such as pthreads, to the web. This is such a big feature that I'm actually going to leave most of the explanation to Alex later on.
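The streaming pattern described above looks roughly like this. The `/fib.wasm` URL is a placeholder, and since `instantiateStreaming` needs a real network response, the runnable part of this sketch instantiates a tiny hand-assembled module from raw bytes instead; the byte layout in the comments is the only thing asserted here.

```javascript
// The streaming pattern from the talk: hand the fetch promise
// straight to WebAssembly so compilation overlaps the download.
// '/fib.wasm' is a placeholder URL, not a real module.
async function loadFib() {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('/fib.wasm') // compilation starts before the download finishes
  );
  return instance.exports;
}

// Outside a browser, the same API surface can be demonstrated with
// an in-memory module: a minimal hand-assembled wasm binary that
// exports add(a, b) as (i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: 1 func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(exports.add(2, 3)); // 5
```

The synchronous `new WebAssembly.Module(...)` form is fine for tiny modules like this; for anything fetched over the network, the streaming form above is the one that lets download and compilation overlap.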
But before I get to that, I want to show off some of the cool new applications that have already been built and launched with WebAssembly this last year. First off is SketchUp. SketchUp is 3D modeling software that you can learn and start using really quickly; unlike traditional computer-aided design, most people can learn SketchUp almost right away. SketchUp lets people draw in perspective and then push and pull things into 3D. In no time, people can draw and redesign a living room, plan out a do-it-yourself project, or create and export a model for 3D printing. This app is a lot of fun and you should all check it out, which you can do right now, instantly, just by going to app.sketchup.com. SketchUp has been around as a desktop application, but the team's strategy has always been to expand and broaden the set of people who can use 3D modeling by making it simple, easy to use and accessible to everyone, and delivering the app over the web was a critical step in that strategy. The team knew that they wanted to use the same code base for the web as for their desktop application, because rewriting their entire application in JavaScript was simply not an option. Their approach was to use WebAssembly and the Emscripten compiler to compile their core 3D modeler and bring it to the web. The initial port took two engineers only three months, which is pretty phenomenal when you realize just how drastically it expanded the reach of their application. The early focus for SketchUp has always been on the next generation of 3D modelers, and today SketchUp is already one of the most popular apps on the G Suite for Education Marketplace. At the same time, the app has opened up a subscription model for SketchUp, and in just over six months the team has increased its paying customer base by 10%. SketchUp has also seen consistent growth in session time, returning-visitor percentage and active users. Moving on, the
next application that I want to mention is Google Earth. I'm happy to say that Google Earth has successfully ported their application to WebAssembly, including the newly added support for WebAssembly threads. The best part is that they got this threaded build working in both Chrome and Firefox, making Google Earth the first multi-threaded WebAssembly application to run in multiple browsers. Google Earth did some benchmarking comparing their single-threaded version to their multi-threaded version: they found that the frame rate almost doubled when they went to the threaded version, and the amount of jank and dropped frames was reduced by more than half. All right, the last big company and launch I want to mention is Soundation. Soundation is a web-based music studio that enables anyone to make music online, with a wide selection of instruments, loops, samples and effects. No additional hardware, installation or storage is required; everything is accessible instantly and everywhere. Soundation's mission is to facilitate musical creativity in the world, and they do this by offering a powerful, accessible and affordable service to the growing demographic of casual producers, as an online web app that streamlines the way producers connect with peers, get feedback, enter competitions and even get record deals. Soundation is using a wide variety of advanced web features to deliver this smooth experience, and the first of these is audio worklets, launched in Chrome 66. The audio worklet brings fast, extensible audio processing to the web platform, and it can be used in conjunction with other cutting-edge web technologies like SharedArrayBuffer. Soundation is also one of the world's first adopters of WebAssembly threads, and they use these threads to achieve fast, parallelized processing to seamlessly mix songs. So let's have a look at their architecture and see if we can learn anything. On the JavaScript side of the world, they have their
application UI; that application UI talks to an audio mixing engine. This audio mixing engine spawns off multiple worker threads running WebAssembly, and these worker threads can all talk to the same piece of SharedArrayBuffer memory. That SharedArrayBuffer memory is then passed on to the mixing thread, which further passes it to the audio worklet, which produces the final result. Here's a visualization showing the performance improvements in rendering a song as they added more threads: adding just a single additional thread doubled their performance, and by the time they added five threads they had more than tripled the performance of their application. So that's a great visualization of the performance improvements that threads can bring, but since this is Soundation, I thought we would instead try to listen to it. So here is us trying to render a song in Soundation with the single-threaded version of WebAssembly, and fair warning, this is not going to be an entirely pleasant experience. As you can imagine, this is not the experience you want when you're trying to create beautiful music. But luckily, Soundation succeeded in launching with WebAssembly threads. That sounds just a little bit better. As you can see, not only is this a much more pleasant experience, but the CPU also has additional cycles left for other work.

All right, I want to close off my segment by talking about some of the amazing community projects we've seen people working on out there. The first of these is the awesome work that's been done by the Mozilla team and the Rust community to bring Rust to the web through WebAssembly. They have an awesome set of tools and materials to help you get started, and you can check those out at rustwasm.github.io. Speaking of languages, we've also seen more than 30 different projects trying to bring other languages to the web through WebAssembly. These include languages like Perl, Python, Ruby, Kotlin, OCaml, Go, PHP and the
.NET framework. Many of these languages require garbage collection, which isn't currently something that's supported in WebAssembly, though we are working on it. These languages come to the web by taking their garbage collection system and other runtime bits, compiling those down to WebAssembly as well, and shipping everything to the application page. This is a great strategy for getting started and experimenting with some of these languages on the web, but because of their performance and memory characteristics, these aren't currently suited for production applications. The fully supported languages today are C, C++ and Rust; everything else should still be considered experimental. And there are so many more amazing community projects that I don't have time to do them justice: we've seen people porting Game Boy emulators, GNU libraries, digital signal processing, and even an entire operating system, Microsoft Windows 2000, now available in a browser tab, which is, if not exactly a pleasant experience, definitely interesting. You can check out all of these demos and much more at the demo setup we have for you. And with that, I want to hand it back off to Alex to talk more about WebAssembly threads and how to use some of these features.

Thank you, Thomas. One of the big themes at the conference, when we talk about the browser, is using the platform, and quite often people think of the platform as the software stack that's inside the browser. But I like to think of it a little bit differently: I like to think about the hardware, the platform that's actually in your machine. So here is an artist's rendition of the inside of a desktop microprocessor; this is what you see today if you take the plastic off the top. At the top we have the green bit, which is the interface to the PCI bus; it used to be called the north bridge. On the left and right you have memory interfaces; those little yellow boxes are integrated memory
controllers. And you see all these blue tiles: those are cores, so each of those is a CPU core in its own right. This may be a desktop microprocessor, but even in your pocket, if you have an iPhone or an advanced Android phone, you will have something like six to eight cores in there, ready to be used to do real computing work. So when you write a normal web application, you are looking at something like this: you have one core running, and all this other silicon doing nothing, so you are not really using the platform. Then we have seen people come along and do something like spawn a web worker to do the hard work and keep the main thread for UI; in that case you are running a double-threaded application, so you are using the platform a bit better, but you are still not doing everything you could. Now, since we have introduced WebAssembly threads, you can do stuff like this: you can actually use many cores for your application, and as we saw with the Soundation demo, there is a visible improvement in the user experience. So I would really like you to start thinking about how you can adapt your application to use all these cores. Now, when you create a web worker, you have to understand that it is a threading primitive, and workers run concurrently, so you can do compute in parallel; this is a primitive that we all know pretty well. But one of the things about doing it like this: if we have an app, on the left we have what we call the main thread, and we are all familiar with the main thread, which interacts with and talks to the DOM. The worker we create is what we call a background thread, so it is running in parallel, but it does not actually have the ability to call web APIs and interact with the DOM and things like that. And when you create workers, when you create them normally from JavaScript, each one creates an instance, and these instances sit on their own on the side; they run in parallel, they do not get to touch the DOM, so they run on
the side. So if you create one, you get V8 hanging off the top of your main thread and an instance hanging off your worker; if you go and create a bunch of workers, you get a bunch more V8 instances. Each of these instances consumes memory, so if you start spawning lots of JavaScript workers to make your application do more complex things, it will actually consume more and more memory, and you might run out on a phone or something like that. But I will let you in on a little secret: they do not talk to each other. You have separate bits of JavaScript running in all these parallel threads, but they do not communicate, they do not have any shared state; they are literally another copy of V8 sitting in memory. The way these things can talk to each other is with postMessage. postMessage is kind of like sending a paper plane: I send it from this worker to that one, I sit around and wait for it to arrive, and there is no predictability about when it will be delivered, so it is not a great experience for a true multi-threaded application. Now, when the team built WebAssembly threads, they implemented something that looks a lot more like this. This is an example with three threads: under the hood we actually spin up three web workers, but the difference here is that the WebAssembly module is shared between the workers, so there is one copy of the code and you are not consuming nearly as much memory. More importantly, they share state, and the way they share state is through a thing called SharedArrayBuffer. If you are a JavaScript engineer, you probably know what a typed array is; you use them day to day. A SharedArrayBuffer is basically the exact same thing, except that it can be shared across workers, and what that means is that the state of the application is visible to all of the workers in parallel.
Now, if you farm off something into a pool of workers hanging off your main app, it will look something like this: you have your main thread for your main application, which can talk to the DOM, use all the web APIs and see the SharedArrayBuffer, and that SharedArrayBuffer is being manipulated by all the parallel threads in the WebAssembly module. By now you are probably thinking: this is all well and good, but how do I do this in my actual application? So I will start with a very simple example: a little Fibonacci program that runs in two threads. It will look a bit like this: the main thread, the background thread and the WebAssembly module, all talking to the SharedArrayBuffer. We take some source code, which will be something like this, just a bit of C code, and we want to compile it into a form that we can use in parallel threads. The way we do that is with the Emscripten toolchain: it has a compiler called emcc, and there are a couple of flags here that I want to point out. The first one is -s USE_PTHREADS=1, which turns on multi-threading support when you are compiling your WebAssembly module. The second interesting flag is -s PTHREAD_POOL_SIZE=2, which tells the compiler that I want to pre-allocate two threads for my application, so that when I actually start the WebAssembly module it will pre-create two threads and get going with that. This is a visualisation of what happens: if you set PTHREAD_POOL_SIZE to 2 you get the picture on the left, with two workers, and if you set it to 8 you get eight workers, and that happens at start-up time of your web app. Now, you may be wondering why you care about that number. The thing is that you should really try to set it to the maximum number of threads you need: if you say you only want two threads, and then suddenly your application needs three or four, you have a problem, because the WebAssembly module has to yield to the main
thread before it can create that worker. So if you are relying on all of the threads being there at start-up, you need to set the number high enough; but of course, if you set the number too high, you are wasting memory, so this is something to tune. In Soundation's case they use five threads and it works really well for them, so when you are tuning your app you need to think about it. If you want to get going, fire up Chrome 70, navigate to chrome://flags, search for WebAssembly thread support, and change the setting from Default to Enabled; at that point you will have to restart your browser, and then you can start building stuff and testing it locally. Once you have built a working WebAssembly threads app, you probably want to deploy it to the public, and the way you do that is by getting an origin trial token. An origin trial token is tied to your domain; you get it from us, and it is just a meta tag that you put on the page, which tells the browser: hey, these people are trying WebAssembly threads, let's use it. If you want to go ahead and do that, and I encourage you all to do so, just go to this short link; there is a form you can fill in with your URL and the reason you want to use this stuff, and go and start building something really cool. Now, of course, as developers we spend most of our time in DevTools trying to debug things, and DevTools can single-step instructions, that is, WebAssembly instructions. It looks a little bit like this: not friendly, as Thomas pointed out. You can see this little box up here showing you the threads, so this is a two-thread application running, and this box is the actual disassembled WebAssembly code. To improve that debugging experience, Chrome 71 brings source map support, as Thomas mentioned earlier, so source maps let you change what you saw before into
something oops, next one something that looks like this so this is like the source code of the Fibonacci function sitting in DevTools and you can single step over instructions you can set breakpoints, you can do all stuff like that now if you want to actually do this yourself you can generate the source map you just need to set two flags on the EMCC compiler command line the first is dash G which says generate debug symbols and the second is dash dash source map base and that points to a URL where it will actually find the source map file in this case I'm using localhost so I'll be using it on my own workstation okay so I'll just recap on what we've talked about today just so you can remember what we talked about now the first thing is the streaming compilation the compiler is a compiler of the WebAssembly module as it comes over the wire which is launched now it's in Chrome 70 which speeds things up the second is lift off which is the tiered compiler so you get the first compile really fast so your app starts and then hot swaps in the better code a bit later on then we have of course WebAssembly threads shipping now in Chrome 70 and you can use it with an origin trial so you can ship it to your customers and they can use it and play with it and of course Chrome 71 which means that it's a lot easier to debug your code so I'd encourage all of you people out there to start writing using WebAssembly threads because it unlocks the power of the super computer that's sitting in your pocket right now thank you is it time for another quiz? another quiz yeah let's go for it right okay I think this is my favourite one actually oh yeah you had quite a fun researching this right is it? 
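Before the quiz, a quick aside on the threading model from the WebAssembly segment above: the shared-memory part can be sketched in plain JavaScript, since SharedArrayBuffer and Atomics are ordinary JS APIs (this is an illustration of the memory model, not the Emscripten-generated code).

```javascript
// One SharedArrayBuffer is visible to the main thread and to every
// worker/WebAssembly thread; Atomics gives race-free access to it.
const shared = new SharedArrayBuffer(8);
const counters = new Int32Array(shared);

// Any thread holding a reference to `shared` can do this directly;
// no postMessage copying is involved:
Atomics.add(counters, 0, 1);
Atomics.add(counters, 0, 1);

console.log(Atomics.load(counters, 0)); // → 2
```

In a real threaded app, the same buffer would be posted once to each worker, and every thread would then read and write it in place.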
Yes, you're going to see a lot of names come up on screen. Yeah, because there's Chrome, right, but there are other ones as well. Don't know if you've heard of some of them. Some are going to be ones we've made up. Yep. Some of them are not, and some of them are going to be from the past, so they're not all modern browsers. You get a few seconds per one to guess, so be fast. Here we go. Right, so what we've got here... oh yeah, that one we already gave the answer to. JellyCat. TimberWolf. Killer Net. Download that one. NetWrangler. Firebird. Konqueror. Dillo. What are you thinking for any of these? Firebird, maybe. Oh, EWW. Fandango, that one sounds good. iBrowse. Cyberdog, following the pattern of having something in an animal. Oh, WebWalker. WebWalker. Jake Browser. Voyager. And Mothra. Oh, that's how you pronounce it. I think so, I don't know. Oh, people put it in front of... a Mosaic in our talk... and then pretty sure the others are fake. Well, a bit split. So yeah, JellyCat we did make up. TimberWolf is real: TimberWolf is a Gecko-based browser for the Amiga. There we go. Killer Net is a terrible British TV series from the 90s. OK, we'll throw that in there. Oh wow, people are pretty confident about this. So the answers are: NetWrangler is the search engine they use on the TV series Dexter, you know, when they don't want to say Google, probably for legal reasons. Firebird is the original name of Firefox; didn't catch out too many people. Konqueror, so that's been around for a long time; it's where WebKit started. Dillo, yes, that's a really simple, tiny little browser for embedded systems. Oh, I didn't know that. EWW, that's a real one. Yeah. What is it? It's an acronym, and it stands for the Emacs Web Wowser. Of course, Emacs. Fandango I just made up, and iBrowse is the best name for a browser; it's what Safari should have been called, but iBrowse is another web browser for the Amiga. Cyberdog is a browser by Apple that predates Safari. I didn't know any of this until I looked it up. Last one, let's go. I just made that up, Jake Browser. Mothra is a
browser for the Plan 9 operating system, which I had never heard of, but it is a thing. It's another Amiga one. I love the Amiga. There's only a couple more questions to go and it is really tight at the top of the leaderboard, so be sure to join the game. Yeah, we only have a question to go after this. Right, so, in this talk... the whole reason I am a developer is that I want to be lazy, and in this talk somehow he's going to tell us how to be lazy. Please welcome to the stage Justin Fagnani. Hey everyone, I hope everyone is having a great summit so far. It's been an exciting day two, I think we can say. So my name is Justin Fagnani and I work in Chrome on things like web components, Polymer and lit-html. Let's get into the next slide... advance... today's the day, yeah. So we're going to look at how to do less, be lazy and take breaks, and end up with a better web application for it. And when I say better, I'm really talking about four overlapping goals here, including responding quickly to input and changes. And more than just making fast apps, or making it possible to build fast apps, we want to make this easy. So easy, hopefully, that it's the default, because this is going to make our users happy, our developers happy, and happy developers will make better user experiences in the long run. So with these four kind of general goals in mind, I'm going to walk through several techniques for leveraging asynchronous programming for building better UIs. So we're going to look at batching work for better performance and developer experience, keeping our UIs responsive with non-blocking rendering, managing async state for a better user experience and developer experience, and then finally, coordinating async UIs once we have all this asynchronicity throughout our application. And this makes for a handy little talk outline right here, so I'm going to give some background and then jump right into it. But first a quick note: this is day two of the Dev Summit, so it's a little more future forward looking.
And the stuff I've made here, the demos and helper code, is a little bit experimental, but it's only using current browser features, and so all of these techniques still work today. So now for a little background. I mentioned that I work on web components and lit-html, so we're going to use these things as the basis for the demos and the talk. If you haven't used web components before: web components let you define your own HTML element tags. It really refers to two specs here, Custom Elements and Shadow DOM, and combined, they let you define your own tags with custom implementation and UI. So to create a custom element, you simply extend from HTMLElement, a built-in class. You add your implementation, and then you register your class with a specific tag name with the browser. And from there, you can use this element with that tag name anywhere you can use HTML: in your main page document, innerHTML, document.createElement, even in other frameworks. So next, lit-html. lit-html is a way to write declarative HTML templates in JavaScript, and we use tagged template literals to write them. This is a feature that came out in ES6. These are strings that are denoted with backticks instead of quotes, and they can have a template tag in front of them. And we're going to use the lit-html template tag here, which is just going to allow us to process this template to make it more efficient. And then inside of our template, we can have expressions, and these are just plain JavaScript expressions. Once you have a template, if you want to render it to the DOM, you simply pass it to the lit-html render function and give it a node to render to, and it's going to make that DOM appear there. And the nice thing is that if you call this render function multiple times with the same template and different data, lit-html is going to take care to only update the expressions that changed. It will never update the rest of the DOM in the template.
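The tagged-template mechanism just described can be sketched in a few lines of plain JavaScript; the `html` tag below is a toy stand-in for illustration, not the real lit-html one.

```javascript
// A tag function receives the static strings separately from the
// interpolated values; this toy `html` just records both.
const html = (strings, ...values) => ({strings, values});

const name = 'World';
const tpl = html`<p>Hello, ${name}!</p>`;

console.log(tpl.strings); // → ['<p>Hello, ', '!</p>']
console.log(tpl.values);  // → ['World']
// The strings array is the same object on every evaluation of a given
// template, so a renderer can cache the static DOM and diff only the
// values, which is how lit-html updates just the changed expressions.
```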
And then finally, if you take web components and lit-html and combine them together, you end up with LitElement. LitElement is a convenient way to write web components. Because this is day two and a little more forward looking, I'm using some future JavaScript features here like decorators and class fields. And LitElement really gives you two features. One is the ability to declare observable properties. These decorators here are going to create getters and setters instead of a field, and the setters are going to recognize when a property changes and then tell the element that it needs to update. The other feature is that it lets you write a render method that returns a lit-html result. So when the element knows that it needs to update, it's going to call this render method, take the result and render it to the shadow root of this custom element. And then finally, we give you a little helper here so you can use a decorator to register the element. So once you do that and you create your element, you can use it anywhere you would use HTML and it will render as you expect. So that brings us to our first technique, which is batching work. And if we go back to our element definition, we'll see the render method here; this is called for us automatically by the LitElement base class. But the question that comes up here is: when is this method called? So to look at that, let's take a look at a little example. We're going to use this element imperatively, but this also applies if you used it in the main HTML or if you used it with a framework. So we're going to create an element instance and then we're going to set a property. So the question is, should we render now? We could render now, but we don't know that we're not going to set another property right after we set this one. And if we did render after every property set, we would potentially be rendering multiple times for every element as we update the data.
We don't want to do that. So instead we're going to enqueue a task, and in the future, that task is going to run and actually render the element. And so that we know when the element has rendered and when it's complete, we add this promise hanging off the element called updateComplete. This is going to resolve when the element has rendered, and if you await it, you know that you have a fully rendered element. And the way that this works is we have an asynchronous update pipeline under the hood in LitElement. So when a setter is called for a property, it's going to call this requestUpdate method. That's going to schedule an update task, but it's only going to schedule one if there isn't one existing. If there is one, we're just going to use that same task, and that's how we get the batching. When that task runs, it's going to call the update method on the element, and that's where the actual work is done to render to the DOM. So we do this for two reasons. One is performance, like I mentioned. And the other is developer ergonomics. So if we go back to the template here, we see that this template renders two different properties in the same template. And it's much easier to reason about these templates if we don't have to worry about the order in which these properties are set, or whether or not they've both been set together. So we'd like to take all the changes that are incoming for an element, batch them together, and then let you write a simple declarative template to render your element. And so an interesting implication of this is that LitElement rendering is always async. You never opt in to being async and you can't opt out to be synchronous. And when we explain this to people, sometimes we get a question: won't the UI partially update? And the answer is no. And I built a little animation here to try to show this. So here we have a tree of elements.
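The update pipeline just described, where requestUpdate schedules at most one pending task and update does the actual work, can be sketched in plain JavaScript (illustrative names, not the actual LitElement internals):

```javascript
// Multiple property sets in one turn coalesce into a single update:
// requestUpdate only schedules a task if none is already pending.
class BatchingElement {
  constructor() {
    this._updatePending = false;
    this.renderCount = 0;
  }
  requestUpdate() {
    if (this._updatePending) return this.updateComplete; // batch
    this._updatePending = true;
    // A microtask, mirroring the default LitElement timing:
    this.updateComplete = Promise.resolve().then(() => {
      this._updatePending = false;
      this.update();
    });
    return this.updateComplete;
  }
  update() { this.renderCount++; } // where rendering would happen
  set name(v) { this._name = v; this.requestUpdate(); }
  set message(v) { this._message = v; this.requestUpdate(); }
}

const el = new BatchingElement();
el.name = 'Hello';
el.message = 'World'; // reuses the already-scheduled task
el.updateComplete.then(() => {
  console.log(el.renderCount); // → 1: two property sets, one render
});
```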
Let's assume these are all LitElements and they're passing data down the tree via properties. And so that's our component tree. And then right here we have the microtask queue. So hopefully in other talks they've talked about this: we have a queue of microtasks that the browser runs through to completion before it will paint or handle user input. The yellow box here is our current microtask. And so if we have some code that runs that's gonna set a property on A, that's gonna cause its microtask to be enqueued. And then when A gets to run, it might set some properties on B and C, so their tasks are gonna be enqueued. And B is gonna set some properties on D and E, C is gonna set some on F, and so forth. And we run through this entire queue until it's empty. Once it's empty, then the browser can paint. And to show this with a demo, I made a demo here of a tree of elements, and each one takes an artificially long time to render. And so normally you might expect, if you don't know how the microtask queue works, that these might paint in one at a time. And we'll see here that if we click the render button, they all snap in at once. So even though each one takes 50 milliseconds and the whole thing takes 750 milliseconds, we don't see the intermediate states. And this is great if your UI is painting fast, if it's not taking 750 milliseconds. But if you have a very complex tree and your UI is rendering slowly, then we've just introduced jank, which we don't want. So this brings us to the next technique, which is non-blocking rendering to keep a responsive UI. So we just saw that we can have async rendering but still block paint and input. And we know we can have complex UIs that take a long time to render. And we know we need to render in less than 10 milliseconds to keep our 60 frame per second target.
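That run-to-completion behaviour of the microtask queue can be observed directly in plain JavaScript; here a macrotask (setTimeout) stands in for the browser's next opportunity to paint:

```javascript
// Microtasks queued during the current turn, including ones queued by
// other microtasks, all run before the next task, which is when the
// browser would get a chance to paint.
const order = [];
setTimeout(() => order.push('paint opportunity'), 0);
Promise.resolve().then(() => {
  order.push('render A');
  // A microtask enqueued from inside a microtask still runs first:
  Promise.resolve().then(() => order.push('render B'));
});
setTimeout(() => {
  console.log(order);
  // → ['render A', 'render B', 'paint opportunity']
}, 0);
```

This is why the whole tree of components snaps in at once: every component's microtask runs before the paint ever gets a turn.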
And one way to look at this is that we have all these microtasks here in the blue rectangles, and they kind of fill out a complete task, and this task blocks rendering. And as long as the complete task fits within our 10 millisecond budget, we're fine, right? But as soon as the task exceeds the budget, we're gonna introduce jank. So our technique here is to break this up, so that instead of having a whole bunch of microtasks in one long task, we just give a task per component to render. And now these will hopefully fit in under 10 milliseconds and we will get smooth updates. So the way we're gonna do this is we're gonna tap into this asynchronous update pipeline that LitElement has, and we're gonna customize the schedule update step right here. So that brings us to our first experimental helper, which we're calling, for the moment, lazy LitElement. The way that this works is that under the hood in LitElement, there's a method called scheduleUpdate, and by default this thing just waits for a microtask and then it calls validate, which does the work of rendering. And so what we do in lazy LitElement is we override this, and instead of waiting for a microtask, we wait for a promise that's resolved on setTimeout timing. It's a very simple thing to do, but it lets the browser paint and handle input before we render. So now if we go back to this demo, we can turn on lazy rendering, and now everything's gonna render on setTimeout timing, and you can see that we paint the intermediate steps as we go, and so we've reduced jank by showing some intermediate state.
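The override just described can be sketched like this (illustrative names; the real lazy-LitElement helper was experimental):

```javascript
// Default timing: wait a microtask before doing the render work.
class BaseElement {
  scheduleUpdate() { return Promise.resolve(); }
  requestUpdate() {
    this.updateComplete = this.scheduleUpdate().then(() => this.update());
    return this.updateComplete;
  }
  update() { this.rendered = true; } // rendering would happen here
}

// Lazy timing: wait a full task (setTimeout) instead, so the browser
// can paint and handle input in between component renders.
class LazyElement extends BaseElement {
  scheduleUpdate() {
    return new Promise((resolve) => setTimeout(resolve, 0));
  }
}

new LazyElement().requestUpdate().then(() => console.log('rendered'));
```

The only change between the two classes is which queue the update waits on, which is exactly the point of making scheduleUpdate overridable.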
And so a lot of frameworks have been working on asynchronous rendering over the years, especially React recently, and they have created a demo that I quite like, which is a Sierpinski triangle demo. The way this works is that you have a large tree of components here, and each one of these has also been written to take an artificially long amount of time, and they all have a label, and this label represents data flowing down the tree. So updating the label on all the components is gonna take a little bit of time, and while we're updating the label, we're gonna animate the size of the tree. We want this animation to be smooth, and it's driven from JavaScript, so if we take a long time to update the tree, we're gonna get jank. This is a nice demo because it highlights some subtleties that you need to take care of when doing asynchronous rendering. So we have an expensive subtree to render, we want this continuous script-driven animation to be smooth, and then on top of that, we have these high priority inputs that we also wanna handle. So I implemented this here with regular LitElement, which uses the microtask queue, and you can see that as the triangle updates in size, we get some jank in the middle there, and we wanna avoid that. So it's very simple to re-implement this just by changing the base class to lazy LitElement, and now you can see that we get a smooth animation even as we update the labels. But next, I mentioned we wanna have these high priority inputs, so this brings us to the idea of urgent updates. If you defer rendering, it's possible that you have situations where you wanna render sooner than you've scheduled yourself to render. And so for these urgent updates, what we've done in lazy LitElement is we don't just override the scheduleUpdate method, we add a new method called requestUrgentUpdate, and that can be called to make your element render sooner.
And it's a very simple implementation; I wanted to show it because it's a little bit interesting. So instead of waiting for a promise that resolves with setTimeout... we still do that, but we also store (let me go back here... maybe not... okay) we store the resolve function on the instance of the element when we schedule an update, and then we can go back and call that resolve function, which will cause our promise to resolve earlier than it was scheduled to. So in essence we're kind of jumping from the task queue over to the microtask queue, and we're gonna render as soon as possible. Okay, so this is how you would use it. We have a partial implementation of our dot element here, and these are some event callbacks that might be called from onmouseover and onmouseout. They're simply gonna set the state that our rendering is based on, and then they're gonna call requestUrgentUpdate to kick us to the faster timing. And so once we do that, you can see that we have our smooth animation, our labels update, and we get a fast hover effect here by calling requestUrgentUpdate.
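A sketch of that stored-resolve trick, again with illustrative names rather than the real helper's internals:

```javascript
// Lazy scheduling with an escape hatch: we keep the resolve function
// for the scheduled task so an urgent update can fire it early,
// effectively jumping from the task queue to the microtask queue.
class UrgentLazyElement {
  requestUpdate() {
    if (!this._resolve) {
      this.updateComplete = new Promise((resolve) => {
        this._resolve = resolve;
        setTimeout(resolve, 0); // normal lazy (task) timing
      }).then(() => {
        this._resolve = null;
        this.update();
      });
    }
    return this.updateComplete;
  }
  requestUrgentUpdate() {
    this.requestUpdate();
    this._resolve(); // resolve sooner than scheduled; the later
                     // setTimeout call is then a harmless no-op
    return this.updateComplete;
  }
  update() { this.renderCount = (this.renderCount || 0) + 1; }
}

const el = new UrgentLazyElement();
el.requestUrgentUpdate().then(() => console.log(el.renderCount)); // → 1
```

An event handler like onmouseover would set its state and then call requestUrgentUpdate, exactly as in the demo.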
Okay, and let me talk real quick about scheduling. So in that demo there, I did a very simple thing: I said instead of using a microtask, we're gonna use a full task. And I actually didn't expect that to work as well as it did when I made the demo, but it did work very well; the browser ends up doing a very good job of executing as many tasks as it can before it has to paint a frame. But it's pretty naive, and it leaves off a lot of features that we would like, like different priority queues, the ability to cancel work, and the ability to coordinate long chains of tasks that are all related together. So that scheduleUpdate method is exactly where we would like to plug into a native platform scheduler API, like Shubhie and Jason talked about earlier. So the important point is that with web components, we don't have a kind of overarching framework that can coordinate and schedule our components for us, and we might get components from different vendors, so being able to plug into a global platform-provided API for scheduling is gonna help us tremendously here. Okay, there we go, let's move on to talking about managing async state. So far we've talked about being asynchronous on a per-component level: yielding to the browser and letting it paint in between components. But sometimes we need to manage data that itself is asynchronous, and lit-html rendering is synchronous by nature: when you give the render function a template, it's gonna do all the work immediately to render it. So what do we do if we wanna render asynchronous data inside of a synchronous template?
So we can look at how we handle data normally here. If we have a string, and just kind of a plain reference to that string, it's pretty easy to use: we just use it in a template and we get the rendering that we wanted. And if we change this instead to load off the network, it turns out that lit-html handles promises already, and so what we'll get is a blank screen, and then when the promise resolves, we'll render "hello world". So this is kind of nice; we get some behavior that we might expect right out of the box. But we might not wanna render a blank screen as our initial state, so this brings us to the idea of directives. These are functions that are a little bit special: they're able to customize how templates are rendered by lit-html. And one of the more useful directives that lit-html ships with is called until. What until does is it takes a promise, and it will render the result of that promise when it resolves, but it will render a placeholder until that promise does resolve. So we can use that here, and you see that in the template we call the until directive with our promise and the loading placeholder. That's gonna show first, and then when it resolves, we're gonna render our content there. So this example is a little bit too simple, because it assumes that we have this promise available already. A lot of times, instead, we wanna run some task when we need to render, and we might have some operation that's dependent on some instance state. So in this version of the example, we have a fileName property, and we wanna fetch some data based on that file name. Now, we might be tempted to call fetch inline with the template, so that we'll fetch the correct file and then render it. And this does work, but it has a problem: every time we render this template, we're gonna call fetch. And we might be rendering the template because some other property changed, and in this case, we're gonna flood the network with lots of fetch requests, and we also
might show an alternating kind of loading and resolved state in our UI. But it's almost the mental model that we want. So what we really wanna do is be able to run a task that's dependent on some data, only when that data changes. So that brings us to the next experimental helper here, which we call runAsync. What runAsync does is it performs an async operation, but only when the data it depends on changes. And it's actually a kind of directive factory: the way it works is that you give it an async function that takes some data and produces a result, and it returns to you a directive that you can use inside your lit-html template. So if we wanna reproduce the previous example using this, we can just create a fetchContent directive by passing runAsync a function that takes a file name and calls fetch for us. And when we go to use it, we can just use it inside of our template: we pass it the file name, and then we pass it another template that's gonna render when we have successfully resolved that promise. So this lets us get part of the way to our goal: we can render some asynchronous data. But it turns out that asynchronous data can be in a number of different states. For any async operation, you can be in an initial state, which means you haven't started it, it can be pending, it can have successfully completed, or it can have ended in failure. And it really helps us if we model and think about all of these states explicitly, so that we make sure we handle them. And next slide here... having some clicker problems... do we have another clicker? Today is the day of malfunctions. There we go... hopefully this works. Do we wanna swap out?
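Those four states can be modelled explicitly in plain JavaScript. This sketch (illustrative names, not the actual run-async helper) tracks the state of an operation and picks a "template", here just a string, for whichever state it is in:

```javascript
// Track which of the four async states an operation is in.
// An operation that hasn't been started yet would stay 'initial'.
function trackState(promise) {
  const tracker = {state: 'pending', value: undefined};
  tracker.done = promise.then(
    (value) => { tracker.state = 'success'; tracker.value = value; },
    (error) => { tracker.state = 'error'; tracker.value = error; },
  );
  return tracker;
}

// One "template" per state, so every state the UI can be in is handled.
const view = (t) => ({
  initial: 'Enter a search term',
  pending: 'Loading...',
  success: `Results: ${t.value}`,
  error: `Error: ${t.value}`,
}[t.state]);

const search = trackState(Promise.resolve('3 packages'));
console.log(view(search)); // → 'Loading...'
search.done.then(() => console.log(view(search))); // → 'Results: 3 packages'
```

The run-async directive does essentially this bookkeeping, plus only re-running the task when its input data changes.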
Okay, I'll keep going until it breaks again, and I can at least be as good at this as Surma. Yeah, sorry Surma. Okay, so here we can see that our directive actually takes templates for all of the different states that our UI can be in. So we have a success template, a pending template, an initial state template and the error template. And to make this a little bit more realistic, I made a demo that searches the npm package registry. This is a basic kind of search-as-you-type live search result demo, and it has a simple UI: we just have a search box and a results panel down here. And there it goes again... let's see, do we have another one? Okay, it's not just me. Okay, all right, we'll keep going here. So to build this demo, we're gonna go in two steps... uh oh, Halloween is over, this is not haunted anymore, is it? The screen went back... did I do that? Okay, this is all gonna be edited out, it's gonna be fantastic. Okay, so we're gonna build this demo in two parts, hopefully. So first, we're gonna define a searchPackages directive using runAsync, and here's our initial starting point for this directive. Our async task function here is going to generate a URL for the npm search service, and then get the results by fetching it. And then here we're gonna handle the response, and just do a little bit of due diligence: make sure that we have a 200 response and return that, otherwise we throw the message we got back. And I wanted to make this task able to kind of trigger that initial state template, and the way we do that with runAsync is that we throw an initial state error. So here I'm just gonna check to make sure we have a query we can execute, and if not, I'm gonna throw this error, and runAsync is gonna render the initial state template. And then it turns out that the npm registry is a little difficult to get to trigger an error; usually it just returns empty results. So to be able to show the error state template, I just do some pre-validation here of the query, and make
sure it doesn't start with a dot or underscore. And then finally, to make this even more realistic: if you're doing a search-as-you-type UI, you're gonna have a lot of requests that you initiate where you don't care about the results, because you've typed in a different query by the time the result gets back. And so the Fetch API is able to take something called an abort signal, so that we can cancel the requests. And so runAsync will generate an abort signal for you, and you get it in this options argument here, and you can forward it on to the Fetch API. And so this is our entire searchPackages directive, built with runAsync. So next we just need to use it. Here's a little snippet of the demo UI: we have an input element here, which simply displays the query and updates the query on input, and then down here we use the searchPackages directive. And so we use it by passing it the query, and then we give it a success template: here we just iterate over the results and display little cards. And then we give it the pending, initial and error state templates. And so when we go to use this demo, we see that we have the initial state template rendering... sorry, that's small... it says "enter a search term". When we type, it turns into "loading", and then we get our results back. And if we were to go back and enter a query that starts with an invalid character, you're gonna see the error template there, and that even updates as you type. And if you realize your mistake and go in and type a new term, you're gonna get all the intermediate async state templates as you type. So that's the demo, and you really did see most of the implementation there, so it was quite easy to do with that directive. And the key takeaway here is that we wanna explicitly model our asynchronous operation state. If we do that, we're more sure to take care of all the states that we can be in, and if we build a UI for each state, then we're gonna accurately let our users know what is going on with the application,
and they're gonna have a better user experience. Okay, so finally, once we have an application and a UI built up of all these asynchronous components, we might need to coordinate them, right? So if you have a lot of async children in your page, how do we ensure a consistent UI if we want to, or how do we avoid a sea of spinners? And so to demonstrate the sea of spinners problem, I added to the demo a little feature where, when you search, the cards are gonna do their own query to bring up the dist-tags of your npm packages. And you can see, though, that we got a lot of spinners on the page, and this can be kind of a distracting UI. So we wanna give developers an option to not have to deal with a sea of spinners, and the way we're gonna handle this is that we're gonna coordinate the components here with events. So what we're gonna do is fire a promise-carrying event, and that promise is gonna resolve when some work is done. So an async child component creates a promise, fires this event, and then resolves the promise. And that's gonna look like this: A is gonna be our container up there, and E and F are async children, and they're gonna fire this pending-state event that holds a promise. The container is gonna handle the event and wait for that promise, and then the children, when their work is done, are gonna resolve the promise. And finally, when all the promises have settled, we're gonna update the UI. So that brings us to our last experimental prototype here, called PendingContainer. And PendingContainer kind of takes care of all this plumbing for you, and it's a class mixin, so you can apply it to a superclass like LitElement. And this has two features. We have a hasPendingChildren property, which indicates whether or not there's async work happening below you, and when this property changes, it's gonna cause a re-render of the element. And then we have an event listener that listens for the pending-state event and then triggers the bookkeeping, so that we know
if we have any extra work pending below us. And to use it, you just apply this mixin to the superclass, here LitElement, and once you do that, you have available the hasPendingChildren property that you can use in your template. And so now we're gonna add a spinner, and this is a top-level spinner, to the UI, triggered based on whether or not there are any pending children. And so the runAsync helper is gonna fire these events for us, and this container mixin is gonna capture them. And so what we're going for here is a UI where we have a spinner... and it happened again... there we go... okay, might just be a faulty button here. So what we wanna add here is a spinner at the top of the UI that's gonna be going whenever there are pending search results coming back from the server, or we have children that need to update. So now you see we get the spinner as we type; we don't get the spinner on the children, but we can see the top-level spinner is still going, and then the results come in and the spinner stops. And so that's the UI we were going for, and it was pretty straightforward to build with these directives. So when you have an asynchronous UI, there are a lot of different options for how to handle this. You could try to block the UI while you have pending work, you could show the raw incremental updates, you could have individual spinners on your page, or you could try to replace all that with top-level placeholders and spinners. So what you wanna do kind of depends on the UX and the UI that you're trying to build, but the key here is to provide the plumbing and the framework and the nice API so that you can build whatever you choose to build. And that went two slides forward... all right, well, this is my wrap-up. So we're very excited about some additional work that's gonna be done in this area. Jason and Shubhie talked about the native scheduler API, which we're extremely excited about. Display locking is a new proposal where you're gonna be able to lock an entire portion of your
screen so that you can update it incrementally and then flip it to the new version and Gray talked about virtual scroller which is gonna let us handle large amounts of data as well and then our R and we're gonna be working on more libraries and examples of how to do things like lazy load components background rendering and viewport based visibility rendering so that things only render when they show up on the screen I have a few links here these would probably be easier to get to from the video and then I'm gonna be over in the ask Chrome area doing Q and A after this if you'd like to ask any questions okay thank you All right final big web quiz question of the day any users of web pack grunt, scope, roll up all of those things you're gonna love this one yep build tool is it a build tool or is it just a noise that a human makes that's very true it is very true I don't make a broccoli sound I don't even know what that would mean devices at the ready it's the final question yes you ready okay go all right full English okay it's a gore Rudolph Brunch I love a bit of brunch ooh it's all changing it's interesting the back end is deciding what it thinks the the confidences are there we are gobble SermaJS JakeJS quite possibly real parcel web pack I think I might have heard of that one Brussels sprouts it is reaching that point well it's broccoli-existent you know that makes sense nearly done aubergine uh that would be zucchini over here I think wouldn't it no eggplant eggplant see I've I've so I get so confused I'm so easily confused and build gates I saw build gates at the end there hahahaha wait all right let's see how you did full English is fake whereas brunch is real I would have put uh brunch as a fake gourd the second one gourd I've no idea uh wait that's a kind of vegetable isn't it yeah it's like a yeah pumpkin okay I think I've seen one recently and it was not a particularly appetizing looking thing um to me anyway Jake Jess you're saying you're saying that's 
fake, are you? It is fake. Oh no, it's real. It's real. Yeah, I had to wait all day, and it's actually real. I'm losing faith in all of this. What are we doing as a community, where we don't have a SurmaJS but we have a Parcel? I thought that would be Gobble. Yes, it's a ritualist one. Sure, sure, why not. OK, let's see here. Broccoli: yes. Brussels sprouts: fake. webpack: 100%. 100%, yep, 100%. I feel like we established that webpack is a thing. Yeah, to be fair, in my introduction I might have ruined that one. All right, last set, here we go. Aubergine is fake, and Rollup, of course. Build Gates again. I'm sad on the inside that that's not a thing. OK, where are we at? We've got our final speakers over there. Cool. It is time for the final talk. Yep, give a round of applause for Dan Dascalescu.

Oh, I'm just so sorry. It's not just me here; I'm here with Stephen Barber, who's an engineer on Chrome OS. So good evening, everyone. I am Dan Dascalescu, a developer advocate at Google, and tonight we'd like to talk to you about why Chrome OS is an awesome choice as a web developer platform. There are two main reasons why you should develop on Chrome OS. The first is that Chrome OS is an unprecedented convergence of technology stacks. It brings together web applications, because it's an OS that has a browser as its UI, but you can also run Android apps, and with full Google Play support you can install Android browsers that you can test your web apps on. And starting with Chrome 69, you can install Linux and run your familiar development tool workflow there. This is a sneak preview of what's coming in the talk: you can see here a terminal, an IDE, a couple of browsers, and of course a PWA. The second reason why you should develop on, and target, Chrome OS is that it powers a very wide variety of devices. You might have seen Chrome OS laptops, or Chromebooks, from a variety of manufacturers, and also you might have seen some convertibles, again from various
manufacturers, and also all-in-ones like the LG Chromebase, and small form factor PCs. This is the Chromebox, which was debuted by Samsung in 2012, and then HP, Asus, and other manufacturers followed suit. And this is the mini form factor PC: it weighs three ounces, you plug it into the HDMI port of a display, and it turns that display into a computer. You can attach a mouse or a keyboard via USB or Bluetooth. And then there are mega displays: this is the Chromebox Commercial, which powers digital signage or kiosk displays. And this summer we saw the first tablet powered by Chrome OS, the Acer Chromebook Tab 10. And of course Google has our own lineage of devices: this is the Pixelbook, the flagship device, which is at 75% off for you guys. Yeah. And our latest offering, the Pixel Slate, which was announced last month.

So, in one slide, why target Chrome OS? Because of a large and increasing market share. You've probably heard that we have a very extensive presence in the edu space; Chromebooks are very popular there. Then, if you optimize for Chrome OS, you'll actually target a variety of these convertible form factor devices, devices that have or haven't got a keyboard, or a mouse, or a stylus, or even a touchscreen. So this could also future-proof you for devices that haven't been invented yet, though after I put this slide together, Samsung actually released a foldable-screen phone that becomes a tablet, so the future is here already. So, again in one slide, the reason is diversity: you can develop apps on Linux and test them on a variety of Android and Linux browsers. Chrome OS brings together your own development workflow, the one you're familiar with, your own development tools, a variety of form factors from mobile to tablet to convertible to desktop, and browsers on Android and Linux. And there are quite a few of them: Edge and Samsung Internet work on the Pixel Slate; the others should be able to be installed on a Pixelbook as well. And this is Edge, and UC Browser, and Firefox, running on the same Chrome OS machine.
Then you can install some desktop browsers, so you can test in full desktop Firefox if you install the Linux version of it. This is Firefox, and this is Epiphany, aka GNOME Web. And you can also install Docker, which I've heard many of you are interested in during the forum. This is unofficial support, but there is a thread on Reddit; if you search for "docker now working" in the Crostini subreddit, you'll find it. Try at your own risk, but it does work.

So how does it work? How does Chrome OS manage to stick to its principles, which are speed, simplicity, and security? How can it run all these different web apps, and Android apps from Play, and Linux apps like GIMP, while staying fast, simple, and secure? This boils down to the containers architecture, which I'll let Steve tell you more about.

Thanks, Dan. So when we were bringing Linux apps to Chrome OS, it was really important that we maintain all of the things that make Chrome OS what it is. Simplicity was first: it shouldn't feel like you're running a separate OS; instead, the Linux terminal and GUI apps should seamlessly blend in with Chrome and Android apps. And we've managed to do this while keeping things fast. Android and Linux support don't do any emulation; by using lightweight containers and hardware virtualization support, your code will run natively. And of course, security is always on the mind for Chrome OS, so Crostini uses both virtualization and containers to provide security in depth. To expand a bit on security: we're starting from a secure foundation and working our way up with features from there. Right now, Linux is pretty isolated from the rest of Chrome OS, but we're working on the ability to share files and folders with it, and soon we'll be adding support for Google Drive as well, so you'll be able to keep all of your dotfiles, projects, and other important work safe in the cloud. So let's take a look under the hood real quick. The first time you launch a Linux app after logging in, we'll start up a lightweight
VM and container. The VM is actually providing the outer security boundary and gives you a real Linux kernel; it's a minimal version of Chrome OS that was designed specifically to run containers. And the container inside is where you do all of your work. This container is very tightly integrated with the rest of Chrome OS, so things like launcher icons and graphical apps work just like any other Chrome OS or Android app. And the most important thing, of course, is that you get a terminal.

So how does it actually feel? What's it like? The answer should be: like most other Linux systems. Crostini is based right now on Debian stable, because a lot of developers are familiar with apt package management and Debian-based systems. For now, we're starting out targeting web developers, because Chrome OS is a web-based OS and we think it's appropriate that you should be able to develop web apps on a web-based OS. To do this, we provide some nice integration features. Right now we'll do port forwarding, so it doesn't seem like you're running a container: you get localhost to connect to, and that's treated as a secure origin, just like it should be. But if you do want to treat your container like a separate system, you can, and we provide a penguin.linux.test DNS alias. And we do want to support more developer workflows than just the web, so we will be adding USB, GPU, and audio support, filesystems in userspace, and better file sharing in upcoming releases. And now Dan will talk a bit more about using Chromebooks for web development and show us what Crostini looks like in action.

Thank you, Steve. So we know how it works, and we know why it's awesome; let's see how to actually use it for developing web apps. The goal is to let developers do everything they need locally, and the Crostini support here is in development, but most things work as expected. You can run editors, IDEs, databases like Mongo or MySQL, local servers, and pretty much anything you can install with apt.
To set up Crostini, search for Linux in Settings, and then you'll see this dialog. Once you tap Install, in about a minute or two, depending on your network speed, you'll have Linux installed on your Chromebook, and this is a terminal. So we have a terminal. Woohoo! Let's build a desktop web app for Pixelbooks.

Right, a bit about how these apps are usually built. A lot of development of desktop web apps is done with Electron or node-webkit, but the problem with that is Electron means Chromium plus Node, so you ship a rendering stack along with your app. That might be useful if you need low-level access, but consider Carlo, which is a Google project that is essentially a helpful Node app framework and provides applications with Chrome rendering capabilities. With Carlo, you don't have to ship Chromium or any rendering engine with your app: it uses a locally detected installation of Chrome, connects to it via a process pipe, and exposes a high-level API for you to render in Chrome from your Node script. But if you don't need all these low-level OS features, you can do something even simpler, which is to build a progressive web app, and this is what Spotify has done. You can see here that I'm going to open open.spotify.com in a tab and click that Install App button, and once I accept the install prompt, the tab becomes a PWA without a URL bar; it has its own buttons, like close, and you can find it in the shelf. You can launch it from there, and once you've launched it, there is no more Install App button, because it's an installed progressive web app, and it's also accessible from the shelf. These system-level integration features are provided by Chrome, and they've been available on Chrome OS since Chrome 67, which is ancient by now; they're coming on Windows starting with Chrome 70, the current stable version, and on Mac with Chrome 72. Or, if you want a sneak peek, check the enable-desktop-PWAs flag. This is thanks to service worker support, which has been implemented by all major
browsers, and they are also working on advanced features such as Payment Request (Firefox is working on that), Edge has push notifications and Add to Home Screen now, and Safari is also working on authentication APIs.

So, OK, we've talked a lot; let's try to do a demo and see if anything blows up. I've set up Crostini already, because that would take two minutes which I don't want to waste. We're going to install Node (which I have already), VS Code, and npm, and then we'll check out Squoosh. You might have seen it in one of the earlier talks; it's an image recompression app. We'll open that in VS Code to check out the code, run the web server, and, the most interesting part, we're going to open Squoosh from an Android browser on the very same device, and if things work, we're also going to do some remote debugging. These are the instructions to install Node; I've already run them because it takes a bit, and I've switched to the demo station. I'm going to run npm install and npm run build, those take a while, then npm start to start the web server for the Squoosh app, and you see that it tells you it runs at port 8080, bound to all local addresses. So let's run Chromium for Linux. This runs in the Linux container, and once Squoosh has started, which seems to be the case, let's go to localhost:8080, and there's Squoosh. I'm not sure why it said "failed", but it certainly works. You can open images, or not; this is a live demo, after all. The point here is that you have access to localhost from the Linux container. And now let's try running Chrome Dev from Play. I'm choosing Chrome Dev here to be able to distinguish the icons. Looks like we need to update it; hopefully the update won't break anything, so I'm going to launch it before it gets a chance to update. Now, localhost here will not work, and that's a known issue Steve is working on (didn't mean to put you on the spot). We need to get the IP address of the Android container, which is this one. There is this command, ip address show,
which has some long-ass output, so I'm just going to copy that and paste it in Chrome Dev, which I thought I launched somewhere. Oh, it quit as it updated. OK, well, I hope it didn't break anything. Come on. Well, so this is Squoosh running in Chrome Dev, and now let's try something even more dangerous: let's try to remote debug it with Chromium on the same machine. I know it's called remote debugging, but it's on the same machine, because these are different containers. To do that, we need to put the device in developer mode and then enable adb debugging here, which I've done, and then we need to run this command. That fixed IP is actually documented on our Android setup page; it's the IP of the ARC container, and we set up an adb bridge to it. So now, if things are on my side, you'll be able to go to chrome://inspect and see a number of remote targets here, and we actually see two of them. Let's open the Squoosh one. I'm going to click Inspect, and this appears to work, surprisingly well for a demo. I'm going to resize the window and try something really spectacular: I'm going to scroll. This is live, not an animated GIF; this is actually remote debugging, and whatever I'm doing here, whether this app works or not, you can actually remote debug it with Chromium on Linux, debugging an Android browser running your progressive web app. Does that make sense? This is what I wanted to show; let's get back to the slides.

So these are the instructions for installing Node; there's nothing special here, you follow what Node publishes on their GitHub. Then you check out Squoosh using Git, again your usual development workflow. And, oh, something else: maybe Steve wants to show this. We can run VS Code to check out the code. So until we switch to the demo, this screenshot shows what we're actually going to do, but great, we're going to do it live now. Steve is going to double-tap that after he copies it to the Linux container, and in the Linux container, if you double-tap a .deb file, you are prompted to install it as a Linux
app. So Chrome OS supports that out of the box, and once the installation completes, you should be able to see Visual Studio Code in the launcher; even the installation prompt will say "find Visual Studio Code in the launcher". And this is not network-dependent, so we should be as fast as when we rehearsed, though 58% is not terribly fast. OK, 91. Cool. So show us some code, Steve. All right, wait one second, or two seconds. There it is. Head to search, and here we go: VS Code. Yeah, I have a manifest, that's why it's a progressive web app; it has a start URL. OK, so let's switch back to the slides for some best practices... or no, let's actually look at this once more; it's really cool, right, how you can drag those in sync? I had to brag about that. The way to set this up is not trivial, which is why I posted a Medium post this morning with complete instructions; it's about 17 steps you need to follow. Check out bit.ly/cros-remote-debug, or take a picture of this slide. OK, I see the phones down.

So next, how to actually optimize PWAs for Chrome OS, which is not really a topic; it's more of a non-topic. You shouldn't detect that you're running on Chrome OS; you should use Lighthouse as you would for any PWA. So if you only have five minutes to spend on optimizing your app, check out Lighthouse, the auditing tool that will give you a checklist of what to do. And make sure that your app installs. This is one bit that might be different on Chrome OS: unlike on older versions of Chrome on mobile, your users will not be prompted automatically at the bottom to install the app. You need to catch the beforeinstallprompt event, save that prompt, and call its prompt method later. This is the code to do that: you add an event listener for beforeinstallprompt, call preventDefault (for all the browsers), save the prompt in this deferred variable, and then show your install button; here we just set its display to block. Then, in the click listener for that install button, you hide the button, call the prompt method from the saved variable, and check the userChoice property, particularly the outcome field, to see if the user has accepted your installation.
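The flow just described looks roughly like the sketch below. The '#install-button' element id is made up for the example; the event itself, preventDefault, prompt(), and userChoice.outcome are the standard beforeinstallprompt pieces named in the talk.

```javascript
// Sketch of the install-prompt flow: catch the event, save it, and call
// prompt() from the install button's click handler.
let deferredPrompt = null;

function onBeforeInstallPrompt(event) {
  event.preventDefault();   // stop any automatic prompt
  deferredPrompt = event;   // save it for later
  return true;              // caller can now reveal its install button
}

async function onInstallClick() {
  if (!deferredPrompt) return 'unavailable';
  deferredPrompt.prompt();                          // show the install dialog
  const { outcome } = await deferredPrompt.userChoice;
  deferredPrompt = null;
  return outcome;                                   // 'accepted' or 'dismissed'
}

// Wire-up only runs in a browser; the guard keeps the sketch self-contained.
if (typeof window !== 'undefined') {
  window.addEventListener('beforeinstallprompt', (e) => {
    if (onBeforeInstallPrompt(e)) {
      document.getElementById('install-button').style.display = 'block';
    }
  });
}
```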
OK, so, as I said earlier, the answer to this question is no: you have your app installed on Chrome OS, but you should not do browser sniffing; do feature detection instead. The reason is that there is a wide variety of input devices and form factors that Chrome OS can run on. You might have a touchscreen, or you might not; some lower-end devices don't have one. There might be a trackpad, or you might be on the Acer Chromebook Tab 10 tablet that I mentioned earlier. There might be a keyboard, so if your app can use keyboard shortcuts, it's good to have support for them. There might be a mouse, so support that if it makes sense. There might also be a stylus, useful for drawing apps. Also make sure to build responsive and take advantage of all the screen real estate. This is an example of an app that supports a large, or rather wide, display and shows a number of days in the weather forecast; if it's resized to a phone-size screen, it shows less information, and it can even support a rolled-up state if the user just wants to glance at the weather continuously, if they have OCD. For a media player, that would be a more useful example: you can have previous and next controls. (I actually have OCD, and I do that often.) And this is an example from Starbucks: they found that building responsive pays off, because users would actually order on the desktop and use their mobile device to pick up their order. So build responsive. It also pays off to optimize your forms, because nobody likes to fill in forms, and we have some guidance at g.co slash amazing web forms; that's an amazing URL. And if you want to handle touch in an optimized fashion, check out g.co slash web touch. There are also pointer events, and these are a unifying model for all sorts of pointer input: touch, trackpad, mouse, stylus. And you have a lot of events that are
supported in Chrome, Firefox, Opera, IE, Edge, and Samsung Internet, such as pointermove; you simply add a listener for it. You have pointerenter, pointerdown, pointerup, pointercancel, pointerout, pointerleave, and so on; more at g.co slash pointer events. And this is an example of code that distinguishes between pointing devices: you can check if there is mouse, or touch, or pen, or something that has not yet been supported by the browser.

OK, so what's going to happen in the future? We are working on improving the desktop PWA support. One improvement is keyboard shortcuts. Another one is badging for the launch icon, so you don't have to notify the user for everything; you can display a number of notifications, just like for Android apps. And then also link capturing, which would make Twitter very happy: they have a great PWA, but when you click on a link, it's not captured yet. In the future, we hope to enable this, such that when you click on a link that your app can handle, your app will actually open and handle that link. For that, you need to define the scope parameter in the manifest; the scope parameter is used to determine when your user has left your web app and needs to be bounced to a tab. We are also working on low-latency canvas contexts, which were introduced in the Chrome 71 beta, and these are very useful for highly interactive apps. They use OpenGL ES for rasterization, and the way it works is that your pixels get written to the front buffer directly. This bypasses several steps of the rendering process: Chrome writes into that piece of memory that is used by the Linux rendering subsystem and is scanned out directly to the screen. These low-latency contexts run the risk of tearing, but if you don't interact with the DOM, such as in a game or other highly interactive app, it's useful. This is an example of how to set up a low-latency canvas context: you pass the low-latency parameter as true, and it also needs to be opaque, so you pass alpha: false.
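Here are reconstructed sketches of the two snippets referenced above. The pointerType values ('mouse', 'touch', 'pen') come from the Pointer Events model described in the talk; for the canvas hint, note that the standardized option is spelled `desynchronized`, while some Chrome 71-era builds used a different flag name, so treat the exact spelling as version-dependent.

```javascript
// Distinguish the pointing device inside a pointer event handler.
function describePointer(event) {
  switch (event.pointerType) {
    case 'mouse': return 'mouse';
    case 'touch': return 'touch';
    case 'pen':   return 'stylus';
    default:      return 'unknown'; // input types the browser doesn't know yet
  }
}

// Request a low-latency, opaque 2D context, as described in the talk.
// 'desynchronized' is the standardized hint name.
function getLowLatencyContext(canvas) {
  return canvas.getContext('2d', { desynchronized: true, alpha: false });
}

// Wire-up only runs in a browser; the guard keeps the sketch self-contained.
if (typeof window !== 'undefined') {
  window.addEventListener('pointermove', (e) => {
    console.log('moving with a', describePointer(e));
  });
}
```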
And this is the last slide. I had no idea what to put on it, but I figured I should add that Chromebooks are these converged machines that run Linux, Android, and Google Play natively, without emulation, so they run very fast. You should totally take advantage of the 75% off discount, and please do explore Chromebooks and give us feedback. We love feedback. We have the chromium-os-dev Google Group, and also the Crostini subreddit. If you find issues, please check if they've been reported at crbug.com; otherwise, file them using Alt-Shift-I and add the Crostini tag. We are Dan and Steve, and thank you.

This looks a bit strange. Seriously, prizes? Yeah, prizes, prizes. Look at this. You like this, do you like this? I mean, OK, yeah, sure, this is the big winner. All right, here. Yeah, but you can put it on your desk and just look at it every day while writing code. And of course, not overlooking the fridge magnet. Yeah, so that's first, second, third place for the quiz. Yeah, I would say it seems a sensible way to do it, right? By size. Yeah, I'd say so. OK. Leaderboard? Yes, if we get the Big Web Quiz up there, we can do the leaderboard. Oh, the tension is palpable. No, it's not. Do you have any music for this? No, make your own music. Here we go. We congratulate those three: Masataka and Will Staffer. I feel like Masataka won two years ago. Surprise! All I'm going to say is 200 points. What? Yeah. You should hire these three people if you need someone with very exhilarating niche knowledge of the web; if there's a web pub quiz somewhere, they would be perfect. Excellent. Yeah, so those three, come down to the front of the stage afterwards. Oh, sorry, I was just gesturing. Right, yeah, come to the front of the stage later on and we'll give you the beautiful prizes. Right, that is the end of the two days that we've got for you. Made it, survived. If you've been here with us, thank you so much for coming along. If you've been watching on the live stream, thank you also for tuning in. Don't forget, you can get... forget...
Try again, Paul. Don't forget, you can get all the videos on youtube.com/ChromeDevelopers. Subscribe if you haven't already. Yes, well done, the whole YouTuber thing, that's amazing. Yeah. We will catch you... oh, well, no, we should... whoa, whoa, steady on, there's a lot of people to thank. Oh yeah. So we'll just start getting a rolling round of applause going, because we need to thank the caterers (come on), the sound crews, stage directors, Ainsley on photography, the filmers, the live stream managers, the editors, the forum staff, the captioners, the runners, the organisers, the speakers, and the content reviewers, Peter van Schratten on the music, and anyone else we have forgotten. They made this look really easy, and making it look easy is really, really difficult. And with that, this time for real, we'll see you in 2019. You're in 2018. Thank you very much.

What are the colours of the Google logo, in order? There's an order. Let's talk about the first letter. Can you get the first letter? Well, let's put them on the table: we're working with green, red, yellow, and blue. So, red? No. OK, blue? That was so confident, I love it. That is correct: blue. Green? Also, you weren't naming them in order; I mean, that's cheating. Here are some colours. I feel like the first one is blue, 100%. So far: blue. Blue, that is correct. Blue, yellow, or something like that? Or less yellow? I mean, there's something like that, but it's not that. I want more people to get it wrong. Is it yellow? No, I'm going to go with blue. That is correct. OK, all right, and then I'm going to go with red, and yellow. We're doing well, actually. And then I'm going to go with... I'm just going to say green. Did you get three out of six? The correct order is blue, red, yellow, green... that's not actually even it, see, you can't even read. Who got all of the colours? No one's got it. No one's got it right. The correct colours are... take it away. So am I? Blue, red, yellow, blue, green, red. I think I missed one letter, actually.