Hi, good afternoon. I'm Alex Russell. I'm an engineer on the Chrome team. And to set expectations, this isn't the talk I usually give, so I hope you'll bear with me through some difficult content. These days, I usually put this slide up to start to talk about Progressive Web Apps, which Frances Berriman and I named last year. Progressive Web Apps are the culmination of multiple years of my team's work. I've been working with Jake on Service Workers for, I guess, four years now. And the team that I work on has been designing and building the core technology for Progressive Web Apps, the stuff that you've been hearing about for the last couple of Chrome Dev Summits and all day today and probably all day tomorrow. And I apologize a little bit, but not really. As you can imagine, building and maintaining all that stuff and working on the standards for it is a full-time job.

But I'm not going to talk about Progressive Web Apps today, at least not directly. You see, for the past year, I've also been working with Thao and her team to partner with folks who are about to launch their Progressive Web Apps, to make sure that they're really high quality. And this sort of incidental consulting work has given me a broad view into the practices of many of the teams that are building for the mobile web today. What I can say, with only a few exceptions, exceptions like Booking.com and the great work that the Flipkart team did, is that most of us don't really understand how hard mobile actually is. I haven't exactly been making friends in the JavaScript framework community by saying that sort of thing out loud, it turns out. But as the Polymer team will tell you, this is basically the PG version of what I've been saying to them for something like a year and a half.

We had some really tense meetings. I kept saying things like, "it needs to be more asynchronous," or, "look at this trace, you really need to load a lot less JavaScript; 12K sounds good to me, actually," or, "you really need to break up your scripts so you're not executing this entire long block; what the heck is going on here?" And at some point, they said, "we get it, we get it. Just stop telling us what to do and start telling us what goal to hit." And this was kind of a breakthrough in the conversation; we'd been at loggerheads for a while. So I kind of put my finger in the air and said, it would be really great if you could get me something that's interactive in about three seconds on a 3G connection on first load, and interactive in about a second when I launch it from the home screen.

So the Polymer folks went off, and they went through the stages of grief. We had some denial, I'll admit there was some denial. Anger, yes. Bargaining, absolutely. Depression, luckily they sit far enough away from me that I couldn't actually see them sobbing in their cubes. But they finally accepted the challenge, and they came back with the PRPL pattern, which Sam is going to talk a lot more about. Meanwhile, the rest of the JavaScript community kind of hasn't internalized the same message to the same degree.

So I'm here to let you down. Not easily. This may be a little bit hard, and I apologize in advance, but we need to get to the bottom of what mobile actually means. And when you see me tweeting things like this, it's actually kind of desperate. I have spent years of my life working in TC39 to make JavaScript a better language.
I have spent countless hours persistently advocating for extensibility to give you more power when you're writing JavaScript in the web platform. I've designed features like Service Worker, with Jake and a load of other folks, that are entirely predicated on JavaScript in the first place. Back in the day, I used to work on a JavaScript toolkit with Scott and Steve from the Polymer team. It's not that I hate JavaScript, or that I don't like it, or that I think you shouldn't be using frameworks. I don't hate them. It's just that we're really in the midst of a crisis, and collectively, we don't understand how bad that crisis is. If we did, we would have already modulated our behavior.

So what I'm seeing when I do reviews is almost universally bad news, specifically in the context of the RAIL performance model. A quick recap: RAIL stands for Response, Animation, Idle, and Load. Response means responding to input in under 100 milliseconds. Animation means animating at 60 frames a second, and because the browser has to apply the things that we hand it on every frame, we probably only have about eight milliseconds to do our work. And if you think 60 frames a second is hard, WebVR needs 120. Idle means that when we're doing background work, we need to break that work up into 50-millisecond chunks so that we can respond to subsequent input and stay under that 100-millisecond budget (see the sketch just below). And Load means we really want to complete actions for the user in under a second, because over a second, research suggests that users lose focus on the task they're trying to accomplish. If we can't actually complete the action, we definitely need to acknowledge it in under a second and give users the sense that we're doing something for them.

So you've heard this number all day today, Darin had it in the keynote, but the DoubleClick folks went and did a bunch of work to find out what the bounce rates for sites are and whether performance matters. And the answer, of course, is yes: 53% of users bounce from mobile sites that take more than three seconds to load. You leave real money on the table when your site is slow. Yet most of the apps that I've traced over the last year have been performance travesties, and my experience isn't an outlier. That same report noted that the average mobile site takes 19 seconds to load. 19 seconds. Collectively, we're failing. Sam's got more data on this, and I don't actually have time to go into it, because we're gonna talk about some of the reasons why.

I think one of the key reasons that we are not succeeding today, to put it kindly, is a lack of understanding of and respect for how hard mobile is. I have faith that as a community, if we understood and respected the limits, we'd be doing much better. Nobody here wants to make bad user experiences. I think part of this is because we're the only platform in the world that tried to take all of our desktop stuff with us. You don't take a Java JAR that was a Swing application and run it on your Android phone. You don't take a Mac universal binary and run it on your iPhone, right? You don't take a Win32 app and run it on your Windows phone. Everybody else switched their tools when their form factor and their constraints changed. We didn't. We didn't make that switch, and the proof is in the pudding. It's in the traces that I'm looking at every day. But that's not the only reason. Why are most of the popular frameworks and tools that we're using, the toolchains that we all wind up setting ourselves up with, unacceptably slow by default?
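[Editor's note: to make that Idle budget concrete, here's a minimal sketch, not from the talk, of breaking a long batch of background work into roughly 50-millisecond slices so the main thread can keep responding to input. The `items` array and `processItem` callback are hypothetical names.]

```js
// Process a large batch of work in ~50ms chunks, yielding between chunks
// so that queued input events can be handled within the 100ms budget.
function processInChunks(items, processItem, budgetMs = 50) {
  let i = 0;
  function runChunk() {
    const start = performance.now();
    // Do work only until this chunk's time budget is used up.
    while (i < items.length && performance.now() - start < budgetMs) {
      processItem(items[i++]);
    }
    if (i < items.length) {
      // Yield back to the browser, then continue on the next turn.
      // requestIdleCallback is another option where it's supported.
      setTimeout(runChunk, 0);
    }
  }
  runChunk();
}
```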
Why are our tools producing such dismal results? It's not like we're bad people who want bad things for users. I think, in part, it's that you all aren't actually developing on mobile phones. Paul showed you some really great DevTools; they're awesome. But I want to understand: who here uses Chrome DevTools to get the responsive view and understand how things will look on a phone? Okay, now keep your hands up if you're using WebPageTest for testing on real devices. Oh, I like you, some of you are liars. And keep your hands up if you use chrome://inspect over USB to do real on-device debugging. Okay, who's done it more than once? Okay, that's what I thought.

It turns out that DevTools emulation is nothing like real devices. Network throttling, CPU throttling, they're all kind of a fudge. They're better than nothing, so please use them. Please turn them on by default, please use them all the time (there's a sketch below of how to script that), but they're not the real thing, not even close.

Let me give you a quick example. This is the I/O 2015 website, which was a Polymer 0.5 Progressive Web App. It was super bleeding-edge; it had push notifications, which I think we had launched in Chrome a week or two before; there were bugs. It was amazing. On desktop, it felt super fast on this Wi-Fi connection. We get DOMContentLoaded and a meaningful paint at about 700 milliseconds. Onload isn't far behind, and that starts a nice swooping animation, which is smooth most of the time. Overall, we spend about half a second in script, which is well under our budget for getting a good, responsive experience. And we get interactive content in about four seconds, including that long animation. This is a really great experience on a desktop-class device, which is to say my MacBook Pro.

This is the same site running on the same Wi-Fi network on a Nexus 5X. DOMContentLoaded doesn't show up until two seconds. Onload triggers at the six-second mark, which is where that animation starts. Part of that delay is down to this huge honking script evaluation that locked up the main thread for nearly two full seconds. Script execution balloons up to four seconds in total, and for all of that work we still don't even get smooth animations. Look at all those long frames. Content doesn't become interactive until seven seconds. This is what TTI will tell you in Lighthouse, and this is not acceptable. Ouch.

So what have we learned? Traces from real mobile devices are a harsh master. And when I show folks their apps on real devices, and in fact I do this a lot, most have the same reaction. They're shocked at how slow the median mobile CPU actually is, not the iPhones in their pockets. They don't really understand the difference between desktop and mobile disks and storage. And most of us are super ignorant about how crazy mobile networks are in the real world. I think we theoretically know at some level that they're bad, but that doesn't begin to cover it. They're so bad. It's important to understand the depth of the deficit, though, so we can start to adapt. I've been letting down a bunch of engineers like this for the last year, sort of easing them into this grief curve so that hopefully we get out the other side. And it goes well, but you have to put in the work. Unless we do that work, unless we change the way we're working, the web won't work for the next billion users. Not practically speaking. So something I say in basically every meeting is that the truth is in the trace.
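[Editor's note: throttling doesn't have to be something you remember to click on; it can be scripted. Here's a minimal sketch, assuming the chrome-remote-interface npm package and a Chrome instance started with --remote-debugging-port=9222, that turns on network and CPU throttling before navigating. The latency and throughput numbers are illustrative.]

```js
const CDP = require('chrome-remote-interface');

(async () => {
  const client = await CDP(); // connects to localhost:9222 by default
  const { Network, Emulation, Page } = client;

  await Network.enable();
  await Network.emulateNetworkConditions({
    offline: false,
    latency: 400,                         // ms of round-trip time
    downloadThroughput: (400 * 1024) / 8, // ~400 kbps, in bytes/sec
    uploadThroughput: (400 * 1024) / 8,
  });
  await Emulation.setCPUThrottlingRate({ rate: 4 }); // ~4x CPU slowdown

  await Page.enable();
  await Page.navigate({ url: 'https://example.com' });
})();
```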
And by that I mean DevTools and Chrome tracing attached to real devices. Nothing else cuts the mustard; it doesn't get you there. So this is what's sitting on my desk on a typical day. And aside from the Pixel XL on the right-hand side, all those phones are less than $300 new. The folks at Konga report that the most commonly used phones in their market are in the sub-$100 range, new. I carry most of these phones around in this bag. This bag is in my other bag, with me all the time. And these are some slow phones. I carry them because I have zero faith that anything I trace in Chrome on the desktop is gonna be like the real world unless I've put it on a phone and emulated 3G, and we'll talk about 3G. I don't trust it unless I've done it on real hardware.

And you know what I also don't trust? The marketing numbers from phone vendors. So here are some of the headline specs for the devices that are in my bag. All these devices have flash-based storage. Naively, there's no reason to think that script should run 10 times slower on my Nexus 5X than it did on my MacBook Pro, especially if I looked at these headline numbers: 2.8 gigahertz versus 1.8 gigahertz. That's not a 10x difference. If we just looked at these numbers, we wouldn't really understand what's going on.

It bears repeating: if you think the $700 iPhone in your pocket is what people are going to be adopting at the median in the next couple of years, you're delusional. The average selling price of smartphones is going down, not up, because the next set of people who are going to buy phones are not replacing their current phone; they're buying a phone for the first time. All the rich people already have smartphones. The next billion users aren't buying high-end devices. They're buying at the margin, and that margin is a cheap device. Worldwide, phones are getting slower. So is the average network connection. Your test device needs to represent that reality, so that we don't wind up building what Bruce Lawson has called "the wealthy Western web."

So this is MotionMark. It's a benchmark that Apple put together earlier this year. It tests a bunch of graphics performance, but it's JavaScript-bound in many cases. On apples-to-apples hardware, with Safari and Chrome running the same version of OS X, Chrome ties or beats Safari in most cases. Basically, this is not something that we're actually slow at. And here is the same benchmark on the same version of Chrome on a Nexus 5X. The desktop version is 25 times faster.

On that slower Nexus 5X, I was able to change just one thing and get the MotionMark benchmark running 15% faster. 15% for one change. What the heck did I do? Is it magic? We're all adults here, so I think it's safe to admit that magic isn't actually a thing. Instead, I used a little bit of science, and I added this makeshift ice pack to the bottom of the phone. I got the idea from my colleague Victor, who had been looking into optimizing this benchmark earlier in the year and was seeing massive variance across runs. What the heck is going on here?

Going back to basics, recall that computers are basically just a bunch of wires. Those wires have resistance, and current runs through them, which means they dissipate heat. And we dissipate some more heat every time a transistor flips from an on state to an off state, or vice versa. That same process generates computation, but it also generates waste heat.
So a chip built on the same process, with roughly the same architecture and the same number of transistors, that dissipates more power and turns it into heat, is the chip that does more math. And doing more math is how you go faster. When it comes to computing, power is literally power, and power equals heat.

So these are the guts of a regular desktop-class machine. The chip in there is a slower version of the one in my laptop. The square fins on the top of that thing are the heat sink, and the job of the heat sink is to dissipate the heat coming off of the chip. That heat sink is seated on top of the metallic lid of the chip package, with a little layer of thermal paste between them, so there's no air gap. An air gap would let the chip get so hot that it would break. There's a fan running over the entire box, extracting all the heat that's getting dissipated. And the result is that a desktop-class or high-end laptop chip like this can dissipate something like 60 watts under load. This is what 60 watts looks like. I don't know about you, but I haven't chosen to hold something dissipating 60 watts in my hand more than once. And this is the key reason that mobile phones don't run as fast as desktops or laptops, even if they can include as many transistors or scale up to the same frequencies: the chips in these packages just can't dissipate 60 watts without burning your hand, as Taylor said earlier.

So let's look inside the guts of one of these phones. This is the remains of the Nexus 5X that I used as my daily phone for a couple of years. It gave up the magical blue smoke and stopped booting a couple of weeks ago, so now I get to dissect it. Unlike desktop-class machines, where the GPU and the memory might be on different sections of the board or on different boards entirely, the whole system-on-a-chip lives on the other side of this, and that thing there is the power supply. On the flip side of the same PCB is the entire system-on-a-chip. It's got an aluminum cover like this, sort of a heat spreader. So when we flip it over, this is what you see. There's no thermal paste, no fans; I took the shield off, but that's all. In fact, the CPU module isn't even visible on this board, because it's sitting underneath that Samsung-made RAM chip.

Think about that. To get heat off of this CPU, it has to go through another chip, then through the casing of that chip, then to air, then to a thin aluminum sheet that maybe spreads some of it, and then out through what? The screen? The two layers of polycarbonate plastic in the back? Two separate layers that aren't even connected. And remember that polycarbonate plastic dissipates heat a thousand times less efficiently than aluminum. If you can't evacuate the heat, then you can't really generate a lot of it without the core temperature rising to levels that damage the circuitry. And then the magic blue smoke escapes and your phone stops booting. I wonder what I did to mine.

So chip designers saw this coming. They've been putting dynamic voltage and frequency scaling into chips for more than a decade, and more recently, they've started enabling features that allow OSes to turn off cores entirely. All this reminds me of a paper that I read a few years back, from 2011. If you have some spare time, I recommend it.
While perhaps not intended to be, it reads like a prophecy from half a decade ago about the experience that we're all carrying around in our phones, where a huge percentage of the silicon in our devices isn't actually available to be used, thanks to the power and thermal constraints. And the power thing is real. There aren't any heat sinks in your phone, although I'm pretty sure you wouldn't want a heat sink and a fan in your pocket. But imagine if you could have one. Why don't those exist? Why can't I get a bulky phone? The basic reason is that this battery only contains 10 watt-hours of energy. Think about that in terms of the light bulb: you'd only be able to keep it lit for a couple of minutes if you could drain the battery that quickly, which you can't, because doing that causes batteries to explode, so just don't try. Just FYI, not a fun experiment at home.

This is why mobile phones are slow. We can't dissipate power because we can't carry power. That battery has to deal with all sorts of stuff: the CPU and the GPU and the Wi-Fi radio and the Bluetooth radio and the NFC radio and the cell radio and the screen and the touch digitizer. It has to power all of that and keep you satisfied for a day's worth of use on a single charge, on something that can't keep that light bulb lit for more than a couple of minutes.

To keep from wasting power, there's a lot more complexity in modern phones. Most of them today use what's called a big.LITTLE architecture, which means they try to move work from high-power cores to low-power cores very aggressively. The systems that move that work around are called schedulers. All kernels have schedulers, and the Android ecosystem's schedulers are a bit all over the map. The big thing to understand about them, though, is that your phone probably isn't using symmetric multiprocessing. That is to say, not all of the cores ramp up to the same voltage and frequency, running at the same rate all the time; there are different levels. Most of the phones you probably have use something called a global task scheduler, which moves work around between those cores. The systems in Linux that wind up doing that work management are notoriously hard to tune, and they use all sorts of heuristics to do it. Some do something called touch boosting, which means that when you put your finger down on the glass, they power up the big CPUs in anticipation of you doing some work, like animating something or flinging it around. Some of them have special heuristics for launching applications, so that when you launch something from the home screen, they power up the big cores so that it launches very quickly.

Now, that looks nothing like the web workload. The web workload today looks like tapping on the URL bar and waiting for the network, and maybe those cores got scaled down again, and then your content comes in and then we start processing it. Basically, the web is fundamentally not aligned with the way mobile phones have been optimized to work, because our workloads don't look like their workloads. Lastly, remember that light bulb? The light bulb is why you shouldn't believe any of the numbers that you see in benchmarking. The idea that your mobile phone CPU is as fast as a desktop CPU may be true in the limited case, but not in the common case. You're gonna get scaled, you're gonna get throttled. Things are gonna move to a low-power state as aggressively as possible. That's not how the world works.
We don't keep things spun up the way we do on desktop. So here's that chart again, but I added the details of the CPUs. It looks a lot different now, doesn't it? I'm not even gonna get into the details of the huge differences that come from caches and pipeline depths and in-order versus out-of-order dispatch and memory bandwidth, but all of it matters. The TL;DR is that you actually get what you pay for in a mobile phone. Importantly, the MacBook Pro packs a 100 watt-hour battery. That's the FAA's maximum limit for a single battery that you're allowed to carry onto a plane; it's a design constraint. But as a result of having that much energy available, and because of the heat sink and the fan, my MacBook Pro can keep those four cores spun up under load, and they can dissipate something between 40 and 60 watts. All the phones in the chart are big.LITTLE devices, and that means that many of the cores are powered down most of the time. That Moto G4 at the bottom has eight cores, but if you get more than three of them working for you at any point in time, you are really lucky.

Right, so mobile CPUs aren't exactly what you thought they were. I mean, since when would a hardware vendor ever use an opaque number to mask a major difference in performance, right? And okay, maybe memory pressure and smaller memory footprints on mobile devices don't allow us, on the browser side, to trade away space for speed as aggressively as we do on desktop. But maybe the storage systems are roughly as fast, so maybe we could get something back there. I mean, they're just solid-state flash devices on a Linux OS if you're running Android, right?

Who has heard of the term MLC flash? Okay, I think that's like 10 people, which is about what I expected. MLC is multi-level cell flash: each cell stores multiple bits, and that is how you make storage cheaper. And it's a primary reason why my Nexus 5X gets 400 megabytes a second of read throughput while my MacBook Pro gets two gigabytes a second. In SSDs, the way you get better performance is parallelism. For large reads and large writes, you want to distribute the work to as many different chips as possible, because the latency for getting data to and from each of those chips is roughly constant. So you have a controller in front, with some memory, and it distributes that work out to those chips. Now, physical space is at a premium on mobile devices, but so is power, and so vendors have tried to consolidate those chips as far as possible. That means they're using fewer and fewer, usually one chip, for all that reading and writing, and that means low parallelism, which means low performance. We don't get the benefit my MacBook Pro does of having many chips in a row. MLC flash is also just slower, and file systems haven't really caught up. Basically, you should think of the median mobile phone as having a spinning disk from 2008. That's probably a pretty good parallel. Okay, that's kind of a bummer, right? Spinning metal.

And if mobile disks make you sad, the state of mobile phone networks will make you wish that mobile disks were the problem you actually had. If you haven't, I recommend checking out Ilya Grigorik's High Performance Browser Networking. It's free, you can read it online, and it goes over a huge amount of the total end-to-end network stack that gets bits from your server to your phone. Highly recommended.
If you spend some time with it, you'll get to where I got to, which is that mobile networks hate you. Cell networks are basically kryptonite to the protocols and assumptions that the web was built on. Where TCP and the web were built around the assumption of a relatively stable underlying transport, cell networks gyrate wildly from millisecond to millisecond. Where TCP assumes relatively constant packet loss and constant RTTs, cell networks deliver anything but, transitioning from one network type to another, or one subtype to another, in real time. And where the web's model of hotlinking sub-resources assumes a reliable network for the duration of the page, well, we kind of know how badly that breaks down in practice, don't we? At least those of us who use public transport, or ride in cars, or, you know, basically use phones.

So this paper, and especially its references, is actually a pretty good entry point into why these networks hate you so much. And they do; they hate you a lot. When you dig in, it turns out that what's really killing you is the variance and the volatility in the underlying network substrate. Now, some of you might be thinking, isn't LTE gonna save us? Well, maybe. Here's last year's performance for US LTE users compared to the year before. These networks are actually getting slower. In fact, the variance in mobile networks is so massive that it feels like a farce to call something 2G or LTE. Some of the largest emerging markets have median RTTs north of 400 milliseconds. When you open DevTools and turn on network throttling in the regular 3G mode, it assumes an RTT of 100 milliseconds, which is par for the course in the US but wildly wrong in other markets, especially when you think about how many carriers wind up throttling things even further. The same network type may mean dozens of different things for your users. For the TCP geeks in the house, thinking about what that sort of round-trip time does to the bandwidth-delay product can really bring you down (there's a worked example just below). Channel capacity be damned: that sort of latency eats your transfer speed for breakfast.

But of course, this is mobile, so it's worse than that. As Ilya said to me recently, a 4G user isn't even a 4G user most of the time. Cell radios are magical things, sure. But they try to preserve power too, and they seamlessly transition between high-power and low-power states, across different radio types and different cell locations. They do a ton of work to make sure that we never see what's happening under the covers, but that creates variance. When users try to connect, their phones might be in a low-connectivity or low-power state when they weren't just a minute ago. In those cases, the Radio Resource Control protocol, which Ilya's book goes into in some detail, determines how the connection gets made. For users in very low-power states on 3G connections, it can take seconds just to start the radio handshake at the physical layer so that you can begin transmitting data. If you want to get bits on screen in three seconds, you're in a really tough spot: you can't do DNS, TCP, or TLS, or even start sending those HTTP headers down the wire, until all of that is complete.

Now, consider adding hundreds of kilobytes of JavaScript to the mix. That's not theoretical. The HTTP Archive shows that the top 1,000 sites put almost a megabyte of uncompressed script on their pages today. On those networks, on these CPUs, this is a recipe for disaster.
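[Editor's note: here's a rough back-of-the-envelope sketch, using my own illustrative numbers rather than the talk's slides, of why latency dominates. Even ignoring radio wake-up, DNS, TCP, and TLS, TCP slow start needs multiple round trips to deliver a big script bundle, and at a 400-millisecond RTT those round trips add up fast.]

```js
// How many round trips does TCP slow start need to deliver 1 MB,
// assuming an initial congestion window of 10 segments (~14.6 KB)
// that doubles every round trip, on a 400ms-RTT connection?
const RTT_MS = 400;
const INIT_CWND_BYTES = 10 * 1460; // 10 segments of ~1460 bytes each
const PAYLOAD_BYTES = 1 * 1024 * 1024;

let delivered = 0;
let cwnd = INIT_CWND_BYTES;
let roundTrips = 0;
while (delivered < PAYLOAD_BYTES) {
  delivered += cwnd;
  cwnd *= 2; // slow start doubles the window each RTT
  roundTrips++;
}
console.log(`${roundTrips} round trips, ~${(roundTrips * RTT_MS) / 1000}s`);
// => 7 round trips, ~2.8s of pure latency before the last byte arrives
```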
No wonder users have the pervasive feeling that the mobile web is slow. I think it's only reasonable to be sad about all of this. The tools and techniques that we've brought over from the desktop era really aren't serving us. To make great Progressive Web Apps, we need to do things differently. We need to load less code, we need to load it at the right times, and we need to let the browser do work for us whenever possible. "Use the platform" isn't a nice-to-have; on mobile, it's the only way to go. Sam's gonna go into a lot more detail about the depth of the crisis that we're in, but make no mistake: if you're using one of today's more popular JavaScript frameworks in the most naive way, you are failing by default. There is no sugarcoating this, except for the tiny club of fast-enough-by-default tools like the Polymer App Toolbox and Preact with some good webpack-fu. Today's frameworks are mostly a sign of ignorance, or privilege, or both. The good news is that we can fix the ignorance. When we're armed with data, we can make better choices and avoid those slow-by-default tools.

Now, I've talked to a lot of teams who've gotten a long way into their PWA development story, and they've got very heavy client-side JavaScript apps. Their apps feel like Gmail, basically: they get a loading bar, or something like it, while a ton of script executes, and then they get a fast UI. All the subsequent interactions wind up being fast because they've paid all that cost up front, but many developers find that this is kind of slow. It feels slow to use; once everything is loaded, it's great. But as we saw earlier, JavaScript execution on phones makes this strategy kind of a loser. JavaScript execution is single-threaded. Sure, we can parse and compile off-thread, but we can't use the preload scanner to grab sub-resources if they're embedded in that script, and we can't speculatively build DOM or parse CSS or apply it. When you go with one of these tools that doesn't use the platform well, you bet the farm on a single core on a phone that might be thermally throttled or in a low-power state. Good luck.

So what we're seeing now is something called server-side rendering, aka isomorphic rendering, aka universal JavaScript. I haven't looked in a couple of months; is there a new term for this? I'll take that as a no. Okay. The idea is to run the same JavaScript on the server that you run on the client, and send down a pre-computed snapshot of the HTML that you were going to render. And then you load the gargantuan JavaScript bundle and hope that it all works out. On my MacBook Pro, or an iPhone with a fast connection, or my Pixel, this works out pretty well. But for the vast majority of users, with less expensive phones or less good connections, you get this crazy uncanny valley: when the JavaScript arrives, the main thread locks up all the same, and until it starts executing and finishes, the content might be displayed, but it isn't meaningfully interactive.

Now a debate starts here. Some folks think this counts as loaded, because maybe you can start scrolling the content, because browsers are magic and do threaded scrolling, and scrolling is, technically, an interaction. No. If I can't put my finger down and tap on your UI and have it start responding and doing work for me in under 100 milliseconds, it isn't loaded. It's broken. Okay. All right. What we really want is progressive interactivity, and this is what the PRPL pattern that Sam will go into delivers (there's a sketch of the lazy-loading half of that idea just below).
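[Editor's note: here's a minimal sketch of the lazy-loading side of progressive interactivity via route-based code splitting: ship only the code for the current view and fetch the rest on demand. The module paths and the per-view render() function are hypothetical, not the PRPL pattern's actual code.]

```js
// Map each route to a loader that dynamically imports just that view's code.
const routes = {
  '/':        () => import('./views/home.js'),
  '/cart':    () => import('./views/cart.js'),
  '/product': () => import('./views/product.js'),
};

async function navigate(path) {
  const load = routes[path] || routes['/'];
  const view = await load();    // fetch + compile only this view's module
  view.render(document.body);   // hypothetical per-view render() export
}

// Only the first route's code stands between the user and interactivity;
// everything else arrives on demand (or is precached by a Service Worker).
navigate(location.pathname);
```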
The insight here is that you should only load the code that you need right now, if possible, for the views that you're actually sending to users. And combined with Service Workers and HTTP/2 push, it's possible to achieve this without over-bundling. PRPL and the Polymer App Toolbox represent what's possible once we take that mobile-by-default thing seriously, and it's night and day from where most popular tools are.

So this is the Shop app that you've seen before. The Polymer team released it at I/O; you can visit it right now at shop.polymer-project.org. Here it is running on a desktop browser. We get to interactivity very quickly on Wi-Fi, and it only takes a few hundred milliseconds of script overall. So what? We've seen this story before. But what about mobile? Again, a Nexus 5X on the same Wi-Fi connection. Despite the slower CPU, the app sends down an appropriate amount of script, so we get interactive performance in under two seconds. There's nearly a second and a half of script execution overall, but thanks to the granular use of HTML Imports and HTTP/2 push, in contrast to major bundling, most of the components load with tiny execution slices, which means that the content that's already on screen stays interactive. This is what mobile-first looks like, and it's radically different in a good way.

The other thing that the PRPL pattern adds is the Service Worker. You might be thinking that Service Workers are about handling offline. And while they do allow you to do that, that's not the primary benefit for most end users. Service Workers matter because they let you deliver reliable performance: they can handle the top-level resource and always return something from the cache. You can dramatically improve the performance of your apps when you use Service Workers this way (there's a minimal sketch just below). That huge variability in network conditions evaporates when you've done this. So this is a chart that Eric Bidelman gathered from this year's I/O site, also a Progressive Web App, but built with Polymer 1.0. As you can see from the giant dark green spike, when the Service Worker is active, the distribution of load times moves hard to the left. And that's a good thing. This is very literally what faster looks like.

I'm seeing a lot of teams try to add Service Workers as some sort of transparent pass-through thing, using a network-first approach. Don't do that. Please don't do that. Use the PRPL pattern and make sure that your top-level app shell never depends on the network. If you do that, you can compete with native apps on the experience that you deliver; if you saw Darin's keynote this morning, you saw exactly that with the CNET Tech Today PWA. If you don't do that, though, you will never match their performance. Until recently, it's been sort of difficult to verify in an automated way that your Service Worker is installing and that the rest of your Progressive Web App properties are actually met correctly. You've heard a lot about Lighthouse: please run it from its CLI, put it in your continuous-integration system, and let it tell you how you're doing.

So I think it's safe to say that mobile is much, much harder than we've collectively understood it to be. To make good apps in this environment, we need to change our outlook, our tools, and most of all, our priorities. And the fastest way I know to get in touch with that ground truth is to test on real hardware.
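[Editor's note: for the app-shell idea, here's a minimal sketch of a cache-first Service Worker. The cache name and SHELL_ASSETS list are illustrative, not the Shop app's actual code: the shell is precached at install time and always answered from the cache, so first paint never waits on the network.]

```js
// sw.js — cache-first app shell
const CACHE = 'app-shell-v1';
const SHELL_ASSETS = ['/', '/app.js', '/styles.css'];

self.addEventListener('install', (event) => {
  // Precache the shell so it's always available, even offline.
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(SHELL_ASSETS))
  );
});

self.addEventListener('fetch', (event) => {
  // Answer from the cache first; fall back to the network for misses.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```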
So please, if you don't already have a circa-2014-ish Android phone, go ahead and buy something like a Moto G4. If you use one of these with chrome://inspect and DevTools, you'll find yourself in touch with how it really feels to be at the median. If you can, get something worse; this is an Android One from last year. You probably can't buy one, but get something worse if you can. And if you can't afford any of those, please use webpagetest.org to select from its list of real mobile devices, sitting in a rack at your disposal, to test your URLs. Whatever you do, implement as much of the PRPL pattern as you can; Sam will fill you in on the details, so stay tuned for that. And lastly, Lighthouse and Chrome Telemetry are potent weapons for making sure you don't regress, for making sure that you keep doing the right thing.

I want to apologize for being a bit of a downer today. Usually I'm telling you about how good PWAs are and how great the experiences you can deliver are, and that's all true. And there is good news, and it's that modern web technology makes it possible to build truly amazing experiences. But it will require ditching or radically reworking the way that we're using these slow-by-default tools. Addy's got a whole talk tomorrow about what kind of elbow grease you really do have to put in if you've bought into one of today's major frameworks. But now, I think, you know that the challenge is much larger than you probably thought it was. And now that you know, I'm actually very confident that the folks in this room and on the livestream are going to internalize this and use it to make really great experiences. So thank you.