So I'll give everybody a minute to sort of read this. I could read it out loud, but I think that would take the fun out of it. Hello, everybody. My name is Houssein Djirdeh, and today I want to talk about progressive web patterns, specifically the PRPL pattern: what it is and how you can use it. Simply put, the PRPL pattern is not a specific tool or technology. It's a set of techniques that you can apply to your web page to make it load fast and load reliably. First and foremost, you think about pushing your most critical assets down to your users, and you do this in order to render your initial route as soon as possible. The next thing to think about is pre-caching all of your remaining resources, as well as lazy loading all of your remaining routes. Now, before we go into the details of how exactly we can apply these techniques to make the PRPL pattern work, I want to take a step back and talk about mobile for a bit. I think it's safe to say that the vast majority of us in this room, and everybody watching on the live stream, own a mobile device of some sort. What we do on our mobile devices can vary day to day, but year after year, we've been spending a lot more time on our smartphones and tablets. Here's some data from comScore that compares unique visitors between native mobile applications and mobile web pages. There's no denying that, as consumers, we tend to spend a lot more time in native apps than in the mobile browser. But you can see that, in terms of unique visitors, by June 2016 we were about three times more likely to land on a mobile web page than on a native app we haven't used before. And that's huge. It goes to show that we spend a lot of time on the web, even on our mobile devices. So how can we make sure that users who visit our web pages on their mobile phones, or any device, have a great experience?
Well, there are a few ways, and we'll go through some of them in this talk. But first, let's talk about the web for a bit. When I open the browser on my mobile phone, type something into the URL bar, and press Enter, a request is made to a server somewhere, and after a certain period of time the server responds with the content my browser needs. Usually this takes the shape of an HTML document. The next thing the browser needs to do is parse the contents of that file and find out what other resources it needs. For every external resource it finds, it makes another request and gets a response for it. The web works in this request-and-response pattern. These external resources can be CSS files for styling, JavaScript for dynamic content, or static images, for example. You can see that with a typical web page, multiple round trips are usually needed to get the content the user needs to see. So let's say this is the HTML that was retrieved on the initial request. You can see that we have a style sheet being referenced, as well as a JavaScript file. One thing we can do to help with the fact that multiple requests are made is to leverage something called preload. Preload has a syntax a lot of us are already familiar with: it's the link HTML tag, and we define the location of the file in an href attribute. But we also have a rel attribute with a value of preload, as well as an as attribute that defines the type of file we want to load. In this example we're trying to load script.js, which is a JavaScript file, so we say as="script". So what does preload actually do?
Preload lets us tell the browser that certain resources are so critical that it should start downloading them immediately, and preloaded resources don't block the page's onload event. The reason this can be so useful is that it lets the browser prioritize critical resources and fetch them as early as possible. Now, you can use preload for resources referenced right in the head of your HTML document, like we did here, but you'll get the most bang for your buck using it for resources that would otherwise be discovered much later. An example could be a font file tucked away deep in one of your CSS files. So you can use preload to preemptively fetch JavaScript files and style sheets, and again, it's the as attribute that defines what type of file you want to load. You can even use preload to preemptively fetch images, fonts, audio, video, and more. Something else we can leverage is called prefetch, and it works very similarly to preload. The difference is that prefetch is for resources on a different navigation route than the one the user is currently on. In other words, we're hinting to the browser: these are resources the user might need in the future, so download the critical resources first and fetch these later, at low priority. If you're building a simple static website, adding link preload or prefetch tags to the head of your HTML document is pretty straightforward. But if you're building a web application using a module bundler, things get a little trickier. If you happen to be using webpack, there's a plugin that lets you do this, built by Addy Osmani from the Google Chrome team. It's called preload-webpack-plugin.
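Putting the two hints side by side, a minimal sketch might look like this (all file names here are illustrative, not from the talk's slides):

```html
<!-- Preload: critical resources for the CURRENT route, fetched early
     and at high priority. The as attribute tells the browser what
     kind of resource it is so it can prioritize correctly. -->
<link rel="preload" href="/styles/main.css" as="style">
<link rel="preload" href="/scripts/script.js" as="script">
<!-- A font discovered late, deep inside a CSS file, is a classic
     preload win. Fonts require the crossorigin attribute. -->
<link rel="preload" href="/fonts/title.woff2" as="font" type="font/woff2" crossorigin>

<!-- Prefetch: a low-priority hint for a resource the user might need
     on a FUTURE navigation, fetched only after critical work is done. -->
<link rel="prefetch" href="/scripts/account-page.js">
```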
What it lets you do is add a few lines to your webpack configuration file to inject preload and prefetch tags for some or all of your bundles or chunks. So can we use preload and prefetch right now? Fortunately, the latest versions of Chrome, Safari, and Opera all support preload. Firefox, as of now, has partial support, where only cacheable resources can be preloaded, so you can still use it for JavaScript files, style sheets, and so on. In Edge, it's currently under consideration. Another important topic I want to discuss here is HTTP/2 Server Push. HTTP/2 aims to provide a number of performance improvements over HTTP/1, but the one we're going to focus on here is Server Push. Let's revisit our original scenario. When I open a browser, type something into the URL bar, and press Enter, that initial request fetches the HTML content. What HTTP/2 Server Push lets you do is send some critical assets down to the browser at the same time as the HTML file. In other words, we're telling the server: give the browser these files before the browser even knows it needs them. The reason this can be so useful is that it cuts round trips to the server. One thing I didn't mention previously is that instead of using link HTML tags, you can also use Link HTTP headers, which likewise let you declare preload for some of your resources. The format is much the same: we define the file location, rel=preload, as well as an as attribute for the type of file we're trying to load. Now, the majority of servers that support H2 Server Push, when they see a Link header, will automatically initiate pushing those assets down. A good example is Firebase, which makes it very easy to do.
There's a single firebase.json file where you put your configuration, and once you add Link headers for some of your resources, it will automatically initiate pushing those assets down the wire every time you load the page. If you want to use Link headers but not rely on Server Push, only link preload, you can add a nopush attribute, which is essentially the exact same thing as just using link HTML tags. Here's some data from Jeremy Wagner. He wrote a very comprehensive guide to H2 Server Push, which I definitely suggest reading if you're interested. What he did was run a number of tests on a single web page; I believe he ran each test 25 times with a slightly different set of variables. In the middle, the tallest bar, you can see the page load times for his web page over HTTP/1 without any enhancements. To the right of that, you can see how load time drops when he inlines his CSS, and at the far right, what happens when he inlines everything. We know inlining assets can cut page load times because you're cutting trips to the server: the browser doesn't need to go to the server to get those assets. At the very far left, you can see what happens when he just switches to HTTP/2 but doesn't do anything else, and he's already noticing some performance gains; H2 provides wins from a few things it does outside of Server Push. To the right of that, you can see what happens when he pushes only his CSS file, and next to that, what happens when he pushes everything. Notice that the average page load time is actually slightly lower when he pushes only CSS than when he pushes everything. So with HTTP/2, you can run into the problem of pushing too much. There's no right number of assets you should be pushing.
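A sketch of what that Firebase Hosting configuration could look like, assuming a site built into a dist/ folder (the asset paths are illustrative). Note the nopush keyword on the second resource, which keeps the preload hint but opts that file out of Server Push:

```json
{
  "hosting": {
    "public": "dist",
    "headers": [
      {
        "source": "/",
        "headers": [
          {
            "key": "Link",
            "value": "</styles/main.css>; rel=preload; as=style, </scripts/script.js>; rel=preload; as=script; nopush"
          }
        ]
      }
    ]
  }
}
```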
It really depends on how many assets you're loading as well as the type of application you're building. But keep in mind that this is still slightly experimental. You don't want to try pushing everything down the wire, because the browser is going to try to fetch all those assets at the same time. You'll most likely get better performance if you push just your high-priority resources. Another mistake that's easy to make is pushing unused assets. If you tell the server to push down a CSS file the browser doesn't need at all, you're wasting bandwidth, and every single time the user loads the page, the browser gets this unnecessary file. And what about the cache? Every time I load a page, if the server is pushing something down to the browser, what if the browser has already cached that resource? Does it need the server to send it down again and again? It kind of doesn't. We'll talk about a solution to this in a bit. So whether you're using link preload, prefetch, H2 push, or a combination, it's important to think about how you can send critical resources down to your user early, because you can cut page load times. Now let's switch gears and talk about something I think a lot of us are familiar with: when we use Chrome, or any browser, without a proper internet connection, we see that little error page, and the web page can't load. One thing that can help with this is something called service workers. A service worker is a script that your browser runs in the background, separate from your web page. So how do you add a service worker? One, you can create the file yourself and write all the logic. Two, you can use a library that does it for you. One such library is called Workbox, and it's also built by the Google Chrome team.
If you want to use Workbox, you can install the CLI globally and then run a single command, generate:sw, to create your service worker file. What happens is that it asks you a few questions: What is the root of your web app? What file types would you like to cache (all of them, or just some)? What should the path of your new service worker file be? And the very last question: do you want to save these settings to a configuration file? If you say yes to that last question, you won't be asked the configuration questions again, because it knows to look at that file. Running the command creates a service worker file wherever you asked for it in your project, but we still need to tell the browser to register that file. One way to do that is to add a simple script tag to your index.html. The first thing you can do is check whether service workers are actually supported in the browser. If they are, make sure to register after the window's load event; you don't want the service worker registration to contend with the page loading. Registering a service worker is as simple as calling navigator.serviceWorker.register with the location of your file, and if you like, you can log output saying whether it succeeded or failed. So we've talked about how to generate a service worker using Workbox and how to register it, but we haven't really touched on what a service worker does. The first thing to think about is that a service worker lets you pre-cache your application shell. Like the name suggests, your application shell is essentially the shell of your user interface: the HTML, CSS, and JavaScript that make up your header, your footer, loading icons, and anything else that isn't actual content.
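The registration snippet described above could be sketched like this (the file name sw.js is an assumption; use whatever path you told Workbox to generate):

```javascript
// Minimal service worker registration sketch.
function registerServiceWorker(swUrl) {
  // Feature-detect first: not every browser supports service workers yet,
  // and this also makes the snippet a safe no-op outside the browser.
  if (typeof navigator === 'undefined' || !('serviceWorker' in navigator)) {
    return null; // unsupported environment
  }
  // Wait for the window's load event so registration doesn't contend
  // with loading the page itself.
  window.addEventListener('load', function () {
    navigator.serviceWorker.register(swUrl)
      .then(function (reg) {
        console.log('Service worker registered, scope:', reg.scope);
      })
      .catch(function (err) {
        console.error('Service worker registration failed:', err);
      });
  });
  return true;
}

registerServiceWorker('/sw.js');
```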
The way a service worker works is that it acts like a middleman between your browser and the server. When your app loads for the very first time, those assets are retrieved over the network, but your service worker now knows to cache those files, so the next time your user loads the application it doesn't have to go all the way to the network. The service worker acts like a middleman and says: hey, I have these assets, here you go. Remember how the very last question when you ran that generate command was whether you want to save these settings to a file? This is the sort of file you get. It's a simple exported object. globDirectory is essentially where your built project lives. globPatterns defines what types of files you want to cache. swDest is where you want to save your service worker file; you most likely want it where your project is being deployed. navigateFallback is useful if you're building a single-page application, to say where your index.html is. Using something like Workbox can be so useful because you run a single command and it creates a service worker file. But it can still pose a bit of an issue if we need to run that command every single time we make a change. If we add a file to our project, or remove an image, or anything, we need to make sure our service worker is up to date so it caches the correct resources. A better approach is to integrate it into your build process: instead of installing it globally, install it as a dev dependency and add it as part of your build step. Now you know that every time your project is rebuilt, a fresh service worker file is generated. And if you happen to be using webpack, there's a Workbox plugin that lets you do the same thing. So we've talked about how a service worker can cache your application shell, but there's something else it can do as well.
It can also cache dynamic content. Dynamic content is also information retrieved from a server, but this time it's information that can actually change. Let's go back to the configuration file we just saw. If you add a runtimeCaching array, you can define a URL pattern, and whenever a request matching that pattern is made, for example when an API endpoint is hit, you can tell your service worker to store the data being returned in a specific way. You can see that we have a handler attribute set to networkFirst. What networkFirst does is tell the service worker to always get the content from the network and give that to your browser, but also cache the latest data every single time. If the network happens to fail, say your server is down or you have a bad internet connection, your service worker will serve its cached information instead, rather than you seeing no data at all. Now, networkFirst isn't the only handler you can use. There's also a cacheFirst strategy, which is essentially the opposite: you always want data from your service worker cache first, and only if that fails do you rely on the network. There's also the fastest approach, where you don't really care where your data comes from, the service worker cache or the network; you just want the data from wherever is fastest. It makes sense to use cacheFirst and fastest for applications where you know the data won't change; in other words, you want data delivered to your user as fast as possible and you don't care where it comes from. cacheOnly works where you only ever want data from the service worker cache, and if that fails, you get no data at all. And networkOnly works where you only get data from the network and never rely on the service worker.
Now, I have a hard time understanding why someone would set up a service worker and then use networkOnly, because it's essentially the same as not using one at all, but I'm sure people have reasons I don't know about yet. So again, using a service worker allows for two things: pre-caching your application shell as well as your dynamic content. The combination allows for, one, faster repeat visits. When your application shell is cached, the assets that make up most of your UI don't have to be retrieved from the server every single time, so when your user reloads your app, loading is a lot faster because the service worker is feeding those files to the browser. It also allows for offline support: when data isn't being fetched correctly from your server, your service worker can provide older data to your browser. So can we use service workers right now? If you're using Chrome, Firefox, or Opera, you can. The good news is that service worker support is currently in development in both Edge and Safari. It will definitely take some time for full service worker functionality to land in both, but it's still extremely promising, because now we know all major browsers will eventually support service workers. Remember how we talked about H2 push earlier, and how it's not cache-aware? This is where service workers can be useful, because a service worker stores information in a separate cache from the browser's HTTP cache. Let's revisit our original scenario one more time. I load an app for the very first time and I have Server Push enabled. The server pushes some critical assets down to the user, and you get a really fast first page load. Your service worker is acting as a middleman here, so it knows to cache those resources if you ask it to.
Now, if your user loads your app a second time, instead of the request being sent over the network to the server, the service worker knows it has the resource and will provide it instead. So not only do we get a very fast first page load, we also get fast repeat visits, and repeat visits won't trigger Server Push when we don't need it. So whether you use a tool like Workbox, another library, or you create the service worker file yourself, pre-caching your resources can be extremely useful. Now let's talk about bundles for a bit. I found a handy gist somebody set up: they created Hello World programs in a number of different JavaScript frameworks and libraries, and the output bundle sizes ranged from four kilobytes all the way up to 200 kilobytes. And I know that, depending on what tool you're using, there are reasons for a larger footprint. But it's something to think about, because as applications have become so much more client-heavy, we're starting to see heavier and larger loads. One thing we can do to help with that is, instead of sending the entire JavaScript bundle to your user on initial load, send them only what they need for that specific part of the app. For example, let's say I open a social media application and I'm on the login page. Do I need to retrieve all the JavaScript that makes up the entire app, or only what I actually need? Once I make it to the account page, why can't I get the JavaScript for that page then? The concept of giving the user what they need for a specific route is known as lazy loading. If you're using something like Angular, lazy loading is actually built into the routing system. When you define your routes, you can create separate modules, and by defining a loadChildren attribute you can specify a separate module specifically for that route.
What Angular now knows is that when the user goes to that specific route, it only fetches the bundle that makes up that module and that specific page. So you know you're not giving your user everything at the very beginning. If you're using React, for example, you can use something like React Loadable, which lets you do something very similar: it asynchronously loads specific components, and the bundles that make up those components, only when the user navigates to that page. Now, if you're thinking of applying code splitting and lazy loading in your application, you most likely want to keep an eye on your bundle, and there are a few tools that make that easy. One is Webpack Bundle Analyzer. With a simple command, you get a very interesting chart that shows how parts of your bundle compare to one another. You can easily take a look, see which parts are larger and which are smaller, and get an idea of what you need and what you don't. Another very useful tool is bundlesize, and the way bundlesize works is that it fits right into your CI workflow. You can add a threshold, and every time someone puts up a pull request or a commit, you see a message saying: hey, you're crossing that threshold, or: hey, your bundle is slightly larger than master. Whatever tools you use to keep an eye on your bundles, it's still important to think about how you can lazy load your routes, to give users what they need only when they reach that specific page. So again, the whole concept of PRPL is that it's not a specific thing. It's just an acronym for a set of techniques that you can apply to your web application.
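A sketch of lazy-loaded routes using the Angular 4-era syntax described above, where loadChildren takes a 'path#ModuleName' string so the account module's bundle is only fetched when the user navigates to /account. LoginComponent and the module path are hypothetical, and newer Angular versions use a dynamic import() callback instead of a string:

```javascript
// Stub standing in for a real, eagerly loaded Angular component.
class LoginComponent {}

const routes = [
  // The login page ships in the initial bundle.
  { path: 'login', component: LoginComponent },
  // The account page's module is split into its own bundle and only
  // downloaded when the user actually navigates there.
  { path: 'account', loadChildren: './account/account.module#AccountModule' }
];

module.exports = routes;
```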
You can use any tool you like; it's a methodology to think about in order to allow for faster initial page loads, as well as more reliable loads and faster repeat visits. Now, I know that a lot of us here who build applications most likely use a tool, library, or framework to build our front end. One important thing to think about when you're trying to improve loading times, whatever tool you're building with, is two scenarios. One is first paint: the time it takes for the user to see the first meaningful content. The next is time to interactive: essentially the time it takes for the main thread to settle down so the user can actually interact with the page. Now, I've seen stats online saying the average time to interactive for mobile web applications is 16 seconds, and another saying it's 19 seconds, and the more stats I see, the crazier it gets. I feel like if it takes more than five seconds for your web application to load, keep in mind that a lot of your users might give up. So it's something to think about when you're building applications for users with different devices and different network connections. Now, like I mentioned, a lot of us build things with different tools and frameworks, and these frameworks let us do a lot more on the client side than we ever could before. We're building more complex logic, and it's leading to larger bundles and slower page loads. But a lot of these frameworks are also trying to improve things in terms of progressive enhancement. If you're using React, for example, just a few months ago they integrated service worker support right into Create React App. Angular, like I mentioned, has lazy loading built into its routing system.
They've also taken steps with Angular 4 to reduce the bundle size, and they're building their own service worker library, so eventually you won't have to rely on something like Workbox; you can use their built-in system. If you want to use Preact, it lets you build UIs with a very low footprint, and the Preact CLI, which was also recently released, lets you run a single command and get a Preact app up and running that is fully progressive and has the PRPL pattern baked in. Vue and Svelte are two other options that let you build complex applications with a relatively low footprint. And Polymer, for people who haven't heard of it before, is a library that makes it easier to use web components. The Polymer team was actually the first to coin the term PRPL, and their starter kit, the App Toolbox, has the PRPL pattern baked in. So I hope you enjoyed this talk as much as I enjoyed giving it. My name is Houssein Djirdeh, and I'll be happy to answer any questions. Thank you, Houssein. That was fantastic. Thank you. I have a few selected questions here from our wonderful audience. Question one: one thing native apps have going for them is performance. Although you've explained how to get faster startup times, your web app is still running on top of JS, on top of a browser, on top of an OS. Won't that be an issue for certain types of apps? That's 100% correct. One thing about progressive web applications, or applying progressive enhancements in general, is that a lot of people make the comparison to native apps, and it really depends on the type of application you're trying to build. If you need native performance, if you need iOS and Android functionality, then that's perfectly fine, and you should build a native app.
But if you're building an application on the web, you can add a lot of these enhancements and make sure users who use it on their mobile device, as well as other devices, get a good experience. So yes, you can't really compare it with native performance, but it lets you use tools in the browser to make your web application faster and better. Awesome. Speaking of which, do you think the progressive web will replace native? Short answer: no. Long answer: I don't think it needs to. A lot of people I've talked to who think about progressive web apps see them as a replacement, and it always depends on your business use case. If you're thinking of building a web app or a native app and you're not entirely sure which one to build first, and you feel that progressive web apps can do a lot of what you need, then sure, in your context, it works. But if you're building something whose users need native functionality, then you have to build a native app. I think one reason people assume progressive web apps can replace native is that there are a lot of things progressive web apps offer that I haven't talked about, one being adding an icon to your home screen, so the app feels more native and you can launch it directly with a splash screen and so forth. But it's still a web app, and it's not trying to replace native; it's just letting you make web apps feel a lot more user friendly. Fantastic. How would you handle dynamic data with service workers? For example, changing prices in a web shop? Okay, so it depends. With changing prices in a web shop, I'll assume that data is coming from an external API. Again, if you're using a service worker, there are different handlers, and there are different use cases for when you want to cache dynamic data.
If you feel that prices, in this context, aren't going to change much, you can cache that information, so when the user reloads the app without a network, they see the older price; that can work. If you feel this is something the user shouldn't see unless it's 100% accurate, then you probably don't want to cache that information. Awesome. From what you've seen, what is the biggest offender in terms of mobile web slowness? There are definitely a lot of factors. The thing I've noticed the most, and it doesn't even tie into lazy loading or code splitting, is that a lot of applications I've worked on include a lot of dependencies they don't need. So even before you think about trying lazy loading and cutting down what the user gets on different routes, take a step back and ask: do I need everything that's already in the app, and can I cut out what's not being used? That can be a huge factor. For me, that's the one thing I've seen the most. Nice. I have another question here: do service workers have an impact on SEO? Do you know? I don't know if there's a specific impact. That's a very good question, actually, and now I'm thinking about it. A lot of the time, service workers are used for single-page applications, and a lot of single-page applications have issues with SEO on their own; I don't see service workers affecting that at all. If you're using server-side rendering to render your content, that could improve SEO, but whether or not you have a service worker caching some data and providing it to the user, I don't see that having an effect. Awesome. Thank you so much, Houssein. Can we get another round of applause? Thank you.