Hi, I'm Demian, a web ecosystem consultant at Google. In this talk, we'll explore how different companies are building fast and resilient experiences on the web. We'll use the Workbox libraries to show how to implement four different patterns in your site, but all of these features can also be implemented by writing the service worker code manually.

Our first pattern is called resilient search experiences, and it can be applied to any site that offers some type of search functionality. When a user searches for a topic in Google Search in Chrome on Android devices and loses their connection, instead of the standard network error page they are presented with a custom offline page asking if they want to opt in to notifications. If the user grants the permission, then once the connection is back they will receive a web push notification informing them that the search result is ready. Clicking on the notification takes the user to the results screen. This is a great way of keeping the user engaged while letting them complete the task they set out to do.

At the heart of this implementation is the Background Sync API, which lets you defer actions until the user has stable connectivity. In Workbox, this can be implemented very easily. First, you define a network-only caching strategy for the search endpoint, so these requests always go to the network. Then you pass a background sync plugin to take care of the offline scenarios. Let's see what the plugin looks like. The Workbox background sync plugin receives the name of a queue in which to store failed requests so they can be retried later. The plugin also receives an onSync callback, which will be called once the connection is recovered. Inside the callback, you can retrieve any failed requests, process them, and inform the user of the result, for example with a notification.

Before moving to the next pattern, let's take a look at an important detail of this implementation.
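As a rough sketch of what was just described: the replay logic below follows the `workbox-background-sync` queue interface, while the route wiring is shown in comments. The `/search` endpoint, queue name, and notification text are illustrative assumptions, not taken from the talk.

```javascript
// onSync callback: replay each queued search request, then notify the user.
// `queue` follows the workbox-background-sync Queue interface
// (shiftRequest / unshiftRequest).
async function replayQueuedSearches({queue}) {
  let entry;
  while ((entry = await queue.shiftRequest())) {
    try {
      await fetch(entry.request.clone());
    } catch (err) {
      // Put the request back so a later sync event can retry it.
      await queue.unshiftRequest(entry);
      throw err;
    }
  }
  // All queued searches were replayed; tell the user (assumes the
  // notification permission was granted on the offline page).
  await self.registration.showNotification('Your search results are ready!');
}

// Service worker wiring (requires the Workbox modules):
//
// import {registerRoute} from 'workbox-routing';
// import {NetworkOnly} from 'workbox-strategies';
// import {BackgroundSyncPlugin} from 'workbox-background-sync';
//
// registerRoute(
//   ({url}) => url.pathname === '/search',   // hypothetical search endpoint
//   new NetworkOnly({
//     plugins: [
//       new BackgroundSyncPlugin('search-queue', {onSync: replayQueuedSearches}),
//     ],
//   })
// );
```

The network-only strategy guarantees fresh results when the user is online, while the plugin quietly queues anything that fails so nothing is lost offline.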
You might have noticed that the notification permission is requested when the user loses their connection. At that point, the user understands the value of the service and knows that the notification will deliver a timely and relevant update. This is an example of a good implementation of the web push permission request.

Our next pattern is adaptive loading with service workers, and it allows you to provide a fast experience regardless of the network and the device. Terra is one of the biggest media sites in Brazil, with a large user base on both slow and fast connections. To provide a more reliable experience to all their users, they combine service workers with the Network Information API to deliver lower-quality images to users on 2G or 3G connections. Terra took this strategy to the next level: when users are navigating on slow connections, they deliver the AMP version of articles, which is more lightweight and tends to perform better under those conditions.

To implement this functionality in Workbox, you can first apply a cache-first strategy to images. Then you pass an expiration plugin to limit the number of entries in the cache. You can extend this strategy by creating a custom plugin, which we'll call the adaptive loading plugin. Inside the plugin, you listen for the requestWillFetch callback, which is called before the request is made so you can apply a transformation to it. Inside the callback, you check the connection type; if it's a slow connection, you create a new URL pointing to a lower-quality image. Finally, you create a new request based on that URL and fetch the most appropriate image file for those conditions. If you are using Cloudinary, there's a Workbox Cloudinary plugin that makes this feature even easier to implement; check it out.

As you might have noticed, the first two patterns have some things in common: we have combined the functionality of runtime caching strategies with plugins.
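A minimal sketch of such a plugin is shown below. The `quality` query parameter is a hypothetical convention for an image server (your CDN's API will differ), and the route wiring with the expiration plugin is shown in comments.

```javascript
// Decide which image URL to fetch for a given connection type, as reported
// by the Network Information API (navigator.connection.effectiveType).
function imageUrlFor(originalUrl, effectiveType) {
  const url = new URL(originalUrl);
  if (['slow-2g', '2g', '3g'].includes(effectiveType)) {
    // Hypothetical convention: the image server honors a quality parameter.
    url.searchParams.set('quality', 'low');
  }
  return url.toString();
}

// Custom Workbox plugin: requestWillFetch runs before the request is made,
// so it can swap in a lower-quality image URL on slow connections.
const adaptiveLoadingPlugin = {
  requestWillFetch: async ({request}) => {
    const connection = self.navigator.connection;
    const effectiveType = connection ? connection.effectiveType : '4g';
    return new Request(imageUrlFor(request.url, effectiveType));
  },
};

// Route wiring (requires the Workbox modules):
//
// import {registerRoute} from 'workbox-routing';
// import {CacheFirst} from 'workbox-strategies';
// import {ExpirationPlugin} from 'workbox-expiration';
//
// registerRoute(
//   ({request}) => request.destination === 'image',
//   new CacheFirst({
//     cacheName: 'images',
//     plugins: [new ExpirationPlugin({maxEntries: 50}), adaptiveLoadingPlugin],
//   })
// );
```

Keeping the URL transformation in its own function makes it easy to unit test the adaptive logic separately from the service worker.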
This shows one of the benefits of using Workbox: it allows you to extend the standard features in a very easy way.

Let's move now to the second part of the talk. Our third pattern is called instant navigation experiences, and it's useful for any type of site. Performing a task on a website might involve several steps, each of them meaning a navigation request. Navigation requests, that is, requests for HTML pages, are normally satisfied via the network. This means using a Cache-Control header of no-cache, or a max-age of zero, to ensure that the response is reasonably fresh. But having to go to the network means that each navigation might be slow, or at least not reliably fast.

To speed up these navigations, you can apply a technique called prefetching. In this example, Mercado Libre, the largest e-commerce site in Latin America, dynamically injects link prefetch tags in listing pages to accelerate parts of the flow. But prefetching is not only useful for e-commerce sites. The Italian sports portal Virgilio Sport uses service workers to prefetch the most popular posts that appear on the home page before the user even clicks on them. As a result, load times for navigations to articles have improved by 78%, and the number of article impressions has increased by 45%.

Prefetching is commonly implemented by using a resource hint in your pages: link rel="prefetch". The tag tells the browser to fetch a resource at the lowest priority and keep it in the HTTP cache for five minutes. On the service worker side, you can intercept requests for HTML pages so that you can extend the lifetime of the prefetched resource beyond the five-minute window. For HTML pages, a stale-while-revalidate strategy is a good option: it responds quickly from the cache while simultaneously keeping it up to date.

Before moving to the final pattern, there's a slight variation of this technique.
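In Workbox, intercepting HTML navigations with a stale-while-revalidate strategy might look like the following sketch; the cache name is an illustrative choice, and the route wiring is shown in comments.

```javascript
// Match navigation requests, i.e. requests for HTML pages.
const isNavigationRequest = ({request}) => request.mode === 'navigate';

// Route wiring (requires the Workbox modules): respond from the cache
// immediately while revalidating in the background, so pages prefetched
// via <link rel="prefetch"> stay usable beyond the five-minute window.
//
// import {registerRoute} from 'workbox-routing';
// import {StaleWhileRevalidate} from 'workbox-strategies';
//
// registerRoute(
//   isNavigationRequest,
//   new StaleWhileRevalidate({cacheName: 'pages'})
// );
```

Matching on `request.mode === 'navigate'` catches only top-level page loads, so subresources like images and scripts keep whatever strategies you assigned them elsewhere.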
Instead of using resource hints in the page, some developers prefer to delegate prefetching completely to the service worker. For that, you need to implement a page-to-service-worker communication technique. The Workbox window package allows you to do that, so if you are interested in following that route, you can check it out.

We have reached the last part of our talk. Our final pattern is app shell UX with service workers, and it's useful if you want to make multi-page apps feel like single-page applications. DEV has become one of the favorite platforms for software developers. The architecture of their site is a multi-page app. Their team was interested in the benefits of the app shell model, but didn't want to incur a major architectural change, so let's see what they did. First, they created partials for the header and the footer of the home page. These assets are added to the cache in the service worker's install event, which is commonly referred to as precaching. The content of the page is the only part that's actually fetched from the network when navigating.

But the key ingredient of this solution is the usage of streaming. Thanks to that, bytes can start being painted on the screen before the full response is ready. In Workbox, you can start by creating a regular expression to match requests for pages. Then you pass an array of streaming sources to compose. For the header and the footer, you can use a cache-first strategy; for the content, a network-first strategy. All the streaming sources are composed by Workbox and sent to the client. Thanks to streams, the header can start being painted as soon as it's picked up from the cache, without having to wait for the full response.

We have seen four advanced patterns for speed and resilience. As a complement to this talk, we'll be uploading guides and code labs so you can see them in more detail. Please check web.dev/progressive-web-apps and web.dev/reliable. Thanks for watching.
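As a closing reference, the streaming composition from the final pattern might be sketched as follows. The partial URLs and the route pattern are hypothetical conventions (not DEV's actual endpoints), and the `workbox-streams` wiring is shown in comments.

```javascript
// Map a page URL to the server endpoint that renders only its content,
// without header and footer (a hypothetical partials convention).
const contentUrlFor = (pathname) => `/partials/content${pathname}`;

// Streaming composition (requires the Workbox modules): the header and
// footer partials come cache-first (they were precached at install time),
// the content comes network-first, and workbox-streams stitches the three
// responses together so the header can be painted immediately.
//
// import {registerRoute} from 'workbox-routing';
// import {CacheFirst, NetworkFirst} from 'workbox-strategies';
// import {strategy as composeStreams} from 'workbox-streams';
//
// const shell = new CacheFirst({cacheName: 'app-shell'});
// const content = new NetworkFirst({cacheName: 'content'});
//
// registerRoute(
//   /\/articles\/.+/,                                  // pages to stream
//   composeStreams([
//     ({event}) => shell.handle({event, request: '/partials/header.html'}),
//     ({event, url}) => content.handle({event, request: contentUrlFor(url.pathname)}),
//     ({event}) => shell.handle({event, request: '/partials/footer.html'}),
//   ])
// );
```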