Now, caching resources locally can be great for performance and network resilience, but when your app makes a request, how do you decide whether to go to the network first, or the cache, or maybe to ignore the network or the cache every time? I'm Sam Dutton, and I'm going to show you how to develop the best strategy for different types of resources.

When service workers were first introduced, a set of common caching strategies emerged. A caching strategy is a pattern that determines how a service worker generates a response after receiving a fetch event. In this video, I'll take you through some examples, and at the end show you how to use Workbox to implement these strategies really easily.

The cache-only strategy, as the name says, only ever uses the cache. This might seem a bit odd, but it's ideal for anything you consider static to a particular version of your site. You should have cached these assets in the install event, so you can depend on them being there. The code is pretty simple: you only respond to requests with matches from the cache. Of course, in a real-world application, you'd decide in code which requests get a cache-only response.

The opposite approach is network-only. This is right for anything that has no offline equivalent, such as analytics pings or non-GET requests that require a dynamic response from the server. The code here essentially mimics the default browser behavior without a service worker. Again, in a real-world application, you'd use code to choose which requests are network-only.

The cache-first strategy is also known as cache falling back to the network: the service worker goes to the cache first, and if the resource isn't found there, it goes out to the network. If you're building offline-first, this is how you'll handle the majority of requests; other patterns will be exceptions based on the incoming request.
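Here's a minimal sketch of these first three strategies, written as standalone handler functions for clarity (the function names and the registration shown in the comment are illustrative; in a real service worker you'd pick a strategy per request inside your fetch handler):

```javascript
// Sketches of cache-only, network-only, and cache-first.
// In a service worker, each would be wired up in a fetch handler, e.g.:
//   self.addEventListener('fetch', event => {
//     event.respondWith(cacheFirst(event.request));
//   });

// Cache-only: respond from the cache and nothing else.
// Use for assets cached as install dependencies.
function cacheOnly(request) {
  return caches.match(request);
}

// Network-only: mimic default browser behavior; never touch the cache.
function networkOnly(request) {
  return fetch(request);
}

// Cache-first (cache, falling back to the network): check the cache,
// and only go to the network if the resource isn't there.
async function cacheFirst(request) {
  const cached = await caches.match(request);
  return cached || fetch(request);
}
```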
This gives you cache-only behavior for the things in the cache and network-only behavior for anything not cached, which includes all those non-GET requests that can't be cached. As you can see from the code example, the service worker checks the cache first, returns a response if one is available, and otherwise makes a fetch request to the network.

The network-first strategy you can think of as network falling back to the cache. It's ideal as a quick fix for resources that update frequently regardless of the version of your site: articles, avatars, social media timelines, game leaderboards. Online users get the most up-to-date content, while offline users get a cached, potentially older, version. If the network request succeeds, you'll most likely want to update the cache entry.

This method has a flaw, though. If the user has an intermittent or slow connection, they have to wait for the network to fail before they get the content that's already on their device. That can take an extremely long time and makes for a frustrating user experience. The code example is pretty straightforward, but take a look at the next pattern, cache then network, for a better solution.

The cache-then-network approach goes to the cache and to the network, uses the cache response first, and then updates the page and the cache once the network responds. This works well for content that updates frequently. The service worker makes two requests, one to the cache and one to the network, and the idea is to show the cached data first, then update the page when, or if, the network data arrives. Sometimes you can simply replace the current data when new data arrives, for example a game leaderboard, but that can be disruptive with larger pieces of content. Basically, don't make something disappear while the user may be reading it or interacting with it.
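The network-first pattern can be sketched like this (the cache name is illustrative, and in a real service worker this would run inside a fetch handler):

```javascript
// Network-first (network, falling back to the cache): try the network,
// cache a copy of any successful response, and fall back to the cache
// when the network fails. The cache name 'dynamic-v1' is illustrative.
async function networkFirst(request) {
  try {
    const response = await fetch(request);
    const cache = await caches.open('dynamic-v1');
    // A response body can only be read once, so store a clone
    // and return the original to the page.
    cache.put(request, response.clone());
    return response;
  } catch (err) {
    // Network failed: a cached copy, even a stale one,
    // is better than nothing.
    return caches.match(request);
  }
}
```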
The service worker code in this example fetches the resource, updates the cached copy, and returns the response to the page. Note the use of response.clone to allow for efficient memory usage: you can only read a response's body once, so clone is used to create additional copies that can be read separately.

On the page side, the code starts a spinner and initiates a network fetch for the latest data. If the network request returns, the page is updated and a flag, networkDataReceived, is set to indicate that the latest data has arrived. The code shown here in main.js uses the Cache API, since that's available from the window object, to display the existing cached data while waiting for the network data to return. If there's no cached data, or the network has already returned (indicated by the networkDataReceived flag), we use the network data. If everything fails, an error is displayed. And finally, the spinner is stopped.

The so-called stale-while-revalidate strategy is ideal for frequently updated resources where having the very latest version is non-essential, avatar images for example. If there's a cached version available, use it, but fetch an update for next time. The code is similar to cache then network: return the cached response immediately and update from the network. However, with this strategy you don't update the page every time data comes back from the network; you only update the cache, so the new resource will be available on refresh. This suits stuff that doesn't need to be immediately up to date but should be kept relatively fresh, such as third-party libraries or avatars.

If you fail to serve something from the cache and/or the network, you may want to provide a generic fallback. You can use this for default imagery, for failed POST requests, or to display fallback content when the user is offline. The item you fall back to is likely to be an install dependency.
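The page-side half of cache then network might look like this sketch (the UI helpers showSpinner, hideSpinner, updatePage, and showError are illustrative stand-ins for your own code):

```javascript
// Page-side (main.js) sketch of cache-then-network: kick off a network
// request, show cached data while waiting, and let the network result
// win if it arrives first. Helper names are illustrative.
function loadData(url) {
  let networkDataReceived = false;
  showSpinner();

  // Start the network request; when it returns, update the page.
  const networkUpdate = fetch(url)
    .then((response) => response.json())
    .then((data) => {
      networkDataReceived = true;
      updatePage(data);
    });

  // Meanwhile, show cached data if we have it, unless the network
  // has already come back.
  return caches
    .match(url)
    .then((response) => {
      if (!response) throw Error('no cached data');
      return response.json();
    })
    .then((data) => {
      if (!networkDataReceived) updatePage(data);
    })
    .catch(() => networkUpdate) // no cache: wait for the network instead
    .catch(showError)
    .finally(hideSpinner);
}
```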
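Stale-while-revalidate can be sketched as follows (again, the cache name is illustrative, and in a real service worker this would run inside a fetch handler):

```javascript
// Stale-while-revalidate sketch: answer from the cache right away when
// possible, and refresh the cached copy in the background so the next
// load gets the newer version. The cache name 'swr-v1' is illustrative.
async function staleWhileRevalidate(request) {
  const cache = await caches.open('swr-v1');
  const cached = await cache.match(request);
  const networkUpdate = fetch(request).then((response) => {
    // A response body can only be read once, so put a clone in the
    // cache and keep the original in case the page needs it.
    cache.put(request, response.clone());
    return response;
  });
  // Serve the stale copy immediately; fall back to the network on a miss.
  return cached || networkUpdate;
}
```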
For example, if your page is posting an email, your service worker might fall back to storing the email in an outbox in local storage and respond to the page, letting it know that the send failed but the data was successfully retained. The example shown here responds with a fallback page for any request that doesn't get a response from the cache or from the network.

Workbox can be used to implement a lot of these strategies really easily. Workbox is a library that bakes in a set of best practices and removes the boilerplate every developer writes when working with service workers. For those of you who might have used sw-precache and sw-toolbox, Workbox is the replacement. Workbox makes writing service workers easier as part of your build process. It abstracts away common patterns, handles a multitude of corner cases, and covers all the strategies I've already mentioned. Rather than implementing these strategies by hand, which is difficult and error-prone, you can configure them very simply with the Workbox library.

To use Workbox from your service worker, you first need to import the library, for example from a CDN. Workbox has a built-in router which takes care of responding to requests when certain criteria are met; here we use a regular expression as the criteria for our route. Workbox has built-in support for the common caching strategies, so you don't have to write, or copy and paste, your own response logic: it's ready to use right out of the box. Workbox also goes beyond the basics, allowing you to customize the built-in strategies with powerful options, like specifying an expiration policy for a given cache. Workbox will take care of cleaning up old entries automatically, instead of them being saved indefinitely on your users' devices. And if you want to use the strategies in your own fetch event logic, you can use the strategy classes to run a request through a specific strategy, which makes Workbox really flexible for implementing your own custom caching logic.
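A generic fallback can be sketched like this (the '/offline.html' path is an illustrative install dependency; in a real service worker this would run inside a fetch handler):

```javascript
// Generic fallback sketch: try the cache, then the network, and if
// both fail, respond with a fallback page that was cached as an
// install dependency. '/offline.html' is an illustrative name.
async function withFallback(request) {
  const cached = await caches.match(request);
  if (cached) return cached;
  try {
    return await fetch(request);
  } catch (err) {
    return caches.match('/offline.html');
  }
}
```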
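As a sketch of what this looks like in a service worker, assuming the Workbox v6 API names and CDN loader (the version number, route patterns, and cache settings below are all illustrative):

```javascript
// Loading workbox-sw from the CDN exposes the library on the global
// `workbox` object. The release version here is illustrative.
importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.5.4/workbox-sw.js');

// Route images through cache-first, with an expiration policy so that
// old entries are cleaned up automatically.
workbox.routing.registerRoute(
  /\.(?:png|jpg|jpeg|svg|gif)$/,
  new workbox.strategies.CacheFirst({
    cacheName: 'images',
    plugins: [
      new workbox.expiration.ExpirationPlugin({
        maxEntries: 60,
        maxAgeSeconds: 30 * 24 * 60 * 60, // 30 days
      }),
    ],
  })
);

// Frequently updated but non-critical resources: stale-while-revalidate.
workbox.routing.registerRoute(
  /\/avatars\//,
  new workbox.strategies.StaleWhileRevalidate()
);

// The strategy classes can also be used directly in your own
// fetch-event logic:
const pageStrategy = new workbox.strategies.NetworkFirst();
self.addEventListener('fetch', (event) => {
  if (event.request.mode === 'navigate') {
    event.respondWith(pageStrategy.handle({ event, request: event.request }));
  }
});
```

This is a service worker configuration fragment, so it only runs inside a ServiceWorkerGlobalScope with the Workbox script available.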
You can find out lots more about Workbox and caching strategies in the documentation linked from this video, and in the accompanying lab and workbook.