While this API was intended for service workers, it is also exposed on the window, so it can be accessed from anywhere in your scripts. The entry point is caches. You are responsible for implementing how your script (your service worker) handles updates to the cache. All updates to items in the cache must be explicitly requested; items will not expire and must be deleted, so you are also responsible for periodically purging cache entries. Each browser has a hard limit on the amount of cache storage a given origin can use. The browser does its best to manage disk space, but it may delete the cache storage for an origin, and it will generally delete either all of the data for an origin or none of it. Make sure to version caches by name, and use a cache only from the version of the script that can safely operate on it.

We'll outline a few common patterns for caching resources: caching files on install, on user interaction, and on network response.

We can cache a site's static resources, that is, the HTML, CSS, JavaScript, and any other static files that make up the application shell, in the install event of the service worker. It's important to note that while the install event is happening, any previous version of your service worker is still running and serving pages, so the things you do here mustn't disrupt that. event.waitUntil() takes a promise that defines the length and success of the install. If the promise rejects, the installation is considered a failure and this service worker will be abandoned; if an older version is running, it'll be left intact. caches.open() and cache.addAll() return promises, and if any of the resources fail to fetch, the cache.addAll() call rejects.

There are plenty of ways to clean up old caches, but one common approach is to iterate over the list of cache keys and delete any caches that don't match the current cache name.
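As a sketch of the install-time caching and cleanup described above (the cache name and the list of files are placeholders, not part of the original):

```javascript
const CACHE_NAME = 'app-shell-v2'; // bump this name on every release

self.addEventListener('install', (event) => {
  // waitUntil() extends the install phase until the promise settles.
  // If addAll() rejects (any one file fails to fetch), the install
  // fails and any older service worker keeps running untouched.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(['/', '/index.html', '/styles.css', '/app.js'])
    )
  );
});

self.addEventListener('activate', (event) => {
  // The old version is out of the way now, so delete any caches
  // whose names don't match the current one.
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(
        keys
          .filter((key) => key !== CACHE_NAME)
          .map((key) => caches.delete(key))
      )
    )
  );
});
```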
To cache on network response, we intercept the request in the service worker, cache a clone of the response, and send the response itself to the page. This approach works well for resources that update frequently, such as a user's inbox or article contents. It's also useful for non-essential content, such as avatars, but care is needed: if a request doesn't match anything in the cache, get it from the network, send it to the page, and add it to the cache at the same time. If you do this for a range of URLs, such as avatars, you'll need to be careful you don't bloat the storage of your origin; if the user needs to reclaim disk space, you don't want to be the prime candidate. Make sure you get rid of items in the cache you don't need anymore. Note that, to allow for efficient memory usage, you can read a response or request body only once, so clone() is used to create an additional copy that can be read separately.

We can also add items to the cache on user interaction. If the whole site can't be taken offline, you may allow the user to select the content they want available offline, for example a video on something like YouTube, an article on Wikipedia, or a particular gallery on Flickr. Give the user a "read later" or "save for offline" button; when it's clicked, fetch what you need from the network and put it in the cache. The Caches API is available from pages as well as service workers, meaning you don't need to involve the service worker to add things to the cache. We create a cache with a name corresponding to the specific article, then fetch the article and add it to the cache.

Let's look at some different ways to serve files from the cache.
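A sketch of the cache-on-network-response pattern in the service worker; the cache name here is a placeholder:

```javascript
const RUNTIME_CACHE = 'runtime-v1'; // placeholder cache name

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) return cached;
      return fetch(event.request).then((response) => {
        // A body can only be read once, so cache a clone and hand the
        // original response back to the page.
        const copy = response.clone();
        caches
          .open(RUNTIME_CACHE)
          .then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});
```

For the user-interaction pattern, the page itself can do something like caches.open('article-' + id).then((cache) => cache.add(articleUrl)) in a click handler, since the Caches API is exposed on the window as well ('article-' + id and articleUrl are hypothetical names for illustration).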
There are several approaches: cache falling back to network, network falling back to cache, cache then network, and a generic fallback.

With cache falling back to network, the request is intercepted by the service worker; we look for a match in the cache, and if that fails, we send the request to the network, then return the response. If you're making your app offline first, this is how you'll handle the majority of requests; other patterns will be exceptions based on the incoming request. If the resource exists in the cache, this approach returns it from there; otherwise, it sends the request on to the network.

With network falling back to cache, the request is intercepted by the service worker; we send the request to the network, and if that fails, we look for a match in the cache, then return the response. In other words, we first send the request to the network using fetch(), and only if that fails do we look for a response in the cache. This is a good approach for resources that update frequently and are not part of the site's version, for example articles, avatars, social media timelines, game leaderboards, and so on. Handling network requests this way means online users get the most up-to-date content, while offline users get an older cached version. However, this method has a flaw: if the user has an intermittent or slow connection, they'll have to wait for the network to fail before they get content from the cache. This can take an extremely long time and is a frustrating user experience.

A better solution is cache then network. Here, requests are made simultaneously to the cache and the network, from the page's main JavaScript rather than the service worker. Like network falling back to cache, this suits resources that update frequently and are not part of the site's version, such as avatars, social media timelines, and game leaderboards. This approach gets content on screen as fast as possible, while still displaying up-to-date content once it arrives.
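The first two strategies can be sketched as service worker fetch handlers. These are alternatives; a real service worker would pick one strategy per request:

```javascript
// 1. Cache, falling back to network: serve from the cache when
//    possible, otherwise go to the network.
function cacheFirst(event) {
  return caches
    .match(event.request)
    .then((cached) => cached || fetch(event.request));
}

// 2. Network, falling back to cache: try the network first; if the
//    user is offline (fetch rejects), serve whatever is cached.
function networkFirst(event) {
  return fetch(event.request).catch(() => caches.match(event.request));
}

self.addEventListener('fetch', (event) => {
  // Choose a strategy per request; for simplicity this sketch uses
  // cache-first for everything.
  event.respondWith(cacheFirst(event));
});
```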
This requires the page to make two requests: one to the cache and one to the network. The idea is to show the cached data first, then update the page when (and if) the network data arrives. We send a request to the network while, in parallel, looking for the resource in the cache. The cache will most likely respond first, and if the network data has not already been received, we update the page with the data from the cached response. When the network responds, we update the page again with the latest information. Sometimes you can simply replace the current data when new data arrives, for example with the game leaderboard again, but that can be disruptive with larger pieces of content. Concretely, we look for /data.json in the cache; that lookup will most likely complete before the network request, and we update the page if the network hasn't already responded. If the network responds after the cache, we update the page again. If getting the response from the cache fails, we wait on the network request as a last attempt.

If the request is not found in either the cache or on the network, respond with a pre-cached custom offline page. In general, if you fail to serve something from the cache and/or network, you may want to provide a generic fallback. This technique is ideal for secondary imagery such as avatars, failed POST requests, "unavailable while offline" pages, and so on. In practice, you'd have many different fallbacks depending on URL and headers, for example a fallback silhouette image for avatars. The item you fall back to is likely to be an install dependency, for example cached in the install event of the service worker.

Once a new service worker has been installed and the previous version isn't being used, the new one activates and you get an activate event. Because the old version is out of the way, this is a good time to delete unused caches.
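A sketch of the page-side cache-then-network flow for /data.json; updatePage() and showErrorMessage() are placeholders for your own rendering code:

```javascript
// Page script, not the service worker.
let networkDataReceived = false;

// Kick off the network request immediately.
const networkUpdate = fetch('/data.json')
  .then((response) => response.json())
  .then((data) => {
    networkDataReceived = true;
    updatePage(data); // the latest data always wins
  });

// At the same time, look in the cache.
caches
  .match('/data.json')
  .then((response) => {
    if (!response) throw Error('no cached data');
    return response.json();
  })
  .then((data) => {
    // Only show cached data if the network hasn't beaten us to it.
    if (!networkDataReceived) updatePage(data);
  })
  .catch(() => networkUpdate) // cache miss: wait on the network request
  .catch(showErrorMessage);
```

For the generic fallback, the service worker's fetch chain can end with something like .catch(() => caches.match('/offline.html')), where /offline.html is a hypothetical page cached during install.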
During activation, other events, such as fetch, are put into a queue, so a long activation could block page loads. Keep your activation as lean as possible, and only use it for things you couldn't do while the old version was active. It's also important to remember that caches are shared across the whole origin.

There are loads of resources available where you can learn more; you can access these from the materials that accompany this video. So now it's your turn: go to the lab for this video, and in there you'll be able to practice caching the application shell, intercepting network requests, and lots more.