Often, when we think about service workers and caching, we associate them with providing offline support. After all, one of the main achievements of service workers was that we could finally retire the offline dinosaur and send it into a well-deserved retirement. But beyond that, a service worker can also be a great tool for improving the performance of your online site, especially for your returning users. Used right, it can give you a serious speed boost on repeat visits. On the other hand, used incorrectly or without proper analysis, it can actually hamper a site's performance or even derail the whole experience altogether.

So here's how the performance of cache-first strategies breaks down in different scenarios. The most common case is the one at the top, which is very fast when there's a cache hit. But this is not the only possibility. There could also be a cache miss. There could be a slow cache. You could hit a timeout. Remember, there's also the possibility that the service worker isn't running yet, and its boot-up could add delay as well. All of these bad cases could happen at the same time. So while it's definitely possible to have a cache-first strategy that's faster than not using a service worker, look at how many of these examples end up being slower.

A streaming strategy, by contrast, requests less content from the network and can read from the cache in parallel, so it's typically quite a bit faster than the no-service-worker case. Of course, the actual speed difference will depend on the size of the content area relative to the entire HTML page, and that will vary from site to site. But in general, streaming cached content together with network content is one of the fastest ways to respond to navigation requests.

By now, you've probably seen this chart many times. You understand that service worker boot-up time can extend navigation requests.
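A basic cache-first strategy like the one discussed above might be sketched as follows. The logic is written as a plain helper so it's easy to follow; the cache name `pages-v1` and the wiring shown in the comments are illustrative assumptions, not part of the talk.

```javascript
// Cache-first: serve from the cache when possible, fall back to the network.
// `cache` is any object with match()/put() (like the Cache API), and
// `fetchFn` is a fetch-like function.
async function cacheFirst(request, cache, fetchFn) {
  const cached = await cache.match(request);
  if (cached) {
    return cached; // cache hit: the fast path
  }
  // Cache miss: go to the network, then store a copy for next time.
  const response = await fetchFn(request);
  await cache.put(request, response.clone());
  return response;
}

// Inside a real service worker, the wiring looks roughly like this:
//
//   self.addEventListener('fetch', (event) => {
//     event.respondWith(
//       caches.open('pages-v1').then((cache) =>
//         cacheFirst(event.request, cache, fetch)
//       )
//     );
//   });
```

Note that this sketch only covers the hit and miss cases; a slow cache or a hung network would need an explicit timeout on top of it, which is exactly where the bad cases above come from.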
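One way to stop that boot-up time from delaying navigations is navigation preload, which comes up next. A minimal sketch of the response-picking logic, with the helper shape being my own framing rather than a prescribed API:

```javascript
// Navigation preload lets the browser kick off the navigation's network
// request in parallel with service worker start-up. `fetchFn` is a
// fetch-like function.
async function respondToNavigation(preloadResponsePromise, request, fetchFn) {
  // Use the preloaded response if the browser started one for us...
  const preloaded = await preloadResponsePromise;
  if (preloaded) {
    return preloaded;
  }
  // ...otherwise fall back to a regular network fetch.
  return fetchFn(request);
}

// Inside a real service worker, the wiring looks roughly like this:
//
//   self.addEventListener('activate', (event) => {
//     if (self.registration.navigationPreload) {
//       event.waitUntil(self.registration.navigationPreload.enable());
//     }
//   });
//
//   self.addEventListener('fetch', (event) => {
//     if (event.request.mode === 'navigate') {
//       event.respondWith(
//         respondToNavigation(event.preloadResponse, event.request, fetch)
//       );
//     }
//   });
```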
So with navigation preload, what you do is simply run these requests in parallel.

When we think about cache management, we usually want to achieve the following. We want to store the right resources at the right time while controlling the overall size of our application. We definitely want to prevent quota overflow, because as developers we do get quite a bit of storage space on users' devices, but it's not unlimited, so we need to stay within it. And we also want our resources to be as fresh as possible, which means we need efficient updates.

First of all, when working on performance, never assume the environment you work in is representative of your whole user base. For example, you should always throttle your network to 3G speeds when testing to get a more realistic feel for your app's performance. Secondly, keep in mind those underpowered devices with little storage, and really control the size of your app. Remember that the overall size of your app might grow over time if you use runtime caching, and plan accordingly.

Also, sometimes there are explicit hints from the user that you can use in your decision-making. For example, you can refrain from speculatively pre-caching future resources when data saver mode is turned on. When the user enables this feature in Chrome, the Save-Data header is sent with each request, so you can detect it and, for example, refrain from aggressively pre-caching a lot of future assets. Similarly, you can use the Network Information API's effectiveType property to adapt your strategy to the user's current network conditions.

Finally, you can also consider scenarios where you give the user full control over the experience. For example, you can provide a "Save for later" button, where the user can explicitly opt in and decide to get something stored for future use.
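A button like that might be wired up as follows. The cache name `saved-articles`, the button id, and the URLs are all made-up examples; `cacheStorage` stands in for the browser's global `caches` object.

```javascript
// Page-side handler for a hypothetical "Save for later" button: the user
// explicitly opts in, and only then do we store the resources.
async function saveForLater(urls, cacheStorage) {
  const cache = await cacheStorage.open('saved-articles');
  await cache.addAll(urls); // fetches and stores each URL
}

// Wiring it to a button in the page might look like:
//
//   document.querySelector('#save-button').addEventListener('click', () => {
//     saveForLater(['/articles/42.html', '/articles/42/images.json'], caches);
//   });
```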
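And the explicit hints mentioned earlier, Save-Data and the effective connection type, can be folded into the pre-caching decision like this. The function and its threshold are illustrative assumptions, not recommendations:

```javascript
// Decide how aggressively to pre-cache based on user and network hints:
// skip speculative pre-caching when Save-Data is on, and only pre-fetch
// future assets on a fast connection.
function shouldPrecache({ saveData, effectiveType }) {
  if (saveData) {
    return false; // the user explicitly asked to save data
  }
  // effectiveType is 'slow-2g' | '2g' | '3g' | '4g' (Network Information API)
  return effectiveType === '4g';
}

// In a page or service worker, the hints can be read like this:
//
//   const hints = {
//     saveData: navigator.connection?.saveData === true,
//     effectiveType: navigator.connection?.effectiveType || '4g',
//   };
//   if (shouldPrecache(hints)) { /* warm the cache with future assets */ }
//
// On the server side, the same signal arrives as the `Save-Data: on`
// request header when the user enables data saver mode.
```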