Well, there's work ahead for us, working for you and for DrupalCon in the future. My name is Eric. I am a freelance web developer and I live in Brighton, on the south coast of England. I've been involved with Drupal for about seven or eight years, and I've been using it to build traditional HTML websites but also to manage content for native mobile apps. I've spent some time over the last few months researching this issue of offline: how native apps handle it, how the web handles it, and whether we can learn anything from those two sides.

I want to start by looking back to 2007 for a moment. I found a website from way back in 2007, for DrupalCon Barcelona. Was anyone there at the time? I'm showing you this because this is the kind of website we would have used from an office or from home, on a fixed internet connection: you were either online or you were offline. But 2007 was also the year the first iPhone was released, and since we've had smartphones we've carried the internet around with us. You can have Wikipedia in your pocket. So we now use the internet in all sorts of different places, outside, in conditions very different to what we were used to. We use it in an inherently hostile environment; we don't have the luxury of a fixed cable that we know is going to work.

When people started writing apps for phones, this scenario was a given right from the start. People were aware that there might not be a signal, that we might not be able to get online, and apps were designed to cope with this. We can learn a lot from the way that's been done about being resilient to network failures. There's an expectation with an app on my phone that when I press one of these icons it will work, even if I'm out and about. Sadly, we can't really say the same about the web; in reality, this is kind of what we expect.

I see a lot of apps that feel like they've been built purely to show the content that's already on a website, but to cope with being offline. This conference, in fact, has an app that people are using. There's nothing wrong with it, it's a very nice app, but I can't help feeling that when we do this we duplicate effort: we're writing a web front end, and possibly an iOS and an Android front end as well. Also, downloading an app can be quite onerous. If you've got a slow connection and need to download a 20 or 30 megabyte app, or if you're roaming, it would be a lot easier if you could just use the website.

Let's look at this in a little more detail: it's all the same hardware, it's using the same network with all its frailties, and we're downloading the same information. The only thing that's really different is the software. Browsers are now starting to follow the lead taken by native apps in providing some functionality to handle this, and we'll look at that today. I think that going forward, a lot of the apps that exist purely to provide offline access might start to become obsolete.

I want to show you a little demo site. This one is for a conference, a Drupal camp or something like that. We've got a home page, a schedule and a few other pages. It's very basic, the most basic site you could have: no login, no forms, just a brochure site. It's a good candidate for seeing if we can improve it.
The question I want to ask is: can we take this site and make it so that once someone has visited it, they can come back to it later even if they don't have a mobile signal? Can we provide something that a delegate could rely on, and how far away are we from being able to do that? This is the Horizons track, so I'll say now that this stuff is quite new, quite raw, and browser support is limited; I'll come back to that in a moment. But it's very interesting to see where things are going.

The service worker is a key part of the puzzle. This is a relatively new JavaScript API. It is essentially a client-side proxy, and it gives us a programmable layer between the document and the network. You can think of it as a little mini web server running on your phone. As for what it can do: it will intercept network requests, like a man-in-the-middle, handling everything you request from the network and everything you get back. It also gives us a programmable cache, and a couple of other things that are quite new: push notifications and background sync, where you can send and receive content even when your browser is closed. Those last two are a bit newer and a bit less finished, so I want to focus on the first two for now: the network and the caching.

If we fast-forward to today and look at the state of things at the moment, as things stand Chrome, Firefox and Opera all support the things we're going to look at. It has been in development in Microsoft Edge, and it is not yet ready in Safari. That includes the whole of iOS: the other browsers on iOS are actually using WebKit and mobile Safari under the hood. There's a website called "Is ServiceWorker Ready?", which is a good indicator of the state of play. There are a number of different components to this, and it lists them all and tells you exactly what is and isn't supported. But don't let this stop you. Don't think "oh, I can't possibly use this if it doesn't work in Safari". We can add the service worker to the website as a progressive enhancement. It won't diminish the experience for anyone whose browser doesn't support it; they'll just get exactly what they had before.

So what do we need to get started? First of all, we need to be running HTTPS. Secondly, we need to add a little script to our code. In our HTML file, we register a service worker. I'll put this in a condition which says: if the browser supports it, start it; if not, just ignore it. We pass it the name of another script, which we will use to tell the service worker what to do; we'll come on to that in a moment. The scope parameter is quite interesting: it means that any subsequent visits we make to pages within that scope will be handled through the service worker, and those pages will use the worker for all of their requests. Images, fonts, things served from CDNs, Ajax, Google Analytics: everything will go through that worker. The registration looks roughly like the snippet below.

For now, we'll just leave the other script that configures the service worker empty. So by adding what we've seen, we have a worker up and running. It doesn't do anything yet, but it is there. We can look at the demo site, go into the browser's dev tools and see it running. We can see all our service workers there, and then we can go in and inspect one in the same way that we might inspect the document. The service worker runs independently from the document window, so they don't know about each other.
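Concretely, the registration might look something like this; a minimal sketch, assuming the worker script is served from /sw.js at the root of the site:

```js
// In the page's HTML or main script: register the worker only if
// the browser supports service workers at all.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js', { scope: '/' })
    .then(function (registration) {
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch(function (error) {
      console.log('Service worker registration failed:', error);
    });
}
```

Browsers that don't support the API simply skip the whole block, which is exactly the progressive enhancement behaviour described above.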
And it will stay there even if we close the browser. The service worker can go to sleep, and it will wake up when you come back to the page. In order to customise it, we need to start thinking about events. Service workers are event-driven, and we write JavaScript to run when the different events occur. There are quite a few events, but I'm going to focus on the two that are most interesting for what we want to do.

The first one is the install event. The very first time the service worker is run, this code runs. Typically in here we'll download the things we always want available: CSS, fonts, JavaScript, that sort of thing. The second one happens whenever a page makes a request. This is the fetch event, and it's where we do most of the work for handling offline. We can put pretty much whatever we want in here: we get given a request, and we need to pass back a response. The most minimal example just returns a fixed string. Typically we put some logic in here that says: if it's this URL, do one thing; if it's a different URL, do something else. But it shows you the structure of what we're dealing with.

To make that more useful, I need to show you a couple of other things we'll want to use. The first is promises. Much of what we do here is asynchronous, things that can take a long time to complete, and a lot of these APIs use promises. They're widely used in the functional programming world, but they can be a little hard to get your head around at first. The idea is that we're asking a question that will take some time to answer, and in the meantime we get back a promise that the question will be answered. The promise will either resolve, if it's successful, in which case the then block, the then function, runs; or it will reject, in which case the error handler runs.

Secondly, there's the new fetch API. It will look quite familiar if you've used jQuery to make Ajax requests. We give it a request and ask the browser to fetch it, and then we do one of two things depending on whether it comes back successfully or not. So we get a promise of the answer to the fetch request.

I also want to mention the cache API. This is quite different to the built-in browser cache; it sort of sits on top of it and lets you access a cache programmatically. We can do things in here that we can't do with the built-in cache, like deliberately getting stale content out, and we can more or less guarantee that if we put something in, it will still be there later on. This idea of getting back stale content comes in handy later. In the browser dev tools we can go in and inspect the cache and see what's in there: that's the demo site I showed you earlier, and we can see it has put all these files in there. A sketch of these pieces is below.

So let's have a look at the site when it's offline. I'm going to load it online first, the very first time; now it will be downloading various pages and putting them in the cache. Then I'll put the phone into flight mode. Now when we go and view the page, it's there. That's because we put those pages in the cache first and then got them out of the cache later on. If I close down the browser, start again and come back, the pages are still there.
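To make those pieces concrete, here's a minimal sketch of a worker script using the two events together with the fetch and cache APIs; the cache name 'demo-v1' and the URLs are just illustrative placeholders:

```js
// sw.js — a minimal sketch of the events and APIs discussed above.

self.addEventListener('install', function (event) {
  // Runs the very first time this version of the worker is installed.
  console.log('service worker installed');
});

self.addEventListener('fetch', function (event) {
  // The most minimal handler: answer every request with a fixed string.
  event.respondWith(new Response('Hello from the service worker!'));
});

// Promises, fetch and the cache API together: ask for a resource,
// and when the promise resolves, put the response into a named cache.
fetch('/schedule')
  .then(function (response) {
    return caches.open('demo-v1').then(function (cache) {
      return cache.put('/schedule', response);
    });
  })
  .catch(function (error) {
    // The promise rejected: the network request failed.
    console.log('fetch failed', error);
  });
```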
I'll go over a few techniques that I used to make that work, starting with a network-only strategy. This is the status quo; it's what happens if we don't do anything different, and it's what a browser would normally do. It says: okay, I've been asked for a resource, go and fetch it. If the fetch from the network fails, we can't do anything; we just return an error. It might succeed, or it might not.

We can improve on that a little by falling back to the cache. What we do is add a catch block to the fetch, which will run a function if the fetch fails. It looks a little bit like this: if we couldn't get it from the network, we look in the cache; if we couldn't get it from the network or the cache, then we have an error. One more thing we need to do is get things into the cache in the first place. So I've added a function that says: if the network succeeds, return the response, but before that, put a copy of the response into the cache so that we can get at it later. Now we have these two functions, sketched below, and that's quite a nice improvement.

It works in the scenario where we're offline, but maybe we need to rethink a little, because it's not always that clear cut, as it turns out. We've covered the scenario where we are definitely offline, no question about it, but we haven't really looked at some other scenarios. What about when the connection comes and goes? Or this scenario, when there's a very weak signal, or this one, when we have some Wi-Fi that may or may not work? That one didn't. It turns out that browsing the web on a mobile device is often something like this. Look familiar? That did work, but it took a long time to get there. You could say it worked; you could say it didn't. And even if it hadn't worked, it would have taken an equally long time to find out.

We can't really do much about this, because it's sort of by design. If we think about how the internet works, we have TCP/IP: a reliable protocol built on top of an unreliable network. We've got all these physical connections, none of which we control, and at any stage something can go wrong. When you view a web page, the browser starts talking HTTP, and the operating system uses the computer's hardware to send packets to a router. The router sends them on to another router, probably several others in between. The packets then get reassembled at the other end and passed to a web server process running on that machine. We've done all that just to deliver the request; then the whole lot has to come all the way back, and only when the response arrives do we get an answer. If something goes wrong, the packets are going out into space: we don't know where they're going, but they're not coming back. All the browser can do in this situation is sit there and wait. It hopes they'll come back, but it doesn't know, and it has to give an answer eventually. The only way it can do that is by waiting a while and, if it hasn't heard anything, giving up. That's the browser timeout that you get.

We can learn a couple of things from that. Firstly, it takes time to determine this online/offline state. Perhaps more importantly, the users of our website may not wait for that timeout; they might give up before then.
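Here's roughly what that network-first strategy with a cache fallback might look like in the fetch handler; the cache name 'demo-v1' is again an illustrative placeholder:

```js
// Network first, falling back to the cache.
self.addEventListener('fetch', function (event) {
  event.respondWith(
    fetch(event.request)
      .then(function (response) {
        // Network succeeded: store a copy, then return the response.
        // A response body can only be read once, so we store a clone.
        var copy = response.clone();
        caches.open('demo-v1').then(function (cache) {
          cache.put(event.request, copy);
        });
        return response;
      })
      .catch(function () {
        // Network failed: fall back to the cache. If it isn't in the
        // cache either, this resolves to undefined and the request
        // errors just as it would have done anyway.
        return caches.match(event.request);
      })
  );
});
```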
That video involved quite a long wait, and I could easily have just closed the tab and thought: no, it's not happening. So maybe we need to rethink our strategy a little. That website from 2007 that I showed earlier was fixed width. Nowadays we don't write websites as fixed width; we don't assume there's enough space for a multi-column layout. We changed the way we design, so that we design for mobile first and then add layout if we can. So what if we did the same thing with network connectivity? What if we assumed that we were offline first, and treated the presence of a network as an enhancement? If you think that sounds a little crazy, think about layout a few years ago. If you'd said to somebody then that layout is an enhancement, not an intrinsic part of the site, they'd probably have thought that was a bit crazy too.

We've talked about caching pages, so if we have pages in the cache, why don't we show them to the user while we wait? Here's an example from the native Twitter app. When I fire it up, I see tweets straight away: the tweets that were there the last time I used the app. After a few seconds I get this little bubble that pops up saying there are new tweets. So what happened there is that I saw the content, then the new content was fetched, and then I saw the notification. I could start reading those tweets, start interacting with them, before the new data arrived. What I wasn't doing was staring at a blank screen with a spinner saying please wait while the tweets are loaded. And if the network had failed, I would simply never have seen that bubble, or maybe I'd have seen a message saying sorry, we can't fetch new tweets. That can't be helped; we can't do magic, we can't get something if there is no signal.

That's really the whole idea behind going offline first. We want to give the user something straight away, even if it's old, worry about whether it's changed later, and then tell the user if it has. That is usually better. Not always; there are certain cases where you really don't want to give out stale content, live sports scores might be an example. But very often someone would rather see something that's a little out of date, and know that it is, than see nothing at all. And if we really want a good user experience, we can try to decouple the user's interaction on their phone, when they press something, from these slow network requests that take a long time, so that when somebody presses a link they get something straight away.

In order to do that, we need to start by loading some things in advance. So let's look at that install event again and add some code to handle it. It's quite simple: all I need to do in here is use that fetch API and that cache API to get some URLs and put them in the cache. That means that once you visit the website, those resources are downloaded, and they'll always be available going forward. The fetch handler is a little more interesting. Last time we added a fallback for when the network failed; now we need to turn that on its head and go to the cache first, falling back to the network. It looks a little like the sketch below. We try the cache; that may or may not give us back a response. If it does, we just give the user that.
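Put together, the offline-first version might look something like this; the cache name and the list of URLs to precache are illustrative:

```js
var CACHE = 'demo-v1';
var PRECACHE_URLS = ['/', '/schedule', '/css/styles.css', '/js/app.js'];

self.addEventListener('install', function (event) {
  // Load things in advance: fetch each URL and store the response.
  event.waitUntil(
    caches.open(CACHE).then(function (cache) {
      return cache.addAll(PRECACHE_URLS);
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    // Cache first: give the user whatever we have, straight away.
    caches.match(event.request).then(function (cached) {
      if (cached) {
        return cached;
      }
      // Nothing cached: fall back to the network and keep a copy.
      return fetch(event.request).then(function (response) {
        var copy = response.clone();
        caches.open(CACHE).then(function (cache) {
          cache.put(event.request, copy);
        });
        return response;
      });
    })
  );
});
```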
If it doesn't, we go and fetch it from the network, and we put a copy of it into the cache as we did before. Now let's add one more little thing. If we did find something in the cache, I also want to fetch another copy of it, because there's no point just leaving it in there; we'd get stuck, always showing whatever page the user first downloaded. So I fetch it again, and when that fetch comes back, I replace what was in the cache with the new, updated content. What's interesting here is the highlighted purple line: we're not waiting for that fetch to finish. We go straight on to the next line, which returns the cached response, and that gives us our kind of instant response. The fetch completes asynchronously, at some point after the user has seen the page.

Now, this works great for things like CSS and images. It doesn't matter too much if I get the previous version; the next time I hit the page I'll get the updated one. And there's no real cost in terms of network requests, because I was going to make that fetch anyway. But I want to handle HTML content a little differently. There's a slight risk that the user will open up their phone and see a page that's a bit old, and I want some way of telling them that.

In that fetch handler we get two responses: the first one from the cache, and the second one that comes down later. When that second fetch comes back, remember, the user is already looking at a page. If it's old, we need to tell them somehow, so we need to be able to compare the pages. If they're different, I want to tell the user: hey, this page has changed. If they're the same, we just don't do anything; we've kind of got away with it, as it were. We've given the user the page, told them this is it, and thankfully the page didn't change. But if it had changed, I'd need to send a message.

The best way to do that is with postMessage. Service workers can't communicate with the DOM; it's a bit like having a separate thread, though not completely, and you can't access the page from the worker. So I need to do this with asynchronous messaging. This is just a message from my worker saying there's some updated content, telling the page which URL has changed. Then on the document side, I listen for messages coming from the service worker and put something up in the DOM accordingly. The way I got this to work was by putting a proxy in between that could add a header called ETag. The ETag is a hash of the actual content, so if the content changes you get a different hash value, and that gives you quite a nice way to look at the headers of the two responses and ask: are they the same or different? A sketch of the whole arrangement is below.

So if I now go back to the demo site, I carry on browsing through the site and go down to one page. The browser is making a second request, and when that request comes back it has changed, so I send the message, handle it like this, and give the user the option to reload the page. Most of the time that won't happen, but it's quite a nice way of dealing with it when it does. And our user won't be any the wiser if it didn't happen; they'll just think: oh, I got my content straight away.

So I'll wrap up by talking about service workers in the context of something that people are calling progressive web apps.
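This is roughly the shape of it. The cache name and the message format are my own placeholders, and it assumes the server (or a proxy in front of it) sends an ETag header on HTML responses:

```js
// In the service worker: serve from cache, refresh in the background,
// and notify any open pages if the content has changed.
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      var refresh = fetch(event.request).then(function (response) {
        return caches.open('demo-v1').then(function (cache) {
          // Compare ETag headers to see whether the page has changed.
          if (cached &&
              cached.headers.get('ETag') !== response.headers.get('ETag')) {
            self.clients.matchAll().then(function (clients) {
              clients.forEach(function (client) {
                client.postMessage({
                  type: 'content-updated',   // invented message shape
                  url: event.request.url
                });
              });
            });
          }
          return cache.put(event.request, response.clone()).then(function () {
            return response;
          });
        });
      });

      if (cached) {
        // The instant response: don't wait for the background refresh.
        refresh.catch(function () { /* offline; the cached copy stands */ });
        return cached;
      }
      return refresh;
    })
  );
});
```

And on the document side, the page listens for that message and can offer a reload:

```js
// In the page's JavaScript, not the worker.
navigator.serviceWorker.addEventListener('message', function (event) {
  if (event.data.type === 'content-updated') {
    console.log('Updated content available for', event.data.url);
    // e.g. show a "this page has changed, reload?" banner here
  }
});
```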
I mentioned progressive enhancement: these things are built on top of the web as it already is. We're not replacing anything; we're just saying you have this extra functionality available. We've talked a lot about the offline capability, but I can also make a progressive web app installable. I can create a file, just a text file, a JSON file called a manifest, which contains a few things like the name of the site, the icon, and various other things such as the colour of the toolbar at the top. Just by including this, certain browsers, namely Chrome on Android, will give you the option to put the site on your home screen. A small example manifest is below.

Lastly, these are still linkable. The building blocks of the web are URLs and links, and we've left that alone. Each one of those pages has a nice URL that I can share or post on social media. I didn't have to download an app and then navigate within the app to get to where I was going. And if the browser didn't support service workers, it would just see the same page as before.

There's been a bit of work on this done in Drupal. There's a module that is quite interesting: it attempts to automate building the service worker script. It's very early and quite experimental. I used it for the early part of the demo, and then I had to do a bit of extra work for the later part, for the updates. But it's definitely something I would say is worth keeping an eye on.

I'll leave you with a few resources that I'd recommend you take a look at. The first one is the notes from this talk, which link to all of the others. There are quite a few other videos, especially from the Google I/O conference; they did a whole series of sessions on this that's very useful. I actually had to listen to quite a few talks about service workers before the penny dropped, so to speak. It does often take a few goes, as it were, to understand this stuff. Thanks for listening. Just a couple of things: if I could ask you to evaluate the session and give some feedback, that would be very helpful, and there are contribution sprints on Friday. I'll happily answer questions; if you could use the microphone so that the questions can be recorded as well, that would be great. Thank you.

With the service worker, can you have an event which triggers that reload process even after it's been reloaded once? So can you have content, as it's being updated, pop up that notification?

Oh, so if the service worker were sort of polling the page, so that when you update your content you could let users who are on that page know it's been updated, without them reloading the page? You might be able to. I think there is something coming to do a sort of server push, but that's probably a little further away, so I don't know for certain.

You could, yeah. That was the point: I think you can start with a network-first approach, and you can have a race where you say, I want the network to go for a certain amount of time, and if it doesn't come back within that time, use the cache, and do something like that.

Did you do anything about the cache headers that you get from the server? I didn't see that you were taking them into account. You can set cache expiration headers on Apache or other servers; you get them, yeah, in the GET request.
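For reference, a minimal sketch of such a manifest; all the values here are invented for illustration, and the file would be linked from the HTML with a tag like `<link rel="manifest" href="/manifest.json">`:

```json
{
  "name": "DrupalCamp Demo",
  "short_name": "DemoCamp",
  "start_url": "/",
  "display": "standalone",
  "theme_color": "#0678be",
  "background_color": "#ffffff",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```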
When you do the GET request, you get a cache expiration header, and you can configure different kinds of expiration, so depending on the kind of data you could decide: go to the cache first because this expires in two years, or go to the network and then fall back to the cache.

Yeah, so the thing with cache headers is that the service worker operates in much the same way a browser would. If it requests a page and you serve that page with a long expiration time, it goes into the browser's HTTP cache; if you then fetch it from the service worker, it will come out of that cache even if you ask for the network, because the browser has it for a certain amount of time and knows it doesn't need to get it again. So you still need to implement cache headers in the same way you would anyway, saying this content is valid for so long. It can get a bit confusing. I think the easiest way to think about it is this: if you're serving up some content and you say it's valid for one hour, there could be any number of intermediate proxies between the server and the client, and each one of them could hold on to the data for that time; that's what the server has said they may do. If you think of it like that, then at the last step, whether or not the service worker asks the network, it doesn't necessarily need to know where the response comes from; it just knows that it's potentially making a request. I think that's right.

Okay, I'll repeat it: you need to be able to test the service worker too. Yes, I think in the browser dev tools you can disable them. Okay, thank you. Thank you.
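To make the "race" idea from the questions concrete, a sketch might look like this; the three-second timeout is an arbitrary illustrative value:

```js
// Network first with a timeout, falling back to the cache if the
// network hasn't answered in time.
function networkWithTimeout(request, timeoutMs) {
  return new Promise(function (resolve, reject) {
    var timer = setTimeout(function () {
      reject(new Error('network timed out'));
    }, timeoutMs);
    fetch(request).then(function (response) {
      clearTimeout(timer);
      resolve(response);
    }, reject);
  });
}

self.addEventListener('fetch', function (event) {
  event.respondWith(
    networkWithTimeout(event.request, 3000).catch(function () {
      return caches.match(event.request);
    })
  );
});
```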