Hello, everyone. Welcome to the offline panel. For most of us as web developers, I think it's fair to say that we declare game over when there's no network connection available or when it's faulty. But if we want to compete with the cream of native apps, then having the network as a dependency is simply not good enough. But the five brave souls in front of you today, they laugh in the face of zero bars of connectivity. And they do not want your wallet to be drained when you are roaming. And they do not want to give up on you when your ISP has. We have Matt Andrews from the Financial Times. We've got Craig Cavalier from Liquid Frameworks. And we've got Calvin Spielman from Cactus Consulting. These guys have been building stuff with the current crop of inadequate APIs that we have on the web. But they've suffered through that. So give them a hug afterwards, because they've been damaged by that. But we also have Anne van Kesteren from Mozilla and Alex Russell from Google. And these guys have been working on the future APIs that are going to solve all of our worldly problems. Is that fair to say? OK, that's good. So we're hoping that in the future, making a site work offline is not going to be an act of self-flagellation. And on that note, Alex, I believe you are going to introduce a topic. Are you good to go? Sure. As Jake says, I'm Alex. Do we have slides? Did I fail at this miserably again? So good at this. So yeah, we've changed the name of the new stuff a couple of times recently. It kind of started off as navigation controller, but it's been expanded. It spent a little bit of time as event worker. And now it's service worker. But it has been pointed out by Lady Ada King on Twitter that service worker does sound somewhat like the oldest profession in the world. So I don't know. We kind of thought maybe service controller, but does that sound like a pimp? I don't know. But an app pimp, that's pretty cool. We all want to use an app pimp.
So maybe we'll go with that, you know, pimp power apps. That could be cool. So yeah, we're going to have to find a name for it that doesn't kind of evoke that kind of negative thing. But how are you doing, Alex? Because I'm running out of filler. Yeah. Is it because it doesn't work offline? Is that your problem? That might actually be it. Yeah, so as Jake says, I am Alex. And I am rescaling the slides as we speak. So I spend a lot of time offline. I most recently lived in London before I returned to San Francisco. And they did something in the tube for the Olympics, where they said they were going to put Wi-Fi in all of the stations, which technically speaking is accurate. They did happen to put Wi-Fi transmitters and receivers in all of the stations. But one of my most frequent tube journeys was to and from airports, because I spend a lot of time in airplanes. And it always seemed really punitive, because you would get to a station, and for about 20 seconds, while people were loading and unloading, your Wi-Fi radio would be frantically attempting to connect to the thing. And you'd be trying to log in a little bit, so you could just download the next. No, fuck, we're moving. So I got really good at figuring out how to turn off the Wi-Fi in my device, because the network just wasn't reliable, even when I absolutely wanted it. I wanted to see the news. I wanted to get the latest Times, or Guardian, or whatever it is. OK, I've added myself as a lefty. Fine. And the end of this journey, of course, ends you up in an airport, where you are at the whim of whoever happens to have the crappy Wi-Fi in whatever bit of hell you've ended yourself up in, with a captive portal and all the rest. So these are hostile network environments, and they mediate our lives. I think the work environment and the home environment are special cases of how we interact with the stuff around us. And it's in those environments.
The airport, the tube, the ones that are really stressful, where you're trying to get someplace. You've got a thing. Where the heck was I going? What's this thing I'm trying to do right now? I don't remember. I look at my phone, and it's that environment that we, as a technology stack, fail at miserably. Largely because we have built ourselves around the assumption that what is remote is cheap. Because everything is remote. The web is designed around the idea of the URL, the thing that is over there that's cheap to get to. And that little tiny anchor tag makes it look like everything is connected to you all the time. The a href is the most innocent thing you can possibly imagine. And at the same time, the implications of a href are massive. It says nothing about whether or not you can get to the network. And the tube gives us a good window into what it means to know whether or not you're online. So we keep getting requests for a bandwidth property in the browser. Tell me what my bandwidth is. Tell me what my available bandwidth is. There's no way to actually know. So if I show up at a conference and there's a captive portal to go get me my thing, my bandwidth to a thing which does nothing I want might be infinite, whereas my bandwidth to all of the services I actually care about might be negative. So we find ourselves in this situation a lot. And it's very difficult from the browser's perspective to say anything meaningful about your connection. Saying something other than, I can't do anything because there's no radio turned on and it's not connected to anything, is a very difficult call to make. So we've put ourselves in a situation on the web where we've architected an entire stack of stuff around the idea that the thing that's over there is really cheap, or at least as cheap as everything else is. And the thing that's local is unreliable. Caches are things that get evicted. They're things that you can't trust. They're things you can't rely on.
And this isn't how we build anything anywhere else. I would never build a browser on top of this technology stack. And I don't mean that in the sense that there are things that you can't do. I just mean that you'd never put up with the idea that the close-tab icon might be evicted. That's not a thing that you would ever put up with. So let's think through how we got here. So imagine a little website for a little tiny, like, two-man agency, two-person agency, two-alien agency, two beings. And they're trying to put together a website. And it's the simplest thing that could possibly work. And if you deconstruct this layout, it tells you everything you need to know. This is really, really simple. This is static content, effectively. And it probably will bit-rot. They're probably not going to change their team website if they're even moderately successful, because it will outstrip their ability to actually build websites for themselves, because they'll be doing it for clients. So this is going to be effectively unmaintained. And if you look at how it's built, you can get a sense for what it takes to get started on the web. You have some resources. You have some HTML files. And when you really start out, it's copy and paste. If you're going to get advanced for 1995, you'll eventually configure Apache to do server-side includes, and you'll change these to .shtml. Thank you very much. And you'll have your server-side includes for the navigations and the footers at the top and the bottom. And you will have whacked all that in. And so now we're starting to see that there is a repeated bit of content around some of the things that we're addressing with URLs. So I have a blog, right? Please don't read it. We'll save you time and embarrassment. But that blog is much more sophisticated than that little tiny layout for what is effectively a static website, like a brochureware site.
Largely because the location of my domain, infrequently.org, has nothing to do with an individual bit of content. It's not addressing some bit of content. It's addressing a database query, a reverse chronological order query across these tables. And when we ship this stuff down to the client, we don't ship those tables. We explode a bunch of that stuff out. We denormalize it entirely, serialize it through templates, and then spit it all out. But if you go look at any individual article, it corresponds to one item in one of those tables. And then we take the whole machinery again, which is the shell around that little bit of content that we're addressing with the URL, and we explode it all out. And we ship this huge bundle of content down, which effectively wraps the stuff. And if you look at the HTML, that's actually what happens, right? It's just document and then tons and tons of stuff around it. And then there's this thing that we were actually putting in the center of this template, which is what the URL is kind of actually pointing at. When we talk about meaningful URLs, we tend to be meaningful with regards to the thing that's the primary thing you as a user would focus on, right? It's a very subjective view, because it's about what the user is experiencing about that location, about that state. So if we think about it, what is the URL actually talking about? We start to get this idea that what we've done for a couple of decades now is to take server-side graphs of state and pre-explode them for digestion by the client. And that pre-explosion means that HTML is something that's incredibly powerful and effectively display-only. And it gets really confused as soon as you start talking about doing dynamic stuff. Because there's that big model over on the server, which is written in, at least in the case of WordPress, PHP and MySQL, OK?
These are two technologies which, without extraordinary acts of inhuman and inhumane technology, are not going to be running in my browser anytime soon, right? I'm sure there's some VM that would run something like this, right? But today, we're not doing that. And the latency characteristics haven't been set up to do it. So let's imagine the next leap from the little tiny brochureware thing, to the blog, which is a moderately complicated CMS, to the biggest thing that we can think of that's still effectively static content, infrequently updated static content. I submit Wikipedia. So the question becomes, if you're building Wikipedia as an app, or if you're trying to make it possible for me, while sitting on a tube, 20 seconds at a time of connectivity, if I'm lucky, to go and read an article and click on a link to go to the next article, what is required for us to think about making that a meaningful thing to do? It turns out the thing that you really require is the help of the user. Because it's not possible to think through this whole system and say, I'm going to go grab all of it. Like, my blog is relatively small. You could maybe fetch it all. But you couldn't fetch all of Wikipedia. I've tried this. Decompressing this takes about 20 minutes on an i7, a really ballsy laptop. Downloading it took a couple of minutes, too. So think about Gmail. What am I indexing when I go to one of these URLs? Or when I go to a music site, could I possibly download that entire library? My Gmail is really, really full. I don't want the entire contents. In fact, I want some subset of that. And I, as the user, am willing to help you go figure out what that is. And as the media size, the atomic size of the media, gets bigger, and the more of it that I've got, the more that pressure builds up on us. We don't really have a way of thinking about it. So the way I've phrased this now is that there's an application that you'd like to think about, which is what helps the browser render content.
And then there's content. And in the old days, quote, unquote, old days, we tried to conflate the two as much as possible. We would smoosh the shell into the content, with server-side includes and CGI. And as we get more sophisticated and more dynamic, we have to serve the shell and think about the shell differently to the content. They're different entities. So it turns out the offline problem is actually about making applications that can bootstrap themselves. It's not about this, how do I transparently turn the thing that I've got into that other thing over there. It's about the data model and where it's split. Who owns the data and where does it live? Am I doing synchronization over a thing that I've synced part of to the client? Or am I shipping a series of pre-exploded pages? That's kind of the core problem that we have with offline. So a service worker is an attempt to get us through a couple of these problems by saying that we don't know the answer for your application. But what we do know is that you don't have enough control right now. We spent a lot of time looking through it. And basically, it's an in-browser proxy that you can install from the perspective of your origin. And when you do, at some point later, when it finishes installing, you'll get notified and get the ability to proxy, to do whatever you like with the response for any request those documents make. You get to own the request. Cross-domain, same origin, you name it. And it gives you programmatic control over a series of caches as well. So it's relatively simple. You say, I would like to register for one of these things. And it gives you a promise back. Yes, promises are coming to the DOM. You can thank some people later. And you can actually compose stuff from third parties. You can pull in resources to use with these cache objects. And then this addEventListener for fetches actually gives you the ability to go and construct.
You see that e.respondWith(new SameOriginResponse(...)) there toward the bottom? That's actually creating a response out of whole cloth. You're just programmatically saying, this is the content that you should send back to this document. That's the idea. You've never been able to do this before, except on the server. What if we could do this on the client? And why can't we, by the way? You can imagine another couple of ideas. I would like to, when I'm offline, serve some fallback. Or if I've got pictures of cats, and I would like them to be responsive cats, I would like you to, through some magical API, figure out what we've already stored about the client, what we know about the device pixel width, the aspect ratio that we're probably displaying at, the kinds of things that a document could tell us while it's populating the service worker. And then have the service worker respond with the correct one, as opposed to trying to figure it all out preemptively up front and make CSS do the heavy lifting for us. I'm actually getting to the end. So there are a bunch of big implications for this design. What does it mean to have real URLs in these applications when we think of a shell that boots up, that then mediates content? What does the URL mean? It means something that's interpreted by this shell. Can we build transparent caching proxies? The answer is yes. It's the simplest thing to possibly do. You actually just populate a new thing in this cache object every time you get a request. And then suddenly, things mostly work offline. Is that good? Is that bad? We're not sure yet. We don't know. And we don't really have good synchronization technology today. There's some work happening. I know the Meteor guys are spending a lot of time on operational transform. And the Ember guys are looking at it. But synchronization technology is not our lingua franca. This is not how we have thought about building the web.
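To make that concrete, here is a minimal sketch of the pattern being described: register a worker, then answer fetches from a cache and fall through to the network. It uses the API names the design eventually settled on (navigator.serviceWorker.register, event.respondWith, caches.match) rather than the draft spelling on the slides, and the pickResponse helper is purely illustrative, not part of any spec.

```javascript
// Pure routing decision, kept separate from the browser wiring so the
// policy can be reasoned about (and tested) on its own: given what we
// have cached for a URL, decide where the response should come from.
function pickResponse(url, cacheContents) {
  if (Object.prototype.hasOwnProperty.call(cacheContents, url)) {
    return { source: 'cache', body: cacheContents[url] };
  }
  return { source: 'network', body: null };
}

// Browser-only wiring; this branch is skipped anywhere that isn't a
// worker-like context (no `self`/`caches` globals).
if (typeof self !== 'undefined' &&
    typeof self.addEventListener === 'function' &&
    typeof caches !== 'undefined') {
  self.addEventListener('fetch', (event) => {
    // Serve from the cache when we can, hit the network when we can't:
    // the "transparent caching proxy" case from the talk.
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}

// And from the page, registration hands back a promise:
//   navigator.serviceWorker.register('/worker.js').then(() => { /* ... */ });
```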
And it's the sort of thing that we're going to have to get good at if we really bite off this idea of building applications that are shells that boot up to mediate content, and not simply documents which happen to be fully formed things sprung from the head of Zeus. So there's a bunch of open questions. And I guess it's time for us to talk about it. And that's us out of time. Thank you very much. Goodbye. No, the underground example is a very good one. When I was building Lanyrd, the underground was the most convenient place to go and test a real-world offline situation. Because turning things onto airplane mode is not quite sufficient. But I remember I was down there with five mobile phones, and one of the staff came up to me, and he just stood there staring at me. And I looked behind me, and I was standing in front of a poster, which essentially said, if you see someone with more than one mobile phone, they're probably a terrorist, and you should deal with them. So first up, before we go to the first question, did any of the panelists have anything to say about what Alex was talking about there in the intro? I had just one comment, which is I do wonder how many people working on the service worker, navigation controller, whatever the eventual name is, how many are the same people who built Google Gears, which I think had almost exactly the same mechanism for dealing with offline, just insert a proxy into the browser? Is there a connection there? So it's a good question. So Michael Nordman was on the Gears team, and he's one of the people who's helping us implement the service worker. We've tried, at least twice before, from the Chrome perspective, before there was Chrome, it was Gears, and then inside of Chrome, with the application cache, to give you tools to make things work offline. And we have failed miserably twice. And Gears wasn't what you thought it was.
So if you go back and you pull the documentation out of the Internet Archive, what you'll find is that Gears had a thing called LocalServer, wasn't it? Yeah, LocalServer. And the LocalServer wasn't a server that you could script. You couldn't ever give it a bit of code and say, please go run this whenever I request a resource. That never happened in Gears. You could give it a manifest, a declarative format, which is exactly what we did in the application cache, too. And in both cases, they were, I think, like Jake put it at Edge in London, like playing a game where you can see the effects of what you're doing a couple of turns later, but you can't actually control any of it. And as a result, we don't have a lot of data about what you actually need. We've never given you the primitives that you need to go build the stuff you want and evolve and iterate on that. And I think there just hasn't been a lot of exploration, because it takes a lot to get going. And there hasn't been the ability to explore these patterns. I think Gears was essentially the precursor to AppCache rather than the precursor to this thing we have now. I think we should go to the first question. And if I'm reading the right document, that's from some chap called Andrew Betts. Never heard of him. Where are you, Andrew? Can someone get Andrew a microphone? There he is. I've got the phone line. OK. Currently, offline data may be evicted at any time by the user agent. Can we have storage persistence guarantees, or at least provide hints to the user agent to tell it to prioritize app logic over content data? Yes. This is a very interesting question, because the whole thing about using it at an airport, I mean, if we make a guarantee that, yes, as a user, your data of when your flight is going to be will be available to you offline, the spec does not currently let us make that guarantee; it can chuck all of the data under a bus whenever it feels like it. This is really important for us as a business.
To give a bit of background, I work with field engineers, and they work on oil refineries. And they write up their invoices and their equipment tracking. We can't have their data just disappearing on them as they're writing out these new invoices. We don't want to lose that data. So that's a really important problem for us that we need to solve. Is this a problem that you actually encountered at the FT, with data disappearing like that, or is it just a theoretical problem? It's a theoretical problem at this point for us. We have real experience with, in particular, and I can't say it's on the record, damn it, iOS 5. We strongly suspect that it occasionally wiped our apps in certain situations, but we're not sure. Can't prove it. But we have strong suspicions. I'd like to add that I think the most important aspect of this is how the user understands where the data is and when the browser may evict it. We can't tell the browser that it can never evict something. Only users should be able to say that. We have ways of installing an app in an implicit way that is invisible to the user. And the data should be tied to that. So the user might say, this is an app I want to keep around, and therefore they're opting in to keep that data. And then it would be safe to not evict. So I think a bigger problem here is not so much the APIs we have, but the way that the browser presents that to the user. So to say that the user has to be responsible for the removing of data, they have to be responsible for the adding of the data as well. Yeah, and if the user hasn't said, I want this app to stay available, and added it to stay available, then it's safe to evict. And so they need to opt in to this. I know, Alex, I'm looking at the wrong one when I say that. So we've been looking into this for Firefox OS as well, because the problem comes up there as well for apps. And it's really hard, because once you get to persistence, it mostly becomes a UX problem.
So we have this sort of concept of temporary storage, which we can just grant to everyone. We'll just grant you a slice, and you can use it. And then the other concept would be persistent storage. But that is much harder, because the user sort of has to be aware of it. But then most users are not really aware of storage and how that is allocated through apps and how that works across their device. It's not a problem on a phone, though, right? Because if my phone tells me I'm low on space, I can go to the apps menu, and it says, oh, this game you don't care about is using 200 megabytes, and this thing you really care about is using less than that. So I can throw one of them out the window. Right, yeah. So I think we have to sort of evolve it into that direction. And yeah, so it has to be. But the thing is, do you ask the user right up front? Or which apps do you grant space to? Do all apps have to ask for persistent storage or not? Whenever you sort of hit the UX side of things with standards, it becomes a lot harder to get things done. Is this something, is there an API or a standard that this kind of work fits in? Is it the Quota API? Is that the plan? Yeah, so there is a Quota API, and it does have this idea of persistent storage and ephemeral storage. And right now, persistent storage is only available in Chrome for certain file system calls. You can't in a granular way opt some bit of your cookies or your local storage or your IndexedDB into persistent storage and then say, this other stuff I don't really need. And there's no clear way in the UX to communicate to users about what they're doing. It feels to me, like Anne was saying, there's a very clear missing moment of intent where you give users the power to say, no, I really want to keep this thing. I like this a lot. I would like to get back to this easily. I would like to bless it with the ability to keep all of its data around. And we don't have that moment yet.
Like, we are missing what is effectively a very small manifest and then some UI to let users say, yeah, I like this. I want to keep this. And once users have said they like it, I don't see why we should ever be second-guessing them. I mean, I trust myself to be mediating the content of my phone in a lot of cases, right? And in many cases, that means throwing out an application. And I like doing that, not like getting so annoyed by an application that I uninstall it. But I like the ability to do that and reason about that at that level. Don't we sort of have this already with web apps on iOS, where you add the web app to your home screen? So that's kind of similar to saying, I want to keep this app around. And in Firefox as well, I think it asks you, when you load a web page that's got an app cache, whether you want it to stay around offline. And you can say no, and the website will continue being a website. I mean, isn't that equivalent? Yeah, I think that's a great point. And I think that's kind of the direction that we all need to figure out a way to get to. And someone asked about the Quota API. The Quota API is probably something that we definitely need to integrate into the service worker design. We've talked to the folks who are in charge of the Quota API about this, because it should also be the case that when applications get under pressure, you should be able to give an application the ability to say, hey, I'm about to ask you for this much storage, you need to figure out where it's going to come from. And if you don't do that, maybe we evict you entirely. But one of the nice bits about the service worker design versus the HTTP cache today is that it's not an HTTP cache, right? Once resources are there, unless you pull them out, they're there for the lifetime of the thing. It'll get evicted as a bundle, but it's up to you as an application to make smart decisions about what you prioritize. Yeah, and it's a little bit harder.
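The "app logic over content data" prioritisation from Andrew's question, with the app rather than the browser deciding what gets dropped, could be sketched like this. The PrioritisedCache class and its numbers are entirely hypothetical, just an illustration of the kind of policy an application could run on top of programmatic caches.

```javascript
// Hypothetical app-side eviction policy: the shell gets a high priority,
// bulk content a low one, and when the byte budget is exceeded we drop
// the lowest-priority entries first.
class PrioritisedCache {
  constructor(maxBytes) {
    this.maxBytes = maxBytes;
    this.used = 0;
    this.entries = new Map(); // url -> { bytes, priority }
  }

  put(url, bytes, priority) {
    // Replacing an entry releases its old allocation first.
    if (this.entries.has(url)) this.used -= this.entries.get(url).bytes;
    this.entries.set(url, { bytes, priority });
    this.used += bytes;
    this.evictIfNeeded();
  }

  evictIfNeeded() {
    // Scan for the lowest-priority entry and evict it, repeating until
    // we fit the budget again.
    while (this.used > this.maxBytes && this.entries.size > 0) {
      let victimUrl = null;
      let victimMeta = null;
      for (const [url, meta] of this.entries) {
        if (victimMeta === null || meta.priority < victimMeta.priority) {
          victimUrl = url;
          victimMeta = meta;
        }
      }
      this.entries.delete(victimUrl);
      this.used -= victimMeta.bytes;
    }
  }

  has(url) {
    return this.entries.has(url);
  }
}
```

With a budget of 100 bytes, caching a 40-byte shell at priority 10 and two 40-byte articles at priority 1 evicts an article, never the shell.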
Like, you can put it on the home screen, but that doesn't say how much storage you grant it. Like, it might need two gigabytes. It might need four. It might need most of your hard drive. And is the user comfortable with that? Does the user even know what that means? What if you, yeah, like these... One of the problems we had when we were developing the HTML5 app for the Economist was, in IE 10, you can actually configure exactly how much space you give to your web apps. You can make that one megabyte. So for us, that's far from enough. So we have to detect how much space we have. So I think too much control can also be a bad thing, quite limiting. So if you did this installable app thing, is that going to replace the number of toolbars we get for each permission? What's that going to replace it with? Are we going to get the sort of Android solution where up front you get the list of permissions and you just ignore it and click next? Is that the kind of model that we see working on the web? If I had my druthers, it would be a system where the user can always say no, right? And to the extent that they've said yes, they've said yes, but they can change their mind later. So I mean, the Android model of a bundle of permissions which can't be split apart from each other sort of implies APIs that aren't allowed to fail, right? There's only a return true. There's never some sort of a failure case. And I think the web has gotten a long way on the basis of APIs that can fail. Like, that's how we adapt, right? It's the ability for the environment to just not be there, or for feature detection to help us understand and live within the contours of whatever the runtime is. So I think it's about time we moved on to the next question, and this one's from Natasha Rooney. So if we can get Natasha a microphone. Hi, thank you very much.
So should network information APIs be extended to include triggers for the user having a less than optimal connection speed, and not just offline or online triggers? Could this help developers code for content caching? So this is a question that came up at the last EdgeConf, and this was voted up quite high in the moderation tool. But I also had some people coming up to me and saying this debate just really went on too long and was really boring at the last EdgeConf, this kind of debate about whether we should be offering developers information about the connection speed. So as a compromise, I'd like each of the panelists to answer the question making only a sound. So in a sound, should the developer be given details on the current connection speed? Meh. Doesn't bother me. That's not a sound. You can leave now. Error callback. That's not a sound. Eh? Eh. OK. That's a good percentage of the panelists, kind of. But we should have a serious discussion about the, we've got these online offline event triggers. And so how should we be using them to make something work offline? Should we build assuming the network is there and then fall back using this method? Or should we build assuming offline is there? And you had an example, Alex, with using offline detection. The online thing is a lie, right? I think that's really what it boils down to. It's a dirty, nasty, terrible lie. And the only thing that you'll ever be able to do that's meaningful is to ping your service, right? You'll be able to send a request out to your service. And if you get a response back under deadline, let's say you've got a performance guarantee that you need to meet, and you get a response back under deadline from your service, great. It's over HTTPS. It's from your service. Things look legit. Any other case is failure. But it's just a giant panoply of potential failure cases that would all be online under some strict definition of, I connected to the radio and I got a TCP socket warmed up. Great, Calvin.
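The ping-under-deadline check Alex describes can be sketched as a race between a request to your own service and a timer. The fetch function is injected here so the logic can be exercised without a network; in a browser you would pass in window.fetch. All names are illustrative.

```javascript
// The only meaningful "am I online?" check: ping your own service and
// treat anything that misses the deadline as unreachable. Note that
// res.ok alone is weak evidence against a captive portal, which will
// happily return a 200 of its own; in practice you would also verify
// the body, or rely on HTTPS so the portal's interception fails outright.
function reachable(fetchFn, url, deadlineMs) {
  const deadline = new Promise((resolve) => {
    setTimeout(() => resolve(false), deadlineMs);
  });
  const ping = fetchFn(url)
    .then((res) => res.ok === true)
    .catch(() => false); // network errors count as unreachable
  return Promise.race([deadline, ping]);
}
```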
Matt, did you use the .onLine property in anything you were doing? Or did you want that kind of extra detection around the network? We've made use of the online and offline. We don't have so many requirements for the gray area of not having a decent amount of connection. So for us, we tell the user whether they can sync or not based on whether they have this online or offline. But as you said, in a lot of cases, online is a bit of a lie. But for our users, when they have a connection, they have a connection. It's often that they've come back off of a rig and they're in a hotel. And they're making use of the connection there. So for us, just having the yes and no is enough. I think we'll take a question from the audience from Shabas. Just to add, I think it will also get easier once browsers and OSs get more capable at detecting captive portals. And I know at Mozilla we're going to add this to Firefox, in case the OS falls through, figuring out if you're in a captive portal and then telling the web pages they're offline, rather than giving them a false impression and stuff. When caching app logic rather than content, is there a way to cache pre-JITted code, A, so that you don't need to JIT for faster startup, and B, as JITs get more intelligent, so that you can reuse what the JIT learned about how your app behaves in the last five hours of usage, both for offline and for online scenarios, for faster startup? Does everyone want to take that? No. That actually was an issue that came up, I know, in the asm.js project. There is a very bad problem there with the time it takes to compile, especially large demos; even the actual function parsing and compiling is all synchronous and blocking. So that's the issue that came up there, not necessarily being offline or reuse, but just the time it takes. And could it be asynchronous? And so I know from that that right now, the only thing is parsing it from the raw JavaScript and compiling at the time.
It would be, I think, a really important thing to look into, if they're going to look into an asynchronous API, to also have a way to then reuse that result. So back onto the kind of network state stuff. I mean, if I'm asking the browser, am I online? It's going to say yes, and what we're going to end up with is it could just have one bar or a kind of intermittent trickle of data, which eventually is just going to take five minutes to try getting something from the network and then ultimately fail. I know you're saying that's something that is solvable as a problem? That might be harder, in the case where the captive portal gives you a timeout, but you would have to do periodic checks or something like that. I think it might be that way forever, until captive portals actually start returning different status codes. There is some kind of RFC that makes captive portals actually part of the network stack instead of just a hack. You would still have some logic in your application that detects it as well. It's hard to get away from, given the legacy. I think a more analog kind of quality of connection over the past X period of time will be more useful. But really, I think most of the time, as long as you're implementing error callbacks, you can always try and fail gracefully, or offer something to the user first and then fall back to something else. I think that's a much better pattern. I think we've got a follow-up question from Natasha. Yeah, sorry. So just to give a proper use case for the reason why I asked that particular question, which might be able to help, and actually subscription models are probably the best one to go on, which I know is important for any publications like the FT and other such newspapers. What if a user who is on a subscription method, and that meant that they capped out, say they read 10 articles. They read that on the tube, and I'm also a tube user, so they will read that on the tube. They consume that, and then they violate their subscription; they've gone over.
So they shouldn't be able to read any more. So some trigger has to happen at some point for it to go back to the server to say this user has gone over their limit, don't give them any more. So my real aim is trying to understand how that can happen, because that has massive implications for some markets. I'm talking videos as well as publications. So this is offering the user a different experience if they are on some kind of metered connection where it's going to disappear after a certain point or potentially ruin a financial situation. Does the phone even know if the user is in that situation? A lot of times I've found that it's not necessarily the technology that has to care about that. There's usually a business process around this. So for instance, if you're in a banking system, you're going to a cash point, right? And you withdraw some cash. If your spouse has gone to another cash point and withdrawn that amount as well, and that would take you over your overdraft (they've got this concept of overdraft in banking), then they've got a set way of handling that. And usually in these business scenarios, there are business processes for handling those cases. So for instance, in the case of your videos: I've watched a couple of videos offline, someone else has used my account to watch a few more offline, then there must be some kind of consolidation, some kind of process to pick up those pieces and then trace that back to the billing. Are we collectively happy with a mobile site or app sending us into our overdraft? I think we're collectively unhappy about that even being a possibility. Like the whole bandwidth cap thing is bullshit and should be gone at some point. I've been in situations where my phone knows that I'm roaming, but I've actually managed to get a deal with the provider where I essentially have the same quality and amount of data that I have at home.
I guess if an app at this point was to serve me lower quality content, I'd be a bit annoyed about that. I mean, being treated as a second-class citizen when I'm not in that situation. Do we need an option, and is this going to be a site-by-site option, to say, can you be easy on the data? Or should sites just be doing easy-on-data by default? I think there was a billing issue, not so much about the data that Natasha was bringing up, but about the fact that there's some non-packet-based metric for use, right? So the question I always ask is, what is a Wikipedia article? Is it the text? Is it the history? Is it all the images that are part of it? Is it the set of things that it's linked to? Is it at the URL? It's actually probably a bundle of actual binary resources and then some ephemeral metadata around it. And so you're talking about consuming one of those at a time, right? So if I'm on the FT or the New York Times or whatever and I'm clicking through, how do I get to my limit, and then how do I enforce that? Not to go design a solution for enforcement for anybody, but you can imagine a relatively straightforward sort of soft landing for users, where there's a bit of code on the client that's watching usage and then attempts to enforce some soft limit, a cap with one or two extra. And then at some point later, if you haven't synced in a number of days, then maybe the content becomes inaccessible. I mean, those are the sorts of solutions that folks who have to implement DRM are doing today for things like my synced Spotify offline data, right? You have to show that you're willing to be part of the ecosystem for some period of time, otherwise the device itself decides to stop playing. I think we'll take another question from the audience. Hendrik, whereabouts are you? Can we get that man on microphone? Just throw it. That's a good question. Can we get him on microphone? So this is kind of back at the permission models.
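Alex's soft-landing idea (a client-side counter, a small grace allowance, then a lock until the next successful sync) could look something like this. All the names, states, and thresholds here are hypothetical illustration, not any publisher's actual scheme.

```javascript
// Client-side soft cap: allow a small overrun while offline, then lock
// content until the client has synced with the billing server again.
function articleAccess({ readCount, limit, grace, daysSinceSync, maxOfflineDays }) {
  if (daysSinceSync > maxOfflineDays) return 'locked-until-sync';
  if (readCount < limit) return 'ok';
  if (readCount < limit + grace) return 'grace'; // the "one or two extra" soft landing
  return 'over-limit';
}
```

The hard enforcement still happens server-side at the next sync; the client-side check only shapes the offline experience.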
It seems like, at the point you're going offline, you're making a fundamentally different type of app than a website. There's a distinction between a site and an app at that point. It's very clearly an app, and you're talking about requesting permission for storage, but that's a perfect time to ask for permission for other things as well. So there's this whole installable model that you're seeing a little bit in Android and Chrome installed apps as well. But is there a standards effort to really unify that experience? And for example, if I'm going to do video conferencing, why shouldn't I be able to ask for that upfront as a mandatory requirement? And if they say no, then sorry, I can't install the app. I mean, whatever, that's just an example. But unifying that, standardizing that process, and seeing that as the permission point where you ask for everything that you need. I feel like I could sink the rest of the panel into a discussion of: do apps actually exist as a term, or is it just a bit of marketing fluff like Web 2.0? That's not better. We're not going to go there. I feel Alex already answered this question earlier. Like he said, on the web, you want to fail gracefully for each of these things. So you don't want to bundle, and then get the user to give up their location data, all their storage, and all those things. You want to let the user be in control, which sort of argues in favour of doing a per-feature grant. I don't think so. So just to answer directly, I don't think that makes you a second-class citizen compared to installed applications, for a couple of reasons. One, it leaves it up to the system to mediate the length and breadth of that grant. So you can imagine that iOS versus Android, running the same application, as long as the API is the same, can make different choices about how often to prompt someone about a thing.
Secondly, you can imagine this being a consent-and-review kind of a system, where users are always able to see the list of permissions that are currently granted to a thing and maybe choose to give it forever or revoke it entirely, as long as the API allows revocation to happen. You say you demand this thing? Well, I think it's not reasonable to demand a thing at install time. I think it's reasonable to try to explain that behavior in context of use. I click on a link to go do a thing and you say no? OK, great. I did a thing, and then we had a conversation about it. And then you can tell me why you need me to do that thing for you. It's not foolproof. And it may train users the wrong way. But it's at least a start at having a conversation about why you want permissions. I can maybe see in a way how it's second class from a developer's perspective. But I think from a user's perspective, it's a way better deal. Can we get the microphone to Carl Simpson for the next question? First, Matt, you, with the Financial Times taking that stuff offline, some of that stuff was quite heavy due to imagery. Did you deal with that in any particular way? Did you ask the user within your app before doing that? Yeah. So when you first loaded up the app, originally it was designed for iOS, and the limitation in iOS is you can have five megabytes of Web SQL storage without any permission. So we first give them five megabytes of data, which is the articles, basically. And then once they've used the app a bit, or pinned it to the home screen and are actually using it properly, we then ask for more. So I think you can definitely design your applications so that you have a certain number of mandatory requirements and then add things on. Or, I mean, it starts off as a website. You can always fall back to being a website. All this stuff is extra. I just caught myself on the screen looking like I'm advertising the thing I'm drinking.
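Alex's "explain the permission in context of use" point can be sketched like this: request geolocation only when the user taps the feature that needs it, and degrade gracefully on refusal. The button and render function here are hypothetical; the point is that the prompt arrives attached to an action the user just took.

```javascript
// Pure helper: decide the UI mode from the permission outcome.
function geoOutcome(granted, position) {
  return granted
    ? { mode: 'located', position }
    : { mode: 'manual-entry', position: null }; // user said no: fall back, don't fail
}

// Browser wiring (sketch; findNearbyButton and render are assumptions):
// findNearbyButton.addEventListener('click', () => {
//   navigator.geolocation.getCurrentPosition(
//     (pos) => render(geoOutcome(true, pos)),  // granted, in context of use
//     () => render(geoOutcome(false, null)));  // denied: offer manual entry
// });
```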
So I'm going to balance it out by saying it's actually quite disgusting. We've got a question now from Kyle. Kyle's over there. So usually I am the one who's screaming for more APIs, more functionality to be given to us. But I'm going to flip the table here and say, going with the theme that browsers are generally better at a lot of these tasks: as a developer of apps and sites, one thing I'd really rather not deal with is the idea of online and offline. So here's one idea that I just want to bounce off you, to see if you think it would be possible. We've talked a lot about persisting the actual app, the files that make the app run. But what about all the network requests? Could the browser proxy the requests that I'm making for me? And then when I come back online, prompt me and say, do you want to still send these tweets? Do you want to still send these emails? Because you requested that a few minutes ago. That way I, as an app developer, don't have to think about those details. I just write the app assuming it's online and let the browser take care of it. I think there are actually two areas there that need to be addressed differently, which is things you have and things you do. Keeping the data local-first and allowing synchronization to happen separately from your app: for example, CouchDB has PouchDB, a layer implemented inside the browser, and you deal with all your local data, and the synchronization happens in a separate process. Synchronizing API calls like tweets and things, I don't think it makes sense to do separately or to queue up in the background, because the user needs to know that that hasn't happened yet. You can't just make the tweet and say, here's my API call, I hope the browser did it sometime later. Maybe you have to tell the user, via the browser, that this hasn't sent yet, this hasn't completed. They have to know what's going on. There's a related issue, which is: what does it mean to get success back from an API?
Like, if I get a 200 response and the returned content is an error, how do I cut the browser into the conversation so that it retries? So I was under the assumption that a browser knows when it doesn't have connectivity. It tries to talk to the radio and the radio doesn't give it any sort of response back. But also there are plenty of UIs where we do this already. Like when I'm downloading files, there are UIs that tell me the status of my downloads and let me unpause them. So I don't understand why the same sort of API couldn't present to me, the user, the requests that I've made, and let me say: yes, these are OK to make now that you're back online; no, that tweet is not one I want to send out, or whatever. It sounds like a difference between adaptive requests and predictive requests in terms of what we expect to get from the network. Is that something that comes into this? Are we going to have APIs around that? I think there's a fundamental difference between the download case that you just outlined and these other cases, which is that in the download case, we're talking in terms of a single resource that the browser knows everything about. The browser knows the entire protocol for downloading a file. And in fact, it's the thing giving you the UI for downloading the file. This is a situation in which we're asking the browser to coordinate with the application that it's hosting, to mediate that conversation about content. It's running the application and then saying: hey, application, you're doing stuff. The user intends to do a thing. You're going to tell me what that thing is and then tell me how that relates to the rest of the world. And we don't have any higher-level semantic equivalence. There's no way to tell the browser, oh, this thing is sending a tweet. Browsers don't know what tweets are. These are just different data types.
One of them is a composite data type and one of them is a primitive, and browsers are OK at primitives they understand. But when it comes to composites, we have to give applications control. I think we're going to move on to the next topic. The next question is from Nick Molnar, who I can see hiding over there. OK, microphone over here. Oh, that's good. Someone was just going to the toilet. I thought they were asking a question. Never mind. Should ask them a question anyway. It's a bad omen. It'll serve them right for getting up. All right. So users only expect offline behavior from apps downloaded from their app store. Will initiatives like iOS's startup image and touch icon ever be expanded to include a full metadata set suitable for web apps to finally be included in app stores? I think this is a very important question, because even if we solved all the API issues, we still have that user expectation: they do not think they can go into the browser, type in a URL, and expect it to work when they know they have no connectivity. So for the guys who've been building stuff using the current set of APIs: what did you do to let the user know that, yeah, this is going to work? Well, I first started looking at these offline apps in Firefox 3. So my choice then was, I'm just not going to use this, because there's no solution. And the point of that was: it's been a long time; we should have solved this by now. We do have a mechanism for users to keep access to an app or a website, and to indicate to the browser that it's important to them and to remember to come back to it. It's the bookmark. And we have this and it's barely used, and it makes a lot of sense to connect that into the permissions, into quotas, into all these things that we need to remember about the website. So Craig, you've built a kind of specialized app for a particular set of people. How did you communicate to them: although this is a browser, this is going to work? This is going to work for you?
I mean, we go about this in different approaches depending on the different requirements for hardware. So one of the nice things about iOS is you can add apps to your home screen, and that becomes kind of difficult in some of the other browsers. I think you can do add-to-home-screen in other browsers as well. It's, yeah, it's a bit shaky. So yeah, I mean, it would be nice if there were some way of saying, hey, I want to be able to install this. I know Firefox has made some progress in that direction with the manifest file, so you can specify things that your application needs when you install. I think having some standardization around those kinds of processes is going to help companies like us make that work. So Matt, your site was open to all users. You had to explain to them that this was possible. How did you do that? Well, iOS, as you say, is easy. The rest of them, we just don't. And unfortunately, this is one of the reasons why we don't see that many users on Chrome for Android, even though the app works absolutely brilliantly on it. And you said there were two ways of making web apps work offline, Google Gears and AppCache, but the Chrome Web Store also offers an alternative approach, and you can use that to get your app icon onto the start-up menu. And I think it'd be really nice to make that more open and accessible to other places. So, is there anything in the spec that's going to make this better? Or how can we communicate this better to users, that the web will work offline? So we've been looking into the manifest thing and making that better. But I think a lot of the standards evolve in small incremental steps. So first, we need to solve this offline thing and get it right. And then people can start building apps. And then we can start thinking about how to improve bookmarks to make that work better.
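The Firefox manifest file mentioned here is the Open Web App manifest (manifest.webapp), a JSON file declaring what the app needs at install time. A minimal sketch, with every value hypothetical:

```json
{
  "name": "Field Service App",
  "description": "Works offline on the rig, syncs back at the hotel",
  "launch_path": "/index.html",
  "icons": { "128": "/img/icon-128.png" },
  "developer": { "name": "Example Co", "url": "https://example.com" },
  "permissions": {
    "geolocation": { "description": "Locate the nearest depot" }
  }
}
```

Note the per-permission description field: it supports exactly the "explain why you want it" model discussed earlier, rather than a bare demand.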
And then long-term, hopefully, we can obsolete the concept of app stores entirely, because you can just browse the web and bookmark things to your home screen and not need those. Can we get this right before we have the mechanism for users to realize it works? Because we don't know we've got it right until we've got users using it, and we see the behavior patterns and we see the bits of the APIs that aren't working. Chicken-and-egg, I guess. You have to grow into getting there. You can't really just design the whole thing up front and then present it and it just works. When you say you're getting offline working right, you mean the thing formerly called navigation controller, right? Yeah. Well, I'm hoping service worker is the answer. We'll have to find out. We have to test. We have to incrementally evolve. We're at the prototyping stage now, so it's early days. But from the user's perspective, as much as AppCache is a bit flawed, it does kind of work. I mean, it will give you an application that works offline. Can we not just fill in that little step to get the icon on there, and then it'll be compatible with both? The problem with AppCache is that we've got a sort of universal knock-back from developers. So people don't really want to experiment with it and roll it out on their sites. And if there's no adoption, then we can't really further experiment with other things. So Alex, it seems like the sentiment of the rest of the panel is that Chrome for Android is doing this very badly. Install-to-home-screen is great on iOS. Where is it? That's a great question, John. Are you prepared to answer this, John? We'll get a microphone to John anyway, and he can at least tell us that he's not allowed to answer. If I had my druthers, we would absolutely have that. I mean, I think that's a reasonable thing to want to have.
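For context, the AppCache that "does kind of work" needs nothing more than a plain-text manifest like the sketch below (file names hypothetical). Much of the developer knock-back comes from its update model: the cache only refreshes when the manifest file itself changes byte-for-byte, hence the version comment.

```
CACHE MANIFEST
# v3 -- bump this comment to invalidate the whole cache

CACHE:
/index.html
/app.js
/styles.css

NETWORK:
*
```

The page opts in with `<html manifest="app.appcache">`, and everything under CACHE is served locally thereafter, online or not.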
I mean, my personal view, and I'm not speaking for Google here, is that the manifest destiny of web apps is that they can do everything that apps can do. Everything that your system applications can do, you should be able to write as a web page first, and then you should be able to transition to a world that's much more powerful. And that just depends on getting users to a point where they believe that that's a safe thing to do. And if that's saying no to some permissions to get others, that sounds good. Or asking users to install or bless a thing, that seems reasonable. But yeah, I mean. John, do you have a comment or a no comment? I'm with Alex. I think we have a lot more to do in this space to make web apps more capable. We're pushing for it internally. We'll see what happens. Well, we'll take one more question from the audience. One more thing to add on that. I mean, to be fair to Chrome for Android, the navigation controller, sorry, the installable apps thing, is only part of that. And we found that, actually, Chrome for Android has the best support for things like IndexedDB, which we don't get on other devices like Safari on iOS. So we need to have that problem solved consistently across browsers, because at the moment we have to rely on things like shims to fall back to Web SQL, and with iOS 7 and the 5-meg cap, we get the user prompt for that now. Yeah, it's really important to get that piece of the puzzle solved. Thanks for that. We can declare one-all between iOS and Android now. That's brilliant. We're going to take one question from the audience before we move topic: Eric Shepard, we'll get you a microphone over here. This is sort of a follow-up, the flip side of the payment question from earlier. There are a whole host of other things that you may want to know about that, if people start using the web offline in large numbers, we're going to lose.
Third-party networks want to know things for tracking purposes, for remarketing purposes. We want to know what people are looking at, so that we can give more and more customized recommendations in e-commerce, or article recommendations in publishing. Those are things we want to know about too. Are any of the ad networks, or anyone, looking into possible APIs for how to deal with this kind of stuff? I haven't been talking to them, and probably should be, with regards to the service worker design. But I can only talk about the thing I'm working on, which is that. A thing I put on the screen briefly was the importScripts API, which allows a worker (these things are just workers), a worker of any kind, to just go out and import some other script, like a require call in AMD, something like that. So you can compose behavior from multiple origins, so that if you're working with a third party, you could include their offline-handling code too. And because the service worker sees requests for all origins, including those third-party ones, you can write your handlers in such a way that well-behaved service worker scripts will only pay attention to the requests for their origin and do the right thing by them. And you can import scripts from other origins to do things like handling those kinds of tracking and counting when offline. And they get the ability to run when you're online too, so they'll be able to do their own synchronization. It does require the kind of global coordination that we love so much about JavaScript, until we get ES6 modules, but I think that's going to be the state of the art for a while. It's possible. It will require care. OK. We're going to move on to the next question. And the next question is asked by Jake Archibald. Not that guy again. OK. Well, this is a bit weird, so I probably asked the question. Oh, John's back. Hello. Is this on? Yes.
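The "well-behaved scripts only touch their own origin" convention Alex describes could be as simple as an origin check at the top of each imported fetch handler. A sketch, with the event shape assumed from the early ServiceWorker drafts:

```javascript
// Pure helper: should this handler deal with the request at all?
function sameOrigin(requestUrl, scopeOrigin) {
  return new URL(requestUrl).origin === scopeOrigin;
}

// Inside a service worker (sketch):
// importScripts('https://thirdparty.example/offline-tracking.js'); // compose behavior
// self.addEventListener('fetch', (event) => {
//   if (!sameOrigin(event.request.url, self.location.origin)) return; // not ours
//   // ...respond from cache, queue analytics, etc.
// });
```

The imported third-party script would apply the same check against its own origin, so each party's handler ignores the other's requests.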
We are actually working on adding another home-screen feature to Chrome for Android; it's in public commits, so I can talk about it. Excellent. Coming soon, hopefully. I think we added an about flag recently. It's in master, pre-launch, probably. Declassified. Excellent. OK. Oh, and I think Vivian was asking: what did you want? I'm interested in knowing if developers want to have a similar API to iOS that allows you to prompt the user to install to the home screen. Yes. OK. I'll take that as the whole panel's opinion. So, on to the question from this Jake Archibald guy. I'm going to try and read it in a different accent, so it doesn't seem super weird. Maybe that'll make it seem even weirder. I'll sit over here, slowly. So I'll do an American accent. I'll try and do the Oregon accent. So currently, if a post fails, I can stick it in IndexedDB and post it later. But the user has to visit the site later for that data to actually get sent. I mean, is there a better way to do that? That's a very good question, Jake. Yeah, we're going to put that to the whole panel right now. So yeah, we don't have anything to do synchronization. Is there a sync API coming? Can we do this when the user's not actually sitting on the site? So I think that there are background APIs being added in Firefox OS for things like push notifications, and there is a definite possibility that the machinery for those, which run in the background even when you don't have the tab open, could also be built upon to have some kind of synchronization process or cron jobs for your websites. So Craig, you had this specific problem, right? Going back to the question of having it running in the background: it's kind of tricky because, firstly, you get situations like, does the user really want to be syncing when they don't know it? I mean, we could be eating up all of their bandwidth while roaming, for instance.
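Until a real sync API exists, the "stick it in IndexedDB and post it later" approach amounts to an outbox. Here is an in-memory sketch (a real one would persist the pending queue in IndexedDB); as Craig argues above, the unsent count stays visible so the user knows what hasn't gone out.

```javascript
// Outbox pattern: queue posts that failed, retry on demand, and keep the
// count of unsent items available so the UI can surface them to the user.
function makeOutbox() {
  const pending = [];
  return {
    add(post) { pending.push(post); },
    pendingCount() { return pending.length; },
    async flush(sendFn) {
      const stillFailing = [];
      for (const post of pending) {
        try { await sendFn(post); } catch (err) { stillFailing.push(post); }
      }
      pending.length = 0;
      pending.push(...stillFailing);
      return pending.length === 0; // true when everything went out
    },
  };
}
```

The flush would be triggered on the online event, on the next page visit, or, in the future being discussed, by the browser waking a worker.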
So in some cases, it makes a lot more sense for it to be an explicit question. But in other cases, and particularly for business data, you have this process of making business decisions when merge conflicts happen. So for us, we do something that's kind of similar to a Git rebase when we're synchronizing. We can actually prompt the user and say, hey, we've detected that these two events happened, and they conflict with each other, and you need to take some action. If that's running in the background, that becomes kind of tricky. So, Alex, Arnavon, you guys are from the future. What's in the future for us? You have to do this in his accent. I'm not sure I can do that. I think push notifications will have to be opt-in. I think background updates for apps, like for your caches and stuff, we can do if the user is connected to wireless. I guess in that case, we can sort of assume there won't be a cap, although that's not universally true: in Russia and places like that, they still have quite low caps, even on those kinds of connections. So there might have to be a preference of sorts. Like, do you want to allow apps to update in the background? And it could even be a preference that the user wants their apps always up to date, even on 3G or 4G. And the browser will just, every now and then, wake up the worker and give it the event to start updating. And then the worker does its updates and shuts down again. And then when the user next visits the site, they'll get all the latest stuff. So we're going to go to the audience for the next question. Jeffrey Burtoft, whereabouts are you? Down here. Yeah, I guess this ties in a little bit with the last question. I work with a lot of developers on the Microsoft platform who are developing both for our store apps, which are HTML5-based, and for the web.
And trying to bring those closer together, one of the questions I hear a lot about web apps is about the safety of the data that is stored, which I think tends to push us more towards developing in the native space. What are we doing to make sure that that data can be trusted, can be safely encrypted, and meets the needs of different types of developers? So if we're storing something like credit card data or people's contact information, what can we do to make sure that other apps can't sneak in there and get that data? Is that something we do already? I think there is a certain amount of that. I mean, you have to be on the same domain to be able to access things like localStorage. So there's a certain amount of security in place. There's only so much you can do, I think, with these kinds of devices. I'm not sure. Is the question about what the origin can access, or that if someone has physical access to the device, they can access the localStorage data because it's not encrypted? In that case, it depends on the security characteristics provided by the OS. And I think in a lot of cases, browsers default to what the OS provides and don't add additional layers on top, because if the OS is not secured, then you're compromised anyway. It depends on the threat model, obviously. But there is an API called the Web Crypto API, which Microsoft is heavily involved in, working with Netflix and eBay and Google and a bunch of other people. And one of the key aspects of the Web Crypto API is that it allows you to have wrapped keys, which is to say hardware tokens, or tokens protected by the OS, which are not owned by user-space code. So you can't actually see the key material, but you can do operations with it. So in that world, as long as you trust the OS, you should be able to get encrypted content stored, and allow that to be trusted all the way through the trust chain back to the server. And I think that's a pretty deft answer, actually.
I think it gives you most of the control that you want without giving you most of the complexity that you would like not to have. And I think that takes us out of time. I mean, I know the bathrooms at Google UK are completely out of Wi-Fi and cellular range, so it sounds like if we can get all this stuff sorted, I can be a lot more productive than I currently am in my spare time. But if you are interested in any of this stuff and want to look at the spec as it currently stands, it's github.com slash? slightlyoff slash? ServiceWorker, right? ServiceWorker, yes: github.com/slightlyoff/ServiceWorker. Excellent. When will we get that in a browser? So I can't speak for Firefox, but I know that there are prototypes happening on the Mozilla side. And on the Chrome side, we've built a prototype, which does work, and we're in the process of implementing now. Excellent. Thank you very much, everyone.