So I realize that you're all probably quite sick of me now. I've been around all day. But this is actually only the second talk I've given at a Chrome Dev Summit. And the other one was the very first Chrome Dev Summit back in 2013. And it went a little bit like this. "And the new thing is the service worker. Actually, I think this is the first talk on it. There's nothing to play with in the browsers yet." So this was before anyone had ever written a service worker. There was nothing in the browser at all. But now we have two fully independent implementations, in Chrome and Firefox. And that means we get the other Chromium browsers coming along for the ride, things like Opera and Samsung Internet and others. Microsoft, they're working on their implementation now. It's a high priority. And bits and pieces are starting to land in their Insider builds as well. Safari still haven't made a public commitment, but they have been giving implementation feedback on the specs. So they've been looking at it in a lot of detail. And they've been implementing the Fetch API as well, which is a big part of it. It's a prerequisite if you are going to implement service workers. But thanks to progressive enhancement, we've gone from having nothing in any browser to hundreds of millions of page loads handled by a service worker every day. And that's just in Chrome. And I'm not talking about service workers that are just there for push messages and things, because there's loads more of those as well. I'm talking about service workers that are actually handling fetch events in page loads. So that means that today, which I couldn't back in 2013, I can stand here and talk about actual shipped things. Because in 2013, I basically made stuff up for 30 minutes. I mean, this slide in particular is a total work of fiction. It's great. But I don't know. Look how happy I look there, not wearing a suit. Thanks, everyone.
Oh, to anyone who's watching this in the video in the future, they voted that I had to wear a suit for this. And it's horrible. Thank you, everyone. Anyway, but this talk, I enjoyed this talk. It was a bit of a laugh. So I'm going to do it again, because there's a lot of stuff we're starting to implement or starting to think about in service worker land. And I'd like to share it and sort of see what you think about it, which things you want in the browser right now, and which things you're not all that bothered about. I probably should have called this talk "seven things that don't so much exist right now, but I'm pretty excited about and you might be as well". It's going to be a journey to the future. This is a real FAQ page for a train company in Wales. And it's just this one question that says, can I buy train tickets for future travel? To which their answer is, yes. Just that. I've been to Wales before, and it definitely feels like time travel, maybe not forwards. So what have we got coming up? OK, so we've got streams. I love streams. And there's a lot of streams already in the browser. You can fetch a URL, just fetch and await it, like we saw before. Get a reader for the readable stream, and then we can sort of set up an infinite loop, and we can call read on the reader. And this gives us an object back, which is very similar to what iterators return. There are two properties, done and value. If done is true, we're done. And otherwise, we've got the value. And I think this code could be nicer. I always get very nervous about kind of while (true) code. I mean, this works, but I don't know. It makes me nervous. And that brings me to the first future feature that I want to talk about, async iterators. Now, I have learned from my mistakes in 2013. So this is the vagueness graph. And I'd say async iterators are about this vague. But do bear in mind that this graph is itself about this vague. And that's quite vague. I hope that clears everything up.
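The reader-and-infinite-loop pattern I described a moment ago looks roughly like this. It's a sketch; readChunks is my name for the helper, and it collects the chunks rather than just logging them:

```javascript
// Read a response body chunk by chunk with a reader and a
// while (true) loop - the pattern that makes me nervous.
async function readChunks(stream) {
  const reader = stream.getReader();
  const chunks = [];
  while (true) {
    // read() resolves with { done, value }, much like an iterator
    const { done, value } = await reader.read();
    if (done) return chunks; // the stream has ended
    chunks.push(value); // for fetch, value is a Uint8Array of bytes
  }
}

// e.g. const response = await fetch('/article.html');
// const chunks = await readChunks(response.body);
```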
Async iterators, they're being specced right now. They're at stage three of the ECMAScript process. So we can expect some implementations pretty soon. So how do they actually work? Well, instead of this while loop and getting a reader, we can just do this. It's much simpler: for await (value of stream). And it works just the same way that our while (true) loop worked before. And when these land in JavaScript, we'll start to see DOM APIs updated to use them. And so thinking about things like the cache API, you could have an iterator to go over caches, or over items in caches as well. I'd love to see this added to IndexedDB cursors for going through an entire data set. If you want to know more about async iterators, that is on the TC39 GitHub page. I will tweet out all of the links I show in the talk. But if you can't wait for that, you can play with them today using Babel. This is it running here in the Babel REPL. I'm only showing you this because I have an excuse to say Babel REPL, which is very satisfying. Babel REPL. I really love the way we name things in the industry. We just don't care. I mean, look at this. This is a totally legitimate sentence in our industry. "My tiny Yelp clone, built with Redux, is now up on Ember Twiddle." My tiny Yelp clone is now up on Ember Twiddle. I love it. They should have put it on the Babel REPL and completed the set. So when you stream values from fetch, each value is a Uint8Array of bytes. But often you don't want bytes. You want some other format, like text. And you can actually do this today using TextDecoder. So I'm going to create a new text decoder there, loop over the stream, but this time I'm going to pass every value through decoder.decode. Now instead of logging bytes, it's going to log strings. But having to call decode on each value, I don't know. It's a bit of a pain. It'd be nice just to have a stream of text. And that's going to be a lot simpler thanks to the next feature, transform streams.
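As a sketch, here's what that for await loop looks like, along with the per-chunk decoding just described. collect is my name for the helper:

```javascript
// With async iterators, the while (true) loop collapses into
// for await...of over the stream itself.
async function collect(stream) {
  const values = [];
  for await (const value of stream) {
    values.push(value);
  }
  return values;
}

// And the per-chunk decoding dance from above:
// const decoder = new TextDecoder();
// for await (const value of response.body) {
//   // stream: true copes with characters split across chunks
//   console.log(decoder.decode(value, { stream: true }));
// }
```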
Transform streams, I'd say they're about as vague as async iterators. Maybe a little less vague. They're still being specced. There is a sort of proof-of-concept JavaScript implementation. And some implementation is happening in Chrome right now. So before we introduced the decoder, we were streaming stuff from the network straight into our log. Transform streams become this little bit that sits in the middle that takes the thing in and puts something else out. In terms of code, they look like this. New TransformStream. And then you pass in an object of methods: start, called straight away; transform, which is called every time a chunk is received; and then flush, for when the incoming stream has ended. What you get back is an object of two properties, a readable stream and a writable stream, the output and the input. And this works really well because you can pass just one of those bits to another piece of code without passing on the whole transform stream. So if we want to create this text decoder as a transform stream, we'd start off by creating a function that's going to return it, set up our decoder, the internal implementation, and return a fancy new transform stream. We only need the transform function. And in there, every time we get a chunk, we're going to call controller.enqueue, which is passing a chunk out. And we're going to call decoder.decode and pass that chunk through. So if we go back to our fetch code from before, that was logging out bytes. We can change this around about here. We take our stream and we pipe it through the decoder we just created. And pipeThrough connects the readable output into the writable of the transform and returns the readable of the transform. So the logs will be text at this point. Now, like async iterators, once this lands in the browser, we'll start to see them appear in DOM APIs as well.
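A sketch of that decoder-as-a-transform-stream, assuming the standard TransformStream constructor. textDecoderStream is my name for the factory function:

```javascript
// A text-decoding transform stream: bytes in, strings out.
function textDecoderStream() {
  const decoder = new TextDecoder();
  return new TransformStream({
    transform(chunk, controller) {
      // stream: true handles multi-byte characters that get
      // split across chunk boundaries
      controller.enqueue(decoder.decode(chunk, { stream: true }));
    }
  });
}

// Usage: response.body.pipeThrough(textDecoderStream())
// now yields strings instead of Uint8Arrays.
```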
Things like compression and decompression. There's a lot of that in the browser already, like gzip, et cetera. Image encoding and decoding, they already exist too. They're just not exposed to developers very well. And they'd be perfect for transform streams. But the first DOM API that is going to become a transform stream, and we've wasted our time by recreating it, is going to be TextDecoder. So that's going to be changed in a backwards-compatible way to be a native transform stream. And once that happens, you'll be able to get streams out of it. So if you want to dig into streams a bit more, check out the spec. And that's where you'll find the JavaScript implementation. I'm really excited about streams landing in JavaScript, in case you can't tell. I think it's about time, because streams have been behind the scenes of the browser for like 20 years. If a page is well built, you'll see it render gradually. And this is because the browser streams the content from the network and passes it through the HTML parser, which supports streams. It can process it as it's arriving. Wiki Offline, this is a Wikipedia PWA. And it makes good use of this ancient browser feature. On a low-end device, on a 3G connection, emulated in Chrome anyway, with an empty cache, the HTML takes just under five seconds to download. All the while that's happening, the parser is processing what it receives. And that means we get a first render at sort of less than half a second. I think Chrome's throttling is actually quite kind here; on a real device that would be a bit later than that due to SSL setup. So at this point, we're just displaying the top banner, the title. We haven't got the full page of content yet. But at least the user feels like something's happening. And then at 1.8 seconds, we get the first page of content rendered. And rendering continues as more is received.
As an experiment, I also built Wiki Offline as a single-page app, which is a popular pattern with JavaScript frameworks. So here, I'm just returning this, a little bit of HTML, and then letting JavaScript handle the rest. This actually changes the story quite a lot. The HTML fetching and parsing is way quicker, because there's not a lot of it. And then here, we get the first render, just the shell. So at this point, performance is neck and neck. But while this is happening, JavaScript is downloading. And that needs to execute. And then it fetches the actual content it needs for the page and inserts it. Now we get to content render, almost two seconds later than the server rendered version. And I'm being kind here, I think. We regularly see single-page apps taking a lot longer than this to get content on screen. It's a little bit of a misleading graph, because it looks like the single-page app completes everything a lot sooner. The reason for this is, in the server rendered version, as it's downloading the HTML, it kind of discovers things. It discovers things like style sheets, images, fonts, all of that stuff. And it starts going, oh, actually, some of this is important for the top of the page, so I'm going to devote bandwidth to dealing with that. In the single-page app version, none of that can happen until that content is parsed, and that happens right at the end at that render there. So it loads slower. What can we do about this performance problem? Well, we can bring in a service worker, and we can store the actual page in the cache, and so that makes that a little bit shorter. That download time goes away. We do the same with the script as well. But the page content still comes from the network. We can't cache all of Wikipedia. The problem we have here is that JavaScript initiates the content download, so we have to wait for the JavaScript to run before we can start fetching the content. We can avoid this using link rel=preload, which we saw earlier.
So doing this means we can sort of run those two things in parallel. But so what? Even after all of that optimization, the service worker, preloading, caching, we're still slower than the empty cache server render. Just a little update for everyone: the screen with my notes on just went off for three seconds. This could happen again. But we're still slower than the empty cache render there. And that's because we were spending all this time downloading content, and then not doing anything with it until we have all of it. So we've traded this gradual rendering model here for one where we just display nothing until we have everything. And it's just because there's no API that can take a stream of HTML and inject it into the page. And we really need that. I hope we get that one day. But until then, we shouldn't be breaking performance by using a single page app, then just trying to limit the damage. We should be taking the well-performing server render and then making that even better. And streams combined with service worker let us do this. So like we saw before, this streams. The same is true if we put a service worker in the middle. It doesn't really change anything. If the content is coming from a cache, it will also stream, which is still important if it's like a large video file. You still want that to stream from disk. But ideally, we want a mixture. So we want to serve a single HTML response where parts come from the cache, the static parts like the header, but the dynamic parts come from network. And you can already do this in Chrome. In a service worker fetch event, I'm going to get three parts of the page. I'm going to get the start from the cache, the middle from the network, like a sort of include, and then I'm going to get the end from the cache as well. Then I'm going to get readers for all of those, because we're going to process those streams. I'm going to create my own readable stream, and I'm going to make a response using it.
So I can just pass the readable into new Response, and off it goes. Unfortunately, populating that stream is not so easy. It's like this. It's a big bit of code. I'm not going to talk through it. It's quite ugly. And it involves passing every chunk through JavaScript and dealing with it and processing all of those streams in order. This is actually going to get a whole lot easier thanks to identity streams, which is the next of the 2017 features I want to look at. I would say these are more vague than transform streams, mostly because the API changed less than two weeks ago. So things are moving around. But I think it's pretty stable now. To use this in your service worker fetch event, just as before, I'm going to get those three parts that I'm going to display. But this time, I'm going to create an identity stream. An identity stream is just a transform stream that doesn't do any transforming. The input just goes to the output. So I'm going to respond with the readable part of the transform. But then, before I do that, this is how we deal with the writable. I'm just going to do something asynchronously. So I'm going to have a self-invoking async function there. For each of the responses, the promises that we have, I'm going to pipe the body to the writable. And I'm going to say preventClose here, which is just saying, hey, once all of this stream has gone into that stream, don't close the other stream, because we've got more to do, because we're going to do this for each stream. And then we can close it out. And that's it. And not only is this code simpler, it's also faster, because we're no longer passing every chunk through JavaScript. Because the browser can go, oh, hang on. The stream that we're receiving is from behind the scenes. It's either coming from the network or the cache. And then the thing receiving the stream is the HTML parser, which is also behind the scenes.
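Putting the identity-stream approach together, here's a sketch. mergeResponses is my name for it, and it assumes a TransformStream constructed with no arguments acts as an identity stream, as just described:

```javascript
// Stitch several responses into one streamed body using an
// identity transform stream. 'parts' is an array of promises for
// Responses - e.g. cached header, network middle, cached footer.
function mergeResponses(parts) {
  const { readable, writable } = new TransformStream();

  // Asynchronously pump each part into the writable side, in order.
  (async () => {
    for (const partPromise of parts) {
      const response = await partPromise;
      // preventClose keeps the writable open for the next part;
      // the browser can do this copy entirely behind the scenes.
      await response.body.pipeTo(writable, { preventClose: true });
    }
    // All parts are done - now close the stream for real.
    await writable.getWriter().close();
  })();

  // The readable side becomes the body of the merged response.
  return new Response(readable, {
    headers: { 'Content-Type': 'text/html' }
  });
}
```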
And the browser can just do the whole thing in the background and save a whole lot of processing time. So now we're getting the best of both. We're responding quickly from the cache, but streaming the rest of the data from the network. And the result of that, so here's where we were before, we can optimize our server rendered version with the service worker and streams. The parsing starts earlier because it receives that big lump of content right at the start from the cache. And this means our first paint happens much sooner, but the important bit is the content happens way sooner. So we get that quick offline-first cache render, but still the benefit of the streaming render for the uncached content. So it's now over a second quicker for content than the hacky single page app. And with a model like this, I'm actually kind of happy with full page reloads when it comes to navigating around. So on the left here, I have a single page app. So every time I click a link, JavaScript is going to fetch the data and put it on the page. On the right, it's just a web page. You click a link, it's going to reload and it's going to load that data. So I set them off at the same time. You can see that with all the complexity I added with making this a single page app and using pushState, et cetera, it's still slower than full page reloads, especially when they're supercharged by a streaming service worker. Your mileage may vary. It can depend on the amount of content you've got. But I'm not making this up, although this is a demo. I actually got hit by a real world case of this only a couple of days ago. On Monday, I was at Heathrow Airport browsing GitHub on airport Wi-Fi, which is not so great. Now, GitHub will use pushState and it will use JavaScript for all of its navigations, unless you're in a new tab, then it will do a server render. So what I'm going to do here is I'm going to click a link on the left here and then I'm going to paste the same link into an empty tab.
So here I go. Click the link, paste it, off we go. And we can see that the server render wins by a country mile. It's way faster. And this is not throttled or anything. Well, not artificially. This is just airport Wi-Fi. And this is because on the left, it has to download everything before it can show anything. At GitHub here, they've written a lot of JavaScript to make this quite slow. Unfortunately, all too often, I hear people say that a Progressive Web App must be a single page app. And I am not so sure. You might not need a single page app. A single page app can end up being a lot of work and slower. There's a lot of cargo culting around single page apps. And I know what happens when you just sort of copy someone else without really understanding the situation. You see, I went out for a meal with Paul Irish. That's right, I've had a meal with Paul Irish. You may touch me. Anyway, I watched Paul taste some wine. And this was amazing. He swilled it around in the glass. And he took this huge sniff, like a huge sniff. And I thought, wow, Paul is so cool. Like, he really knows what he's doing. This is amazing. And anyway, a couple of months later, back in England, I went out with some friends. And we were at a restaurant. And we had some wine. And I thought, I've got this. I know what to do here. I've seen this done. So I took the wine, I swilled it, and I took a big old sniff. But I took the wine glass just a little bit too far and dipped my nose in it. Don't know if you've ever snorted wine before. It is not pleasant. I just kind of sneezed it out everywhere. And my friends were just staring at me, covered in a wine mist. And they're like, Jake, why didn't you just drink it with your mouth? It's so much easier than that. The moral of the story is you might not need a single-page app. You know, there's a link there. Server render might be enough, especially when you've involved a service worker.
And of course, if you're using a client-side framework, server rendering is an absolute must. I mean, React, Ember, Angular 2, Web Components, they all let you get something on screen in a streaming manner before the JavaScript arrives. Just make sure you're not displaying things that should be interactive but aren't. So things are looking pretty good. However, Facebook have been prototyping with this stream stuff and identified a problem. If you're serving from a service worker, there is the startup time of the service worker to consider. And that's zero if it's already running. But the service worker shuts down if it hasn't done anything for like 30 seconds, to preserve memory. Depending on a user's device or other things going on, that startup can add, in the worst cases, a few hundred milliseconds. And that delays the content fetch just by a little bit. And we are looking to reduce that startup time. But it's always going to be more than zero if your service worker isn't already running. So are we just going to live with that? Not on my tiny Yelp clone we're not. So we're going to introduce navigation preload. Now, I would say this is a little higher on the vagueness scale. We have an implementation in progress, but the spec is still kind of moving around a little bit. So take this with a grain of salt. Our goal here is to start the HTML fetch in parallel with the service worker startup, which you can enable just using this one line here. And you can do that whenever you want, but the service worker activate event is a pretty good place to do it. And this means for navigation requests, the browser will make the request to the network while the service worker is booting up. And that response appears on the fetch event as preloadResponse. And that's a promise, and it will resolve with undefined if it's not a navigation or if the feature isn't enabled. So it's always worth checking it; if it's falsy, just do a normal fetch if that's what you're wanting.
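In code, that might look something like this sketch. networkResponse is my name for the helper, and as I say, the exact API shape was still moving around:

```javascript
// In the service worker's activate event, turn the feature on:
// addEventListener('activate', event => {
//   event.waitUntil(self.registration.navigationPreload.enable());
// });

// In the fetch event, prefer the preloaded response when it exists.
async function networkResponse(event) {
  // preloadResponse resolves with undefined for non-navigations,
  // or when the feature isn't enabled - so check before using it.
  const preloaded = await event.preloadResponse;
  return preloaded || fetch(event.request);
}
```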
Now, what you do with this is up to you. You could respond from the cache and fall back to the network. But given that this preload can happen pretty early, it becomes realistic that the network may beat the cache API. So why not race the two of them and see which one comes back first? I'm going to pick up on a point Soma made yesterday, because he was very right, that Promise.race is not your friend for doing this at all. When you give Promise.race an array of promises, it takes the result of whichever one settles first, not whichever one succeeds first. Like, take this race. There's a race. I'd say this race was in progress, because no one has won yet. Promise.race, on the other hand, would say, oh, she fell over. Don't care about anything else. The whole race was a failure because of her. Promise.race is a dick. So you will need to write your own racing function here. You want it to resolve with the value of the first promise to resolve with a truthy value. It's a few lines, but that's what you need. But what about our streaming code from before? A straight-up preload wouldn't work here, because we're not fetching the same thing that would be fetched if the service worker wasn't there, because we just want the middle of the page, just that middle bit, because we've already got the top and the bottom in the cache. Thankfully, this is not a problem, because those preload requests are sent with a special header, this header here. And if your server sees that, it can go, oh, OK, I'm just going to serve the middle bit, because this is going to go through the service worker, and it knows how to deal with it. So back in that code, we can deal with that. Just right here, use the preload response if it's there, otherwise falling back to fetch. And that means for navigation requests, that fetch will happen at the same time as the service worker's booting up. And this is something that we can improve on even more.
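The racing function I described might look like this sketch; firstTruthy is my name for it:

```javascript
// Unlike Promise.race, this ignores rejections and falsy results,
// settling with the first truthy value - or undefined if none win.
function firstTruthy(promises) {
  return new Promise(resolve => {
    let pending = promises.length;
    const onSettle = () => {
      if (--pending === 0) resolve(undefined); // everyone lost
    };
    for (const promise of promises) {
      Promise.resolve(promise).then(value => {
        if (value) resolve(value); // first truthy value wins
        else onSettle();
      }, onSettle);
    }
  });
}

// e.g. respond with whichever arrives first:
// const response = await firstTruthy([
//   caches.match(event.request),
//   event.preloadResponse
// ]);
```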
With this feature, we can potentially look at doing it as the browser is booting up, which is particularly good for progressive web apps added to the home screen. We hope we can get it so that as soon as the user taps the icon, just as the browser is booting up, we can have that request started nice and early. If you want to dig into this a little bit more, there's a huge thread on GitHub about it. Like I say, I'll post the links up later on. What else have we got? So the current way the service worker works is that requests from your page will go via the service worker, your service worker. And that happens even if the request is to a completely different origin, like a font service. Your service worker decides what to do. And this is by design, because it means you can cache things like images and fonts, even if the destination server hasn't even thought about how that would work or how to do that. The downside is that many sites may end up with similar logic for font caching or analytics, and can end up storing the same thing independently. And in the future, we could look at ways of deduplicating that inside the browser. But the logic is still being duplicated. So to the rescue here comes foreign fetch. I would say this is a little vaguer still, only because I'm pretty certain parts of this API are going to change. But there is a version of it in Chrome Canary already, which you can actually test with real users. I'll put a link up on how to do that in a minute. So what is it? With foreign fetch, the font service has its own service worker and storage. And if you make a request to the font service, it first goes to your service worker, and your service worker gets first shout at what to do. But if you send the request on to the font service, it goes to its service worker. And it gets to decide what to do, which could be to get the stuff out of the cache and send it back.
So that means now if another website makes the same request to the font service, it can get that caching benefit, the same resource that the font service has cached. So if you wanted to do this, if you wanted to be the font service and make this work, in your service worker, you listen for this new event, foreignfetch. And this will be triggered when another origin requests something from your origin. And from there on, it's a little familiar. respondWith: what are you going to respond with? However you want. Here, I look to see if there's something in the cache, otherwise fall back to the network, and then return the response. And this is where things get a little bit different. Rather than just returning the response or a promise for the response, you return an object which has a response property. Now when you do this, the destination server will not have scripting access to the content of that response. It won't be able to get the text of it. But it will be able to include it as a script tag or as an image element or something like that. The same way CORS works today, this is like a no-CORS response. It just won't be able to get the text or the pixel data of the image. If you want the server to have that access, you add the origin property. And you set it to the origin you want to have access. So here I'm just passing through event.origin. So I'm saying, if I have visibility of this resource, I want them to have visibility of it as well, which you need to think carefully about, whether that's what you actually want. Otherwise, you can set up some kind of whitelist or something. You could even get this information from IndexedDB. It's your code; you can do what you want. So this is kind of a representation of CORS, but with JavaScript. So you can do a lot of different things. We talked about fonts and image APIs and analytics. But you can use it to create whole REST-like APIs that work entirely offline. One detail is missing though.
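Pulled together, that handler on the font service's worker might look like this sketch. handleForeignFetch is my name for it; in a real worker you'd register it with addEventListener('foreignfetch', handleForeignFetch). This was an in-flux API at the time, so treat the shape as provisional:

```javascript
// A foreign-fetch handler, roughly as described: cache-first,
// network fallback, returning an object rather than a bare response.
function handleForeignFetch(event) {
  event.respondWith(
    caches.match(event.request)
      .then(response => response || fetch(event.request))
      .then(response => ({
        response,
        // Grant the requesting origin scripting access to the
        // response; leave 'origin' out for a no-CORS-style response.
        origin: event.origin
      }))
  );
}
```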
How do we get this service worker installed on the user's machine? Because if it's like a REST API or a font service, like where the fonts come from, the user's very rarely going to actually go there. And that's usually when a service worker would be installed. So to fix this, when you actually serve a resource to a page, you can also serve it with this special header, which tells the browser about the service worker you have, and it will then go and install it. If you're keen on foreign fetch, there's an article by Jeff, who was speaking earlier; he covers it. And he also covers how you can actually use this on websites today as part of an origin trial. Oh, yeah. So earlier on, background sync was mentioned, which is a feature we shipped many months ago. It allows you to defer single tasks until the user regains connectivity. So say the user updated some setting in their profile or sent a chat message when they had no connection. Background sync lets you queue that work. And now the user can navigate away. They can close the browser. And later, once they have connectivity, the service worker can wake up and send that stuff to the server. And this is shipped in Chrome, like it's done. And it's great for small bits of data, like profile updates, sending a chat message, that kind of thing. The problem here is while the sync happens, the service worker has to be awake the whole time. And that's bad for privacy and bad for battery. So we're not going to do that. What we do now is if a sync runs for too long, we just kill the process. But for large uploads and downloads, we're working on something else, background fetch. Now, it's quite early days for this one. So it's pretty vague, vaguer than the vague graph itself. So it's quite vague. All we have right now is a kind of API sketch. And we're starting to explore the issues and get a feel for how it could work. It's a cross browser effort. So here's the idea.
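Sketched in code, it's something like this. These names come from the early API sketch and may well change, so treat all of them as provisional:

```javascript
// Provisional API sketch - none of these names are settled.
// From a page or service worker: start fetches that continue in
// the background, even after the page (or browser) closes.
async function downloadMovie(movieId) {
  const registration = await navigator.serviceWorker.ready;
  await registration.backgroundFetch.fetch(`movie-${movieId}`, [
    `/movies/${movieId}.mp4`,        // the big resource...
    `/movies/${movieId}/meta.json`,  // ...plus metadata
    `/movies/${movieId}/poster.jpg`  // ...and a poster image
  ]);
}

// In the service worker, once the fetches complete, an event fires
// carrying the tag and a map of requests to responses.
function onBackgroundFetched(event) {
  event.waitUntil(caches.open(event.tag).then(cache =>
    Promise.all([...event.fetches].map(
      ([request, response]) => cache.put(request, response)
    ))
  ));
}
```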
From your page or your service worker, whichever, you get hold of the registration and then call backgroundFetch.fetch, give it an ID, and then give it some requests. So for a movie, this could include the video resource, but also some metadata or something, poster image, whatever. And that's it. That fetch will now happen in the background, even if the user closes the page, or even the browser on mobile. And once the fetch completes, you get an event for it. And that will give you information about it. You can have a look at what the tag is. Here, I'm going to actually cache this stuff. So I'm going to open the cache. And then event.fetches will be a JavaScript map of requests and the responses that arrived. So you can do what you want with that. Of course, if you're uploading photos, you don't want to cache the results. You'll just maybe show a notification. So you've got the freedom there. And during the fetch, the user will see a notification, and that will show the progress of the download. And because of this high visibility and it being easily cancelable, we're hoping that we can deliver this feature without any sort of permission prompts or anything like that. We just need to make sure that the privacy aspect is correct and make sure it isn't too abusable. If this is something you're interested in, you can take part on GitHub. I will move that repo somewhere a little bit more neutral, like the WICG, the standards thing. Oh yeah, earlier on, I showed you this thing here, the full page navigations being significantly faster. But I know why people go down the SPA route: because they want the ability to do a nice transition from one state to the other. And it makes me sad, because I've seen developers introduce large frameworks just for basic transitions, which is a little bit of a shame, especially having to re-implement the entire navigation stack just because you want a nice fade from one thing to another.
And that's why we're going to take another look at navigation transitions. I mentioned them yesterday. And I really want us to have a good plan for this in 2017. But right now, the idea is very, very vague. In fact, we have to scale the whole graph down just to sort of see the top of it. So take this with a big bag of salt. And it's not the first time we've looked into this. In Internet Explorer 5, you could use this meta tag to specify a kind of enter or exit transition from a set of configurable presets. So with this page in Internet Explorer 5, the user would click the link and Internet Explorer would crash, is what it usually did. Well, that was my experience anyway. But in 2014, at Chrome Dev Summit, we pitched this transitions idea. We showed demos. It didn't really pan out. Mozilla have a proposal as well. But they're both solutions that kind of live in CSS. And they're limited by what you can kind of declaratively say sort of upfront. I don't think they're expressive enough. Stuff like this should be possible. And that would be a full page reload, utilizing the full navigation stack of the browser and the streaming HTML parser. Because when you do this, you get the back and forward buttons working for free. If we actually take a closer look at this transition, the first part, we can do that without any additional data. We already have the image. We know where it's going. And we have that title already stored. We can do that bit. And we can improve the perception of performance by doing this bit while the actual fetch is happening. And then we can bring in the content once it arrives. And if it arrives while we're transitioning, we can bring it in earlier and sort of make it part of that sliding transition. The transition out is a little bit different. And we actually need more data to do that transition, because we need to know where we're sending the content back to, which depends on layout.
But also, scroll position, because when you use the back and forward buttons, the browser will try and restore the scroll position. I really think we need an API that allows this. Something like a navigate event that fires when this page is going to be changed. And you can say, hey, I'm about to do a transition, so keep this document around for a bit. And at this point, you can start doing the very first part of the transition, like getting everything into place where you think things are going to be. Get hold of the new window object, which will represent the page that's coming in. And that will resolve with undefined if it's a cross-origin navigation. I would like us to look at cross-origin navigations as well, but they have to be pretty restricted for security reasons. But once you've got this new window, you've got scripting access to it. You can start doing what you want. By default, I think the new window will draw on top, but the transparent parts will show the page underneath. So here, you can start looking at where elements are, what the scroll position is. Here, I'm just going to set the opacity of the new document to 0, wait for document.interactive, and then fade that document in. So that's a simple fading animation. This is a simple example, but it's as complex as you want to make it. So with this, you'll be able to do these expressive animations, but retain all of the features that the browser gives you for free in navigations. If that's interesting to you, the details are on GitHub. Once again, I intend to move this repo somewhere a little bit more neutral. The term Progressive Web App is just over a year old, but the work has been happening for years on this stuff. And we're not done. I think you've heard over the past couple of days how much we love the web and where we want it to go. But now it is over to you. We want your feedback on this stuff, be it on GitHub at the very early stages, or playing with this stuff in Chrome Canary.
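To give the navigation transitions idea a concrete shape, here's a purely hypothetical sketch of that simple fade: the navigate event, transitionUntil, newWindow, and document.interactive don't exist anywhere; they're just how the proposal might look:

```javascript
// Entirely hypothetical API - a sketch of the proposal, not a
// shipped feature. onNavigate would be registered as a handler
// for the proposed navigate event.
function onNavigate(event) {
  event.transitionUntil((async () => {
    // Keep the old document around while the new one comes in.
    const newWindow = await event.newWindow;
    if (!newWindow) return; // undefined for cross-origin navigations

    // The new page draws on top; start it fully transparent...
    const root = newWindow.document.documentElement;
    root.style.opacity = '0';

    // ...wait until the incoming document is interactive...
    await newWindow.document.interactive;

    // ...then fade it in over the old page.
    root.animate([{ opacity: 0 }, { opacity: 1 }], 300);
    root.style.opacity = '';
  })());
}
```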
So do come and talk to us about it. Basically, I can't put it better than this shop window sign. We're... you're not... till... not happy... wait. We're not happy till you're not happy. No, that's not it either. Till... oh, no, I don't know. Anyway, thank you very much.