Hello, everybody. I'm Chris Joel. I built Polymon and Meatscope. And today, we're going to talk about monster apps.

So a few months back, the Polymer team flew to Sydney to hang out with some of the Chrome developers who like to work upside down. And you may have known this about Australians. I didn't really know it. But Australians are really great collaborators. They love to collaborate. I would get tired of collaborating. And I would turn around. And these guys would be looking at me expectantly, acting like we were going to collaborate some more. And I just really couldn't take it. I have to admit that at a certain point, I had collaborated so much that things just crossed a threshold. And I don't know. Things got a little fuzzy. And then this happened. What? Oh! Chris, what are you doing? Entertain us. Don't do it, man. Don't do it. Chris, Chris, Chris, Chris. Watch out when you collaborate. It's dangerous.

So as I sank into the darkness, into the cool Sydney coastline that evening, I had a few moments to reflect. What had I been doing with my life? Had I gotten so caught up in this collaboration that I'd forgotten why I got into web development to begin with? I wanted to change the world. I wanted to build apps that would touch people's lives, that would make the world a better place. Thanks to this moment of introspection, I had the inspiration I needed. Selfie GIFs. The internet needs selfie GIFs. I mean, everybody loves selfie GIFs. They combine the two best things on the internet: selfies and GIFs.

So I did the next thing that any self-respecting web developer would do. I went out and I built a web app. OK, actually, the first thing I did was I registered a really cool domain name. I called it Meatscope. I registered meatscope.camera, in case you didn't know you can get a .camera domain. Then the next thing I did was I went and built a web app. But when I first got started, I wasn't really sure if this was something I could even do. I mean, I needed to get access to a camera. I needed to process image data from that camera. I needed to store that image data in my web app. Was this stuff that could even be done? I mean, this is supposed to be a phone app. Can phones even do this on the web? Well, this is a story about how the web platform had my back, and how I used the web platform to deliver selfie GIFs to everyone.

So we start with the Media Capture and Streams API. This is a really cool API that's available in pretty much every browser that lets you access input from a camera, input from a microphone, and record it or display it to the user. Well, it's not quite available in every single browser, but we'll come back to that in a minute. So what does it look like? It's a really great API. It's got all these great promisified methods. Most of them are hanging off of navigator.mediaDevices. Here you can see I'm calling enumerateDevices. And this is basically a promisified API that gives me a list of all of the input devices available to the user. And here you can see I'm actually filtering by device.kind matching some kind of video string. And this is how I get my list of cameras. Another really useful method in this API is getUserMedia. Using getUserMedia, I can select in a fuzzy way or a precise way exactly what I want access to. And it returns a promise that resolves with a stream of input from that device. Here I'm specifically requesting access to one camera. And when you put it together with the HTML video element, you get something cool.
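For reference, the code on those slides boils down to something roughly like this. This is only a sketch, not the exact Meatscope source; the function name and the way the camera gets picked are illustrative.

```js
// Sketch: enumerate input devices, pick a camera, request a stream from it,
// and display that stream in a <video> element.
async function showCamera(videoElement) {
  // List every input device the browser knows about.
  const devices = await navigator.mediaDevices.enumerateDevices();

  // Keep only the cameras (their kind matches a video string, e.g. 'videoinput').
  const cameras = devices.filter((device) => /video/.test(device.kind));

  // Ask specifically for a stream from the first camera.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: cameras[0].deviceId }
  });

  // Pairing the stream with an HTML video element shows the live camera feed.
  videoElement.srcObject = stream;
  await videoElement.play();
}
```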
You essentially get a video element that's displaying stuff coming out of a camera on your device. Now, the HTML video element is cool for two reasons. One, it lets your user see exactly what's going on, what you're taking images of. The other reason why it's really cool is that you can pair it with a canvas, draw it to the canvas, and get image data out of it. So here I'm taking a canvas, I'm measuring the width and the height of the video, and I'm drawing the video to the canvas using a method called drawImage. drawImage is a little misleading. It says drawImage, but you can actually pass a handful of other elements to the drawImage method, along with the dimensions, the width and the height. And then, voila, there's pixel data.

All right, here's a demo. So on the bottom right, I've got an HTML5 video element that's showing stuff from my webcam. And in the back, I have a canvas. So if I need to make my selfie face, I can go like this. And you guys can stare at that for a few minutes. So what are we doing here? We're using the web platform to capture image data from the webcam. And if you're lazy like me and you like web components, I've wrapped a lot of this up in a series of web components I call Meatscope Elements. From top to bottom, I've got Meatscope Devices, which basically calls the enumerateDevices method, filters for all the cameras, and exposes one of the cameras as the selected camera. I'm binding that to Meatscope User Media, which takes in a device and then spits out a stream. And then the stream gets bound to Meatscope Video. And Meatscope Video just displays whatever's in that stream in a full-bleed video on the device, or in some smaller container if you prefer.

So the next thing I needed was a GIF encoder. And thanks to the vibrant JavaScript ecosystem, there's like a million GIF encoders. But I picked this one. It's really great, pretty simple to use. It looks something like this. I create a new instance of a GIF. I can configure it a little bit. And then for all of the frames that I've captured for my GIF, I add each one into the GIF using the addFrame method. And then I call gif.render. Pretty simple. In order to actually make a GIF, you have to record multiple frames and then add them in in sequence. So it looks something like this. Here I've got several canvases next to each other. And each one has recorded some picture of me doing something. And then you add them all together, and it makes it look like I'm really like, ooh.

So for each frame, when I'm recording a GIF, I do something like this. I draw the video to a canvas context, and then I push the image data from the canvas context onto a frames array. This turned out to be really, really bad. Basically, during a very sensitive time in my user experience, when I was trying to record GIF image data, I was spending 50 milliseconds just reading out of a canvas. You can see that most of the time is spent at the bottom there, where I'm calling getImageData. So that was bad. But the worst thing was at the very end. You see, this red box on the left was me recording the GIF. On the right is 40 seconds I spent rendering the GIF. And this was all happening on the main thread. This was actually on desktop when I did my first pass. And frankly, it was totally unusable. You record the GIF, you drop a bunch of frames, and then your user has to sit around for 40 seconds waiting for a bad GIF to render. And meanwhile, they can't even use the app because the main thread is blocked. So what do we do?
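Here is a rough sketch of that first, naive version. The talk doesn't name the encoder library, so the GIF calls below assume a gif.js-style API (new GIF, addFrame, render) and should be treated as illustrative rather than the actual Meatscope code.

```js
// Sketch: capture frames by reading the canvas back, then encode on the main thread.
const frames = [];

function captureFrame(video, context) {
  const { videoWidth, videoHeight } = video;
  // Draw the current video frame into the canvas...
  context.drawImage(video, 0, 0, videoWidth, videoHeight);
  // ...and immediately read the pixels back out. This getImageData call is
  // the roughly 50-millisecond-per-frame cost described above.
  frames.push(context.getImageData(0, 0, videoWidth, videoHeight));
}

function encodeGif() {
  const gif = new GIF({ quality: 10 }); // assumed gif.js-style encoder
  for (const frame of frames) {
    gif.addFrame(frame, { delay: 100 });
  }
  gif.on('finished', (blob) => console.log('GIF ready', blob));
  // Per the talk, doing this render step on the main thread is what kept the
  // UI blocked for around 40 seconds.
  gif.render();
}
```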
How do we make this better? Oops. I don't know why that keeps happening. So I started to think about what I could do to optimize this whole experience of recording GIFs. And the first thing I thought of was: when I get the final GIF, I've scaled it down pretty significantly compared to the source data. So what if I could do that in advance, instead of trying to pass a bunch of big image data down to a GIF encoder and have it do the scaling itself? It turns out that the Canvas 2D context is able to do this for us. The drawImage method will allow you to pass target dimensions for drawing the video. So here I am doing the same thing I did before, except I'm passing in the scaled dimensions that I want to draw into the canvas. Now, this is nice because it means that when I go to read the image data off the canvas, I'm going to be reading far fewer bytes than I was before.

But I still thought I could do a little bit better. Every single time I was recording a frame, I was actually doing something like this: I was writing stuff to a canvas, and I was reading it back, then writing it and then reading it back. Could I do better than that? Well, as a matter of fact, I came up with a nice strategy that I think balanced reading and writing pretty well. It looks something like this. What's going on here is it's the same demo as before. I just have one canvas, and I have one video input. And as I'm recording frames, each time I increment a counter, and I know I need to draw the next frame a little bit offset from the first one. It creates kind of a film strip effect. Now, the result of this was pretty awesome. (This thing really wants me to print.) Literally 100 times better. So remember, before it was about 50 milliseconds per frame? Now it's half a millisecond per frame, because I'm not actually reading any of that canvas data anymore. This is so much better because it means I'm not going to be dropping frames while I'm recording a GIF.

But I still had this problem: I get to the very end of the recording, and it still takes almost a minute for me to get my rendered GIF. The trick here is you really have to keep work off the main thread. It's not OK for you to lock up your user experience for almost a minute just to do some heavy-duty work. So how do we take care of this? Well, it turns out the platform has another solution for us: Web Workers. Most of you probably know what Web Workers are. How do we use them here? Here I'm creating a new Web Worker. This is just a basic Web Worker. And I'm passing in my GIF encoding library. And then I'm creating a MessageChannel. I like MessageChannels because they're kind of like a two-way walkie-talkie between the main thread and the worker thread. I pass one port from the MessageChannel into the worker, and then I listen for messages on the other port so that I can do communication.

Now, if that looks a little verbose, the platform actually has a really cool kind of worker called SharedWorker. SharedWorker basically works the same way as a normal Worker, with a few exceptions. When you create a new instance of a SharedWorker and an instance has already been created, the new instantiator shares the same worker thread. Also, SharedWorker comes with a message channel kind of built in. So when you get your Worker instance, it already has a port that you can use to communicate with the Worker. Now, SharedWorker isn't available everywhere. And in fact, it's possibly on the chopping block for the web platform.
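The main-thread half of that Worker plus MessageChannel setup looks roughly like this. The worker file name and the message shapes are made up for illustration; only the Worker and MessageChannel APIs themselves are real.

```js
// Sketch: spin up a worker and hand it one end of a MessageChannel.
const worker = new Worker('gif-encoder-worker.js'); // hypothetical file name
const channel = new MessageChannel();

// Transfer one port to the worker so it can talk back to us.
worker.postMessage({ type: 'connect' }, [channel.port2]);

// Keep the other port on the main thread and listen for encoded GIFs.
channel.port1.onmessage = (event) => {
  const { gifBlob } = event.data;
  console.log('Encoded GIF came back from the worker:', gifBlob);
};

// Later, once frames have been captured, send them over for encoding.
function encodeOffMainThread(frames) {
  channel.port1.postMessage({ type: 'encode', frames });
}
```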
But it's a pretty nice class to use. And if you like it like I do, we've built a class for app storage called Common Worker, which works almost exactly the same way as SharedWorker. So you can create a Common Worker. You'll get a Worker instance back. You'll have a port that you can use to talk to the Worker. And it also shares the property of SharedWorker where it only creates one worker thread, no matter how many Common Worker instances you've created. Now, inside the Worker, what we do is we import our GIF encoder. We listen for messages on the port that we get from the client every time the client instantiates a new Common Worker. We encode our GIF, and then we post the GIF back to the main thread. It's that simple.

This was an awesome improvement. Moving stuff off the main thread meant I went from 40 seconds of encode time down to about 100 milliseconds. Now, if you look at that and say, well, 100 milliseconds is actually still quite a bit of work, it turns out there are some hard limitations to what you can do with an HTML canvas. If you get a bunch of image data out of a sufficiently large canvas, you're going to incur some cost. But I'll take 100 milliseconds over 40 seconds any day of the week. It's also worth noting that there are standards in the works to get a canvas that you can use off of the main thread. So look forward to that coming to the web platform soon.

So this is an example of what I got from all of my performance optimizations. Here, I'm recording multiple GIFs one after another. Each time I record a GIF, I start another one. And those GIFs go and encode off of the main thread. And meanwhile, the main thread is nice, running at 60 frames per second. Everything's good. So your users, when they're recording their very important selfie GIFs, can end up with results like... and there's that print dialog again. No, like this.

So I have my selfie GIFs. But what do I do with them? I mean, I don't want to just throw them away. Well, it turns out there's an element for that. At Google I/O 2016, we released a bunch of experimental elements that we call app storage. App storage is based on an idea that spawned out of the Firebase elements, where we saw the value of creating declarative elements that give you access to storage layers. All right, so what do they look like? Roughly, they look like this. We have documents, and we have queries. Documents reference specific blobs of data by ID. Queries let you construct ordered, limited queries so that you can get lists of data and iterate over them. App storage elements let you synchronize storage and state in your app, within the app and also between your app and the cloud, if that's something that you want, but not necessarily.

And today we have two bodies of app storage elements that you can use. Many of you are probably familiar with Polymerfire. There was an awesome talk by Michael Bleigh earlier today. Some of you might not be familiar with the fact that we have a series of app storage elements built around PouchDB. I really like PouchDB a lot. It's nice because it gives me access to IndexedDB, which is a piece of the web platform that is available whether you're online or offline. It's a database that's right there in your browser. And I really like to use the web platform. So I ended up choosing PouchDB for Meatscope. Now, what does using PouchDB even look like? Well, in JavaScript, it's something like this. You create a new instance of PouchDB and pass in a name for your database.
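Something along these lines, perhaps. The database name is the one from the talk, but the document shape and the attachment name are illustrative, not the actual Meatscope source.

```js
// Sketch: create a local PouchDB database and store a GIF in it as binary data.
const db = new PouchDB('meatscope');

// PouchDB keeps this in IndexedDB, so it works whether you're online or offline.
async function saveGif(gifBlob) {
  await db.put({
    _id: `gif-${Date.now()}`, // hypothetical document ID scheme
    createdAt: new Date().toISOString(),
    _attachments: {
      'selfie.gif': {
        content_type: 'image/gif',
        data: gifBlob // Blobs can be stored directly as attachments
      }
    }
  });
}
```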
And any time you want to store data in it, I mean, that's really all you have to do: you just call db.put and pass in some object. And you get to store that in PouchDB, in IndexedDB, in your browser. And you can see in this example, I'm actually storing binary data, because IndexedDB can store binary data, which means I can just throw all of my GIFs into PouchDB. Now, I just want to stop and talk about how awesome this is. It's 2016. I have a database in my browser. I can store structured objects. I can store binary data. I can just throw this stuff into PouchDB and forget about it. I mean, it has virtually unlimited space. So this was really exciting for me.

But what does it look like to use it in practice with Polymer? So here's the PouchDB query element. Here you can see I'm specifying that my database is meatscope. I'm crafting a selector, and sorting, and specifying the fields that I want. And then what I get as a result is gifs. Now, gifs is just an array of rows. I can pass that into a dom-repeat, iterate over it, and just spit out my GIFs. It's really that easy. And if I want to interact with a document directly, I have the PouchDB document element. So I'm binding it to a specific document with just some random doc ID. And the result is I get a GIF out of the data property. And with that GIF, I can just make changes. And by way of two-way data binding, all of the changes are going to transparently reflect in the storage layer.

So you know a bit about selfie GIFs. It's time to take a selfie. All right. I'm going to need your help for this. Everybody say, web components. All right. So just so you guys can see, I've got my phone here. It's encoding the GIF over there in the top left. But just to prove that I'm not full of hot air, I'm going to take another selfie. I'm good up here on stage. All right. There's the GIF that we recorded. And it's frozen. Well, you guys get the idea.

Enough about selfie GIFs. Let's talk about important things, like coffee. So a few months back, Justin from the Polymer team and I were sitting in a coffee shop. And we were looking ahead to the Polymer Summit. We were a little anxious because Polymer 2.0 was coming about, and we weren't sure what we were going to talk about. And we had a conversation that went something like this. Justin's like, hey, we should build a really cool app for our users at the Polymer Summit to show off how cool the web platform is and get them engaged, get them interacting with the team members. Don't you think that'd be cool? And I was like, yeah, yeah, yeah. Let's build a video game. That's a great idea. You have so many good ideas, Justin. And then Justin was like, no. We only have three months. Games are hard. What about a survey or something? Something where they can fill it out and then interact with each other somehow. I don't know. And then I was like, hey, guys, guys. Justin wants to build a video game. Tell them about it, Justin. It's going to be awesome. And Justin didn't really respond to that. He just sort of disappeared.

So anyway, that was the conversation that spawned Polymon. And most of you are familiar with Polymon now. Polymon is a location-based monster-catching game with player-versus-player battles. Now, I just want to stop for a second and say: awesome. This is on the web platform. You can build this on the web platform. Please build things like this on the web platform. Yes, clap. That's awesome. So Justin had a point there. He had a good intuition about this. Building a game is not easy.
It can take millions of dollars to build really cool video games, and often they go over budget and over time. How was I going to really prove that this was a thing? I had to have a really great minimum viable product, and I had to have it fast. Otherwise, this was probably not feasible for the Polymer Summit timeline. I mean, this was August. It was mid-August when we started doing this. And so I needed to come up with a cool way to get people interacting with each other, and I needed to be able to rely on the web platform to do it.

So I looked a little bit into the Physical Web. The Physical Web is cool and all, but maybe it's a little too cool, too cutting edge. Not everybody has access to it. I also looked a little bit at Web Bluetooth. Web Bluetooth is a series of awesome new APIs that are landing. And I wanted to be able to scan Bluetooth beacons from JavaScript and get access to all of them, so I could see what Polymon were nearby. Unfortunately, those APIs are kind of still in flight. And did you guys know that there's a Web NFC API? I didn't know about this, but it sounds really cool. I'd love to be able to catch a Polymon just by tapping my phone to somebody else's phone. That's not really there yet.

Then I stopped for a second. I thought, you know what Scott Jenson would do? Scott Jenson would use a QR code. Yeah. So everybody loves QR codes, as we all know. And I thought, everybody can scan QR codes. So let's just use QR codes. This would be great. And wait a second. I want to use the web platform. Didn't I build a series of components for accessing the camera and processing image data from the camera? I think there's something to this. Maybe I'll just take the Meatscope elements and pair them with, I don't know, a QR code reader library thing. And all of a sudden, I've got a QR code scanner on the device, or, sorry, in the web platform. And within a couple of days, I actually had a demo that looked like this. This is a very early version. Hey, look, there's a Polymon. Check that out, that's cool. Look, there's me in a cafe. Oh yeah, QR code. Awesome. All right. It's OK, you can clap at my prototype if it's awesome. Thanks.

So remember this. I said we'd come back to this. Unfortunately, you cannot access the webcam directly in Safari. They just have not implemented the getUserMedia API. I encourage you all to go to the WebKit bug tracker and star the bug to implement this API, because it's awesome and fun to use. But I still needed to support Safari. And a few weeks into the project, Monica came to me and she was like, I got it. I know how you're going to do it. I know how you're going to get access to the webcam in Safari. And it turns out that there is an input element that you can use to get access to the camera. You just structure an input element like this: type equals file, capture equals camera, accept equals image slash star. And iOS will helpfully show a little dialog any time the user clicks on it to take a picture with their camera. And voila, we have the ability to scan QR codes in a browser that does not even support the getUserMedia API. And this is the demo that resulted. So here I am. Oh, well, no webcam image. But oh, look at that. I can take a photo. Yeah. Now, many of you have come up to me and scanned my QR code with Safari. And you know it's not perfect. But I'll tell you what: progressive enhancement is not about perfection. It's about making sure that every single browser has the ability to experience the app the way you intended.
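Wired up, that fallback might look roughly like this. The input attributes are the ones described in the talk; the element id and the decodeQrCode hook are hypothetical stand-ins for however the QR reader library gets invoked.

```js
// Sketch of the Safari fallback. The markup on the slide was roughly:
// <input type="file" capture="camera" accept="image/*">
const input = document.querySelector('#camera-fallback-input'); // hypothetical id

input.addEventListener('change', () => {
  const [file] = input.files;
  if (!file) {
    return;
  }

  // Turn the captured photo into an <img> that can be drawn to a canvas and scanned.
  const image = new Image();
  image.onload = () => {
    URL.revokeObjectURL(image.src);
    decodeQrCode(image); // hypothetical hook into the QR code reader library
  };
  image.src = URL.createObjectURL(file);
});
```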
And if it works, I think that that's a good thing, better than not being able to do it at all. Yeah, please.

So this is my obligatory PRPL slide, otherwise known as the laziest slide in my deck. How many of you guys and girls know what PRPL means? I know you all are lying, because you didn't laugh at my lazy joke. But this is basically what first paint looked like around about the time that Polymon became feature complete. It was about 1.25 seconds on desktop. Now, this isn't really great. I mean, it's OK. But frankly, I wasn't doing any kind of PRPL optimizations, and there was probably a lot of headroom for improvement. So what did I do?

This is roughly what Polymon looks like at the top level. I have a Polymon app component. And inside of there, I have a series of routes. This is just a few of them. But you can see I have a start route, I have a map route, and I have a Polydex route. And for those of you who've used Polymon, you know that on the start route and on the map route, you see the start page and the map at the same time. Here I have the elements that correspond to those routes. So I have a start screen, I have a map screen, and I have a Polydex screen. And the activeness of these screens corresponds to whether or not the routes match.

So what I started to do was I decorated my app routes with declarative information about what element definitions needed to be loaded in order for those routes to be visible. I added some data attributes here. You can see I'm loading the Polymon start screen and the map screen every time the start route is active. (Oh, boy. That's getting old.) And then on the map route, I have something very similar. But since the map route doesn't show the start screen at the same time, I can just load the map screen by itself. And this is just an example of a few of the routes in my app.

And then I built this thing called the Polymon lazy loader, which sounds fancy, but it's really not. All it does is observe changes to the route, and then it matches those changes against the app routes that I've also declaratively added as children of the Polymon lazy loader. And each time a route matches, it loads the needed fragments if they haven't been loaded before. Now, just as a lesson for everybody about how important it is to do code splitting and lazy loading: I got my first paint time down to 275 milliseconds just by doing this, just by lazy loading all the different parts of the app. This is really important. It's one second of time that you just saved somebody on desktop, but probably several seconds of time on mobile.

That's it for now. Thanks a lot for checking out my talk. This is Monster Apps. I'm Chris Joel. I work on the Polymer team. You can catch me on GitHub and on Twitter.