Good morning. Thank you for showing up at 8:30. Hi, I'm Mariko. I'm from the Web Developer Ecosystems team within Chrome. Let's talk about building fast and smooth web apps. It's safe to assume, I think, that all of us use the internet every day on computers, touch devices, or phones. But what if I told you that millions more people are accessing the internet, or are coming to access the internet, using a device like this: a feature phone? Those of you who remember the days of feature phones might remember typing text on those keys. I actually have one here. I grew up in Japan, where the first popular mobile web network was built. Using a feature phone to access websites and do all of my email and texting was part of my high school and college years, so it feels really nostalgic to me. I also went into the industry after that, so I remember the days of building websites specifically for feature phones. In Japanese, we call them "garakei saito" — feature phone sites. It was really hard. You needed to use a subset of HTML, or sometimes a completely different markup language. So, slightly bad memories. But don't worry: these phones are new phones coming onto the market. They are being developed and sold right now, all over the world. And it's not just me telling you these phones are popular. According to Counterpoint Research, 370 million smart feature phones are expected to be sold between this year and 2021. So what is a smart feature phone? It's a new kind of feature phone. A smart feature phone runs a new OS, such as KaiOS — there are a few other OSes too. KaiOS is awesome because it's all web-based, so you can build apps using web technologies. It has a modern-ish web browser. It's a few versions behind, but it supports HTML, CSS, and JavaScript. For those of us who developed websites for feature phones in Japan, that's great news.
It also comes with an app ecosystem, which means you can download Google Maps, YouTube, and games onto your phone, just like you would with a smartphone. So software-wise it feels like a smartphone, but it comes in feature phone hardware. The screens are usually small — I think most KaiOS devices have QVGA screens. Navigation-wise, there is a mouse cursor that shows up on the screen, but you have to move it with the D-pad. And if you want to enter any text into a field, you have to use the number keys and T9 input. That part is pure feature phone. So what does it look like to browse the web today using these new feature phones? Here's a website you might have visited once or twice before. Google.com loads great and looks great, but we can all agree that the Google.com top page is just a bunch of links and one input field, so that's not impressive. Let's see what it's like to load a real web app. Here's Squoosh. It's a web app using WebAssembly that does image compression, and it's a PWA. We built it last year for Chrome Dev Summit. How does it perform? Not great. The CSS layout is completely off. The thing is, when our team built Squoosh, we wanted it to be a best-in-class web app. The first load is only 16 kilobytes, and it loads really fast, even on this feature phone. But we never quite tested it on a screen this small. We tested on desktops, tablets, and phones, but we never thought about somebody accessing the site using one of these devices. Another contender for depressing website layouts is the I/O website: not great, not great. It's so frustrating to navigate the I/O website on this phone that it's almost unusable. But not everything on the web is bad. My favorite is Twitter. When you access Twitter mobile — mobile.twitter.com — on a smart feature phone, you get almost exactly the same experience. You can tweet. You can retweet. You can load a video.
You can search for GIFs. You can attach an image. Everything provided on the Twitter website is available on the smart feature phone. That brings me to the project I'm going to discuss today. As I briefly mentioned, I'm part of the Web Developer Ecosystems team in Chrome, and within that, I'm in a small subset of the team that tries to build real-world web applications, so that we can share the learnings we get by building something real. Our first project was Squoosh, an image compression application built entirely in the browser. We use Wasm to provide newer file formats, like WebP, in browsers that don't support them. You can adjust all the knobs and settings to see how much better the compression can get, then download the result and upload it to your blog. That's the idea. After we shipped that last November, we got to discussing what the next project should be, and we decided: let's build a game. We wanted to build a game because whenever we ask web developers what the web is really good at, everybody says documents. We wanted to pick the complete opposite of a document to see if the web could handle it, and a game seemed like a good fit. Developing a game also comes with a bunch of problems we face as web developers every day, such as: how do we handle lots of input coming from all of this UI? Can we really deliver a graphics-heavy application on the web? On top of that, because we kept hearing that feature phones are getting popular, we decided our app this year was going to support everything from feature phones to desktops and everything in between. I'd like to explain what we built first, and then we'll get into how we built it. Introducing PROXX. PROXX is a game of proximity inspired by the legendary game Minesweeper. The game is set in space, and your job is to find the black holes.
You can play PROXX on any kind of device, from desktop to tablet to D-pad phones, even with screen readers. (Screen reader demo: "Hidden button, column 9 of 16. Hidden button, column 10 of 16.") It's a PWA, so you can install it on your desktop or your phone and play the game wherever you want, even when you're offline. That's the game we built, and you can access it at proxx.app — that's the URL. So let's discuss how we built PROXX. Even before we started the project, we decided on a baseline. The three of us — Jake, Surma, and me — got together and talked about what this app was going to be, and settled on three points. First, every device gets the same core experience, meaning we're not going to build three different apps for desktop, tablet, and feature phone. Second, it has to be accessible on every input device, so mouse, keyboard, touch, and D-pad are all supported — and we said, why not make sure it's screen reader accessible too? Third, performance: our team really likes building performant web apps, so we said it has to have really, really good performance. We set our performance budget to an initial payload of less than 25 kilobytes, time to interactive under five seconds on a slow 3G network, and animations running at 60 frames per second wherever possible. With that understanding, the three of us got to work. Let me explain how the app is laid out. The game started with a game logic file that Jake wrote on a long-haul flight, because he wanted to play Minesweeper but there was no internet on the plane, so he wrote one. He's that kind of engineer. The game logic contains only logic — there's no UI element to it. It's just how big the field is, where the mines are, and when a certain cell is clicked, what should be revealed, that kind of thing. On top of that, we built the UI and state services.
For the UI we use Preact, and the state service is a thin wrapper on top of the game logic that lets the game logic and the UI talk to each other. We also wrote our own rendering pipeline, which we'll get into later. And we have a few utilities to glue it all together. Simple enough — all of this could run on the main thread. Everything in one file... well, it could be separated, but it could all run on the main thread. However, from the get-go we knew we wanted a graphics-heavy, animated design, and we thought, hmm, not so sure about this. So we decided to move the game logic and state service into a web worker. A web worker, for those of you who aren't familiar, is a way to run your JavaScript off the main thread, in a separate thread. To communicate between a worker and the main thread, you use an API called postMessage, and that's... not an enjoyable experience. Keeping track of the message passing is a lot of work. Luckily, my teammate Surma, who is sitting right there, wrote a library called Comlink. It's a thin layer of abstraction on top of postMessage that makes using workers a lot more enjoyable. In fact, we improved Comlink through this project: Comlink version 4. We were building this project with Comlink, testing it on the feature phone, and realized it wasn't working great — Comlink was doing some processing-intensive work. So we fixed that, and Comlink 4 was released. If you're interested in offloading tasks to a worker, you should definitely check out Comlink. For the UI, we use Preact. We chose Preact because the three of us used it on the previous project and liked it, and also because it's still the smallest UI library out there that fits our performance budget. App-wise, it's a standard single-page application: all of the custom elements are rendered into one div and appended to the body. But we knew we had an aggressive goal of an initial payload under 25 kilobytes.
So we decided on a strategy where we prerender the first load — the first interaction — at build time, using a little hack with Puppeteer. Basically, we have our app, and whenever we build it, we run Puppeteer. By the way, Puppeteer is a way to control headless Chrome from a script. Puppeteer opens headless Chrome and loads the app; Preact builds the HTML and puts it into the DOM; and then we just grab whatever output is in there and put it into index.html. That's what gets uploaded to our static site hosting, Netlify, and that's what gets served to the user as the first payload. This is just to show you how easy it is to get started with Puppeteer: you launch a browser instance with puppeteer.launch, create a page, go to the URL, evaluate on that page, and write the result into the HTML. That's all there is to it. So, architecture-wise, two key points. We use a worker to free up the main thread as much as possible, because we knew going in that we wanted to do graphics-heavy things, and graphics work can only happen on the main thread — a worker can't do the graphics — so we free the main thread up for graphics. And we prerender at build time for speedy loading of the initial bundle. That leads me to graphics. Perhaps the biggest performance choice we made was to have our own graphics rendering pipeline. Our initial plan was to use the DOM entirely. We were thinking, oh yeah, we can just have a table, put a bunch of buttons in it, and use CSS animations — transform, opacity, the properties that run on the GPU — to do the animations, and that would be great, right? Well, it turns out — we think we might have hit a Chrome browser bug — when all of this was in one layer and I wanted to update just a single button, Chrome was repainting the entire table, which is not great for performance.
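The build-time prerendering step described above can be sketched like this. The browser object is injected as a parameter, so the same function works with a stub in tests; with real Puppeteer you would pass `await puppeteer.launch()`. The URL and output handling are illustrative, not PROXX's actual build script:

```javascript
// Sketch of build-time prerendering: drive a (headless) browser to render the
// app, then capture the rendered markup to write out as index.html.
async function prerender(browser, url) {
  const page = await browser.newPage();
  await page.goto(url);
  // Grab whatever the framework rendered into the document.
  const html = await page.evaluate(() => document.documentElement.outerHTML);
  await page.close();
  return html; // the build step writes this string out as index.html
}

// With Puppeteer (assumed usage):
//   const puppeteer = require("puppeteer");
//   const browser = await puppeteer.launch();
//   const html = await prerender(browser, "http://localhost:8080");
//   fs.writeFileSync("dist/index.html", html);
//   await browser.close();
```

Because the output is plain HTML, the user's very first request gets a fully rendered page with no framework boot-up required.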
One way to solve this problem, if you hit it, is to put the button or element you want to update into a separate layer using something like will-change: transform. But we have a lot of buttons in this game, and each game cell would become its own layer. That might solve the painting problem, but it creates an excessive number of layers, and that hogs memory — so we'd just be creating another problem. We decided this route was no good and went another way: we do all of our graphics in canvas. In fact, we have two canvases on screen — one for the background animation, and one for the grid animation that draws the game cells. These are generated and rendered every frame, 60 frames per second, using requestAnimationFrame. If you're not familiar with requestAnimationFrame, it's a way to schedule your script against the browser's rendering. At every tick, the browser refreshes the graphics. You put some JavaScript — in our case, the drawing calls for the canvas — into the callback of requestAnimationFrame, and it gets run at the next tick. If you're doing animation, you probably want to call it recursively so your task lands in every frame. That's how we update our animation. If you're curious about all of the things I've been mentioning — painting, layers, compositors, CSS, requestAnimationFrame — I wrote a four-part blog series, "Inside look at modern web browser," that explains what happens from when your code reaches the browser to how it gets executed. You should check that out. So now we have two canvases, and there are a few other things we did for graphics performance. For example, the background animation, which we call the nebula animation, is actually only one fifth of the screen size. Whatever device you have, we only create a canvas one fifth the size of your screen and just stretch it out to full screen.
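The recursive requestAnimationFrame pattern described above looks like this. The scheduler is passed in as a parameter so the same loop can be driven by the browser's real `requestAnimationFrame` or, in a test, by a fake one; the `draw` callback stands in for the game's canvas drawing calls:

```javascript
// Recursive rAF render loop: schedule `draw` into every frame until stopped.
function startRenderLoop(raf, draw) {
  let running = true;
  function frame(time) {
    if (!running) return;
    draw(time); // e.g. redraw both canvases for this frame
    raf(frame); // re-schedule ourselves for the next tick
  }
  raf(frame);
  return () => { running = false; }; // call the returned function to stop
}

// In the browser (assumed usage):
//   const stop = startRenderLoop((cb) => requestAnimationFrame(cb), drawGame);
```

Because each callback re-registers itself, the drawing work runs once per display refresh instead of on a timer that can drift out of sync with the screen.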
We were lucky, because the design the designer came up with was already a blurry image, so rendering it small and stretching it didn't change much visually, and doing it this way saved us a lot of memory. For the grid animation, we basically do a sprite animation, and we generate the sprites on the client side. We don't send any image data down the wire to users — we send the JavaScript that generates the sprites. The JavaScript creates the sprites, and once that's done, they're saved in IndexedDB. This way, we can create the most optimal sprites for each device. Different devices have different device pixel ratios — some have one, some two, some three — so whatever device is accessing our site, we don't need to pre-create an image; the device creates the sprites on the client side. So that's graphics. Let's talk about accessibility, which is the exciting part that I worked on — I'm really excited about it. As I explained, the game now lives in two canvases, and we could totally build the game with just canvas; a lot of games do. Whenever you click, you get the coordinates of the mouse, write your own JavaScript to check whether it hit the square underneath, and redraw the animation. But we decided to keep the DOM — that table and those buttons. Remember, that table was causing the painting problem, so we fixed it by setting it to opacity 0. There's a reason we kept the DOM version on top of the canvas: if we have real elements, we can focus them and attach event listeners. So when you're playing PROXX, what you're seeing on screen is a canvas, but what your JavaScript is interacting with is the invisible buttons and table. This way, we can tap into the browser's native accessibility features. Here's a screenshot of me playing the game with VoiceOver, with what it said written out: "Hidden button, column 15 of 16." "Hidden" is the state of the button.
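The two sizing tricks above reduce to a little arithmetic. This is a hedged sketch, not PROXX's actual code: `nebulaCanvasSize` computes the fifth-of-the-viewport backing store that CSS then stretches to full screen, and `spriteSizeFor` computes the pixel size of a client-generated sprite from the device pixel ratio (in the browser the inputs would come from `window.innerWidth`/`innerHeight` and `window.devicePixelRatio`):

```javascript
// Backing-store size for the nebula canvas: 1/5 of the viewport by default.
// The canvas element is then stretched to full screen with CSS, so a blurry
// design costs a fifth of the pixels (and memory) per dimension.
function nebulaCanvasSize(viewportWidth, viewportHeight, scale = 1 / 5) {
  return {
    width: Math.ceil(viewportWidth * scale),
    height: Math.ceil(viewportHeight * scale),
  };
}

// Sprite size in physical pixels for a cell of `cssSize` CSS pixels, so each
// device generates exactly the sprites its screen needs — no shipped images.
function spriteSizeFor(cssSize, devicePixelRatio) {
  return Math.round(cssSize * devicePixelRatio);
}
```

For a QVGA feature phone (320×240, DPR 1) this yields a 64×48 nebula canvas, while a DPR-3 flagship gets crisper sprites from the very same JavaScript.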
And "hidden" is the only part we manipulate from our JavaScript, by setting an aria-label. Everything else — "button," "column 15 of 16," the hints telling users where they are — comes out of the box just by using a table. When we start a new game, we regenerate the table: clear it out, and in theory you just add role="grid" and the screen reader side should be taken care of. But somehow — maybe because we display the table at opacity zero — the browser wasn't quite registering it as a grid. So when we recreated all the rows and columns, we also needed to specify that each tr element has role="row" and each td element has role="gridcell", and that solved the problem. At the beginning, though, we were like, why is this not working? The documentation said that if we set role="grid", it should work. So that was a fun debugging challenge. Inside each cell, we generate the button the user clicks on. Speaking of buttons, we use an accessibility technique called roving tabindex. When we create the buttons for the cells, the top-left cell gets a tabindex of 0, which means it's reachable by tabbing — when a keyboard user hits Tab, focus lands on that button. Everything else gets a tabindex of -1, which means it can't be focused by tabbing. This way, a keyboard user doesn't have to press Tab a hundred times to get past the grid and reach the menu buttons at the bottom of the screen. When a keyboard user enters the game, they focus one cell and then switch to the arrow keys, and the arrow keys determine which direction, and how far, the focus should move. The focus method basically says: the current focus is here, so make that button non-tabbable; the new button about to be focused becomes tabbable by setting its tabindex to 0; then call focus on it. And that's how we implemented the roving tabindex.
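The roving tabindex described above can be sketched in two small functions — a hedged illustration, not PROXX's actual implementation. `nextIndex` turns an arrow key into a new grid position, and `moveFocus` performs the tabindex swap; `cells` can be real DOM buttons or, as in a test, plain objects with `tabIndex` and `focus()`:

```javascript
// Which cell should receive focus for a given arrow key, in a grid with
// `cols` columns and `total` cells. Out-of-range moves stay put.
function nextIndex(index, key, cols, total) {
  const delta =
    { ArrowLeft: -1, ArrowRight: 1, ArrowUp: -cols, ArrowDown: cols }[key] ?? 0;
  const next = index + delta;
  return next < 0 || next >= total ? index : next;
}

// Roving tabindex swap: exactly one cell keeps tabIndex 0 (the tab stop);
// the previous one drops out of the tab order, and focus follows.
function moveFocus(cells, from, to) {
  cells[from].tabIndex = -1; // old cell leaves the tab order
  cells[to].tabIndex = 0;    // new cell becomes the single tab stop
  cells[to].focus();
  return to;
}

// Assumed wiring in the browser:
//   grid.addEventListener("keydown", (e) => {
//     current = moveFocus(buttons, current,
//                         nextIndex(current, e.key, cols, buttons.length));
//   });
```

The invariant is that only one button in the grid is ever tabbable, so Tab jumps straight past the board to the menu while the arrow keys roam inside it.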
Embarrassingly, I didn't know any of this until I did this project. We got a lot of help from Rob Dodson, who is on our team and does a lot of accessibility work. He has a guide on web.dev, Accessible to all, which explains all of the techniques I just described, so you should definitely check that out. Thank you very much, Rob. So, this was the halfway point of the project, and we were feeling great. Our animations were running on feature phones, and the game was working fine. But then — remember, I said feature phone to desktop and everything in between. So we were also testing on an Android Go phone, which is the low-end side of Android smartphones, and we noticed the game was crashing on the Android Go phone quite a lot. We were like, but how? Feature phones are the weakest-powered phones; an Android Go phone should be an upgrade. Why is it crashing? It turns out that how many pixels are on the screen matters a lot. A feature phone may have a weaker chip, but it only has to drive a small number of pixels. An Android Go phone — a smartphone with a touch screen and a much bigger display — might have slightly more powerful hardware, but it has four times as many pixels or more to drive. And that makes it sad, and sometimes it just shuts down. So we decided that at this point we should just check whether the hardware can support the animation, and if not, render a static version. Basically, when the game loads, we do a little check: can this hardware support the animation? If it can, we load the WebGL animations. If not, we serve Canvas 2D static graphics. We needed the static graphics anyway, for accessibility, for people who prefer reduced motion — so we just exposed that to lower-grade hardware too. This is our approach to the check. It might be a little naive, but it's just one check.
ShaderBox, this class, is just an abstraction we wrote on top of WebGL. We check for high-precision (highp) shader support, and if that's available, we run the animations; if not, we go with the 2D static rendering. We know it's a little naive, but we found that devices that support highp can usually handle the animations. Of course, if the user has a preference set for reduced motion, we check the prefers-reduced-motion media query, and then the static version becomes the default too. Let's talk about supporting different input devices — I've been saying keyboard, touch screen, and so on, and we need all of that, right? The game has two main actions: click to open a cell, and click to flag a cell. When you're playing with a mouse, that's just click and right-click. On the keyboard, we assign a toggle to the F key for flag mode, so you press it to switch back and forth between the modes, and for navigation you use the arrow keys and Enter. For phones and tablets, we went with a tap-and-toggle approach. As you can see at the bottom, there's a toggle button, and whenever you want to switch modes, you just tap that toggle. Now, looking at this video of the phone, you might be thinking: I want to pinch-zoom to see how many cells are left — zoom in and out to see how the game is going. This is something we discussed while developing, and we actually had to debate whether to do it, because we had this goal that the app has to be performant and run smoothly. The debate was: if we support pinch zoom, we lose native scrolling, and that means scrolling gets slow — is that OK? Eventually we decided that for this app, we'd take native scroll over pinch zoom. So we don't support pinch zoom yet. Maybe someday, hopefully.
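The capability decision above boils down to two inputs. Here is a hedged sketch of that decision — the function name and shape are illustrative, not PROXX's actual code. In the browser, the inputs would come from `matchMedia("(prefers-reduced-motion: reduce)").matches` and from probing WebGL shader precision (e.g. via `gl.getShaderPrecisionFormat`):

```javascript
// Decide between WebGL animations and the static Canvas 2D renderer.
// A user's reduced-motion preference always wins; otherwise highp shader
// support is used as an (admittedly naive) proxy for "can animate smoothly".
function shouldAnimate(prefersReducedMotion, supportsHighp) {
  if (prefersReducedMotion) return false; // accessibility preference first
  return supportsHighp;                   // weak GPUs fall back to static
}
```

Keeping the check this small means the static path is one decision away, which is exactly what made it cheap to reuse for both reduced motion and weak hardware.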
The thing about pinch zoom is that once you support the pinch gesture, you lose native scrolling, and you have to implement your own scroll physics — and that's never going to be comparably fast to native scroll. On the web platform, we don't have a way to tap into the browser's scroll physics. We'd love to have that, and we'd love to explore the possibility, but for now: native scroll only. I also wanted a double-tap-to-flag interaction. I was like, why do I have to toggle the mode? Can't I just double-tap to flag? That was immediately shut down, for performance. If you implement double tap, every single tap has to wait while you check: is this a double tap? That adds a few milliseconds of delay to every user interaction. Based on the baseline we agreed on — it has to be smooth — we said no double tap. Possibly in the future, though; we just haven't implemented it. Long press — holding down on a cell — we can do performantly. That one is just a question of UX and design: we don't know yet how to tell the user "you've held long enough to flag it; you can let go now, and the black hole won't be revealed." Once we figure out the design, we might implement it. Which brings me to this phone, which has none of that. It has a click — you can click the button — but there's no right-click, no touch screen, and the F key for the toggle doesn't exist; there are only number keys. So what do we do? We added custom key navigation on the number keys. When you press 5, the cell focus moves up; when you press 0, it moves down; 8 is the click action; and the hash key (#) is the mode toggle. Another thing we found is that we need to show users where the focus is — on a small screen it's really hard to see which button you're about to press.
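The number-key scheme just described is essentially a lookup table. This sketch maps keys to actions exactly as stated in the talk — treat the specific key assignments and action names as illustrative rather than a definitive spec of PROXX's keypad handling:

```javascript
// Feature phone keypad mapping as described in the talk:
// 5 moves focus up, 0 moves it down, 8 is the click, "#" toggles flag mode.
function keypadAction(key) {
  return (
    { "5": "focus-up", "0": "focus-down", "8": "click", "#": "toggle-mode" }[key] ??
    null // any other key: no game action
  );
}

// Assumed wiring: document.addEventListener("keydown", (e) => {
//   const action = keypadAction(e.key);
//   if (action) handleAction(action);
// });
```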
These phones do have a mouse cursor, too, but sometimes you can't quite tell whether it's pointing at one button or the button next to it. So we made sure to highlight the focused cell, telling the user: this highlighted element is the one that will open when you click. Another thing we added — my favorite — is the key shortcut guide. If you access our game on a feature phone, you'll see tiny icons indicating that you can press the hash key to start the game, or the asterisk key to open the information screen. This is a piece of UX I took from 2000s-era mobile development in Japan. Japan had a feature phone web network, and when you went to a long document-style site, it usually had a table of contents at the top with in-page links, and those were mapped to the number keys. You'd see a number emoji right next to each entry in the table of contents, indicating: if you want to go to chapter 3, just press 3, and you jump straight down. I took that UX and put it into this game. Another really important thing, if you're designing a website or web game for feature phones, is giving users a way to get out of the current view. So I have a close button there. The user opens the settings modal — which is quite long, because it also contains how to play the game — and scrolls down. Whenever they think, OK, got it, they can just press the asterisk key and close the modal. If we didn't do this and kept the standard design we use for smartphones and desktops — a floating X button — this happens: from the middle of the page, the user has to scroll up, up, up, up until they hit the top, and only then can the cursor move to that close button to close it. That's really frustrating. The floating X button and the close button at the bottom are actually the same element.
Depending on whether the device is a feature phone or not, I just change the CSS to position it differently. So that's the feature phone design. Let's talk about the offline strategy. As I mentioned, this game is a PWA. It uses a service worker to cache all of its resources, so even when you're offline, you can play the game. And whenever you go offline-first, there's always the question: how do we update the game when a new version ships? You might have seen these modals saying, hey, an update is available — reload the app, or dismiss this and keep using the old version. That came directly from our previous project. But in this case, we didn't want to block users who want to play the game right now. So we hid this logic inside this screen — behind this button, to be exact. Whenever a user comes to the app and hits Start, if there's a network connection, we ask: is there an update? If there is, we start fetching the new version, and once that's done, we load the new version of the app, skip the opening screen — because we already know the chosen game settings — and launch straight into the game. So by the time the user sees this screen, the game has already been updated to the new version. That's how we do offline and versioning. Lastly, I want to talk about resource loading. After all of this WebGL and feature phone work, our total package came to 100 kilobytes gzipped, and we feel quite good about that size. Out of those 100 gzipped kilobytes, 20 kilobytes is the first payload — so we hit our goal of an under-25-kilobyte gzipped first payload. Basically, that's just the index.html that gets sent for the first request, and that index.html contains this screen. I really like this screen. All of the animations, including the opening title animation we handcrafted with CSS, are lazy-loaded.
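The update-on-start flow can be sketched as below. All of the names here are hypothetical stand-ins, not PROXX's actual functions: `checkForUpdate` would hit the network for a version check, `loadNewVersion` would let the new service worker take over, and `launch` starts the game with the already-chosen settings:

```javascript
// Sketch of update-on-start: check for a new version when the player presses
// Start, swap it in if one exists, and launch straight into the game either
// way. A failed check (e.g. offline) must never block play.
async function startGame(settings, { checkForUpdate, loadNewVersion, launch }) {
  const updateAvailable = await checkForUpdate().catch(() => false);
  if (updateAvailable) {
    await loadNewVersion(); // new version (service worker) takes over
  }
  launch(settings); // skip the opening screen, go straight into the game
}
```

The key design choice from the talk survives in the shape of the code: the check is tied to the Start button, so an update never interrupts someone mid-game.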
This is the minimal set of features and buttons the user needs for the first interaction. That first action could be starting the game, tapping the information icon to open the settings, or hitting the full screen button to go into full screen mode. We even subset the font: we look at all the glyphs used on this screen, subset the font to just those, and inline it into the index.html. So really, our index.html is 20 kilobytes of data that looks like this — yeah, a lot of inlining. Once that reaches users and they start interacting, then little by little the remaining chunks are downloaded, lazy-loaded, and the game becomes fully interactive. For all of this we used Rollup. We really loved and enjoyed using Rollup. We even wrote our own plugins for things Rollup didn't provide out of the box, and we felt comfortable doing that and mixing and matching — which was not the case on our previous project, where we used a different build setup. Rollup also worked really well with our worker setup. As I mentioned, our code is split between the worker and the main thread, and Comlink is a shared dependency they both use to talk to each other. If you do this in webpack, webpack creates two separate chunks that each contain the dependency — it just gets duplicated. Rollup, out of the box, keeps it as one chunk and shares it between the worker and the main thread. So that was a great fit for our project right away. For module loading — because JavaScript modules aren't supported in web workers — we use AMD, and Surma wrote a tiny Rollup plugin that provides a really small AMD-style loader made specifically for Rollup output. You might want to check that out; it's part of our build process. But even with all that, tools can't do all the fine-tuning of shaving down the payload.
We needed to go in ourselves, look at our index.html and what gets loaded, and figure out why our index.html kept getting bigger and bigger. If you want to see what kind of refactoring we did, there's an epic PR on GitHub called "I made stuff smaller," by Jake. The things he did — the things he discovered — were like this: our game screen has an element called the top bar, which shows the number of cells opened, the timer, the counters, all of that. But those only matter during a game. When the game isn't running — on the opening screen or the win screen — it's just a title banner. Yet when we loaded the index.html, which only needs the title banner, it also pulled in the timer logic, the open-cell counting, everything. So we split that element into the full top bar and a simple top bar, and load the two separately. That shaved off some data. This is great if somebody has time to go in, dig through, and check every now and then whether we're doing well. But we also tried to keep reminding ourselves to be conscious of size on every pull request. So on every pull request in our repo, we run a little script called travis-size-report on Travis CI that just reports what changed: this file's name changed, or this file's size changed. This particular screenshot isn't very interesting — nothing really changed — but sometimes you catch an unexpected change: wait, why did this file's name change? Why did this file suddenly get this big? So it was a good reminder. That's the process we took. I'd like to end with the three learnings we had. First, having a set baseline for the project was definitely great. We started the project with a shared understanding of what was important to us and how we'd make decisions. That meant I could show up to stand-up saying, hey, I want to implement double tap, and Jake could immediately say: nope, can't do, it's not performant.
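The per-pull-request size check amounts to diffing two file-size maps. Here is a toy version of that idea — the real travis-size-report does more (gzip sizes, renames, formatting), so treat this as an illustration of the check, not its implementation:

```javascript
// Toy size report: compare { filename: bytes } maps from before and after a
// change, and list every file whose size changed, appeared, or disappeared.
function sizeDiff(before, after) {
  const changes = [];
  for (const file of new Set([...Object.keys(before), ...Object.keys(after)])) {
    if (before[file] !== after[file]) {
      changes.push({ file, before: before[file] ?? 0, after: after[file] ?? 0 });
    }
  }
  return changes;
}
```

Running something like this on CI makes size regressions visible in the PR itself, which is exactly the "good reminder" role the talk describes.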
And I wouldn't be offended or feel defensive, because I'd just go, oh, that's right — performance is what matters for this project. Second, we think workers are crucial for running a smooth application. We need to run JavaScript off the main thread as much as possible. I don't think we could have made this game work on a feature phone if we hadn't used a worker. Lastly, if you're feeling like, I'm not a game developer, I'm not going to build games on the web — one thing you can take away from this talk is to just study what's in the first interaction of your website or web application and remove everything you don't need. That makes your first-load payload small, so users can get to your service quicker. If you want to check out the app, here's the link to the game. All of the source code is open source on GitHub, so check it out — feature requests and bug fixes are very much welcome. If you have any questions, or want to play the game on a big touch screen, all of us will be at sandbox tent A after this. Thank you very much.