Good morning, everyone, and welcome to day two of Google I/O. My name is Zach Koch, and I'm a product manager on the Chrome team leading our web payments effort. A special thank you for coming out and joining me at 8:30 AM, which is very early. And of course, a special thank you and welcome to everyone on the livestream, as well as anyone that's watching on YouTube later. I'm really happy to be up here, talking to you all today about how we think about the future of web payments and how we're trying to help users, both our users and your users, your customers, have pain-free checkout experiences on the web.

But I thought I'd get started by giving a refresher on why we care about this space at all. I think it stems from the fact that the web is better than ever. You've probably heard it over and over again, from Rahul's keynote to all of our web track sessions, that the web is really amazing these days. You can build fast, rich, app-like experiences, and they're really compelling. And that line between what is web and what is app is blurrier than it's ever been. We think this is an incredible opportunity. And things like WebGL and Polymer and Service Workers are all tools that can help you create these incredible experiences. Personally, my favorite example of how far the web has come is the fact that you can literally circumnavigate the entirety of the Earth's surface directly inside of a browser tab these days. This is Google Earth running inside of Chrome, and to me, it just kind of blows my mind that you can do this on the web. If you don't recognize this, this is actually where we are right now at Shoreline Amphitheatre: the beautiful white tent is where all the keynotes are, and then, of course, we're in the back parking lot. But it's a very nice parking lot. So really incredible stuff.

But when you look at all this amazing progress, there's one area that seems obviously behind, and that's the way we buy things online. Because despite all of this great progress, this is still what most checkout flows look like. It's the same questions, the same form fields, the same multi-step process. For all this great innovation, we're still stuck here. And I suspect this is very similar to the way checkout forms first worked when they launched. In preparation for this talk, I did a little bit of research, and I discovered that the first online transaction was made in 1984, from her own home, by a 72-year-old woman named Mrs. Snowball, which I think is just an incredible piece of fairly worthless trivia. But very little has changed in the years since, even though it's 2017.

And I think the stats reflect that we need some change here. Long checkouts continue to be one of the leading causes of cart abandonment out there. And it makes sense. Here's a question for you: has anyone in here, on a mobile device, ever abandoned a transaction because the process was too long or cumbersome? You can raise your hand. Oh my god, that's way higher than the stat, by the way. So that's amazing. And me too. And sometimes I tell myself, I'm going to go back and complete it on desktop later. Maybe I do, maybe I don't. The point is that there's a lost opportunity to give users a really great experience here and to convert at that moment. So we're quite literally losing money. Despite these challenges, though, mobile commerce is huge. Mobile commerce in the US alone this year is expected to exceed $150 billion.
That's all the more impressive when you consider the fact that mobile websites still convert about a third lower than their desktop or laptop counterparts. So we have a lot of work to do, and a huge opportunity here to really better the lives of our users. Every good platform out there, whether it's apps or the web, needs a good payment solution to succeed. And so it became obvious to us that the web needs a better answer for payments. That's what I'm here to talk to you about today.

And in that sense, I actually have really good news: it already exists. Last year at Google I/O, a couple of my colleagues were up on stage, and they announced that we were working on this new API called Payment Request. Well, I'm happy to report that not only did we say we were going to launch it, we actually launched Payment Request in September of last year. So what is Payment Request? It's an open API, built into the web, designed to be fast and secure, and ready to be used today for transacting on the web platform. We're going to be talking about all of these different components throughout our talk today, and I'm really excited to just dive right into it.

But first, I should clarify something. Payment Request is not a processor. We're not trying to make the browser a processor or a gateway or another new entity in the system to move money from point A to point B. We think the industry has already done a great job of filling that role, with great players like Stripe and Braintree and Vantiv, and we love the work that they're doing. No, we're focused on users. We're focused on taking that user experience that seems stuck in legacy mode and moving it into a much faster, more streamlined experience.

So when we thought about designing the Payment Request API, we had two goals primarily in mind. The first is that transactions on the web should be seamless. And we have to start with the status quo. So we have to take the existing world, oftentimes the world of credit cards, CVCs, billing addresses, all those things that are complicated from a UX perspective, and we have to make them better. The second thing is we have to think about security and how we can improve that. And I don't just mean things like HTTPS, because, of course, we should always have HTTPS everywhere; we're big advocates of that. I mean we should be forward-looking and ask: how can we bring more secure forms of payment, like tokenization, directly into the web platform? How can we unlock opportunities for players like Android Pay and Samsung Pay and future tokenization technologies to have a home directly in the web platform, so that high-quality, secure payments become first-class citizens of the web?

Now, I could talk about this all day long, but I'm going to just go ahead and jump to a demo and show you what this all looks like. All right, so we're going to jump over to the Wolfvision here. As I mentioned, Payment Request is alive and ready today, so you don't have to wait to start implementing. And we actually have a number of really great merchants implementing, one of which is Kogan.com. It's a little blurry, but hopefully you guys can get the idea. Kogan is Australia's largest retail company, and they've implemented Payment Request. And it just so happens that Chrome also has an office in Sydney. Now, I've never been there, but I'm going to operate under the assumption that one day they will let me go to Sydney.
And I'm going to give myself a gift on arrival that will be waiting at the Google office for me. So I'm going to buy something that everyone needs, of course: USB-C adapters, super common. I'm going to add this to my cart, and it's now added to my cart, as you can see. And I'm going to hit the Checkout button. So you see here the standard thing happens: I'm on Kogan.com, the shopping cart loads up, I've got my list of items I'm about to purchase, I have delivery estimates, and I hit the Checkout button.

And here's Payment Request in action. Instead of going to the traditional checkout flow at this point, a payment request sheet slides up directly from the bottom. We've got the merchant name, their HTTPS URL with a lock icon, and the Chrome logo in the bottom left, so users are aware of where this data is coming from, as well as a bunch of other information, as you can see. First, you see we have a total amount of money being requested; you see right now it's five Australian dollars. And you see that I already have a Visa credit card default-selected there. The only thing I have to do is choose a shipping address. This makes sense because, of course, it's a physical good, so we have to ship it somewhere.

So I'm going to tap on that and choose. And you see here that Payment Request already knows my most frequently used addresses. The first one there, my number one, is my main office in our San Francisco headquarters. And the second one actually is our Sydney office, which tells you this is not the first time I practiced this demo. So Chrome has stored and saved my address here. I then tap on this address, and what's happening when I tap on that is we take that address and send it behind the scenes to the merchant, so they can use that address to dynamically populate the set of available shipping options. You'll see they default-selected free shipping for me, but if I tap on that, I can also select express shipping. Now, in this case, I have no idea when I'm going to go to Sydney and pick up my little adapter here, so I think standard shipping will be fine for me.

And then I hit the Pay button. And you'll see the only thing that I have to do is input a CVC. Now, because this is a live transaction and some things should be kept secret from the world, I am not going to show my CVC number to everyone. But you can trust that I just typed it in. And now the transaction is running behind the scenes. We've taken that data, bundled it all up, and sent it behind the scenes to Kogan. And as you can see, the transaction is successful. Yay, I love when demos go well. So that's amazing. What? Thanks.

I think what's really cool about this is that I didn't have to type in anything except for my CVC. We had all my data stored, it was all ready to go, and all I had to do was confirm it, and off it went. That's the kind of seamlessness that we're talking about bringing to the web. Now, I know that we are really focused this year on the mobile web track. But the reality is, we need to switch back to demo mode here because we still have, there we go, perfect. Again, we've been on the mobile web track, and that's the big focus for us at I/O. But we all know that users still love buying things on their desktops and on their laptops, and we thought that we should bring the same great experience to these platforms. So here you see I have a shop loaded up in Chrome running on my Mac.
And I'm going to tap the Buy Now button. As you can see, a new payment sheet, optimized for the desktop experience, comes down from the top. What's great here is you'll see that my same information is available. So I choose an address, and you'll see it's got the same addresses that you saw on my mobile device, because I'm signed in and all that data is syncing across all of my instances of Chrome. In this case, I'm just going to ship to our Spear Street office. You see that shipping methods are still dynamically calculated: we offer free shipping inside of California, but you can see, just as a demo, if I do decide to ship somewhere else, shipping dynamically changes to $10. And in this case, my Visa ending in 8605 is also present, the card I just used to make that last transaction. But since I can't hide the screen here, I added a test card that I'm going to go ahead and use to facilitate payment. So I do the same thing: tap on Pay, insert my three-digit CVC, hit the Confirm button, and the same thing happens. Your order has been successfully placed. So really great. And I'm really excited to say that we are bringing Payment Request to all Chrome platforms very soon. So really nice. So we can pop back over to Slides. Can we pop back over to Slides? I feel like we'll get there. Excellent. OK.

So now you've seen Payment Request in action, and I want to help you understand a little bit about how it works. Now, I won't go into too much depth here, because we already have a lot of great resources, but I want to give you the core idea of how this whole process works. Payment Request is a JavaScript API, of course, built and baked into the web platform to create these experiences. What you saw there was all natively rendered Chrome UI.

The first thing you have to do when you want to create a payment request is tell us how you can get paid. We call these supported methods, and there are two types of supported methods inside of Payment Request. The first are things that are baked into the standard. In this case, we have something called basic-card. Basic-card is a great fallback mechanism, and it's your way of telling the browser: hey, if it looks and feels and functions like a piece of plastic in someone's wallet, it probably maps to basic-card. So it's your way of saying, I accept any of these forms of payment. I don't show it here, but there are ways to get more nuanced than this as well. You can say, I only accept debit cards, or I only accept credit cards from these three networks. All of that's totally possible. The second thing you can pass in are supported methods that we call proprietary methods. These are identified by URLs. In this case, for example, if you wanted to leverage the great new stuff that the payments team announced yesterday around Google Payments, this is how you tell the browser that, hey, I support any of those Google payment methods. And so there's a proprietary system built in here so that every entity in the payments space can participate freely in the Payment Request ecosystem. It's the job of the browser to look at the ways a merchant can get paid and the ways that a user can pay you, and match those automatically. And that's what allows us to offer that rich experience where things are default-selected, so all I have to do is confirm and pay.

The second thing I have to pass in are details of the transaction itself. There are only two required things here.
You have to pass in a label, like "purchase amount," for example, and you have to pass in an amount of money that you're looking to get paid. That amount has a currency code, which could be US dollars or, as you saw in our demo, Australian dollars, and a value. And we use that to render the appropriate amount on the screen. You can also pass in an optional set of display items. These are things that show up to inform the user about how that total amount of money was reached, things like subtotal, taxes, shipping costs, et cetera. They're totally optional, but we really recommend them, because they help explain to the user how that total was reached.

And the final thing you can pass into a payment request is a completely optional set of additional information you might need to complete your transaction. When we saw the Kogan demo, you saw that what they had done was set requestShipping to true. This basically tells the browser: hey, if you have addresses stored, leverage those so the user can select them. And this is a fully dynamic system. When I tap an address, we take that address, bundle it, and send it to the merchant so they can dynamically calculate shipping options, and there's a way in the API to then update us on the set of available options. So it makes for a really robust, rich experience. You see, you can also request things like email, phone, and name. Email, for example, was also in the Kogan demo, because for a new customer like me, who had never purchased before, it's utilized to send an email transaction receipt. Again, all of these are entirely optional. The only things you have to have in a payment request are an amount of money and a way that you can get paid, at least one way to get paid.

Oh, and now you have to put it all together, of course. It's quite simple: you just construct a PaymentRequest and pass in those three components we just talked about. Then, whenever you're ready to actually show that payment sheet, you call .show(). And .show() is the function that says, hey, actually slide up that payment sheet and facilitate payment. That returns back a JavaScript promise, and you're just going to wait. At some point, you're going to get back a payment response, and that payment response is just a JSON object that contains all of the data that you requested and that you need to facilitate the transaction. So if it's a credit card, you're going to get things like card number, name, a billing address, even a CVC to run it through. If it's a proprietary form of payment, you'll get all of the data for that particular form of payment. Android Pay is also supported here, so if you got an Android Pay response, you would be able to parse that response, pull out that tokenized form of payment, and send it off directly to your payment processor.

And the final step is that once you're all done, you just call complete(). That's your way of telling the browser: hey, I'm finished with this transaction, go ahead and close that payment sheet. That also returns back a promise that will resolve when the payment sheet is completely closed down. That way, if you have great UI bits that you want to flip or whatever, you can sequence that all seamlessly with our closing animation. And that's it; that's as much as it takes to get that great experience we just showed on Kogan.com. Now, there's one other API I want to mention, or really one method, which is canMakePayment.
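Put together as code, the flow just described looks roughly like this. This is a minimal sketch, not a real integration: the amounts mirror the demo, and processPaymentSomehow() and computeDetailsForAddress() are hypothetical placeholders for your own server calls.

```js
// A minimal sketch of the Payment Request flow just described.
const methods = [
  { supportedMethods: 'basic-card' },             // card-like instruments
  { supportedMethods: 'https://google.com/pay' }  // a URL-identified proprietary method
];

const details = {
  total: { label: 'Purchase amount', amount: { currency: 'AUD', value: '5.00' } },
  // Optional display items that explain how the total was reached.
  displayItems: [
    { label: 'USB-C adapter', amount: { currency: 'AUD', value: '4.50' } },
    { label: 'GST',           amount: { currency: 'AUD', value: '0.50' } }
  ]
};

// The optional third component: extra data needed for this transaction.
const options = { requestShipping: true, requestPayerEmail: true };

const request = new PaymentRequest(methods, details, options);

// When the user picks an address, recalculate shipping and update the sheet.
request.addEventListener('shippingaddresschange', event => {
  // computeDetailsForAddress() is a placeholder returning updated details.
  event.updateWith(computeDetailsForAddress(request.shippingAddress));
});

request.show()                        // slide up the sheet; resolves with a response
  .then(response =>
    processPaymentSomehow(response)   // placeholder: send response.details to your processor
      .then(() => response.complete('success')))  // tell the browser to close the sheet
  .catch(err => console.error('Payment failed or was dismissed:', err));
```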
canMakePayment is a method that allows you to ask Payment Request whether the user has a form of payment already active and ready to go, before calling .show(). If the user doesn't have anything set up, you'll get back false; if they have something set up, you'll get true. We think this method, along with that set of optional information at the end there, that third component, allows Payment Request to be really flexible, so it can work in all of your different flows. You can use it as simply as a payment mechanism, just for a credit card, or it can facilitate the entire transaction flow, including shipping address and names and emails and phone numbers. And we have merchants shipping all different variations of this.

Now, as I mentioned, one thing that we're really happy to announce today is that Payment Request is coming to every single Chrome platform. We announced Android last year, and now we're happy to announce it's coming everywhere: Android, Linux, Windows, Chrome OS, Mac, as well as even Chrome for iOS. And for users that are synced, all of that data is stored and synced across all your devices. The minute I sign in from one to another, all that data is there and ready to go. There's a theme here with Payment Request, which is: let the browser help you. We've got this data stored. Our users trust us to store it. What we want to do is share it with you, and all we need is user consent to do so. So in some sense, think of Payment Request as a way for a user to grant consent to share this data with you, seamlessly and easily.

Now, something else I'm excited to announce is that we also have support for all Google forms of payment built right in. If you went to the keynote yesterday with Shridhar and Polly, you saw that we're building this great new payment experience where all of your forms of payment in Google are grouped under a single application. All of this is going to be built in and work seamlessly inside of Chrome and with Payment Request, where most of the heavy lifting is actually done by us. You'll pass in that google.com/pay identifier, and you're going to get all the benefits of the Google payments ecosystem built right in, and access to those hundreds of millions of forms of payment on file, which we think is really compelling. And as you can see, it's a really nice experience built directly into Chrome.

Something else that I think is also incredible is the fact that Payment Request now also works seamlessly with AMP. If you're not familiar, AMP is Accelerated Mobile Pages: basically, pages that load virtually instantaneously. In this GIF on the right-hand side, you'll see a live example of this working on one of our partner sites, portero.com. You'll see that it all starts from a search. You search, you tap on a result, and that page loads basically instantly, so the user can very quickly go from searching to seeing the entire product information. They then tap that Buy button directly on the product page, and the payment request sheet slides up. All the information is there; you're ready to transact. So the entire experience from front to back can be done in less than a minute. It's really amazing. AMP has seen incredible success, and so if you haven't yet looked into AMP for your product pages, I would encourage you to check it out. And if you have AMPed your product pages, I would really encourage you to look into leveraging Payment Request to facilitate that transaction right there on the page.
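As a concrete illustration of the canMakePayment() gating described at the top of this passage, here's a minimal sketch, reusing the methods, details, and options from the earlier sketch; the two UI helpers are hypothetical.

```js
// Only surface the fast path when the user actually has a matching
// instrument ready to go; otherwise fall back to the regular checkout.
const request = new PaymentRequest(methods, details, options);

request.canMakePayment()
  .then(ready => {
    if (ready) {
      showBuyNowButton(() => request.show());  // hypothetical UI helper
    } else {
      fallBackToLegacyCheckout();              // hypothetical: your existing flow
    }
  })
  .catch(() => fallBackToLegacyCheckout());    // e.g. the query was rate-limited
```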
Now, I know I'm talking about a lot of Google here: Google Chrome, AMP, which is an open standard, Google Payments. But one thing that is really great to be able to announce today is that this is a cross-browser API. We talked about openness, right? And openness means that it should work everywhere. And so this is really great: Edge has launched, Samsung Internet has launched, Chrome has launched, and Firefox is in development and launching soon. So we've seen a lot of movement in the industry, which we think is really incredible. Yeah. Thanks. I think this tells you that there's a commitment in the industry to solve this problem, to really make web payments compelling, and to give users on every platform, independent of their browser choice, a great checkout experience. And so it's been great working with all these players inside of the W3C.

And of course, as I mentioned, this is available now. This isn't just vaporware; it's not something that we're launching soon. No, Payment Request exists, and we already have a number of great merchants, literally around the globe, shipping integrations. You've got Wego, who you may have heard of already; they're based in Southeast Asia. We've got Kogan in Australia. JD Sports and Nivea, players from Europe. We're really excited and thankful to all these early partners, and these are just some of the merchants that are shipping Payment Request integrations right now. We also want to bring the benefits of Payment Request and seamless transactions to all merchants, independent of size. And one way to get that reach is to work with these great channel partners: people like WompMobile, who you saw in the AMP demo, WooCommerce, BigCommerce, Mobify, Weebly, Shopify. By working with these guys and building Payment Request integrations right in, we can bring these benefits directly to all merchants, even those in the long tail, because everyone should be able to give their users stellar experiences. And so we've been really happy to work with all these great guys.

Now, I realize that this can be overwhelming, and that checkout flows are complicated. You're like: Zach, this all sounds great, but you have not seen my checkout flow. I've got guest checkout flows. I've got registration. I've got sign-in. I've got coupon codes. I've got all this stuff. And I realize that it's a big ask to go from not using this new thing to leveraging this new thing. So I want to throw out a potential place to consider getting started. And it starts with a challenge, which is to go back and ask yourself: what percentage of your transactions, especially on mobile, have only one item in the cart at checkout? So go back, run your numbers, and figure out what percent of transactions have only one item in the cart at checkout. And again, focus on mobile. I think you'll be surprised at how high it is. We ask this question to our partners all the time, because we're curious, and it's always higher than what we expect. We've been amazed to see that sometimes up to 80% of checkouts contain only a single product.

But if that's the case, why are we still stuck with a legacy system? The way we shop in a physical store is: I grab my cart, I walk down the aisles, I add items into my cart, and then I go up to the front and check out. That's the whole process. And then we brought e-commerce into the world, and we thought, eh, we'll stick with the same model.
We're going to go ahead and have virtual aisles, and the user will have a virtual shopping cart. They're going to walk the aisles and add items in, and then they're going to click on that shopping cart to review and check out. And then our form factors got smaller, and we're like, ah, we'll keep the same model. So users are going to go on their tiny little devices, navigate our virtual aisles, and add to the shopping cart. And so the flow ends up looking like this: up to 80% of your users, sometimes, are on a product page, and they have to tap on the product, add it to the cart, go find the cart, click on the cart button, review the cart, and start the checkout process. And you just think, man, maybe we can optimize this.

So here is my recommendation, or something you should consider to get started, which is: consider Payment Request as a Buy Now button that you can add directly to your product pages. It's a way to pretty simply get started. You can leverage tools like canMakePayment and requestShipping and requestPayerEmail to facilitate that entire transaction right there on the product page. Try it as an experiment and see how it performs. I think it's worth giving users, especially those that only want to make a single fast purchase, the option to do so. And again, with tools like canMakePayment, you can do so in a way where you have full control over the system. If a user doesn't have that seamless experience set up, that's fine: skip it and go to your legacy flow. But we think there is a really great opportunity here to have an impact and drive up conversions. And then later, if this proves successful, you can consider how to leverage Payment Request in your default checkout flow as well.

Now, we've talked from the very beginning about openness. And when we first announced Payment Request, we said that it was never about just making Google forms of payment easier, although, of course, we care about that. True openness means that everyone should be able to participate in the ecosystem. And so one of the things that we're really, really happy to announce today, which we've been working on for a long time, is that we're officially opening up Payment Request so that third-party payment application providers can participate directly in the ecosystem and show up in that exact payment request sheet. Because the reality is that the web is global and payments are global, and if we want to have global scope, we have to open it up so that everyone can participate freely. And we think this is really great. But to talk a little bit more about this, I want to invite up a couple of my colleagues from the W3C's Web Payments Working Group, who we've been working on this with. From Alipay and Alibaba, let me welcome to the stage Max and Jiajia to talk a little bit more about this.

Yeah. Hello, everyone. I'm Max, and I lead the international standards work for the Alibaba Group. Hello, everybody. I'm Jiajia. I'm a senior engineer from Alipay, which is a company of the Ant Financial Group. Before getting started, let me see a quick show of hands. How many of you have ever heard of Alipay? Whoa. OK. So how many of you have ever used Alipay to make a transaction? Oh, not bad. So mobile payments are very popular in China. For example, Alipay is used for online shopping, for taking taxis, at the supermarket, and lots more. And we can use fingerprint, face detection, and other biometric authentication methods. It's very secure and convenient.
Alipay is not only popular in China; it's also a global brand. When we were considering bringing Alipay to the global market, we found that the mobile web is the popular venue for people doing online shopping. We wanted to bring the good user experience of native Alipay directly into the web ecosystem, and we saw that the Payment Request API standard is the best way to do this. We joined the W3C and the Web Payments Working Group to work with Google to make sure that the Payment Request API standard can also support native payment apps.

Updating your native Android app to run inside of the Payment Request API is actually quite simple. You just need to add an entry in your Android manifest XML file to respond to the payment intent, and add metadata which specifies your payment method's name; for example, for Alipay it's alipay.com/webpay. Your payment app will then receive the payment request through this intent, and should reply with the correct response for your particular application.

But as you can imagine, invoking a native payment app from the browser has security challenges. For example, if there is a phishing attack, and a fake Alipay is installed on the user's phone, we don't want the browser to open up that fake app. So how do we prevent this in the standard? The solution is a couple of manifest files located on the payment provider's website. These can contain a variety of useful information used to verify the authenticity of the payment app. The first is what we call the payment method manifest file. This is downloaded by the browser using the payment method identifier, which for us is alipay.com/webpay: the browser requests that URL and follows an HTTP Link header in the response, which points to the location of the payment method manifest file on alipay.com. Inside of this payment method manifest, we can then define the location of our second manifest, the web app manifest. This is where we define the information about our application itself, including the package name and the fingerprint information. What is important is that only Alipay has control over this manifest file. And when the browser tries to invoke the payment app, it will first download this manifest file and verify that the signatures match.

Let's look at the demo and see how this works. All of the demo is real. In the first demo, the merchant supports multiple payment methods, and when the user clicks the Buy button, Alipay can show up as one available payment method. And if the user selects Alipay, the native Alipay app will be opened up, and the user can use their fingerprint to do the authentication. It's very convenient and secure. In the next demo, the merchant only supports Alipay, so the merchant can even show a "pay with Alipay" button. And with just one click, and within several seconds, the payment can be finished. It's very convenient, and I think that will help you to spend your money more easily. Cool.

So we are currently working hard with the Chrome team to bring this feature into production. If you are a developer, you can use this feature in Chrome, in Alipay, in UCWeb, and hopefully in more browsers in the near future. Alipay is the world-leading third-party payment platform. We have supported more than 18 currencies, including US dollars, Hong Kong dollars, euros, pounds, Japanese yen, and so on. We are continually improving user experience and security through technology, innovation, and implementing industry standards. Welcome to join the payment ecosystem. Thanks for being here. Thank you. Awesome. Thanks, Max. And thank you, Jiajia.
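Putting the pieces Jiajia described into concrete form, here's an illustrative sketch modeled on Chrome's Android payment app integration; the activity name, package name, paths, and fingerprint value are placeholders, not Alipay's actual files. First, the Android manifest entry that responds to the payment intent and declares a default payment method name:

```xml
<!-- AndroidManifest.xml (values are placeholders) -->
<activity android:name=".PaymentActivity" android:exported="true">
  <intent-filter>
    <action android:name="org.chromium.intent.action.PAY" />
  </intent-filter>
  <meta-data
      android:name="org.chromium.default_payment_method_name"
      android:value="https://alipay.com/webpay" />
</activity>
```

Then the payment method manifest, found by following the HTTP Link header served from the payment method identifier URL, which delegates to a web app manifest:

```json
{
  "default_applications": ["https://alipay.com/manifest.json"]
}
```

And finally the web app manifest, which pins the app's package name and signing certificate fingerprint:

```json
{
  "name": "Alipay",
  "related_applications": [{
    "platform": "play",
    "id": "com.example.alipay",
    "min_version": "1",
    "fingerprints": [{
      "type": "sha256_cert",
      "value": "AA:BB:CC:..."
    }]
  }]
}
```

The browser checks the installed app's package name and signature against this file before firing the intent, which is what blocks the fake-app attack described above.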
I think it's really exciting to see an industry like this evolve, and we're really happy to be a part of it and to really push for openness on the web. As they mentioned, I'll quickly recap: there are actually just three steps to get started integrating your application today. One, you'll define your payment method identifier; for Google it's google.com/pay, and for Alipay it's alipay.com/webpay. Second, you'll make a few updates to the manifest file in your Android app, as well as add some functions to handle those payment requests. And then finally, you'll set up some manifests on your web server, so we can be confident that Chrome is opening the right application. So really simple, really great. And I'm happy to announce that we're already working with some really great partners: you can expect to see Samsung Pay coming into Chrome, Alipay, as you saw just now in the demo, as well as Square Cash, and a number of others. So I think you're going to see a lot of evolution in this space over the next three to six months.

Now, I recognize there's a bit of irony here, which is that we are on the web track, talking about how great the web is and how amazing it is, and then I'm up here saying our first integration is with native Android apps. This is for a couple of reasons. One is because we do think that there are great experiences that can be built with native applications. You get access to things like fingerprint sensors, so you can have biometric authentication, and that makes for great experiences. The second is that it's just a little bit easier to get started. But I have great news, which is that we are actively working on pure web app support, so you get all the benefits of the web, like no installation, immediate availability, and global reach, and web apps can be full first-class citizens inside of the Payment Request system. We think this is actually really compelling. We recognize that users love to pay with certain brands, but don't necessarily have those apps installed. That's OK, and we're fully committed to bringing that support. And today, I'm excited to announce that we are working with PayPal to actually build out this experience and bring PayPal's web app directly into the Payment Request experience. So look forward to hearing more about that over the next few months.

Now, again, we talked about a lot today: a lot of code samples, a lot of overviews. So we have a lot of great integration guides out there that you can reference. We've got Payment Request guides, we've got a guide on integrating your Android payment application, and we also have a great codelab that you can run through, which helps you get familiar with the Payment Request system. So I encourage you to check those out. As well, after this we do have a mobile web tent, which is just around the corner, out this door to the left. So I would encourage you to stop by, say hello, see some of these things in action, and ask the hard questions. So thank you so much, everyone. It's been a pleasure to be up here today. Hope to talk to you afterwards.

Hey, everyone, Sam here. In this web series, we'll solve common web problems with standards. These techniques are part of the web platform and work with any framework or library. OK, let's go. If you're animating some elements, whether you're using CSS, the Web Animations API, or libraries which just use requestAnimationFrame, make sure you help your browser stay speedy by letting it use your 3D graphics card. Sam, what are you talking about?
When you look at a website, it's made up of layers stacked on top of each other. And your graphics card won't need to redraw when only certain CSS properties change, specifically opacity and transform. These two properties let you move stuff around and change how visible it is. You can animate width, color, or other CSS properties, but do it sparingly, and not in response to user action, where you want to be snappy. To keep things fast, if you have elements that move around a lot, you'll also want to set the will-change: transform or will-change: opacity CSS property. This is a hint to the browser: this element should be a layer all the time. Otherwise, when it gets upgraded or downgraded to a layer, you'll incur a cost; the browser has to redraw that specific part of the page. But if you give everything in your page a layer, this might bog down your browser as it struggles to composite them all together. This is a complicated topic, and you should check out more documentation to learn more about it. And most importantly, your site doesn't need to be perfect. It won't crush your user's experience to make a few mistakes here and there, but it's good to keep on top of it. Check Chrome's Rendering section inside Developer Tools once in a while, just to see what's happening and how your site's doing. This was Layers the Standard Way. See you on the next tip.

The only thing that evolves faster than technology is our expectations. We want everything better, easier, now. Suddenly, downloading an app feels like it takes forever. And in many parts of the world, data is still at a premium, with one megabyte costing up to 5% of a monthly wage. Let's face it, though: until now, the alternative to native apps hasn't been great. But that was then. Progressive web apps can now deliver mobile web experiences with a native-looking feel, offering features like real-time push notifications, adding a site to your home screen so you can easily jump back to it with a single tap, even when you're offline, plus the ability to make quick payments on the go. And all from your browser. This is the next generation of the mobile web. So what are we waiting for? Let's go and build something great.

We tend to talk about the web, and today I think is no different. Do you think you can beat Wikipedia at a server-side render, do you? I did. The rules all change. One code base is not what you want, because then you end up with stagnation. I've got one function that actually produces the other function. It's almost become self-aware. The code is now writing itself. What time does the game start?

LinkedIn is the premier social network for professionals. We are consistently one of the top apps on Google Play. My name is Pradeepra Dash. I'm the engineering manager for the LinkedIn infrastructure team for the flagship apps. We have a million-plus reviews. We are consistently four-plus stars. Our users are really appreciative of how stable the app is, and how it really helps them bring their professional profile forward. My name is Drew Hanny. I'm on the Android infrastructure team at LinkedIn. My team is responsible for the overall health of our Android application. We take care of releases to Google Play, our testing pipeline for the app, our build pipeline. One of the tools that my team has been excited about is the APK Analyzer tool, because we spend a lot of time paying attention to the size of our app, and it used to require some expertise to figure out what was causing your app to be a certain size.
But now that we have the APK Analyzer tool, we've been able to expand that knowledge throughout the entire team. LinkedIn was an early adopter of Gradle, so adopting Gradle for Android was a pretty natural fit for us, because we can share a lot of our custom plug-in logic that we've built, and we can also get consistent builds across developer machines and our build servers. One thing that's really benefited us is the Lint system, which has a lot of built-in checks for common problems, and we've really appreciated being able to write our own custom Lint checks for LinkedIn-specific internal libraries. With so many developers checking into the code base, one thing that helps in onboarding someone new is having a consistent code style throughout the code base. Android Studio lets us define a custom code style that each developer can add and automatically have their code formatted in the LinkedIn style. One of the things that we really appreciate is the open nature of the platform, where the feedback that we have given back to the developer community has not only been accepted, but is also being worked upon. In the last couple of years, my team has become way faster, much more agile, and able to code easier and faster in Android Studio.

I also learned how to program computers, and then in 2001, we started the first video games company for mobile phones in Spain, called MicroJocs. In 2013, my studio was acquired by a big company. Some of the guys and myself, we decided that we should do something fresh, something new, and we founded Omnidrone. Titan Brawl is a real-time strategy game. It's considered a MOBA, a multiplayer online battle arena, but especially designed for mobile devices. The game is today as it is thanks to the Early Access program; we changed many things from the learnings, from the community. Since we launched the game on Early Access, we got more than 2 million installs on Android devices. We started in the Early Access program back at the very beginning of it. The difference between the Early Access program and a traditional soft launch is that users are actively giving the team feedback. So you don't only check the metrics you have; they also provide possible solutions. So you end up making the game players want to play. Not having the ratings, but still having the constructive feedback, was very good. Early Access was a great opportunity for an indie developer, someone starting out, and very key for us at Omnidrone. When we started with the Early Access program, we approached it in different stages. The idea was, at the beginning, to focus on the engagement of the game. Once we sorted that out, we focused on the retention of the game. And finally, we focused on monetization, to have a valid product for the market. With Early Access, we managed to improve our retention by 41%, engagement by 50%, and monetization by 20%, from the very beginning of the program till the worldwide launch of the game. I feel very happy working in the video games industry, because it has been my passion since I was a child. And it's really inspiring that, through Omnidrone, we have a real chance to shape the new era of video games.

In response to popular demand, the Android framework team has written an opinionated guide to architecting Android apps, and they've developed a companion set of architecture components. Hi, my name's Lyla, a developer advocate for Android, and I'm here to introduce you to these shiny new architecture components.
These components persist data, manage lifecycle, make your app modular, help you avoid memory leaks, and prevent you from having to write boring boilerplate code. Your basic Android app needs a database connected to a robust UI. The new components, Room, ViewModel, LiveData, and Lifecycle, make that easy. They're also designed to fit together like building blocks. So let's see how.

I'll tackle the database using Room, which is a new SQLite object mapping library. To set up the tables using Room, we define a plain old Java object, or POJO. We then mark this POJO with the @Entity annotation and create an ID marked with the @PrimaryKey annotation. Now, for each POJO, you need to define a DAO, or data access object. The annotated methods represent the SQLite commands you need to interact with your POJO's data. Take a look at this insert method and this query method: Room has automatically converted your POJO objects into the corresponding database tables and back again. Room also verifies your SQL at compile time, so if you spell something a little bit wrong, or if you reference a column that's not actually in the database, it will throw a helpful error.

Now that you have a Room database, you can use another new architecture component called LiveData to monitor changes in the database. LiveData is an observable data holder. That means it holds data and notifies you when the data changes, so that you can update the UI. LiveData is an abstract class that you can extend, or, for simple cases, you can use the MutableLiveData class. If you update the value of the MutableLiveData with a call to setValue(), it can then trigger an update in your UI. What's even more powerful, though, is that Room is built to support LiveData. To use them together, you just modify your DAO to return objects that are wrapped with the LiveData class. Room will create a LiveData object observing the database. Then you can write code like this to update your UI. The end result is that if your Room database updates, it changes the data in your LiveData object, which automatically triggers UI updates.

This brings me to another awesome feature of LiveData: LiveData is a lifecycle-aware component. Now, you might be thinking, what exactly is a lifecycle-aware component? Well, I'm glad you asked. Through the magic of lifecycle observation, LiveData knows when your activity is on screen, off screen, or destroyed, so that it doesn't send database updates to a non-active UI. There are two interfaces for this: LifecycleOwner and LifecycleObserver. LifecycleOwners are objects with lifecycles, like activities and fragments. LifecycleObservers, on the other hand, observe LifecycleOwners and are notified of lifecycle changes. Here's a quick peek at the simplified code for LiveData, which is also a LifecycleObserver. The methods annotated with @OnLifecycleEvent take care of initialization and teardown when the associated LifecycleOwner starts and stops. This allows LiveData objects to take care of their own setup and teardown. So the UI components observe the LiveData, and the LiveData components observe the LifecycleOwners. As a side note to all you Android library designers out there, you can use this exact same lifecycle observation code to call setup and teardown functions automatically for your own libraries.

Now, you still have one more problem to solve. As your app is used, it will go through various configuration changes that destroy and rebuild the activity.
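Before getting to that problem, here's how the Room and LiveData pieces described so far might look in code: a minimal sketch in Java, with hypothetical names (Song, SongDao, updateUi), using the current androidx packages rather than the original android.arch ones.

```java
// Song.java: a POJO marked as a Room entity, mapped to a table.
import androidx.room.Entity;
import androidx.room.PrimaryKey;

@Entity
public class Song {
    @PrimaryKey public int id;
    public String name;
    public String artist;
}
```

```java
// SongDao.java: the data access object for Song.
import androidx.lifecycle.LiveData;
import androidx.room.Dao;
import androidx.room.Insert;
import androidx.room.Query;
import java.util.List;

@Dao
public interface SongDao {
    @Insert
    void insert(Song song);

    // Room checks this SQL at compile time; because the return type is
    // wrapped in LiveData, observers are notified whenever the table changes.
    @Query("SELECT * FROM Song")
    LiveData<List<Song>> getAllSongs();
}
```

Observing it is then one line; the observer only fires while the activity is active, because LiveData is lifecycle-aware:

```java
// Inside an activity (a LifecycleOwner); updateUi() is a hypothetical helper.
songDao.getAllSongs().observe(this, songs -> updateUi(songs));
```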
We don't want to tie the initialization of LiveData to the activity lifecycle, because that causes a lot of needlessly re-executed code. An example of this is your database query, which is executed every time you rotate the phone. So what do you do? Well, you put your LiveData, and any other data associated with the UI, in a ViewModel instead. ViewModels are objects that provide data for UI components and survive configuration changes. To create a ViewModel object, you extend the ViewModel class. You then put all of the necessary data for your activity's UI into the ViewModel. Since you've cached data for the UI inside of the ViewModel, your app won't re-query the database if your activity is recreated due to a configuration change. Then, when you're creating your activity or fragment, you can get a reference to the ViewModel and use it. And that's it. The first time you get a ViewModel, it's generated for your activity. When you request a ViewModel again, your activity receives the original ViewModel, with the UI data cached, so there are no more useless database calls.

To summarize all of this new architecture shininess: we've talked about Room, which is an object mapping library for SQLite; LiveData, which notifies you when its data changes so that you can update the UI, and which, importantly, works well with Room, so that you can easily update the UI when the database values change. We've also talked about LifecycleObservers and LifecycleOwners, which allow non-UI objects to observe lifecycle events. And finally, we've talked about ViewModels, which provide you data objects that survive configuration changes. Altogether, they make up a set of architecture components for writing modular, testable, and robust Android apps. You can sensibly use them together, or you can pick and choose what you need. But this is just the tip of the iceberg. In fact, a more fully fledged Android app might look like this. For an in-depth look at how everything works together, and the reasoning behind these components, check out the links in the description below. To jump straight into code and get started working with these objects, you can check out the codelabs and samples for Lifecycle and persistence. Happy building, and as always, don't forget to subscribe.

I'm Wojtek Kaliciński. This is Android Tool Time, and let's talk a bit about the Espresso test recorder and how it can help with adding UI tests to your app. Well, first, a short explanation for those unfamiliar with Espresso. Espresso is a testing framework designed to provide a fluent API for writing concise and reliable UI tests. However, it is often the case that developers are reluctant to add UI tests to their apps, or simply don't have time to learn the framework. This is where the Espresso test recorder comes in. It lets you create and add UI tests to an existing app in an interactive way. You might have previously seen the beta version of this feature, but in Android Studio 2.3, we are promoting it to stable, with a few enhancements. To get started with the test recorder, click on Record Espresso Test under the Run menu. The device selection dialog pops up, and after you make your choice, the test recorder runs your app in debug mode. Simply progress through your app's UI as a regular user would, by clicking buttons, swiping, and typing into input fields, and all those actions will appear in the test recorder window.
You can also click here to add an assertion to your test at any time during recording, which will trigger the test recorder to dump the current view hierarchy. To select the view you want to assert on, click on the screenshot that appears in the recorder window, and choose the assertion type: the view exists, doesn't exist, or contains the specified text. When you've finished recording your test, the test recorder generates the equivalent test code to run your actions and assertions, and puts it in a new file in your project's instrumentation test folder. It also checks if your build file contains the required Espresso dependencies, and adds those if needed. When you look at the source file that the Espresso test recorder created, you will see that it's perfectly normal, human-readable code. So if you need to further customize your tests, or alter them when your app changes, you can simply open the file again and make the alterations you need.

As you can see, the Espresso test recorder is very simple to use, but it does come with some limitations. As of Android Studio 2.3, only a few of the most common assertions are available through the recorder UI, so if you need anything more complicated than that, you will need to edit the generated code by hand. Also, at this stage, the test recorder cannot handle all situations where additional synchronization is needed to deal with delays and async operations in your apps. I highly recommend getting familiar with the Espresso idling resource API and using that in your tests to signal when a long-running operation happens. For advanced users who want to tweak some aspects of test code generation, there's a settings page for the test recorder in Android Studio Preferences. Here you can change the maximum view hierarchy depth that will be used for view identification, and whether app data should be cleared every time you record a new test. The Espresso test recorder is a great way to start adding tests to your app, whether you want to learn Espresso by examining the generated code, or simply to quickly build a test suite which you can customize later. We look forward to your feedback on our social channels, and happy testing.

There are approximately 285 million people with visual impairments around the world. Making your app accessible doesn't just open it up to these users; it has the potential to improve design for everyone. Most people are familiar with an accessibility service called TalkBack, which is a screen reader utility for people who are blind or visually impaired. With TalkBack, the user performs input via gestures, such as swiping or dragging, or an external keyboard. The output is usually spoken feedback. There are two gesture input modes. The first one is touch exploration, where you drag your finger across the screen, and the second one is linear navigation, where you swipe left and right with your finger until you find the item of interest. Once you arrive at the item you're interested in, you double-tap on it to activate it.

The primary way in which you can attach alternative text descriptions for your UI elements, to be spoken by TalkBack, is by using an Android attribute called contentDescription. If you don't provide a content description for an image button, for example, the experience for a TalkBack user can be jarring: "Unlabeled button, double tap to activate. Unlabeled button, double tap to activate." For decorative elements, such as spacers and dividers, setting the content description to null will tell TalkBack to ignore and not speak these elements.
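As a minimal sketch of the labeling just described (the IDs, strings, and drawables are hypothetical):

```xml
<!-- An actionable control gets a spoken label via contentDescription. -->
<ImageButton
    android:id="@+id/play_button"
    android:layout_width="48dp"
    android:layout_height="48dp"
    android:src="@drawable/ic_play"
    android:contentDescription="@string/play_song" />

<!-- A purely decorative divider is explicitly hidden from TalkBack. -->
<View
    android:layout_width="match_parent"
    android:layout_height="1dp"
    android:contentDescription="@null" />
```

Note that the label says what the control does, not what it is; as explained next, TalkBack announces the control type on its own.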
Make sure not to include the control type or control state in your content description, words like "button," "selected," "checked," et cetera, as Android natively adds those for you. Android Lint will automatically show you which UI controls lack content descriptions.

To keep TalkBack's spoken output tidy, you can arrange related content into groups by using focusable containers. When TalkBack encounters such a container, it will present the content as a single announcement. For more complex structures, such as tables, you can assign focus to a container holding one piece of the structure, such as a single row. Grouping content both reduces the amount of swiping the user has to do and streamlines speech output. Here is an example of how ungrouped table content works: "Song details. Name. Hey Jude. Artist. The Beatles. Cost. $1.45." And here's the same content with grouping applied: "Content grouping activity. Song details: name, Hey Jude; artist, The Beatles; cost, $1.45." You should manually test your app with TalkBack, eyes closed, to understand how a blind user may experience it. We also provide Accessibility Scanner as an app on Google Play. It suggests accessibility improvements automatically by looking at content labels, clickable items, contrast, and more.

Visual impairment doesn't just refer to blindness; 65% of our population is farsighted, for example. With careful design, you can make sure that many of your visually impaired users have a positive experience without having to rely on TalkBack. Begin by making sure that the UI of your app works with other accessibility settings, including increased font size and magnification. Keep your touch targets large, at least 48 by 48 dp; this makes them easier to distinguish and touch. Provide adequate color contrast. The World Wide Web Consortium created color contrast accessibility guidelines to help. And to assist users with color deficiencies, use cues other than color to distinguish UI elements, for example, more descriptive instructional text. If you're using custom views, or drawing your app window using OpenGL, you need to manually define accessibility metadata so that accessibility services can interpret your app properly. The easiest way to achieve this is to rely on the ExploreByTouchHelper class. With just a few methods, you can build a hierarchy of virtual views that are accessible to TalkBack. Making your app accessible doesn't just lead to new users; it helps to make the world a better place, one app at a time. To read more about developing and testing your apps for users with visual impairments, check out the links below. Also, check out the video on developing for users with motor impairments.

Hey, good morning, everybody. Good to see you all here and on the livestream. Good morning around the world; if you're joining in, welcome. My name is Paul Bakaus. I'm a developer advocate working on making the web faster and more user-friendly, currently with my focus on Accelerated Mobile Pages, AMP. And I'm here to tell you about a combination made in heaven, as I would call it. What I'm talking about is the combination of AMP and Progressive Web Apps. Now, AMP makes the first hop blazing fast; if you've experienced it before, you know that's usually the case. And Progressive Web Apps enable amazing reliability and engagement that you would otherwise only see in the native world. But that's just on the surface. So on the surface, that combination kind of makes sense.
But then we have to start from the beginning, because, once upon a time... well, the story starts with a modern-day web developer in 2017, with a growing imposter syndrome and the feeling of drowning. Back in 1999, if you created a web page, what you did is: you created some HTML markup, you added some CSS, and then you hit the publish button. You had some really cheap server. There was no Slashdot or anything to give you the Slashdot effect. So it really was fairly good, and I felt like I could do anything. I felt like, oh yeah, this is amazing. I'm connected to so many people around the world, and it's free entry for everyone.

But now let's look at 2017. You create your markup. You add some CSS. Then someone tells you to definitely choose a build tool, and make sure you choose the right one. You add Babel and webpack, and obviously React, because I heard millennials like React. And then there are so many other things that you have to do to actually be competitive in today's landscape. So yes, being a modern web developer is hard. But in fact, if you're looking at it from a company angle, let's say you're a publisher: what is your company trying to accomplish, and what do they need to do? Now you go one level up, and you see it gets even worse, because you have a number of different deploy targets: the mobile web, the desktop web, native Android, native iOS, Instant Articles, Apple News, and so on. There's a number of growing ecosystems that you need to support. And now I'm here to tell you to add two more to the roster. And you're like, oh, no. So yes, I get it. I get it. But hear me out.

Let's take a zoom out, a bird's-eye view, onto what we're actually trying to do. If we look at an e-commerce case, what are we actually trying to do, regardless of technology? I'll take a guess. You want to sell stuff. You want to sell lots of stuff. To sell stuff, you need to bring your product landing pages to lots and lots of places where they're seen by millions of people. Landing on your browse and product pages should be effortless and instant. The payment process should be effortless and simple as well. And the user should automatically be inclined to come back and buy more over time, through re-engagement. That's the kind of experience I want: I want to start fast and then stay fast.

So why does AMP solve part of this problem? Well, I'm just going to recap briefly why AMP matters here. Think about your apartment key. Would you give your apartment key to all of these people? Well, there's usually one or two that say, yes, sure, it could turn into an epic house party. However, that's really optimistic. I think this scenario is way more likely. On the web, there's an ongoing challenge of having a lot of monetization and user acquisition methods in play versus the user experience. And this happened because we're giving the keys to our web pages to everyone. As soon as we add a script from a third party to a web page, all bets are off. If you can't trust that source, they could do whatever they want on your web page. It is really like giving a key to a stranger. And so what we get because of that are slow-loading pages, non-responsive content, content that's shifting around, experiences that are really, really not great. And we just had that. So I'm going to skip this. Now, the current situation is pretty bad.
We have over 200 server requests per mobile web page on average. And 19 seconds is the average mobile landing page load time over 3G. Meanwhile, 77% of all mobile sites take longer than 10 seconds to load. But don't worry, it gets worse. That 10-second number in particular is really important, because at 100 milliseconds, an interaction, for instance clicking on a button, feels instant. At one second, it still feels natural; you still keep your context. But at 10 seconds, you've lost the user's attention. And yes, that has been measured multiple times: after 10 seconds of loading a mobile page, almost all users drop off. Now, if you've been listening to the previous stat, yes, that means 77% of all mobile pages have likely never been seen on a mobile device at all. So that's why we created AMP: to combat this problem and bring back the beauty of the mobile web. We have three components in AMP. There's AMP HTML, which is both a subset and a superset of HTML: it adds new components, but also restricts some of the things you can do. Then we have AMP JS, which powers those components. And then we have the AMP caches. Just two quick examples of the things AMP does. For instance, it can prioritize content loading: it knows exactly where everything is rendered on the page before all the external assets are loaded, so it can pre-render the above-the-fold content and not render anything below the fold. And the platforms that use the caches to deliver your AMP pages can use that to smartly pre-render quite a set of pages. So if you're arriving, for instance, from Google Search and you get the top-stories carousel, which is powered by AMP, some of those pages are often pre-rendered before you click on them, which is why they feel so instant. Now, if we go back to the comparison I made before, 1999 versus 2017, what does this look like in AMP? Let's have a look. Yeah, I think it looks way better. In fact, that's what you do: you create HTML markup, and because we have high-level components like accordions and all sorts of things in the HTML markup, you never write JavaScript. You add some CSS, and out of it you get a fairly nice, responsive, interactive content page if you want one (I'll show a minimal example at the end of this passage). And then you just hit publish. And it can be on a really cheap server, because the caching infrastructure on top automatically crawls and caches all of your content to be displayed in the platforms. So you get back to a really, really cheap deployment model. So why Progressive Web Apps? Well, they're engaging: they give you push notifications and home-screen stickiness. And they're reliable: they give you offline access and a responsive UI, the things you usually know from the native world. That's a common reason why developers build native apps. But in fact, 80% of the time on a phone is spent in the top three native apps, and the number of apps that gets installed per month, on average, is zero. That means it's very, very expensive, for instance through advertising, to acquire new users through the app-store model. So what we want, essentially, is a combination of those things: the capabilities of the native app ecosystem, but also the reach of the mobile web. And we get that with Progressive Web Apps, because they have many of those features built in, and they're also basically a website.
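Here is that minimal example: a sketch of a small but valid AMP document, assuming the AMP HTML spec as it stood in 2017. The content, file names, and styles are illustrative, and the long, fixed amp-boilerplate style block the spec requires is abbreviated into a comment:

```html
<!doctype html>
<html ⚡>
<head>
  <meta charset="utf-8">
  <title>Hello AMP</title>
  <link rel="canonical" href="https://example.com/hello.html">
  <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
  <!-- The AMP spec requires a fixed <style amp-boilerplate> block here; omitted for brevity. -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <style amp-custom>h1 { color: teal; }</style>
</head>
<body>
  <h1>Hello, AMP</h1>
  <!-- High-level components like amp-img stand in for hand-written JavaScript. -->
  <amp-img src="photo.jpg" width="600" height="400" layout="responsive"></amp-img>
</body>
</html>
```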
So Progressive Web Apps are a pretty good way to get both of those things. Then why is the combination of AMP and PWA a good idea? They seemingly attack different spaces on different levels. Well, if you just build a Progressive Web App, you have the challenge that your first load is still going to be slow. The reason is that Progressive Web Apps really depend on a technology called the service worker, a client-side proxy that accelerates and caches the delivery of your app shell, your articles, et cetera. But the service worker only kicks in from the second request; on the first load, you don't get those performance benefits. And usually it's the first impression that counts if you want to win a new user. Then, on the AMP side of things, you get no user-authored JavaScript. We don't allow you to write your own JavaScript, except in an iframe. There's no custom service worker, no push notifications, no web app manifest when your page is served from the AMP cache. On your origin, all of that still works; but when served from the AMP cache, you don't get those benefits. And that is the price of being predictably fast. So the trade-off, really: on the AMP side, you get reliable instant delivery and optimized discovery, but no custom JavaScript and mostly static content. On the Progressive Web App side, the first delivery is usually slower, and it's not as easily embedded as AMP documents (I'll talk about this later), but you get access to the latest web APIs, you can do whatever you want, and it supports much more dynamic content. Combining the two is what we call "start fast, stay fast." And now, of course, you could forget all of that and simply focus on the fact that the combo makes for some mesmerizing animations. And if you've been doing web development in the '90s and early 2000s, I think this one might work even better for you. Wait for it. Mm. Yeah, OK. OK, let's not go there. But how do you actually do it? How do you actually get there? Well, we have three application patterns that make it happen. The first I call AMP as PWA. The second, AMP to PWA. And the third, AMP in PWA. Now, if you've been listening while I've been comparing the two, you're like, wait, AMP as PWA? Actually, for sites that have mostly static content and not a lot of interactivity, you can have an AMP site that is also a Progressive Web App. In fact, Mynet, one of the leading publishers in Turkey, has done that with their pages. What you see here is a fully valid AMP page. All of this is AMP, with a full navigation concept, back and forth, a carousel, whatever; all of it is AMP. And it also has Progressive Web App features: it can be installed on the home screen, it has a web app manifest, and so on. Just by doing that, they saw a tremendous uptick in quite a few numbers. They got over 25% higher revenue per article page view, and that is really the important number, because it means the bottom line is better. Plus four times faster average page load speed, over 40% longer average time on site, over 30% more page views per session, and much lower bounce rates. So in the end, really something worthwhile to do. But you can actually go further than that with this pattern. Because you have the service worker in place on your origin, you can insert anything you want into the AMP page, random stuff that AMP doesn't like, because you are in control as soon as the service worker is running.
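As a rough sketch of that idea (deliberately not something the AMP validator would bless, which is exactly the point): once your own service worker controls the origin, it can rewrite the AMP documents it serves before they reach the page. The file names here are illustrative:

```js
// sw.js: a hypothetical origin service worker that appends an extra script
// (something plain AMP wouldn't allow) to every navigated-to page.
self.addEventListener('fetch', event => {
  if (event.request.mode !== 'navigate') return;  // only touch page loads
  event.respondWith(
    fetch(event.request)
      .then(response => response.text())
      .then(html => new Response(
        // Inject a custom script just before </body>.
        html.replace('</body>', '<script src="/dhtml-magic.js"></script></body>'),
        { headers: { 'Content-Type': 'text/html; charset=utf-8' } }
      ))
  );
});
```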
Now, I had some nostalgia, so I thought, why not insert some '90s DHTML magic into the AMP website? And I found this amazing mouse cursor. So this is exactly what I did. Here we go: I'm going to enable the service worker, I'm going to reload, and yes, I have a fancy mouse cursor and an animated background, all the things that AMP doesn't like. And, well, what have I done? So maybe you shouldn't do exactly that. But this pattern still comes in useful if you want to insert ads that, for instance, aren't supported in the ecosystem yet, or other things that you need to run on your origin. So next is AMP to Progressive Web App. All of this is based on a component that we call the AMP install service worker. If you've used a service worker before, you know you normally register it in JavaScript on your page; because you don't have access to JavaScript in AMP, we have an equivalent, the amp-install-serviceworker component (you'll see a markup sketch of it shortly). But the cool thing is that this component also works when your page is loaded from the AMP cache, say in the top-stories carousel on Google. It can install the service worker from your site's origin, from your own domain, so that subsequent clicks are accelerated. And the pattern always looks the same: the user discovers content; the service worker installs in the background while the user is consuming the content; and then the user is instantly upgraded to a PWA. Let's have a look at some real-world production examples. First, Xing, which is a job network similar to LinkedIn, across all of Europe. The user discovers content: they check out a job that they like, they click on it, and the service worker installs in the background. Now, if they're actually interested in one of those jobs, they're instantly forwarded to the Progressive Web App, where push notifications can remind them to come back if their job isn't available anymore, et cetera. So the next click is going to be instant. Next example: with Goibibo, a leading travel search company in India, you get the same pattern in a completely different vertical. The user discovers content; on the second page, the service worker installs in the background. This page just shows you a hotel, but there's a Get Availability button at the bottom. If you click that button, again, you're instantly redirected to the Progressive Web App that lets you check the availability. We could even have checked availability in advance in the service worker, so that even the API call wouldn't be necessary anymore. And then Rakuten, with the Japanese recipe website they've created, Rakuten Recipe: same pattern. The user discovers the content, arrives on a recipe page, browses through the recipe, and realizes, OK, there are a couple of connected recipes that I'd like even more, or, I want to bookmark this. And then the user is instantly upgraded to a Progressive Web App. Now, with Rakuten in particular, I have some stats to share, and if you look at the different combinations of things they've done, it's quite impressive, in my opinion. With AMP, they've seen over 50% more time spent per user and over 3.6 times higher CTR within the AMP pages, compared to other Rakuten Recipe pages.
And then with Add to Homescreen, the Progressive Web App feature, they're seeing over 70% more visits per unique user and over three times more page views per unique user once people install to the home screen. Push notifications, same thing: a lot more re-engagement, with three times more weekly sessions per user after the first week, four times after the second week, and five times after the first month. And this is really important: with push notifications, they get three quarters, so 75%, lower bounce rates for users coming in via push notifications than via shares. They're seeing much more re-engagement through that channel. OK, so this sounds hopefully pretty good, but there's still a problem. If you copy the URL in the Progressive Web App and share it or send it to a friend, that friend will not have gone through the AMP install service worker flow, so they will open the Progressive Web App without a warm cache. What do we do about this? Well, usually you would have your AMP pages deployed on one subdomain and the Progressive Web App deployed on another subdomain, with links from the AMP pages leading to the Progressive Web App. Sounds pretty straightforward. However, you can change that and reuse the same domain, the same URL space, for both. And you can do that because the service worker can intercept navigation requests. So if you click a link, the service worker can say: OK, no, I'm not going to give you the next AMP page; I'm going to give you the Progressive Web App instead, even though you're on the same domain. And it's actually just a few lines of code to do that: you check for a navigation request and respond with the Progressive Web App (there's a sketch of this below). Now, what happens if we do this? A couple of magical things. Without the service worker, we still just get AMP, so in browsers that don't support it, you always get a fast experience. And with the service worker, you get the combination of AMP in the background and the Progressive Web App, because the service worker intercepts the navigation and delivers the Progressive Web App instead. And if there's no service worker but you still want to lead people to that richer experience, a Progressive Web App that might not fully work because some browsers have no service worker, you can still do that with a technique we call shell URL rewriting, which is part of AMP install service worker. Basically, it says: detect whether a service worker is available in this browser, and if it's not, rewrite the URLs on this site to point to a fallback domain. Now, OK, we have this complete, but there's another problem: we still have plenty of deploy targets. And in this case, we probably even have two content back ends: AMP HTML on one side, and probably JSON or some other content back end powering the Progressive Web App. That is where the final pattern comes in handy: AMP in Progressive Web App. In fact, AMP pages aren't just web pages; they're ultra-portable, embeddable content units that can stand on their own. And if you think about them that way, not just as websites, some magical things can happen, because we can get to the point where AMP HTML powers all of my experiences, in this case both the AMP experience and the Progressive Web App experience. Now, one way to do this would be to build an application shell and then use an iframe. But iframes are slow.
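To make the two mechanisms from this pattern concrete, here are two sketches. First, the amp-install-serviceworker component described earlier; the URLs are illustrative, and the data-iframe-src fallback is what lets the component work even when the page is served from the AMP cache:

```html
<amp-install-serviceworker
    src="https://www.example.com/sw.js"
    data-iframe-src="https://www.example.com/install-sw.html"
    layout="nodisplay">
</amp-install-serviceworker>
```

And second, the few lines of navigation interception mentioned above. A minimal sketch, assuming the PWA shell has already been cached under an illustrative name:

```js
// sw.js: answer same-origin navigations with the cached PWA shell
// instead of the next AMP document.
self.addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    event.respondWith(
      caches.match('/pwa-shell.html')
        .then(shell => shell || fetch(event.request))  // fall back to the network
    );
  }
});
```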
So instead of iframes, because we trust the content (it's our own content), we use Shadow DOM. What does this look like? With iframes, you get one window for every iframe you initialize, an AMP library instance for every iframe, and a document for every iframe: lots of overhead, initializing the AMP library over and over and over again. With Shadow DOM, you have just one window and one library instance, a build we call Shadow AMP, at the top of the page, and it simply connects to lots of documents and renders them out. That's a much cleaner flow. In practice, the PWA hijacks navigation clicks, fetches the requested AMP page, puts the content into a shadow root, and hands it to Shadow AMP. That slide showed it only very briefly; I actually want to show it in a bit more detail, because we have a little more time today. In fact, I think you can do this in an hour, this flow of inserting content via the Shadow DOM, if you already have AMP pages. And if you don't believe me, let's do it in five minutes, OK? All right. The first thing we want, and this is a caveat, is a content source somewhere that serves as the navigation. In this case, we need an overview page from somewhere that has a bunch of links and maybe a bunch of images. Here, I just used YQL to fetch an RSS feed, which is a really nice hack for creating a compelling demo, I would say. That gives me a nice list of articles I can use. I used the Guardian as a back end; thanks again to the Guardian for letting me use the RSS feeds. So as you can see, this is what I have so far, just using the RSS feeds through YQL. So far, so good. And this step is not so hard to do; I shouldn't be saying this, but I started on Monday. Next, we add the AMP shadow library to the head of the page in our Progressive Web App. Again, this is a special version of AMP. Then we wait until AMP is loaded, using a technique that you'll find in a lot of frameworks nowadays: a wrapper such that the code inside it runs as soon as AMP is actually fully loaded. Once you have that, you fetch the AMP doc via XMLHttpRequest. Again, straightforward: you know the link to the full article from the navigation, you fetch it via XMLHttpRequest, and then you read what's called responseXML. responseXML contains a ready-to-use document object, a not-that-well-known feature of XMLHttpRequest. And I'm not using fetch here because fetch doesn't support this yet; that's why I'm using old-school XMLHttpRequest. So now we have our actual AMP document. It's not rendered anywhere yet, but at step five we render it using Shadow AMP. We create a shadow root, which is just a fancy way of saying we create a div, a container, and give that div a shadow root. Then the attachShadowDoc function does all the magic. We give it the container we just created (actually, the slide should say shadow root; let's just go back on the slide), we give it the doc we just fetched via Ajax, and we give it the original URL. And in this case, we can also get a promise that notifies us when the page is ready and rendered out. Now, that's the basic flow.
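Here are those five steps condensed into one sketch. The element IDs and article URL are illustrative; AMP.attachShadowDoc and the shadow runtime are the pieces named in the talk, so treat the exact signatures as a sketch against the 2017-era library:

```js
// Assumes the PWA shell loads the special Shadow AMP runtime in its <head>:
//   <script async src="https://cdn.ampproject.org/shadow-v0.js"></script>
// The wrapper below runs once that runtime has fully loaded.
(window.AMP = window.AMP || []).push(function (AMP) {
  const url = '/articles/story.amp.html';   // illustrative AMP document
  const xhr = new XMLHttpRequest();         // fetch() can't return a Document yet
  xhr.open('GET', url, true);
  xhr.responseType = 'document';            // so responseXML is a ready-to-use doc
  xhr.onload = function () {
    const doc = xhr.responseXML;
    // Create a container div and hand everything to Shadow AMP.
    const container = document.createElement('div');
    document.getElementById('content').appendChild(container);
    AMP.attachShadowDoc(container, doc, url);
  };
  xhr.send();
});
```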
But there are a few things that make it even better. For instance, the AMP page usually has its own sidebar and its own navigation, so it can stand alone. If you don't need that stuff in embedded mode, you can actually remove those things before you hand the page to the AMP shadow library: you just remove the header, remove the sidebar, et cetera. This is actual code from the demo. And if you don't want to do something that complex, that's cool too, because there's a class that we add to the body of the embedded page. That means that in the CSS of your AMP page, you can use that class to simply hide things you don't want in embedded mode. So that works as well. And finally, if you want to insert things back into the AMP page, features like JavaScript highlighting or something like that, into the embed, you can do that as well by using shadow slots. They provide a path for progressive enhancement of AMP docs when they're shown in the publisher's own context. So what does this look like in action? Let's have a look. Here we go. The content experience is now all powered by AMP. As soon as you click on one of those links in the overview, the AMP page renders and loads. And because everything is in our control, we have the whole document available, so we can do some really nifty transitions to animate between the two instead of a full page load. Thank you. Now, finally, let's wrap things up. With a progressive web AMP (sometimes I also call it a PWAMP, because I like the sound of it), it's always fast, no matter what. There's great distribution built in. It's progressively enhanced. It's really just one back end to rule them all, and there's less client complexity and fewer deploy targets. To visualize this last point again: remember how we had that plethora of deploy targets on the earlier slide? Well, we're now down to just three if we want: the PWA shell, a native shell on Android, and a native shell on iOS. And those are just thin layers that serve as the application shell, providing a navigation model and maybe some features on top. AMP is the back end that runs it all; all of those shells can use AMP as their content source. So you support the web, Android, and iOS at the same time, with very thin layers of extra code. Finally, a word of caution: when do you actually do this? You do this when you have a site with lots of static content. You don't want to do it if you're building the next Gmail, because that's really a very dynamic app that doesn't have landing pages, what we call leaf pages, accessed via organic search or social discovery. So your site has to have lots of static content for this to make sense. Ideally, you already have a large corpus of AMP pages, which gets you through the first step. It also comes in extremely handy if your engineering resources are constrained, or if you want to reduce infrastructure complexity. And finally, if you haven't done AMP before, you need to test whether your content monetization works fine within the AMP ecosystem. So before I leave you, and I think I have some time to open the room for questions, I'm going to leave this up for a second so you can take some screenshots. We have a React sample app as well.
That's not the one you saw today, but it uses React, so it's great for millennials. Use the React sample app to see how you'd do all of this in React; it has similar nice transitions. We also just launched the AMP channel; if you're not sick of my face yet, you can see it again over there. And on ampproject.org, we just published a big new guide about all of the patterns I just talked about. If you want to recap all of those things and go through the tutorials, you can do that now on ampproject.org. With that, I'm going to open for questions. Thank you, everyone. And we'll see you next time.

I'm Sarah from the Google Developer Certification team. Last year, we launched the Associate Android Developer certification at I.O. Now we're adding two certifications for mobile web developers. Why mobile? Mobile now accounts for over half of all web traffic. Users expect their small-screen experiences to be as quick and intuitive as those on a desktop. But making the mobile web fast and easy takes some special skills. How can you prove you've learned them? We've created two new certifications to help developers get recognized for their knowledge and skill: introducing the Mobile Sites certification and the Mobile Web Specialist certification. One focuses on sites, the other on web apps. Let's talk about mobile sites. What happens to your beautiful site if it takes too long to load? 53% of mobile visitors will leave a page if it takes more than three seconds to load, but the average mobile page loads in 22 seconds. Making a page even one second faster increases conversion rates by up to 27%. Google believes in the mobile web, and so do our customers. That's why we've created the Google Mobile Sites certification: to help site owners find the best talent. Passing this exam demonstrates you have the knowledge for building high-performing mobile sites. It also highlights your understanding of best practices and current browser technologies. To pass, you'll need to be proficient across mobile site design, UX best practices, and site speed optimization. This certification is especially useful for developers working in-house, for agencies, or for clients. To prepare, use the online study guide or e-learning course; both are free. Once certified, you can promote your certificate on your Google Partners public profile and on social media. What if you're developing mobile web apps? Developing applications requires even more specialized skills than sites, so we have a certification for that too. The Mobile Web Specialist certification shows you can build quality web apps, including progressive web apps. You take this exam by solving a series of coding problems. We'll test your skills in many in-demand areas, including responsive design, accessibility, and progressive web application development. This certification is especially useful for developers looking to move up in their careers; it will prepare you to tackle a wide range of challenges. We also provide a study guide and a range of courses to help you prepare. With multiple certifications, how do you know which one to take? Are you building mobile sites and need to demonstrate you have the knowledge to do it? Take the Mobile Sites certification exam. Need to show that you have the skills to build a mobile web app? Take the Mobile Web Specialist exam. Visit the certification page on the Google Developers website to learn more about our programs. Get the study guides, get ready, and let's go.
We're bringing together a really talented group of designers and developers to collaborate, innovate, and generate exciting ideas for what can be done on the Android platform. Within the context of a sprint, I really think the distinction between designer and developer is blurred; it's all about problem solving and getting it done. Not working with designers, as an engineer, is a mistake to begin with. Every designer and engineer brings a perspective to a project, and that's what's nice about working with a designer: you think like an engineer, and the designer comes in and brings another perspective. In this group, I'm happy to work with super talented designers. It helps me understand Android better, because I see how they're thinking about it. It kind of kills my prejudgments about the product, and I start thinking about it from a fresh perspective. We focus on generating a broad range of ideas that are really innovative and far-reaching, and then prototype and pull together concepts and prototypes to demonstrate and create a vision for what those concepts could be. I was never really familiar with the idea of using this type of process for ideation, and it's impressive to see the degree of precision that Kai specifically has introduced. In the way she's run this process, it's been really enjoyable to see how one exercise leads into the next, and the next, and the next, and how that can actually, effectively, yield good ideas. We're a very small company, so effectively we're doing similar things all the time, but we often rush straight to the solution. It was nice to see some structure around, I guess, the process. So, you know, it starts here, and you don't get to fix it straight away; it's like, define the problem, then go to this step, then go to this step. And that kind of structure, though it seems kind of burdensome, actually improved the overall thing we came up with. When you're in a company, you tend to think about how the company does something and the things that you've learned in the past; here, it's just kind of a blank canvas again, and you get to start new and rediscover the things that work, in a workflow or in a much more creative space. I mean, here we try to build something in three days, which is insane. This is the first time we've actually been in a sprint just doing Android, and I would love to bring some more of the Android sprinting back to our company. I think design sprints facilitate interdisciplinarity, interoperability, and all of the kind of amazing things that can happen from a good collaboration. One of the really exciting features of Android is that it's a very open platform. Anyone can come and write their own apps and create their own concepts. We want to bring that opportunity of openness to the design community and inspire designers to generate concepts and ideas and design really cool apps that leverage the openness of the platform. I was impressed with the openness of Android. It's definitely a unique thing that you might not find on iOS or other systems. In our app in particular, there are things that we definitely couldn't have done on iOS that are actually really useful, and it is nice because the app can organically come with you into the rest of your life. I think for me, one of the big awesome parts of it was that I was able to begin to learn Android, which is something that I've always wanted to do as a prototyper.
I just would like to get to know Android better, and this is a really big jump start. One of the few constraints that they put on us here at the design sprint was to come up with something that's unique to the Android ecosystem. And as we were going through all of our Crazy Eights and all of the myriad ideas that we had come up with, we actually abandoned some of them that were cool because they could feasibly be built on any platform. I think giving us that constraint infused our other ideas with more creative solutions, and I think what we came up with is so simple and so delightful, and only available on Android.

Developing a successful app isn't easy. To reach a broad audience, you'll need to consider your iOS, Android, and mobile web users. And to build for these platforms, you'll need a backend server to store data and support the apps. Of course, you want to get your users logged in, hopefully lots of users, which means your backend will have to scale. Then, after you've solved your scaling problems, you'll have to find more ways to spread the word to get new users. But have you found a way to measure all this activity? And oh no, your app is crashing and causing servers to melt down, and you haven't even made a dime yet. Don't you wish this could be easier? This is why we built Firebase. It has all the tools you need to build a successful app. It helps you reach new users, keep them engaged, and scale up to meet demand, in addition to getting paid. From the beginning with Firebase, you'll have Test Lab and Crash Reporting to prevent and diagnose errors in your app. Your backend infrastructure problems are solved with our Realtime Database, file storage, and hosting solutions. Acquiring new users is easy with Invites, AdWords, and Dynamic Links. And using the authentication component, you can get those users logged in with minimal friction. Once they've installed, you can keep your users engaged with notifications, Cloud Messaging, and App Indexing. Then, with Remote Config, you'll have the freedom to experiment with new features and optimize the user experience in real time. And of course, you can earn money with the same AdMob component that's been monetizing great apps for years. Last, but certainly not least, our all-new analytics component, designed uniquely for Firebase, brings insight into how well these components are working for you and your users. With Firebase Analytics, you can measure and optimize your advertising campaigns, discover who your most valuable users are, and understand exactly how they are using your app. Now, all these components work great on their own and provide a solid infrastructure to build out your app, but they work even better when combined in creative ways. So let Firebase handle the details of your app's backend infrastructure, user engagement, and monetization, while you spend more time building the apps your users will love. To get started right now with Firebase on Android, iOS, or the web, follow these links for more information. Then, to manage and monitor your apps connected to Firebase, there's a web console to view crashes, set up experiments, track analytics, and a whole lot more. And to learn more about Firebase and all of its components, you can read the documentation right here. We can't wait to see what you build.

We have with us Aparna Shridhar, who is a product manager at HackerRank. There are so many people who really look up to you. What really inspired you to go into technology?
I started writing my first program when I was, say, in sixth standard, just playing around with BASIC, back in the days when we had a dial-up connection. And when I decided what to do for undergraduate studies, that's when I had the option to, again, pick computer science. At that point in time, I thought back to this enjoyable experience I'd had as a child and figured maybe I would give that a try. Tell us about your experience as lead coach for the Udacity Android Nanodegree. Something that I would like to highlight about teaching is that it's been the most fulfilling and gratifying experience. Can I help that one student who almost wants to give up on programming, who almost feels like this experience is too hard? If I can help that one person at a time progress, I feel like later in their life, sometimes they will look back and do it for somebody else again. What is your message to the women in tech out there? We face a lot of stereotypes. I cannot tell you the number of times I'm the only woman in the room and automatically the question is: are you in sales, are you in marketing? The more women can do to break that stereotype and embrace more of these roles, the more we can change this perspective that we have in tech. Thank you so much, Aparna.

Welcome. This is our first certification summit. You guys and ladies are among the first certified Android developers. The developer base is growing very fast, on its way to becoming the largest developer base in the world. The interesting point is that India is a mobile-first market; however, the percentage of developers developing for mobile is relatively low. So we're trying to really supercharge that. India is one of the emerging markets: 80% smartphone growth is expected through 2019. You are Android certified developers; just imagine how many people you are going to reach with the applications you are going to develop. They are not trying to solve for the entire world; they're trying to solve for their own users. You are, at the end of the day, developing a product not for yourself; you're developing for a consumer. I'm going to talk to you guys about what's new with Android O. Any of you guys using some of the Firebase 2.0 features? Yes, it's about recognition, it's about getting a job, it's about growing your career, but there are bigger forces at play. I feel that development, mobile development, Android, can actually make a difference in the world, fixing problems in one's own community, whether it's water, education, or the environment. We want to support you in connecting to communities and creating change in the world.

Firebase makes authentication easy for end users and developers. Most applications need to know the identity of the user so they can provide a customized experience and keep their data secure. Firebase supports lots of different ways for your users to authenticate. If your users want to authenticate with their email address, you can build that for them. Firebase Auth has built-in functionality for third-party providers such as Facebook, Twitter, GitHub, and Google. It can also integrate with your existing account system if you have one. You're given the choice about how to present login to the user: you can build your own interface, or you can take advantage of our open-source UI, which is fully customizable and incorporates years of Google's experience in building simple sign-in UX. No matter which one you use, once a user authenticates, three things happen.
Information about the user is returned to the device via callbacks, which allows you to personalize your app's user experience for that specific user. The user information contains a unique ID, which is guaranteed to be distinct across all providers and never changes for a specific authenticated user. This unique ID is used to identify your user and determine what parts of your backend system they're authorized to access. Firebase will also manage your user's session, so that users remain logged in after the browser or application restarts. And of course, it works on Android, iOS, and the web. That's Firebase Auth, allowing you to focus on your users, and not on the sign-in infrastructure to support them.

Thank you for joining us today. India is coming a long way, as I just mentioned. Today, India is the second-largest country in the world in terms of number of developers; soon, it's going to be number one. What we want to invest in is actually training the faculty from your colleges. The potential is so great, and with what Google is doing to help catalyze that innovation, it's really an exciting time for these campuses. We are really trying to provide the best possible experience to teachers in these faculty hubs, because the first step to training 2 million developers is to train the teachers who are going to teach those 2 million. Industry right now demands an updated curriculum for developing 2 million Android developers. Working in a technical university, we can contribute hugely to developing those million app developers. So we're excited that all the raw materials are there to create an innovation revolution in India. I really think the students are going to make some great things, and I can't wait to see what comes out. There's a lot of potential in India, and we need to take it forward. With Google, we can provide rich opportunities to all. That is the essence of the Google program, which I have seen. This is a good move, and this program will definitely be useful to the students, because app development is going to rule the world for the next few years, really.

Just waiting for the shot to finish. Okay, there we are. So I'm here with Daniel in the community lounge, where people from all over the world are gathering at Google I.O. Can you tell us a little bit about the activities going on here and what people are saying? Yeah, the lounge is pretty new at I.O. We used to have community areas at I.O., but nothing huge like this. I don't know if the camera can capture it, but this is basically the whole tent, like the front porch of it. And as you can see, it's crowded, and people come here to chill and to play. They play pool; we have some wooden block-building games; we've got some networking games, people chatting. You don't hear me mention any gadgets here; this is completely unplugged. People here maybe come to relax a little bit from all the technology happening in the tents everywhere else, and they just sit back and relax and talk to other people. So we are the place, sort of, to unplug from the technology for a little bit. We also have some local meetups. I work with GDGs; I'm on the community team at Google, where we help communities around the world run meetups. So we thought, why don't we do meetups here? We have about 25 small meetups, and it's anything from people who built applications for social good to, just now, I think, a meetup of Russian-speaking developers. So we have the hallways. They come here, they meet.
It's very unprogrammed, very anarchistic in a way. We also have a bunch of Googlers coming here to relax from the I.O. madness. I love it. I hope we do it next year. And when you come next year, make sure to stop by. Awesome. Well, let's check out some of the stuff. Awesome. Let's go. So this is the #io17 #helloworld wall. You can actually check the hashtags right now if you are bored with my face. What's happening is that we are building a bridge here. But communities don't function with a boss or a blueprint, et cetera. So this experiment is about what happens if we want to construct, maybe, the next Golden Gate without blueprints or managers. You see, it's pretty chaotic, yeah? Timothy, I know we are not Golden Gate Bridge quality yet; maybe we never will be. As I said, this is an experiment. We are on day one, so we'll see how it goes. We also ask people, if they are developers, to write "hello world" in their first programming language. So I did mine here: 10 PRINT "HELLO WORLD", in BASIC. I was very tempted to add 20 GOTO 10. And you can see the wooden blocks covered everywhere here, so there are plenty of languages. So yeah, let's see what happens. All right, so, Timothy, I know I said the lounge is unplugged, but this is, well, I was cheating; this is an exception. As you can see, this is very much plugged in. What we are playing here is something that I'll tell you informally; we are not publicizing it. This is a YouTube playlist of some of the greatest GDG trailers and videos from around the world. Google Developer Groups organize meetups, events, and activities, and they sometimes put crazy videos online. So we made a two-hour playlist out of that. You can find this whole screen on your own screen if you enter the address bit.ly/loungeTV. I'm not saying don't tell anyone, because I just told you, but you can be here with us, at least virtually, through this playlist. So enjoy. Cool. So what is this? Yeah, you've seen the movie Inception, right? And in the keynote this morning, they were talking about inception. So this is a device. I don't know what for; I don't know what it will do to you. I mean, if people want to see more videos of you... I don't know if you want to see the name, but let's try it. This is a core workout. It's kind of like planking, but better. You can get a hangover from this. All right. So let me introduce Nino. Nino is from Georgia, and she's running one of the meetups I was talking about. And this meetup is... what is it about, Nino? Who is it for? This meetup is mostly for developers, and we cover the topics of how we can develop apps, or use current apps, for social good: basically, the social impact that our apps can have, because technologists have so many opportunities, and it can really go beyond just consumerism, right? It can serve so many more purposes. So I'm going to be sharing my own experience, how my team at Elva are helping people in conflict and crisis situations to better monitor those situations, or how we're helping farmers with information, and probably get some more ideas from others. So it is very informal, very improvised, because everyone is in the line for ice cream. I hope that someone joins; it's going to be just a friendly session, and thanks to Dan for this opportunity. I think it's fun and cool. So do develop apps for social good. Awesome, thank you. So you're welcome to join.

Hello, everyone.
I welcome you all to this session on development fundamentals. An app is basically a solution, or a medium by which you get a solution to a lot more users. You have to figure out what kind of benefit the person is going to get by using an app. I personally feel the phone is not a phone; it is something that can change people's lives. With Google, we can provide rich opportunities to all. That is the essence of the Google program. There's a lot of potential in Google; there's a lot of potential in India, and we need to take it forward. We want to get everyone to start thinking about Android and developing for Android; we're at the cusp of a revolution. We encourage you to continue learning, continue developing, and now go build some great apps. You have the talent, and there is the need. Bring it on.

Now it's time to learn about all the cool new stuff in the Google Play Console, and I'm here with Matt to tell us all about it. Hey, everybody, we're super excited to be here. I.O. is a big moment for us with the Play Console, because we're all about developers. As you probably know, the Play Console is what developers use to publish their apps to Android devices through the Play Store. Over the years, we've gone from a Play Console that was simply about getting an app from A to B, from the developer to the end consumer's phone, to so much more: a hugely diverse set of tools that enable developers to be more successful with their apps. And we've found that the user base has diversified as well. In addition to our core audience of Android developers, product managers, marketing managers, and many other users wearing different hats in the organization are starting to use the tool. At I.O. this year, we're announcing a whole new set of features, as well as updates to some of our popular existing features. Some of these features help apps to be higher quality. We're launching Android Vitals, a whole new set of reporting about bad experiences on Android devices: user experiences where your app may be crashing, rendering slowly, or causing excessive wakeups or stuck wake locks that drain the battery. We're going to start revealing that reporting to developers, which can help them debug and understand more about optimizing their app for the variety of devices out there. And that's one of many new features we're launching. Another one, which you can actually see on the screen behind me, is the device catalog. Android's diverse ecosystem has been a tremendous strength, the source of the scale of the two billion Android devices that are out there now. But that diverse ecosystem can also be hard for developers to navigate and understand. Until now, that is, because the device catalog is a much easier way to browse the more than a thousand different Android-certified devices. You can search and explore these devices by specs like RAM or system-on-chip, and you can then see reporting about the number of installs your app has on that particular device or group of devices, how much revenue you're getting, and how your ratings compare. That can really help you understand whether any of the bad behaviors, any of these problems users may be having with your app, are concentrated in a particular area, which can be critical to optimizing and making your app super high quality.
We're finding, even just in the initial testing, which we've now completed (this feature is rolling out to all developers), that early testers have really embraced these tools. It's really helping them be much more surgical in the way they think about targeting their app and making it a great experience for the end user. In addition to the features that help developers build higher-quality apps, we've also got a number of features that help to mitigate risk during the release process. When you're releasing a new version of your app, you may have an installed base of millions, potentially hundreds of millions, of users, and pushing out a new version can otherwise be a worrying time. You don't want to learn late that bugs are causing uninstalls. So we have a new release dashboard with very low-latency reporting that gives you visibility into these metrics, on the hour, every hour, as your release rolls out to a potentially global audience of hundreds of millions of devices. This can help you manage your release with a lot more confidence. A third area where we're launching new features right here at I.O. is the business area. Developers, in addition to building amazing experiences, want to make money; they want to grow their business. We have a lot of new reporting, including for app developers that have a subscription-based model. You can look at subscriptions by tenure, look at cohorts from when users started, when they installed the app, and see how that life cycle trends over time. You can see how different inflection points contribute to subscription renewals, or to people leaving a subscription, so it can really help you manage your subscription business. And then, in general, we have a lot of statistics available through the Play Console. In the past, they weren't always easily navigable. We now have a new configurable page: app developers are much more easily able to slice and dice these metrics, create custom configurations, compare different time periods against each other, compare one metric against another, and bring in benchmarks of whole classes of apps, so they can see how their app compares to peers who make similar apps. We think features like this will really help developers manage their apps as a business as well. Wow, that was a lot of stuff. Thank you so much, Matt. Thank you.

Hello, thank you for coming. I'm Martin Wicke, and I'll speak about effective TensorFlow for non-experts. I'll later introduce François, who will take the second half of the talk. So: TensorFlow. Why? Well, I have a son; you'll see him later in the talk, actually. And when I used to explain to him how image search works, I would have to say something like: oh, there's a computer, it looks at the metadata, and it looks at where the images are. Now, when I explain how this works, I can just say: well, the computer looks at the images. If you search for, say, cherry blossom, it looks at the images, all of them, in the world, and whenever it sees one that has cherry blossoms, it returns it. It's a much better story. And you've seen a lot of this at this particular event: AI is now going to be everywhere, and machine learning is everywhere, and it generates these products that were impossible to imagine before. What makes these products work in reality is, for us at least, TensorFlow.
And this enables us to make these apps and these products that use machine learning. And once you've written one of these models, one of these machine learning systems, in TensorFlow, you can deploy it anywhere, including mobile. TensorFlow is truly what enables apps that tell hot dogs from not hot dogs. So why don't we see more of that? Why are hot-dog-or-not apps still so cutting edge? The main reason is complexity, and that complexity comes in various shapes. First of all, there's computational complexity. That used to be a big thing, and now, with the cloud and the availability of data centers that you can rent, it's really not much of an excuse anymore. So we've solved this problem, assuming your code can run in a data center. But you also have to contend with making your code, your models, your apps work on all these different platforms. In order to train them in a data center, you work on CPUs and GPUs. You've probably heard by now that we have this thing called a TPU; it should work there too, because that's going to be fast. And of course, once you're done and want to deploy, it needs to work on a mobile device. If you're into IoT or embedded systems, it has to work there too, say on a Raspberry Pi or something like that. Making all of that happen is hard. And finally, machine learning itself is actually fairly complex. So that's why we have TensorFlow. TensorFlow gives you distribution out of the box, so that you can run in the cloud if you need to. It works on all of the hardware you need it to work on. It's fast, and it's flexible. And what I'm going to tell you today is that it's also super easy to get started with. That's why we're here. TensorFlow takes all the details of distributed systems and their various hardware and just hides them from you, takes care of them. You don't necessarily have to know about them; it's nice if you do, but if you don't, that's not a problem. What you mostly see is the front end, and what I'm going to talk about today is the Python front end. The generic thing that people used to say is: oh, TensorFlow, it's pretty low level. You're thinking about multiplying matrices, adding vectors together, that kind of thing. What we built on top of that are libraries that help you do more complex things more easily. We built a library of layers that help you build models; François is going to talk more about that. We built training infrastructure that helps you actually train a model, evaluate it, and put it in production. This you can do with Keras or with Estimators, and François is going to talk about Keras. And finally, we built models in a box: really full, complete machine learning algorithms that just run. All you have to do is instantiate one and go. That's mostly what I'm going to talk about today. Usually, when people talk about "my first model in TensorFlow," it's something simple, like fitting a line to a bunch of points. But nobody is actually interested in fitting a line to a bunch of points, distributed; that doesn't really happen all that much in reality. So instead, I'm going to show you how to handle a variety of features, and then train and evaluate different types of models, possibly distributed. And we'll do that on a data set of cars, because we have one.
I have uploaded all the code I'm going to show you to this address, which has both an O and a 0 in it, so be a little careful. What I've put there will work with TensorFlow 1.2. I'm not sure whether it's been announced previously, but there is now a first release candidate of TensorFlow 1.2. My code will work with that and nothing earlier, so you have to watch out. All right. The first model today will be about predicting the price of a car from a bunch of features about the car, information about the car. So, without anything more, let's just do code; the talk is going to be mostly code. This is it. This is my model definition in TensorFlow. I'm exaggerating only a little bit, because this one only takes into account three different things. First, we define the input, and I'm defining three of what we call feature columns. I'm telling the model that it should expect an input that is a categorical feature, a string called "make" in the input, and that I'm going to transform it into something usable by my machine learning algorithm by hashing it. So I declare it this way; it's a declarative way of saying what to do with the feature in the input. Next, I say: there's something called "horsepower," and that's just a number, so I declare it as a numeric column. And then there's something called "num-of-cylinders" in the input. That's also a string, but it has very few values, so I'm just going to give all the values directly here in the code. In this data set, it actually is the case that num-of-cylinders is encoded as the words two, three, four, six, and eight, I think. So that's it. There are probably many more of these in practice, but in principle, that's what you do to say: OK, this is what my input looks like. Then we specify what kind of machine learning algorithm we want to apply to it. In my case, I'm going to use a linear regressor first, which is kind of the simplest way to learn something, and all I have to do is tell it: hey, you're going to use these input features that I've just declared. That's it, and I'm done. (I'll show a condensed sketch of all of this in a moment.) Now, I still have to give it some input data. TensorFlow has off-the-shelf input pipelines for many formats. In this example, I'm using input from pandas. I don't know who here knows pandas; it's a Python library that can read a bunch of stuff and process data. It's nice. So I'm going to read input from a pandas DataFrame, and what I'm really telling it here is: I want to use batches of 64, so each iteration of the algorithm will use 64 input examples. I'm going to shuffle the input, which is always a good thing to do when you're training; please always shuffle the input. And num_epochs equals None means: cycle through the data indefinitely, and if you're done with the data, just go through it again. Then I can say, OK, train my thing for 10,000 steps, say. What happens then is that TensorFlow goes off and trains my model. This is what the log output looks like; there isn't actually any interesting information in it. What's more interesting is that TensorFlow also integrates with other tools that we have, in particular something called TensorBoard. TensorBoard is this wonderful front end that the training system writes data for; you can visualize that data and look into it. And one of the things you can see, what I show here, is what's called the loss curve.
The loss curve is kind of the most important thing to look at if you're trying to train a model. What you see here is that our loss, which is roughly the error the model makes when looking at data, is decreasing over time. And that means it's learning something, plain and simple. So, good: the model is learning something. Great. The next thing I can do with TensorBoard is actually look at the model that was created, look at the lower levels of it, look at what we call the graph. TensorFlow works by generating a graph; this graph is then shipped to all of the distributed workers it has and executed there. You don't have to worry about this too much, but it's awfully useful to be able to inspect this graph when you're debugging or something like that. So here is the graph that was generated by the model declaration I showed earlier. I've highlighted in red the part that's the actual model, the actual linear model, and I can look inside it. You can see that there are these slightly yellow boxes; those are the input-processing computations that happen in this model. Creating all of this is taken care of for you by the infrastructure, but it's really useful to be able to look at it if you're debugging something, or if you just want to know what happens. So what's going on? The linear regressor I defined is what we call an Estimator. We inherited the word "estimator" from scikit-learn, and it's a very similar concept. It supports all the basic operations you need for an ML model. You can train it. You can evaluate it, usually on separate data; if you take away anything from this talk, it's that you should always evaluate your models on separate data during training. You can query the model for predictions once you've trained it. And then, this is fancy: you can export what we call a SavedModel and give it to TensorFlow Serving. There's a talk about TensorFlow Serving tomorrow in the amphitheater at 1:30, and you should all go to it. These methods all hide a lot of tricky details that you don't have to worry about; you can just call them and be reasonably certain that everything actually works. We also defined an input function earlier, which basically reads from data files and feeds data in an appropriate format into the estimator itself. There's a trick here: the estimator actually saves its state to what's called a checkpoint. The checkpoint contains all of the variables the model contains, and every time we call one of these functions, it synchronizes with the checkpoint. This is very important for the distributed setting, where you have several machines that all do their own thing but have to synchronize: when something restarts, or when something breaks, they synchronize via the checkpoint. The checkpoint is also what we use to export the SavedModel. OK, so that's great. I've shown you how to train a linear model with pandas input, but what if you want something else? Just anything else, really. Well, all of these components are swappable. The pandas input function, for instance, you can swap for other input functions. We have several that are just there, and it's also pretty easy to write your own, so you can read from NumPy, or from any Python generator. Anything you can spit out of a generator, you can use.
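Here is that condensed sketch, pulling together the three feature columns, the canned linear regressor, the pandas input function, and the training call. It's a sketch against the TF 1.x estimator APIs (the exact module paths moved around between the 1.2 release candidate the talk targets and later 1.x releases; this is the later-1.x spelling), and the CSV file name is illustrative:

```python
import pandas as pd
import tensorflow as tf

# Declarative feature columns: what the input looks like and how to use it.
make = tf.feature_column.categorical_column_with_hash_bucket('make', 100)
horsepower = tf.feature_column.numeric_column('horsepower')
cylinders = tf.feature_column.categorical_column_with_vocabulary_list(
    'num-of-cylinders', ['two', 'three', 'four', 'six', 'eight'])

# The simplest canned estimator: a linear regression over those features.
regressor = tf.estimator.LinearRegressor(
    feature_columns=[make, horsepower, cylinders])

# Off-the-shelf input pipeline reading from a pandas DataFrame.
cars = pd.read_csv('cars.csv')   # illustrative file name
prices = cars.pop('price')       # the label we want to predict
input_fn = tf.estimator.inputs.pandas_input_fn(
    x=cars, y=prices,
    batch_size=64,               # 64 examples per iteration
    shuffle=True,                # always shuffle the input
    num_epochs=None)             # cycle through the data indefinitely

regressor.train(input_fn=input_fn, steps=10000)
```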
So this makes it very, very flexible. And you can write your own input function, too; it's not that hard. Also, the models that we have are not limited to just linear models. We have a linear regressor and a linear classifier. We have a deep neural network, which is a basic, straightforward feed-forward neural network. And then we also have estimators that do random forests, k-means, support vector machines, you name it. All of this is available directly for use, so you don't have to implement it yourself.

Let's say we want to swap this out. First of all, we obviously have to change the name of the class that we're using. And then we also have to adapt the inputs to something that this new model can use. In this case, a DNN model can't use these categorical features directly; we have to do something to them. The two things you typically do to a categorical feature to make it work with a deep neural network are: you either embed it, or you transform it to what's called a one-hot, or indicator, column. So we do this by simply saying, hey, out of the make, make me an embedding, and out of the number of cylinders, make me an indicator column, because there aren't so many values there. Usually this is fairly complicated stuff and you have to write a lot of code, but this is it. All you have to do is declare, hey, I want an embedding for this, and you're done; there's a rough sketch of this swap just below. Then, also, most of these more complicated models have hyperparameters. In this case, we basically tell the DNN, hey, make me a three-layer neural network with layer sizes 50, 30, and 10. And that's really all you need with this very high-level interface. For other models it can be more complicated, or less, depending.

And if none of this works for you, you can still take advantage of the training loops and all of the infrastructure that is in the estimator itself, and completely swap out the model and write your own. The way you do this is: we have this Estimator base class, and you can pass in what's called a model function. This allows you maximum flexibility. The model function takes the inputs, and it produces the tensors that contain the thing it has to do for training, the thing it has to do for evaluation; it returns the predictions; and it'll also produce a SavedModel if you need one. So you can write all of this yourself if you really want to, if none of the preexisting models fit your use case. And you can use regular TensorFlow to define this model function, or you can use Keras, and François will say a little bit about that.

So the exciting thing about TensorFlow is actually the fact that it runs not only on a single machine, but distributed in a data center. To make this work, we have a utility that uses several workers and basically trains the model on all of these workers at once. We call a distributed training run, a single run of training a model, an experiment. To make one of those, we first make an estimator, as you've seen before, and then we put the input function and the estimator together and make an experiment. The name experiment comes from hyperparameter tuning, and I don't know how many of you are familiar with hyperparameter tuning. It's a very important concept in machine learning: basically, instead of just training a single model for your data, you train a whole class of models and you pick the best one. And it's very powerful.
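Here is the estimator swap described a moment ago, continuing the earlier sketches and hedged the same way: same declared inputs, different model, with the categorical columns adapted for a deep network.

```python
# `make`, `horsepower`, `cylinders`, and `train_input_fn` are from the
# earlier sketches; only the model and the column adapters change.
deep_columns = [
    tf.feature_column.embedding_column(make, dimension=8),  # embed the make
    tf.feature_column.indicator_column(cylinders),          # one-hot, few values
    horsepower,                                             # numeric, used as-is
]

dnn = tf.estimator.DNNRegressor(
    hidden_units=[50, 30, 10],       # three layers, sizes 50, 30, 10
    feature_columns=deep_columns)
dnn.train(input_fn=train_input_fn, steps=10000)

# And if no canned estimator fits, the base class accepts a custom model
# function (my_model_fn here is yours to write):
#   custom = tf.estimator.Estimator(model_fn=my_model_fn)
```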
Now, back to hyperparameter tuning: I'm not going to go into details on it, but the infrastructure for it is all set up. In the function that makes the experiment, you can pass in hyperparameters, then use those hyperparameters to create your estimator, and then return that estimator wrapped in the experiment. That way you can implement hyperparameter tuning.

Finally, this function that makes an experiment, we pass to what we call the learn runner, which just runs it. We can pass some user options in a run config. The run config also contains information about the cluster that we run on, and we can use that information, for instance, when we make the estimator, if we want to tweak it depending on how many machines we have available or something like that. So it contains information like how many workers we have, how many parameter servers we have, and so on. But the run config also takes information from the environment, so we don't have to pass it explicitly. So what do we have to do to declare to TensorFlow, hey, here's a cluster, please use the whole cluster? We write what's called a cluster spec. And it is a very, very simple thing. It's simply a map from the names of the different machine types that we want to do different things, to lists of all the machines that should do each thing. Usually, you have parameter servers, or PS, and you have workers. The parameter servers store the variables; the workers do the actual work. That's kind of traditional. You just fill out these lists, and you're done. Then you dump that into an environment variable, the second thing here, and everything else will work from there. OK, setting up a cluster depends on the environment, and I'm not going to talk in detail about it. But if you are interested, it's on GitHub in the ecosystem repo. There are a number of scripts and examples there that help you get started.

One thing I'd like to mention, and I don't know how many of you have seen the TPU talks: this is part of the TPU story. If you stick to these concepts, your code will basically work on a TPU without modifications. So you will be able to use all of this once you can use TPUs.

So, what I've shown you: we have in TensorFlow implementations of complete machine learning models. You can get started on them extremely quickly. They come with all of the integrations, with TensorBoard for visualization, with serving for production, for different hardware and different use cases. They obviously work in distributed settings. We use them in data centers; you can use them on your home computer network, if that's what you like; you can use them in flocks of mobile devices. Everything is possible. And they run on all kinds of different hardware. In particular, they will run on TPUs, which is nice. And they also run on GPU and CPU, obviously.

OK, so before we move on: the full code, again, is at this URL. If you're interested in more, do check out the tutorials on TensorFlow.org, and particularly the estimator tutorial. I think that's maybe the most interesting one for people who want to write their own estimators. And do go to the talk tomorrow at 1:30 in the amphitheater on TensorFlow Serving with Noah. And with that, thank you very much, and I'll hand over to François, who'll tell you more. Thank you.

Thank you, Martin. So canned estimators are great for many use cases.
But what if you need something that's not available as a canned estimator? What if you need to write your own custom model? That's where the Keras API comes in.

The Keras API is a high-level API for building TensorFlow models, and you can use it together with the Estimator class and the Experiment class that Martin introduced. As you know, TensorFlow features a fairly low-level programming interface, where you spend most of your time multiplying matrices and vectors. That's very powerful, but it's also not ideal for building very advanced, complex models. We really believe that in the future, deep learning will be part of the toolbox of every developer, not just machine learning experts, because everyone needs intelligent applications. And to make this future possible, we need to make deep learning really easy to use. We need to make available tools that are accessible to anyone, because you shouldn't need to be an expert in order to start leveraging deep learning to solve big problems.

So if you could design a deep learning interface without any constraints, what would an ideal interface look like? I really think the core building blocks of deep learning are all fairly well understood. And rather than letting the user implement everything themselves, it should be really easy to take existing building blocks and quickly assemble them into new deep learning data processing pipelines. It should basically be like playing with LEGO bricks. And I think LEGO bricks really are the ideal metaphor. They're very intuitive to use, very easy to use. They provide this very flexible, expressive framework in which you can build almost anything. They allow you to try different things very quickly and immediately get visual, tactile feedback about what works and what doesn't. And of course, LEGO bricks are very accessible; they're accessible to any human being from about age four and up. And that's really the ideal we had in mind when we designed the Keras API. We wanted it to be the LEGO of deep learning.

So let's take a look at what Keras can do. First of all, I think it's better to think about Keras not as a codebase, a framework, or a library, but as a high-level API specification. It's a spec that has several different implementations. The main one, of course, is the TensorFlow implementation, but there's also an implementation for Theano, one for MXNet, one for Java, and there are more coming. And what makes Keras different from every other deep learning interface out there is its deep focus on user experience. Keras is all about making your life easier: simplifying your workflow, providing easy-to-use building blocks and intuitive affordances, giving good feedback when things go wrong, and just reducing complexity, reducing cognitive load. And of course, if you make deep learning easier to use, you're also making it accessible to more people. The end goal with Keras is to make deep learning accessible to as many people as possible.

Until now, the TensorFlow implementation of the Keras API was available as part of an external open source repository. But what's happening now is that we are bringing the Keras API directly into the TensorFlow project, and we are doing that to make Keras work seamlessly with your existing TensorFlow workflow.
Concretely, what's happening is that Keras becomes available as a new module inside TensorFlow, the tf.keras module, and it contains the entire Keras API. So if you're a TensorFlow user, what that means for you is that you now get access to this new set of easy-to-use deep learning building blocks that works seamlessly with your workflow. And if you're an existing Keras user, what this integration means is that you suddenly gain access to high-level TensorFlow training features: things like distributed training, distributed hyperparameter optimization, and training on Cloud ML. So that's really powerful.

To give you a concrete sense of what your workflow will be like when using the Keras API to define your models, and TensorFlow estimators and experiments to train them in a distributed setting, I will walk you through a simple yet fairly advanced example: the video question answering problem. Here's what the problem looks like. We have a data set of a few thousand videos. Each video is about 10 seconds long, and it shows some people doing some activities. A deep learning model will be looking at the frames of these videos and trying to make sense of them, and then you can ask the model short natural-language questions about the contents of a video. In this example, we have a short video of a man packing some boxes into a car. You can ask, what is the man doing? The model will look at the video, look at the question, and have to select one answer word out of a set of possible answers. So here, you can ask, what's the man doing? He's packing. And that's actually an interesting question, because if you were to look at just a single frame from this video, you couldn't answer it; the man could be unpacking as well. The reason you know he's packing is the order of the frames. So we expect our model to be able to leverage not just the visual contents of the frames, but their order as well.

Needless to say, this is a tremendously difficult problem. Just three or four years ago, before Keras, before TensorFlow, this would have been doable only for a handful of fairly well-funded research labs. It would have been pretty much a six-month project for a team of expert engineers. And what we are doing now is making this really advanced problem accessible to pretty much anyone with basic Python scripting abilities. So we are democratizing deep learning.

Here's what our solution looks like. That's our network, and it is structured in three parts. First, you have one branch that takes the video input and turns it into a vector that encodes information about the video. Then you have one branch that takes the question and turns it into a vector. Now you can concatenate the question vector and the video vector, and you can add a classifier on top, whose job is to select the correct answer out of a pool of candidate answers.

The first step is to turn the video input into a vector. A video is just a sequence of frames, and a frame is an image. What you do with an image is run it through a convnet; that's the natural thing to do with an image, a CNN. And the CNN will extract one vector representation for each frame. So what you get out of that is a sequence of vectors encoding the frames.
And when you have a sequence, the natural thing to do is to run it through a sequence-processing module called an LSTM. This LSTM will reduce the sequence to a single vector, and this vector encodes information about all the frames and their order: the entire visual contents of the video. The next thing to do is a similar process applied to the question. The question is a sequence of words, so you use an embedding module to map each word to a vector, a word vector. You get a sequence of word vectors, and you reduce it using a different LSTM layer. Once you have your vector representation of the video and your vector representation of the question, you can concatenate them, and you add this classifier on top whose job is to select the right answer.

That's really the magic of deep learning. You take these really complex inputs, which could be videos, images, language, sound, and you turn them into vectors. You turn them into points in some geometric space. You turn meaning into points in a geometric space. That's the essence of deep learning. And what's really powerful about it is that once you've done that, you can use linear algebra to make sense of these geometric spaces, and you can learn interesting mappings between different geometric spaces. In our case, we are learning a mapping between an initial space of videos and questions and a space of answer words, and we are doing that just through exposure to training data.

And the way we are doing this is really by assembling together these specialized blocks for information processing. It's a very natural thing to do. If you have an image, you process it using an image-processing module, which is a CNN. If you have a sequence, you process it using a sequence-processing module, which is an LSTM. And if you want to select one element out of a pool of possible candidates, you use a classifier. So what you're really doing with deep learning is plugging together these information-processing bricks that are pretty similar to LEGO bricks. Building deep learning models is conceptually similar to playing with LEGOs. And if the ideas behind deep learning are so simple, then the implementation should be simple as well.

So let's take a look at the implementation. This figure is a very straightforward translation of our model into a Keras implementation. On the video-encoding side, we have this Inception V3 convnet, and we use a TimeDistributed layer to essentially apply this convnet to each frame along the time axis of the input video tensor. Then we pipe the output through an LSTM layer, which reduces it to a single vector. One interesting thing to note here is that our Inception V3 convnet comes loaded with pre-trained weights. The reason that's important is that with our current video data set, we don't have enough data to learn interesting visual features on our own, so we need to leverage pre-existing visual features that were learned on a larger data set, ImageNet in this case. That's a very common pattern in deep learning, and it's a pattern that Keras makes really easy, as we'll see in a second. On the question-encoding side, it's even simpler. You just run the sequence of words through an embedding layer to produce a sequence of vectors, and then you reduce that sequence of vectors to a single vector using an LSTM layer.
So once you have this video vector and this question vector, you concatenate them with a simple concat op, and you add the classifier on top, which is just two dense layers that will select the correct answer.

Let's look at the code. This is the entirety of the code for the video-encoding part. It's just a few lines. It's very readable; it's very simple. You start by specifying your inputs. This is your video input: a sequence of a variable number of frames. The None here is the number of frames; it's undefined, which means it can change from batch to batch. And each frame is a 150 by 150 image with three color channels. In the next step, in just one line, we're defining an entire Inception V3 model, a fairly complex model defined in just one line, and it comes loaded with pre-trained weights from the ImageNet data set. All of this is built in; it's already in Keras, so you don't have to do anything more. It's literally just one line. We are not including the top layers, because they are not relevant to us, and we are adding some pooling on top, which allows us to extract exactly one vector from each frame.

In the next step, we are setting this convnet to be non-trainable, which means its representations will not be updated during training. The reason that's important is that this convnet already comes with good, interesting representations, and you don't want to alter them. Again, it's a very common pattern in deep learning to take a pre-trained model, freeze it, and make it part of a new pipeline, and it's a pattern that's made really easy in Keras. So once we have this frozen, pre-trained convnet, we use a TimeDistributed layer to distribute the convnet across the time axis of the video input. The result of that is a sequence of per-frame features, which we run through an LSTM layer to get a single vector for the video.

On the question side, things are even simpler. We define our question input as a sequence of integers, a variable number of integers. Why integers? Because every integer will stand for a word in some vocabulary. We run this sequence of integers through an embedding layer, which maps every integer to one vector. These embeddings are trained, of course; they're just part of the weights of your model. Then you run this sequence of vectors through an LSTM to reduce it to a single vector.

One interesting thing to note about the two LSTM layers we've instantiated so far is that we are not configuring the layers beyond specifying the number of output units. That's interesting, because usually when you're using LSTM layers, there are lots of things you have to keep in mind, lots of best practices you should be following to make things work. For instance, you should remember that the recurrent weights should be initialized with an orthogonal initialization, that the forget-gate bias should be initialized to one, and many more. Here we are not doing any of this, because all these best practices are already part of the default configuration of Keras layers. It's a very important principle in Keras that best practices come included as defaults. What this means for you is that your models will typically just work out of the box, without you having to tune every parameter to make it work. That ties back into our goal with Keras of reducing cognitive load, reducing complexity. We don't want you to have to care about these technical details; we just take care of them for you.
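Assembled in one place, the model just walked through, including the classifier head and the training configuration narrated next, might look roughly like this. It's a hedged sketch: the LSTM sizes, the vocabulary size, and the answer-vocabulary size are assumptions, and the estimator-conversion helper at the end is named as in later TF 1.x releases.

```python
import tensorflow as tf
from tensorflow import keras

L = keras.layers

# Video branch: a variable number of 150x150 RGB frames per example.
video = L.Input(shape=(None, 150, 150, 3))
cnn = keras.applications.InceptionV3(weights='imagenet',  # pre-trained on ImageNet
                                     include_top=False,   # drop the top layers
                                     pooling='avg')       # one vector per frame
cnn.trainable = False                                     # freeze the convnet
frame_vectors = L.TimeDistributed(cnn)(video)             # apply per frame
video_vector = L.LSTM(256)(frame_vectors)                 # one vector per video

# Question branch: a variable-length sequence of word indices.
question = L.Input(shape=(None,), dtype='int32')
embedded = L.Embedding(input_dim=10000, output_dim=256)(question)  # vocab assumed
question_vector = L.LSTM(128)(embedded)

# Concatenate, then classify over the candidate answer words.
x = L.concatenate([video_vector, question_vector])
x = L.Dense(128, activation='relu')(x)
outputs = L.Dense(1000, activation='softmax')(x)   # answer vocabulary assumed

model = keras.models.Model(inputs=[video, question], outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Hedged: the helper that wraps a Keras model as an estimator; its exact
# name varied across TF 1.x releases (model_to_estimator in later ones).
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
```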
So once you're done encoding your video and encoding your question, you just use a concat op to merge them into a single vector, and you add these two dense layers on top, which will select one answer word out of a vocabulary of a given size. In the next step, you use your inputs and your outputs to instantiate the Keras model, which is essentially a container for a graph of layers. Then you specify the training configuration: the optimizer you want to use, the Adam optimizer, and the loss function you're optimizing for. So that's very simple so far.

At this stage, we've defined our model and our training configuration, and now we want to train this in a distributed setting, maybe on Cloud ML. So let's see how that works. This is where the magic happens. You can use the TensorFlow Estimator and Experiment classes that Martin introduced to train this Keras model in a distributed setting in just a few lines of code. All you have to do is write an experiment function in which you define your model, use the model to instantiate an estimator via the built-in method that turns a Keras model into an estimator, and then use that estimator to create an experiment. And that's where you specify your input data, for instance. So it's just a few lines of code. It's really like magic: in just a few lines of very readable, straightforward code, we've defined a state-of-the-art model, and we are training it in a distributed setting.

So we have solved this really challenging problem of video question answering, which would have been completely out of reach for almost anyone just a few years ago. These new high-level APIs are really democratizing deep learning. That's made possible by two things. On one hand, you have the Keras API, which is this very high-level, easy-to-use, and powerful way to define deep learning models in TensorFlow. And besides being easy to use, each layer comes with good default configurations, which allows your model to just run out of the box without much tuning. The other piece of the magic is the new high-level TensorFlow training APIs that Martin introduced: Estimator and Experiment. Together, these allow you to solve any deep learning problem with very little effort. So we think these new APIs are a big step towards democratizing deep learning and making TensorFlow and deep learning available to everyone. We hope you will find them useful, and we are very much looking forward to seeing all the cool applications you'll be building with TensorFlow and Keras. Thank you very much for listening.

As a lot of developers know, there's more to having an app succeed than just building a great app. You want your app to be dynamic and responsive, delivering fresh content to users and quickly reacting to their changing needs. You want to test out major decisions to make sure you're doing the right thing before you push them to your entire audience. And ideally, you want to provide a tailored experience for each user, so your VIPs feel like, well, VIPs. But let's be honest: that can be a lot of work. And if you're a developer without a ton of resources, that's time you'd rather spend on other things, like building your app. That's where Firebase Remote Config comes in. Firebase Remote Config is a simple key-value store that lives in the cloud. But don't let that simplicity fool you.
Because it lives in the cloud, you're able to deploy changes that your app can read within a matter of minutes. For instance, say you've just pushed your app out to the world, and you suddenly discover that your Swedish text contains some offensive language. How were you supposed to know? You don't speak Swedish. I don't blame you. But fixing that text the old-fashioned way would mean creating a new build and going through the entire publishing process again. That's something that could take days, which is an awfully long time to have 9.2 million people cursing your name. But if your app uses Firebase Remote Config, you could change that text in the cloud through the Firebase console, kind of like this. The next time your users fire up the app, Remote Config will grab the latest values, update your app's text, and just like that, you've averted a major international crisis.

Or let's say you've got a puzzle game and you're hearing complaints from your players that level five is too hard. If you've configured your app using Remote Config, you could tweak those settings to give your players a few more turns and push that change out to the world. But hang on, are you sure that's the right thing to do? What if the silent majority of your users actually enjoy the challenge of a more difficult level, and by making it easier, you're gonna turn away your most hardcore and potentially highest-paying customers? How could you test whether or not this change is a good one? Sounds like you need an A/B test. That's where Remote Config's audience segmentation feature comes in. It allows you to deliver different configurations to different groups of users simultaneously, so you can try out your new level settings with half your users while keeping the old settings for the other half.

But audience segmentation isn't just great for A/B testing. Maybe you've got a feature change that could have a major impact on your in-app economy. Or maybe you just wanna double-check that some new networking code isn't going to set your servers on fire. You can use Firebase Remote Config to gradually roll out these changes, trying them first with a small percentage of your users before pushing them out to your entire audience. Remote Config can also deliver different configuration sets to your users based on all sorts of different factors, from device type or locale to any audience segment you've defined in Firebase Analytics. So you can send out one welcome message to your New Zealand customers and another to your Australian ones. Or only show your "review this app" button to people who use your app every day. Or change your home screen experience for customers who have spent large amounts of money on in-app purchases, so they feel special. Remote Config is backed by a client library on iOS and Android that handles important tasks like caching, dealing with flaky connections, and keeping network requests lightweight, which is always a good thing. To give Remote Config a try, check out our documentation here. We can't wait to see what you build.

So you've built an amazing mobile app that your users are gonna love, but you wanna get it into people's hands and let them see just how awesome it is. Well, AdWords helps you do this, putting ads for your app in front of the billions of people that use Search, YouTube, Google Play, and more. You can quickly set up an ad campaign to reach the type of users that might be interested in your app.
You only pay if the user clicks on that ad, and you can set the budget and acquisition costs that you're comfortable with. But how do you know you're reaching the right users? Maybe some will install your app and forget about it, while others will make it part of their daily lives. Firebase Analytics helps you tell the difference. You can define events that happen in your app that you consider important, such as reaching the first level of your game, purchasing a fancy new pair of sunglasses, or returning every morning to check out new products. You can tell AdWords which of these events matter most to you, and AdWords will then display ads to the people who are more likely to complete these important actions in the future. You can also build audiences, which are specific segments of users, and have AdWords show your ads to them. For example, imagine you have a group of users who are very active and have added a product to their cart, but haven't purchased yet. Well, you can use Firebase to create an audience of just these people, and then use AdWords to show them specific ads that encourage them to come back to your app and take action. Understanding your users and engaging with them at just the right time, and in the right way, will help you build loyal users for your app. Firebase and AdWords, working together to help you grow your user base. Get started today; your new users are waiting.

Okay, so we're now in the Android sandbox area. I'm gonna spend some time in this area because there's a lot going on, but we're gonna start with features of the OS, and David here is gonna tell us about a couple of things I've been really curious about. Can we start with Kotlin? Let's talk about Kotlin. Sure. Kotlin is definitely the show-stealer, it seems to me, from watching the keynotes. So Android Studio now has Kotlin support. You can create a new project, and you'll see that you have the ability to check Kotlin support right from the beginning. We already have a Kotlin app here, but you can also write Java code and convert it to Kotlin very simply. And do you like writing Kotlin? I love writing Kotlin. I actually haven't gotten a chance to write Kotlin recently, but when I was first learning about it a year ago, I spent a lot of time just going through the language and getting very excited about it, just because it looked so much more concise, and it brings in a lot of modern features that I like from a lot of other languages. And I was really happy to see how fast they moved on it, and the great tooling support that IntelliJ brought to it as well. Yeah, I've been talking to people around the festival today, and one of the things they keep saying is how well designed they believe Kotlin is, and what a mature language it is because of that. Yeah, absolutely. I feel they took influences from a lot of newer languages. I highly recommend trying it out; give it a shot.

And Catherine, you've been working on bringing Kotlin into Android for a little while. Can you tell us what the motivation for that was? Yeah, so I think the big reasons we wanted to bring Kotlin to Android were that, like David says, most developers who try it find it really concise and expressive, and it has a lot of awesome features like type and null safety, which are really useful for writing stable, non-crashy apps.
Plus, it's totally compatible with the existing Android environment and also with the Java language, which means that even if you have a large existing Android code base, it's really easy to incrementally adopt it; you can try it where it works. You don't have to go rewriting large swaths of code. That's awesome, being able to incrementally adopt it, as you were saying, so you don't have to jump in and refactor everything all at once. Yeah, exactly. If you have some chunk of Java you're happy with, you can just leave that as it is, write some new module in Kotlin, and kind of adopt it very organically as you see fit. Awesome. I think the final thing is just that we've been hearing from more and more developers that they love it; it really makes them happy to write in it. And we try to listen to our developers and give them things they love, so, hey.

I want to look at something else as well, David: the profiler in Android Studio. Can you show me something there? Sure. We're really excited about the profilers. I feel like this is a long time coming. If you look here, you can see that this is Displaying Bitmaps, a sample Android app. We downloaded it as is; we didn't modify it at all. And if you look over here, you can see we've already started the profilers on it. We have the CPU, memory, and network profilers, so if I start to interact with the app, you'll start to see the different profilers update. Yeah. You can deep-dive into any of them. You can sample CPU activity here. We can click on some stuff, and then, if you drag across here, you can actually look at the content of a request. This is actually an HTTPS request, and you can still see its contents, which is something we're very excited about, because that was normally a challenge for a lot of people; I think some people would fall back to plain HTTP traffic just in order to see it. We've enabled some new features here, which we're happy about. Oh yeah, I'm used to inspecting the traffic on the machine that I'm testing on to try and get that debug information, and yet here it is, right in Studio. That's so convenient. Yeah, I really hope that people have a chance to play with it. We hope they find the profilers intuitive. I mean, to me what's exciting is that as you're messing around with your app, all of these things that were invisible before, you now have a chance to actually see and play around with. We're including documentation with it, but we're hoping that even without it, you should just be able to play around, try things out, see what happens, and then learn more about your code. We were once debugging some code and we were like, no, something's wrong here, we're getting this weird memory spike. And we were surprised to find that this code had had a problem all along, but we didn't know about it until we had written our own profilers. That's profiling. There you go. Awesome. David, thanks so much for taking the time. Thank you very much.

A few people online have been asking me to get a demo of picture-in-picture, so I found Rob, and he's going to give that to us. Hey, I'm Rob from the Android window manager team, and I've been working on picture-in-picture for this release.
We're really excited about it, because Android has always been a good platform for interacting with content, watching YouTube, et cetera, but if you want to interact more deeply and multitask, we haven't always provided the tools, and I think picture-in-picture is a great one. So I'll walk you through a way of using it. Here we have this video, and it's a Go game by Lee Sedol, the player who famously challenged AlphaGo last year. And we can see, if we go back to home, we'll get the video, and it's still playing here. We can move it around. And, for example, I could go into my Go app and start to multitask: I could play along with the game and begin to explore my own variations, and I can dismiss it when I'm done. That's awesome. Enjoy picture-in-picture. Thanks, Rob. Well, thank you.

Everybody's talking about Kotlin, so I was able to get some more info out of Andrey from JetBrains. Hi, Andrey. Hello. So, first question: you've been working with the Android team for a while. What are you most excited about with this collaboration? Well, it's great to be here, because basically this means many people will be coming to Kotlin: new users, new exciting ways of using the language, new learning materials, new libraries, everything. So Kotlin is growing as we watch, and it's altogether wonderful. I'm very grateful to the Android team that they had the courage to make the move. But, as they say, that was what the public actually wanted them to do. So we're very happy about it.

All right, so let's talk a little bit about the future and what's coming next. First, the language: what's coming next in the Kotlin language? There are actually many things we're working on, but the brand-new thing with Kotlin is coroutines. We showed the experimental design pretty recently; it was like three months ago. Coroutines are a big thing for doing asynchronous programming in an easy way. So we're now looking into improving that and finalizing the design, so that the next version of Kotlin will probably have it already stable, so everybody can use it and be sure it will keep working across versions. Then the next big thing is multiplatform Kotlin. Kotlin is now big on Android and big on the server side. We are working on JavaScript, and we have recently introduced Kotlin/Native, which is in a technology preview now. So we're trying to span the language across many platforms and enable multi-platform development, where you can, say, have a couple of modules reused across many platforms, and then some platform-specific modules that implement some functionality in a specific way, leveraging the intricacies of the platform. So that's a very big direction for us. And then there are language-only things, like value types, for example, for optimal storage, or collection literals. So we have very many directions there, and the strategic ones are those platform things and the coroutines.

Awesome. What about tooling? What's next there? Yeah, so JetBrains is all about tooling. Our first and foremost goal with Kotlin was making the language toolable, and we're now in pretty good shape on many things, but there's a long road ahead. We are now tightly integrated with Android Studio, and we'll work more on that. In the 3.0 preview, we have new wizards for creating things, and we have unified analysis for Java and Kotlin, and we'll improve that.
We're working on the incrementality of the toolchain, on the Google side as well as the JetBrains side, and for non-Android things we're doing pretty much the same. We're basically getting on par with Java on the Java platform. We're working on debuggability and incrementality for JavaScript. Native is very young for that, but we'll get all of that there too. Awesome, thank you so much. Thank you very much.

The Firebase Notifications console lets you re-engage your users quickly and easily. With it, you can manage and send notifications to your users with no additional coding required. Messages can be addressed to single devices, to Firebase Cloud Messaging topics, or to devices that you select using powerful analytics tools. So, for example, you can send a message to all of your users who've made an in-app purchase, giving them a special offer and allowing you to re-engage with them. The Firebase Notifications console integrates with Analytics, so you can measure the effectiveness of your messages and explore insights based on your users' activities. So you can grow your application by easily engaging your users through the Firebase Notifications console.

All right, so now we're in the Android Wear area, and we're gonna take a look at some of the newest watches with the design lead for Android Wear, Brett. Hi, Brett, how you doing? Hey, Timothy, doing great. So I was talking to David Singleton earlier today, and he said you have to check out the new TAG watch. Can you show it to me? Yeah, I can. So here it is. And you can also see that it's available in this diamond-studded edition as well. One of the things that's really nice about the TAG Connected is that it's modular, and that's actually in the name. So not only can you swap bands, you can also swap out the lugs on the device in addition to the bands. So you could have a metal bracelet band, a leather band, a sport band, and you can change these accents out as well, so you can go for the same tone or a contrasting tone, and it all snaps back together. And when it's together, it's rock solid. So that's one example of how partners are doing really interesting things on the hardware side. And of course, with TAG, it's got all its iconic watch faces in the software, and it's all powered by the Android Wear platform. Yeah, so that's pretty exciting. It was fun to be in Switzerland for the launch event for this too. They know how to host. I bet they do.

Tell me, one other thing I want to know is, for developers, what's the new thing they get to do that you're most excited about? For that, I'm going to demo a different watch. So the best thing that developers can do now is power the complication slots, or develop watch faces that show off really actionable, great information for end users. Here I've got this watch face that ustwo developed, and it's got four slots in addition to the time. So all I have to do is long-press on it, which, awesomely, is working just like it should, and I can change any of these slots. So I can show my next meeting. I can do a countdown to some important upcoming date. I can show today's date. And, let's see, let's change the layout over to a different one, so I can have more slots, and now I can do stuff like put a fitness goal that I've set up right on my watch face, with just a few taps. Whoa. And so developers can do two things.
They can build the watch face itself: if you're a developer and you feel like you've got a really strong sense of style, you can develop a watch face. But you don't necessarily have to be an expert in what data the user wants; you can just let the users decide. You can add these slots for complications and let data providers, either from the system or from other apps, power them. And the benefit is that users will really like your watch face. Or, if they're already using something like Robinhood for stocks or Google Fit for fitness tracking, they can get that data right on their watch face. So we're really excited about what developers are doing there, and there's still a lot of possibility left to explore. Awesome. Thanks. Sure thing, Tim.

Well, I work as a developer advocate at Google. My primary job is to work with our large partners, work on technical integrations, and work with our product teams to improve our developer-facing products. I regularly write software. I'm part of the engineering ladder, so yes, we are expected to be hands-on with code. Right. So do you love coding? Oh, yes, yes. I'm very much in love with coding. I still do it regularly. Well, I started very early. I was about 12 or 13 when I started coding, and it was always fun for me. I wrote more code than I played games, I guess, at that age. At the age of 12? Yeah, I started off pretty early. I had good mentors; that's the way I would put it. So which programming language did you really start with? I started with C. That's the mother of languages, right? Yeah, C was where I started off. Well, I went on to do my engineering, and after that I started working in a startup and then moved to IBM. So I guess it was a logical progression for me, after engineering, to take up a job that paid for it; I wanted the software engineering job itself, and it was particularly in coding where I felt more comfortable. So I just went ahead and did that.

If somebody wants to design a social app, which areas does that person need to look into first? So the first step would be to identify a problem that you want to solve in that community, one which that community would accept as a value-add to them, and then use an app as a medium through which that solution can be delivered. So identifying the problem would be the first step for me. And gathering that data will give you insights which will help you define that pain point really, really well. Because you are not that audience: if you're not part of that community, it's very difficult for you to understand the pain points of the community and be in their shoes. So it is best that you interview people who are in that situation and then use the data to decide on the problem. Exactly. So that was Mr. Amarik for you. Thank you, Mr. Amarik, for your time. We were honored to have you here. That was fun. Yeah, it was fun too.

Welcome, friends. Today we have with us Preeti Guruswamy to share her journey as a woman in tech. Preeti, how did you get started with programming? I don't come from a CS background; I come from a biotechnology background. I wanted to be in medicine. Unfortunately, because of our financial situation, I couldn't get into medicine. So a lot of people said, why not try IT? I started as a software tester, and something caught me. After a year, I wanted to do programming, and I remember my mentor. She asked me, have you programmed? I said no. Will you do it? she asked me, and I said yes.
Preeti, you've been involved in many communities. What is the one thing that keeps you motivated to contribute back to society? I feel women need a lot of empowerment. Even as a child, whenever there was a maid with her daughters, I used to sit with them and teach them, so that they would get something by way of an education, because education is a very important part of it. And mostly you're educated because you need to have a good husband; otherwise, if you're less educated, you may not get a very good one. I think that's why most of us are educated. At least in my cousin clan, we're all educated because we'd get a good husband. But I was not very keen on that, because I never thought education was for a husband, right? Education was for the self.

What is the message you would like to give to the women out there who want to start mentoring other women? I think, first thing: whether you want to be a mentor or not, you need to believe in yourself and move forward. Most of the time we don't believe in ourselves, right? And mentoring is one of the beautiful concepts that has long existed in India. We call it Guru; we had Gurus before. They were all mentors. Unfortunately, we have forgotten that fact. And being a mentor, you will learn a lot. So be truthful, be sincere, and tell your experience in a very sincere way. You don't know who will be inspired by you, who will learn from you. They might learn, or they may not learn, but at least you'll carve a path for somebody, and you will definitely be a good mentor. Thanks a lot, Preeti. You have a great story to share. Thank you so much. It's a pleasure to be here, and thank you very much.

Welcome, everyone. We have with us Soham Mondal today, who is a Google Developer Expert for user experience. Soham, what is UX according to you? So UX is, as you know, user experience. It's about understanding the user's need, the basic goal they want to achieve, and helping them achieve that goal. As simple as that. Recently, your app got featured on the Play Store. So what are your tips for other app developers? The first thing is: do a lot of user research. Understand your users' backgrounds and motivations, why they are installing your app in the first place, and then build something for that. After that, once you've understood that and tested it, done some usability testing, then follow guidelines, right? Guidelines make your life easier. Follow the material design guidelines and other guidelines. That's how my app got featured on the Play Store.

What are the tips you would give to people who are building for rural India? The challenges in rural India are completely different. First of all, you have to localize the application. There are so many languages in India, so it's very important that you localize the app and make it very, very accessible. Apart from that, make sure that the gestures, the icons, and the overall application are very, very localized. People are not used to swiping, because this is their first computing device, so make sure that you're building something that they understand. And finally, make sure that you're doing usability testing, that they're able to achieve the task, in any kind of application. That's very, very important. With all of this, I'm sure you'll be able to make a great application for the whole of India. You are a BLR Droid community organizer. What is it that motivates you to give your skills and expertise back to the community?
I've been part of this community since 2009. Initially, I was just a member. I used to go to meetups, and I used to learn so much and meet so many interesting people. It's such a great experience, where you learn something and you meet people, that you want to give back to it, because it's so good. I've learned so much from it; it's only fair that I give back. So that's my motivation.

Whether you're just starting out on your journey toward a career in Android development, or you've been working as an Android developer for some time, you might ask yourself: how can I separate myself from the pack and get recognized? Introducing the Associate Android Developer certification by Google, an achievement available to those who can demonstrate the skills of an entry-level Android developer. The first step on your journey is determining whether you're ready to take the exam. Start by learning what the exam covers, and review the skills you'll need to demonstrate when taking it. Next, decide whether you need training or are ready to take the exam. Training is available online as well as in person, and also at some colleges and universities. When you're ready, sign up and take the exam. As part of the sign-up, you'll pay an exam fee: if you live in India, you will pay 6,500 rupees; if you live outside of India, you will pay 149 US dollars. After you've signed up and paid the fee, you will download the exam, load it into Android Studio, and begin. The exam is a timed, performance-based assessment in which you'll implement new features and debug issues in an existing app. When you start the exam, you'll have 48 hours to finish, and once you are done, you will submit the exam for grading. Your submission will be evaluated through a combination of machine and human grading. Based on the outcome of machine grading, you will move on to the exit interview. After you've finished and passed your interview, you will receive a mark from Google and join our community of Google-certified Associate Android Developers. Once you are certified, you can share your mark on your resume, LinkedIn, Google+, Twitter, and in your email signature.

Every time your mobile app crashes, it's an invitation to your users to rate it poorly and uninstall it. This can spell disaster for the new app you just launched. If you're an app developer, you need to know exactly where your app is having problems, and you need this information quickly, so you can correct the issue before it affects too many of your users. This is where Firebase Crash Reporting can help. Our crash reporting tool collects information about the crashes your users are experiencing and sends that data as quickly as possible to be tracked in your dashboard. With the dashboard, you can monitor the overall health of your app. Here, you can see the top crashes and track the recent history of crashes in your app. Crashes are grouped by similarity and ordered by the severity of their impact on your users, so you always know which issues to address first in order to best increase the quality of your app. Each instance of a crash comes with detailed information about its circumstances, including the stack trace, device type, and other important details about the device at the moment of the crash. To further enhance these details, you can log additional information as the app is running. All recent log messages are captured for every crash to help your diagnosis.
In the event that you're able to handle and recover from an error in your code but want to report that event for analysis as well, there's an API to send these non-fatal errors for display in the dashboard. It's easy to get started with Firebase Crash Reporting. On Android, the SDK is enabled simply by integrating the Firebase Gradle plugin into your build, with no additional lines of code required. And on iOS, there's a CocoaPod that requires a few lines of code for initialization when the app launches. To learn more and get started with Firebase Crash Reporting today, be sure to start with the documentation available right here. We can't help you write perfect code, but we can help you fight fires, with Firebase.

Launching a great app requires dedication and vision, but growing one takes revenue. How about a monetization solution tailored specifically to your app? One that has rich and engaging ads? One that works with Firebase to give you the insights you need to grow? And one that uses mediation to connect you with networks all over the world? Well, that solution is AdMob. Trusted by more than one million apps, AdMob offers developers everything they need to implement first-class monetization strategies. And when paired with Firebase, it's even better. AdMob is included with the Firebase SDK, and its APIs are built to make adding banners, interstitials, and video ads to your app simple. Plus, AdMob automatically selects the ads that pay you the most, so you can sit back and watch your revenue grow.

And as your business grows, you can benefit from AdMob's advanced features. Say version two of your app has a slick new design, and now you need an ad format that fits naturally with your content. With AdMob's native ads, you can create CSS templates designed specifically for your user experience. We'll style the ads to match and display the result in a native ad view that fits your app like they were made for each other. And it doesn't stop there: AdMob helps you earn in-app purchase revenue, too. AdMob can determine which of your users are most likely to make a purchase and target those people. They'll see an ad you design, and they can make purchases right there. Now, with your app's slick design and in-app products, it's become a worldwide sensation. But how can you make sure you're maximizing the revenue generated by each user? With AdMob, you can connect to ad networks around the world, bringing in even more advertisers who'll compete for your impressions. And because you're using Firebase, you get access to free and unlimited analytics. Imagine a big-time blogger in Tokyo posts about your app, and overnight your Japanese audience quadruples. With Firebase Analytics, you can easily spot the trend, then switch to your AdMob settings to tweak mediation configurations or start a campaign targeting your new fans. That's AdMob with Firebase: as easy as you want, and as powerful as you need.

Analytics. We all know they're important to building a successful app, which is why there are so many different kinds of analytics tools for app developers to use. There are in-app behavioral analytics, which measure who your users are, what they're doing, and so on. And then you've got attribution analytics, which you can use to measure the effectiveness of your advertising and other growth campaigns, not to mention push notification analytics and crash reporting.
But quite often, this work is being done by completely different analytics libraries, which means you've got reports living in various tools across the web, and trying to understand trends across these different reports, much less get them to talk to each other, isn't always easy. That's why we've created Firebase Analytics. Firebase Analytics is built from the ground up to provide all the data that mobile app developers need, in one easy place. And it starts by giving you free and unlimited logging and reporting. That's right: no quotas, no sampling, and no paid tier to worry about. Simply by installing the Firebase SDK, Analytics automatically starts providing insight into your app. You receive demographic information on who your users are, how regularly they visit your app, how much time they've spent using it, and how much money they've spent in your app.

But not all apps are alike, and you can get detailed information about what your users are up to by logging events specific to your app. These can include common events that Firebase Analytics has already defined, like when your users add an item to their cart, and there's also support for custom events you create yourself, like when a user completes a workout in your fitness app, or when they take a selfie in your photo app. Geez. But it's not just about seeing what your users are doing; it's also about discovering who your users are. So, in addition to demographic information, you can discover how your different groups of users behave by setting custom user properties. Have a music app and wanna find out whether your classical music fans browse more albums than your jazz fusion fans? That's the kind of data you can easily break out thanks to custom user properties.

And Firebase Analytics doesn't just measure what's happening inside your app. It lets you combine your behavioral reporting, what your users are doing, with attribution reporting, what growth campaigns are bringing people to your app in the first place. So if you wanna know which ad campaigns are bringing you the users who spend the most money, or who share the app with their friends, or who have unlocked the last level of your game and are ready for the sequel, you can do all of that in Firebase Analytics. But don't stop there: once you have all this information, you can take action on it using Firebase Analytics audiences. Firebase Analytics gives you the power to build up groups of users, or audiences, out of just about anything you can measure in your app. Wanna target users in Brazil who have visited the sports section of your in-app store? It's as easy as a few clicks in the Firebase console. Once your app has built up this audience, you can send them notifications using Firebase Notifications, or modify their in-app experience using Firebase Remote Config, or target them through AdWords, Google's ad platform. And then, because the impact can be measured using Firebase Analytics, you can confirm you're getting the outcomes you expect.

Firebase Analytics comes with a dashboard that lets you view answers to common questions, but if you need more specialized analysis, you can export all of your data into BigQuery, Google's data warehouse in the cloud, where you can run super-fast SQL queries to slice and dice the data however you'd like. You can even combine it with other analytics data that you might be capturing. And this is just the tip of the iceberg of what Firebase Analytics can do for you.
To find out more, check out our documentation here and give Firebase Analytics a try. We are in the era of progressive web apps. Browsers are more performant and capable than ever, and front-end JavaScript frameworks like Angular and Polymer have simplified development of rich, app-like websites. You can now build an entire application purely with static files like HTML, CSS, and JavaScript. Firebase Hosting is tailored for front-end web applications. Firebase Hosting is a developer-focused static web hosting provider that is fast, secure, and reliable. No matter where a user is, the content is delivered fast. Files deployed to Firebase Hosting are cached on SSDs at CDN edge servers around the world. From San Francisco to Stockholm to Seoul, your users get a reliable, low-latency experience. And every site is served over a secure connection. Firebase Hosting automatically provisions and configures an SSL certificate for each site deployed, so you can get that green lock of confidence. Deploying your app from a local directory to the web only takes one command. So whether you're building a single-page web app, a mobile app landing page, or a progressive web app, Firebase Hosting has you covered. To get started with Firebase Hosting, check out our quick start to get you up and running in minutes. Happy deploying! So here we are on Main Street. It's pretty much the main thoroughfare at Google I.O. 2017. I figured, let's just take a stroll and see what's going on. Wanna come with me? Okay. Hi, everybody enjoying the festival? Okay, good. So apparently this is an AR, augmented reality, experience, for those paying attention. Well, I haven't figured out how to do it just yet, though. Maybe it's over this way. It's just one of the many things that you can do here in between the sessions. I believe this is dancing, right? So let's go over here. So one of the cooler things that I found here on Main Street is the opportunity to send a postcard to, well, anybody. I'm choosing to send it to Future Timothy. Shall I write myself a few notes? Okay. Hi, I'm gonna join you all here and write a postcard. Well, who'd you write? I'm writing back home to Georgia, the country. Quite far. Let's see how long it takes to get there. We'll find out. I'm sending mine to Future Timothy. Oh, wow. Can I do that? But not to Future Timothy, to Future Nino. Can I send it to myself? Let's go do it here. I'm just gonna write mine over here. What was that? I don't know where we'll be living at the time. Well, see, if you send the postcard now, Future You is any time after now. So you're good. So the address is Future. Are you still writing the address there? Yeah, I'll get there. Are all the addresses in Georgia long? No, not really, not really. Okay, so, but you had yours written, so that was cheating. Yeah. Okay, okay. It's okay, we'll wait. Okay: I hope you're doing well. Take care. I'm so nice to myself. Take care and good luck with your startup. I have a startup. Great, so. I think we gotta mail them. All right. Oh, there were more designs; next time. Okay. Bye. See ya. Okay. All right, let's check out what this wall is all about. It's going well, how are you? I'm great, thanks, having a great time here. Hey, it's a developer festival. How could you not have a good time? I have no idea, honestly. It's beautiful out; it's an awesome time. So tell me what's going on with this wall. It's more than just a wall, right? It is more than just a wall. So this is actually gonna be an AR mural.
So as you can see, there are a lot of little icons and things on the screen. If I actually hold up my tablet here, you can see that there are floating overlays. This AR is being generated because there are triggers within the actual mural itself. So there are a couple of different icons and different animations here; here you can see the Google icons. And what we've been doing today is just taking short, five-second GIFs of people and then sending them to them, so they have a little moment from today. Awesome. I love it. I do, but can you send me a GIF, not a JIF? I don't know about that. Yeah, whatever. So there you are, jumping around in the background. What do you think? That's very cool. It's got the heel click and everything. Exactly. Awesome. Thank you so much. You're very welcome, of course. All right, let's go somewhere else. At this time, please find your seat. Our session will begin soon. Hello, welcome. Good afternoon, everybody. I'm Yigit. I'm the technical lead for architecture components, and I'm joined by Kirill, who is also part of the team. So today we are going to talk about persistence. Now, this is a loading screen. This is one of my favorite screens. Said no one. No one ever said, I like to wait. Especially if there's some content that they have already seen. And if you are making the user wait to see the same content, it's horrible. Like, you're a bad person. I don't mean that. So now, how do we fix this? We persist the data. This is what we recommend to developers. Whatever information you fetch from your network or other data sources, save it to disk, so that if your application is restarted when there is no network, you can still show something to the user. You can make that experience seamless. When we say persist on Android, we know this is a very crowded space. There are a lot of different options from different companies, and most of these solutions are really, really good solutions. But especially if you come to Android and you are new to the platform, what would you do? You would check what is inside the framework already. So there are these three things that come with the standard library. And if you read about them, you realize that if you want to store structured data, then you want to go with SQLite. SQLite is something we have been shipping since Android 1. It's a proven technology. It works very well. So you go ahead and say, OK, I know SQL. I want to use SQLite. You go to this page. This is the very first page you see. This is horrible. It's kind of trying to say, you know what? You actually don't want to persist. That's not what we meant, but this is what we have. So we said, you know what? Let's look at this page. We want to make this better, right? This page is trying to say, I want to select these columns, with these constraints, in this order. So if you look at this, this is a very, very simple SQL query, but it takes a lot of stuff to write it. You will need to define all these constants, which are not even visible on this page. So we said, OK, there should be some room for improvement. That's when we came up with Room, which is an object mapping library for SQLite. OK, so let's step back. We said writing the same thing in SQL is a lot shorter, a lot nicer. So let's go back to our roots: now there's a SQL query that we assign to a string. It is standard SQLite. Now, of course, if it is just like this, we cannot understand it.
We love annotations, so we put it inside an annotation. And if it is inside an annotation, now you want to get the response from the query. Well, put it into a method, so we can understand what the query wants to return. Now we know that this is a query: it wants to fetch these columns from this table, with this constraint. But if you look at the constraint, there's actually a bind parameter. This is SQLite's standard bind parameter syntax; we didn't come up with this. So where do we get this bind parameter? How do you get parameters into your functions? From the function arguments. So we put it there, the most obvious place to get this argument from. And then, last but not least, we want to know what it wants to return. OK, so it's returning a list of feeds. Now, of course, there's that Feed class, so it needs to be somewhere. We have that class. And I want to put this query inside a data access object. That's what DAO stands for. Because you don't want your application making database queries all around the code base, you want to put these into certain classes, which we call data access objects. So it's a DAO. We need to tell Room that this is a DAO. And then we need to tell Room that Feed is some class that we would like to persist into the database. Last but not least, we need the database to put these two things together. That needs to extend the RoomDatabase class. And for the FeedDao we defined there, we just say this database has this DAO. And the Feed entity we defined there, we put inside the database. So you can have multiple entities, multiple DAOs. You can actually have multiple database definitions that access the same DAOs. As long as whatever schema you define in that database works with the DAOs and entities, Room will figure it out. Once you have all that description, you can get the implementation of that database through this builder. It's very similar to how you use Retrofit or Dagger: you define the interfaces, we provide the implementation. Now, of course, because we are doing the implementation, select queries are very specific. We really don't know how you will select. But there are other queries, right? You would like to insert something into the database. So we said, OK, we can just define another annotation to make it easier for you. These annotations are actually very, very flexible. You can pass multiple arguments. Like, if you read this method, it says insert both; you really want to insert both of them into the database. And Room understands this. You can send a list of items. You can send variable arguments, multiple parameters. If it is readable, Room will make it work. You can even say, when you try to insert this thing, if there's a unique key conflict in the database, do this. So you can also specify these constraints. And since we have the insert annotation, we similarly have delete and update annotations, just for the very common tasks. Now, we said we just build on top of SQLite, but there are some cases where writing SQL is harder. For example, I have this query where I want to query these feeds. But I'm trying to return a list of feeds, which means I want to get multiple feed items. So I want to get them by ID, which means I probably want to pass multiple IDs. Now, in SQL, you would do this with an IN clause. But you need to know how many IDs you have, so you can put that number of bind parameters, and that is information we don't have while writing this query. Well, that's OK. Room can understand it.
If the parameter you are passing to a function is a collection, Room knows that, OK, they want to have multiple bind parameters. At runtime, we will generate the right query for you. Send them as an array, send them as a list; it doesn't really matter. If Room can understand it, we will do it for you. Now, the most important part about Room is that once we let you define all these things, Room understands these queries. It doesn't just take the SQL and generate code at compile time. Room understands it, knows what you are trying to do, and this gives us a lot of power. For example, if Room looks at this query, it says, OK, this is a select query with this bind parameter, which is passed in as a string. And it knows it is from the feed table, and it wants to return items as feeds. Then Room goes and says, OK, can I validate this? Yeah, the table has these three columns. The feed item has these three fields. These match. They're fine. You could have another class instead of that Feed that has these three fields; Room will still work. Now, after this verification, let's say you made a typo. The reason we define all those constants or use Java builders is to avoid exactly this kind of mistyped query. Room can do this validation for you. It's similar to Java code, right? You don't use builders to write Java code. You write your Java code, the compiler tells you if it is wrong, and the IDE helps you with this. So the reason we don't have builders is that we think helping you write a SQL query is the job of Android Studio, which they are working on. But once you write the query, Room verifies that it is correct. So if you make this mistake, Room will not let you compile the application. Similarly, if you access some column that doesn't exist in the database, again, it will be a compile-time error. By the way, it can be any query. You might be joining five tables; Room will still understand. You might have grouping, almost anything in SQL. Now let's say we made a mistake like this, where we only fetch the subtitle column, but we want to return this as a Feed. In this case, the Feed object also has a subtitle field, so this might be intentional; maybe you are going to fill in the other fields later on. Or you made a mistake. We don't know. In this case, Room just generates a warning. It says, hey, the Feed class has these two other fields that you are not returning from the query. It gives you all the details: what the query returns, in case you made a typo, and what the fields in the entity are. Now, there are two ways you can get rid of this warning. The first thing you can do is say, maybe I really do want to return them as Feed instances, and I can tell Room to ignore it: I know what I'm doing, suppress this warning. Alternatively, you can just use any class. As I said, you can just say, return me these as a string; you are returning one column as a string, and Room will do it for you. Very similarly, what if we are returning ID and title? In that case, Room will say, well, the query returns two columns, and a single string doesn't match that; it will give you an error. And when that happens, we can just create a POJO. Again, it can be any class in your application; as long as we can read what's inside it and it matches what the query returns, we will generate the code. And to make things match, you can rename the columns in SQL with aliases. All of those things work here. Room does all of this automatically.
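Pulling those pieces together, a minimal sketch of an entity, a DAO, and a database might look like this; the class and column names are hypothetical, and it uses the android.arch.persistence.room package names from the time of the talk. In a real project, each top-level class would live in its own file.

```java
import android.arch.persistence.room.Dao;
import android.arch.persistence.room.Database;
import android.arch.persistence.room.Entity;
import android.arch.persistence.room.Insert;
import android.arch.persistence.room.OnConflictStrategy;
import android.arch.persistence.room.PrimaryKey;
import android.arch.persistence.room.Query;
import android.arch.persistence.room.RoomDatabase;

import java.util.List;

@Entity
class Feed {
    @PrimaryKey
    public int id;
    public String title;
    public String subtitle;
}

@Dao
interface FeedDao {
    // The collection parameter expands into the right number of
    // bind parameters for the IN clause at runtime.
    @Query("SELECT * FROM Feed WHERE id IN (:ids)")
    List<Feed> loadByIds(List<Integer> ids);

    // "Insert both": varargs, lists, and single items all work,
    // and the conflict strategy is declared on the annotation.
    @Insert(onConflict = OnConflictStrategy.REPLACE)
    void insertAll(Feed... feeds);
}

@Database(entities = {Feed.class}, version = 1)
abstract class FeedDatabase extends RoomDatabase {
    abstract FeedDao feedDao();
}
```

Getting the implementation then goes through the builder mentioned above, something like Room.databaseBuilder(context, FeedDatabase.class, "feeds.db").build().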
But sometimes you have classes we don't know about. Now I want to invite Kirill to talk about them. How do we extend Room, Kirill? Thank you. So let's talk about something a little bit more interesting. Going back to this Feed object that has one ID integer field and two strings: these are primitive types that are directly supported by the underlying SQLite database. What happens if we want to add a field of a type that is not directly supported by SQLite, such as java.util.Date? In this case, at compile time, Room is going to tell us, well, I can't really figure out how to save this. You need to help me out. And the way you help out, in this case, is to provide two methods: one to take your original data and convert it to something that SQLite can store, so that is going into the database; and one for the way out, when you're doing the queries, to convert back from that representation in the database to your original data type. You implement these two methods. In this particular case, we are converting Date to long and back from long to Date. We annotate them; as Yigit said, we love annotations. We use the TypeConverter annotation on these two methods. And then we take this MyConverters class and use the TypeConverters annotation, pointing essentially to this class that has our converter methods. You can put it directly on the field in your entity object. You can put it on the entity object itself. You can put it on your data access object class or on a specific query in it. Or you can put it on your database class. No matter where you put it, at compile time and at runtime, Room will do the right thing to find those two methods. Now with this, we can write a query that finds the list of all the feeds that were posted between two specific dates, from and to. Here, even though the corresponding columns in the database are defined as long when the table is created, you don't really want to use long, the implementation detail. Since you are using the Date class for your entity field, you want to use Date objects in the query itself, and this is what you do here. When the query runs, these two values, from and to, are going to be converted to longs. The query will run. And on the way back, when Room constructs and fills these Feed objects, it's going to convert back from long to Date. Now let's talk about even more complicated stuff. What happens if you have a class hierarchy in your data model universe, an object graph that is not flat? In this case, we are adding a Location object that has two double fields, latitude and longitude, into our Feed. Now what happens at compile time? Once again, you need to tell Room how to store this field, especially here, where it's not even a primitive type but a type that has subfields in it. So one option is going back to our type converters. You can do something like take latitude and longitude and convert them into a single concatenated string with, let's say, a semicolon as a separator on the way in. And on the way back, it's going to be some kind of a split where the string goes back to these two doubles. Or you can do something like bit masking and bit shifting to try to encode these two doubles in one long field. This is going to be hard to query, even for this simple case where we only have two doubles that we are trying to compact into one field.
And the more complicated the data structure is, the harder it becomes to use type converters for these cases. Another option is to flatten everything yourself. So instead of having the Location field in your entity, you essentially flatten all the fields from the Location class into the main Feed class. And then Room is going to create these two latitude and longitude columns, which works. But first of all, you lose the encapsulation. Now you just look at the definition of your data class, of this Feed object, or somebody else in your project looks at it, and it's not really clear that these two fields represent one single entity. And another thing: just because you want to persist this object and later on retrieve it, why should you be changing the definition of your data classes? That's not good. Instead, there's another option, which is also not ideal. You can say, we have two data classes; why don't we store these objects, each one in its own separate database table? So we have our Feed object with ID, title, subtitle, and the posted-at date. And then we have the Location with latitude and longitude. Now, in order to connect them, not when you save, but when you retrieve them, in order to connect the location back to its feed, you need to have the primary key, the feed ID, so that they are combined together. Once again, it works. But first of all, yet another table is not very clean, and also, going back to what I mentioned before, why should you be forced to change and tweak the definition of your data model classes just because you want to persist them? So what would be ideal is that you keep your clean separation of how you define your data classes, but at runtime, the Feed object is persisted in one flattened table. So essentially, the latitude and longitude fields from the Location are flattened into the same table. And the way you do it is with this Embedded annotation. At compile time and at runtime, Room is going to figure out how to flatten your entire hierarchy into this one table. Now you can write a query like this in your data access object: we are going to select all the feeds in a specific geographical rectangle. As you can see, we are referencing latitude and longitude in the query directly. They are not in a separate table, and you are just using the same names for the database columns as the attributes in your Location object. You can write a query like this to select the location for a specific feed, based on the feed ID. And Room is going to figure out that while it needs to fetch the entire row from the database, it only needs to create and fill the Location object, because this is what this method wants to return. Or, if you want to be more specific, you can say, I only want you to fetch the latitude and longitude from that table. The last part about embedded objects is what happens when you have more than one field of the same kind of nested class. In this case, we want to store two locations. So what is going to happen at compile time: Room is going to say, as I was flattening these fields into one database table definition, I saw that the same latitude field is defined by two objects. And since SQLite doesn't support having two columns with the same name in the same table, it's going to fail with an error, not a warning.
And what you need to do, in case you do want to have something like this, in this particular case two locations, is to use the prefix attribute on the Embedded annotation. At compile time and at runtime, as Room is flattening the data graph definition, it's going to use this prefix to create latitude and longitude columns for the first location, and seen_latitude and seen_longitude columns for the second location object. Now let's talk about observability, for those of you who listened to yesterday's introduction session and who were here earlier in the day for the lifecycles, LiveData, and ViewModel session. So this is a simple query that returns the list of feeds matching a particular query. This is a snapshot, a point in time where you have run this query; however many feeds it returns, it represents the state of your data universe at that particular moment. But what happens if your data is dynamic? It can be manipulated directly by your users in the app, or maybe it's pushed or pulled from the server. In this case, if one of the feeds changes in the database, when you add new feeds, or when you delete existing feeds that match this query, then every single time, to reflect these changes back onto the screen, you would need to run this query again and again, explicitly. Instead of returning a list of feeds, you can wrap the return object, whether it's a single feed or a list of feeds, in a LiveData object. This instructs Room not only to fetch it once, but also to update the LiveData. The callback that you pass to the LiveData when you call observe (we will see it in a few slides) is invoked every single time any one of the objects returned by this query changes, or new objects are added, or existing objects are removed from the result of the query. And this is my favorite part of architecture components by far. We also provide support for using Flowable from RxJava 2. So let's see. Woo. Like this one. OK, so let's see an example of how Room integrates with the lifecycle-aware parts of architecture components. We're going to start with a simple database interface. It has one method to load the data for a specific user, and one method to save the data. The first time, it's going to be a simple insert, and then, as the data changes, it's going to be an update or replace. As you can see here, instead of returning the User object directly in our load method, we're returning a LiveData that wraps the information about this user. Now, we're going to set up the data binding to actually show the information for that user, perhaps on a details page or some kind of profile page. We have our lifecycle activity. It can be a lifecycle fragment, a lifecycle service, or your custom lifecycle owner; in this case, we're using a lifecycle activity. And in onCreate, we're going to get access to our database and data access object. We're going to call our load method, which returns the LiveData that wraps that user data. And now we are going to call the observe method on this LiveData object, passing two parameters. The first parameter is the reference to the lifecycle owner; this is our activity. And the second parameter is the callback that is going to be invoked by Room at runtime every time this user's information changes in the database, on the first load or on subsequent updates.
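As a rough sketch of how the converters, the embedded location, and the observable query from the last few minutes fit together (class and column names hypothetical, again using the package names of the time):

```java
import android.arch.lifecycle.LiveData;
import android.arch.persistence.room.Dao;
import android.arch.persistence.room.Embedded;
import android.arch.persistence.room.Entity;
import android.arch.persistence.room.PrimaryKey;
import android.arch.persistence.room.Query;
import android.arch.persistence.room.TypeConverter;
import android.arch.persistence.room.TypeConverters;

import java.util.Date;
import java.util.List;

class MyConverters {
    @TypeConverter
    public static Long fromDate(Date date) {
        return date == null ? null : date.getTime();     // into the database
    }

    @TypeConverter
    public static Date toDate(Long millis) {
        return millis == null ? null : new Date(millis); // out of the database
    }
}

class Location {
    public double latitude;
    public double longitude;
}

@Entity
@TypeConverters(MyConverters.class)
class Feed {
    @PrimaryKey
    public int id;
    public String title;
    public Date postedAt;        // stored as a long via the converters

    @Embedded
    public Location location;   // flattened into latitude/longitude columns
}

@Dao
interface FeedDao {
    // Room converts the Date parameters to longs for the query, and keeps
    // the LiveData updated whenever the Feed table changes.
    @Query("SELECT * FROM Feed WHERE postedAt BETWEEN :from AND :to")
    LiveData<List<Feed>> feedsBetween(Date from, Date to);
}
```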
Now, a word of caution. Because we don't want to put too many lines of code on one slide, this code lives in the activity, but you usually don't want to expose database details, or details about loading this information from your web service or caching it locally, directly to the activity or the fragment. So this code is highly recommended to be put in a ViewModel. And once again, the most powerful part here is this connection between the lifecycle activity, the owner, which says I'm active or I'm not active, and the LiveData that Room returns as a result for this particular query. This callback, the callback that you provide as the second parameter to the method, is going to be invoked every time Room detects that the result of the query you wrote, select star from the user table, has changed. Once again, the flow of information: one part of your app, it can be some kind of service that pulls information from your back end, or maybe the information is pushed over some kind of network socket, gets updates to the data. It uses this save method to insert the information into the database or update the existing information. As the data is updated in the database, Room is going to detect, at runtime, all those active places in your app, activities, fragments, services, or your custom lifecycle owners, that are active and that are subscribed to or observing a LiveData object that wraps the information that has been updated. And then this callback is going to be invoked, letting you know that you most probably want to update whatever slice of data is currently showing on the application screen. So, for example, it can be an app that shows live results for some sports games, or live updates for the weather wherever you happen to be, or live updates to stock market prices. One part of your app handles the data flow: it gets that information and inserts it into the database or updates it in the database. And the other part gets notified by Room and the LiveData that the information now has new data, and you need to update whatever the presentation is, once again, in your activity, in your fragment, or in your service that maybe posts a notification or handles a widget. All right, thank you. So let's focus on another important topic, which is relations. Now, SQLite is a relational database, so it can understand the relations between entities. And if you are using any ORM on Android, or on any other platform, they usually try to handle these relations for you. So let's look at how it works in Room. If you have a feed, there is probably also a user object, the user who posted this feed item. So you want to have a user field inside that entity. If you do this in Room, it actually won't compile the application. The moral of the story: we don't allow entities to reference other entities. Now, you're probably asking why, when everybody else does this. And if we can understand all this SQLite, why can't we just enable this? There's a very particular reason for us not to do this. We have seen a lot of problems in applications caused by these kinds of models, so I just want to go through an example. First, let's look at this query. It tries to select a feed with some ID. When we select this feed, or when you look at this Feed class, you cannot know whether it also fetches the user or not. It's very hard to define. And most ORMs solve this problem by saying, lazy loading, right?
Until the caller wants to fetch that data, don't load it. Which works very well if you're working on the server side, but on the UI, it is a little bit more tricky. So let's say we implemented lazy loading, right? We could just generate this code for you. We could keep a user ID and a user instance, and when getUser is called, if it's the first time, fetch the user from the database; otherwise, just return the existing one. Well, it's very easy. We believe this is actually a mine planted in your code base. Let's see how it goes off. We are using a RecyclerView where we show these feed items. We get the feed, and we show the title and the subtitle of the feed item. It all looks fine. And then, like two months later, your product manager comes: actually, you know what, let's show the user name on that feed item as well, right? It makes sense. And it's so easy. Your developer just adds, well, feed, get user, get user name, put it on the text view. You are done. You send the code review. It looks obvious. It passes the code review. You test it; it looks fine as well. But then your users start using it, and your application starts receiving these ANRs, which stands for application not responding. This happens because when you are testing the application, you are on a good device, there is good network, everything is fast. But when the user is using the application, there are probably 50 other applications that are also trying to run. And if you want to relate to this, just try to use your application while the Play Store is updating your applications. You will see how it feels. These are still mobile devices. The disk on these devices is actually relatively slow. And the UI thread only has 16 milliseconds. So even if your query takes five milliseconds, ignoring all the locks and everything, five milliseconds is a lot of time. You can probably lay out 20 RecyclerView items in that time. So how do we solve this problem in Room? Well, previously we said SQLite is a relational database; we should just take advantage of it. We just keep the user ID. We know the feed has a user ID, so keep it. And now we can write a query that says, I want the title, the subtitle, and the user name, and join these two tables on this constraint. This is already a solved problem in SQLite land. Why don't we just embrace it? When you do this, it's faster. You only fetch the data you need. And when you fetch the data, you know that data is in memory. Now you're going to say, wait, what is this item class? It can be any Java class. It may even have a constructor with public final fields; Room will set these without any reflection, it will use that constructor. You can even return a LiveData. Because Room knows about the query, it knows it's querying these two tables. Think about the other example: if you were observing the feed and the user changes, how would we know whether to invalidate the feed or not? But when you write it this way, we know what you are returning, and we know which tables you are querying. So we can be clever about this. So this is what we tell people: if you want to have relations, and you cannot use embedded objects or type converters, you probably want to use POJOs for your relations, which means you don't have any hidden costs. You only fetch what you need. And it's still observable, and you don't need to do anything for that.
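A minimal sketch of that join-into-a-POJO approach might look like this; the User entity and its columns are hypothetical assumptions:

```java
import android.arch.lifecycle.LiveData;
import android.arch.persistence.room.Dao;
import android.arch.persistence.room.Query;

import java.util.List;

// A plain POJO: not an entity, just the shape of the query result.
class FeedWithAuthor {
    public String title;
    public String subtitle;
    public String userName;
}

@Dao
interface FeedDao {
    // Room knows this touches both Feed and User, so the LiveData is
    // invalidated when either table changes. Assumes a hypothetical User
    // entity with id and name columns, and a userId column on Feed.
    @Query("SELECT Feed.title, Feed.subtitle, User.name AS userName "
            + "FROM Feed INNER JOIN User ON Feed.userId = User.id")
    LiveData<List<FeedWithAuthor>> feedsWithAuthors();
}
```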
But of course, we said SQLite is relational, and it supports foreign keys, so Room supports foreign keys as well. Inside your entity, you can say, this entity has a foreign key to this other entity, where I want to match the ID column in that entity with this child column. SQLite foreign keys also support things like multiple fields matching multiple fields in the other table. It's quite involved; you should just go ahead and read up on it. But once you declare this, SQLite knows about the relationship. And if it knows, you can say things like, you know what, if someone tries to delete that user, don't let them, because I have a feed for them. Or you can say, if that user is updated, please update me as well. So we're trying to embrace SQLite rather than trying to hide these details and create a bunch of pitfalls. Indices are another great thing. Now, SQLite keeps the data on the disk in a structured way so that you can query it back, and you can write a query like this, where you want to select the feed items that have a certain title. But if you just write this query and don't do anything else, that means SQLite needs to go through every single row in the table to find the ones that match. Now, if you are not making this query frequently, that's fine. But if this is a query you run frequently, you probably want to index that column. It's very simple in Room: inside the annotation, you can just say, can you please index this column? If your query is like this, where you query title and subtitle together, you can also create a composite index, which means these two columns will be indexed together. So if you are querying them together, that will be fast. And you can have multiple indices; it doesn't matter. Either way, just because you're querying something doesn't mean you should index it, because every index you add has a cost on every insertion or data change. So you need to measure and see what works best. And SQLite's documentation on the query planner is amazing; if you read that, you will know how to write queries and how to optimize them. Now, testing. We know in the past we haven't been very good in this area. But when we were designing architecture components, testing was a very, very important topic for us. We wanted everything we create for architecture components to scale and to be testable. So let's look at how you can test your queries. By the way, you should really test. Just because we are verifying your SQL queries doesn't mean that's what you really intended. Your Java code compiles, but you still test it. Similarly, if your Room SQL compiles, you should still test it. And testing Room is actually very easy. Now, we still recommend testing on the device, but there is no activity or UI, so these tests run fast. You can also test on the JVM by changing the SQLite bindings we use. So let me just go through a sample case. Before each test, I create my database, but I create it as an in-memory database, which means the database is created when the test starts, and I can clean it up afterwards. You want these tests to run isolated from other tests, so you don't want to save the data to disk. Then, in the after method, I just close the database so the memory can be released fast. And then I write the test itself: I create an item, insert it into the database, make some queries, and verify that's what I expect. But the way Room is designed actually helps with the overall application testing.
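Here is a minimal sketch of the sample test case just described, using the support-library test runner of the time; Feed and FeedDatabase are the hypothetical classes from the earlier sketches:

```java
import android.arch.persistence.room.Room;
import android.support.test.InstrumentationRegistry;
import android.support.test.runner.AndroidJUnit4;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;

import java.util.Arrays;

import static org.junit.Assert.assertEquals;

@RunWith(AndroidJUnit4.class)
public class FeedDaoTest {
    private FeedDatabase db;

    @Before
    public void createDb() {
        // In-memory: created when the test starts, nothing saved to disk,
        // so each test runs isolated from the others.
        db = Room.inMemoryDatabaseBuilder(
                InstrumentationRegistry.getTargetContext(),
                FeedDatabase.class).build();
    }

    @After
    public void closeDb() {
        db.close();   // release the memory right away
    }

    @Test
    public void insertAndLoad() {
        Feed feed = new Feed();
        feed.id = 1;
        feed.title = "hello";
        db.feedDao().insertAll(feed);
        assertEquals("hello",
                db.feedDao().loadByIds(Arrays.asList(1)).get(0).title);
    }
}
```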
Do you remember those DAO classes we talked about? Because you defined how you access the database as an interface, which has nothing to do with SQL, you can very, very easily mock it. So if you have a ViewModel that you want to test, first of all, you can test ViewModels on your computer; you don't need to run them on a device. You can create a mock of that DAO. You can just use Mockito to mock it. And then you can write a test like this: when loadFeed is called with this parameter, return these feeds. Because it's just a Java interface, there is not even any mention of a database there. So it's all very well abstracted, without you doing anything. Now, migrations are another important topic, right? As you work on the application over time, your entities will change. Say, for this user object, we now start to show user photos, so you want to add this new field. If you do that, when you run the application, on your first access to the database, Room will say, oops, looks like you changed the schema, but you didn't increase the version, and it's going to crash your application. Now you need to handle this. You can simply increase your database version in the annotation and tell Room you don't want any of the previous data, that you want to start from scratch, or you can write a migration. To provide the migrations, in our database builder, you can just pass a couple of migration implementations. This one says, I know how to migrate from one to two. You can have one from two to three, three to four, and Room will run whatever is necessary, from the current version on the device to your latest version. It will just chain them. And if anything in the middle is missing, it's going to recreate the database. Another important thing about these migrations: don't use any constants. Always write the full SQL, because over time your application will change, those constants may change, and you will get bugs. OK, we wrote this, but we said testing is important, right? So Room also comes with a testing Maven artifact. We have a separate artifact so that you can test your migrations. There are a couple of steps. First, when we compile Room, we can actually ask Room to export the schema, how your database looks, into some JSON files, which you should be committing to your version control system; you enable this by passing an argument to the annotation processor. Then we want these schema files to be visible to our test, so that we can access them. Last but not least, for our Android test, we add this new dependency from Room, which includes some helper classes that will help us with our migration testing. So let's say we are trying to test a migration from version 1 to 3. There is this MigrationTestHelper class, which comes from that artifact. We create an instance of that helper. The canonical name is how Room exports these files by default, so you don't need to think about it much. And we create an open helper factory. Once we have that, we say, OK, migrate from 1 to 3. We ask the helper to create the database at version 1. You may even have entities that don't exist anymore. It doesn't matter, because you exported the schema previously; by reading that, Room can recreate your database at that version. You can just change it directly to whatever you want. But as you'll notice, this doesn't return you a DAO. It just returns a database instance, because the DAOs may not be valid anymore at that version. Then you can just say, run these migrations and validate the schema. We give it a database name.
We give it the version we want. Now, you may have tables that you had before, and maybe you want to still keep them but not tell Room about them, so you can decide whether we should check for those or not. And then you give a list of migrations; these are all the migrations we have. What Room does is migrate the database from the previous version to the new version, and then check the schema for you. And then, if you have custom things, if you changed entities between those versions, you can manually assert on them. OK, so that was Room. We have a very detailed documentation page on the web. By the way, as we do for Room, for all our architecture components we have a testing artifact, which includes one rule for JUnit and one rule for your instrumentation tests that deals with the background executors we use. We want to make sure everything we have here is testable, and if it is not, we will fix it. So what's next for you? Room is available today. Please go check it out, play with it, see how it looks. Check out developer.android.com slash arch, which is the main page for all of these architecture components. Now, we say these can work independently, but they also work very well together, as you've seen in the LiveData example Kirill had. So check it out, and also check out the codelabs. We have codelabs for both Room and lifecycles, and they're available in the codelab tent. Thank you. Hello, everyone, and welcome to the 2017 TensorFlow Developer Summit. I'm delighted to see all of you here today. Today, we are excited to announce TensorFlow 1.0. TensorFlow's philosophy has always been to give you the power to do whatever you want, but also make it easy, and this makes it even easier. We really were hoping to build a machine learning platform for everyone in the world that was fast, flexible, and production-ready. The point of TensorFlow was to figure out how we can give this back to the community and be able to use TensorFlow to further both the research and the production needs. It's how we express our ideas, and it's the piece of software our engineers and scientists spend most of their time interacting with. So TensorFlow is a really exciting tool, something that will let you take this confusing world and start to dive into it. It's just a really amazing time to be an AI researcher. One of the projects that we've been working on is using deep learning for retinal imaging. Can we use deep learning and reinforcement learning to generate compelling media? But this is just the beginning. The TensorFlow community is truly global. We want to see all the amazing things that you can do with TensorFlow. Thank you very much. Good morning, Berlin. It is an absolute pleasure to be here with you. Here we go. We're live. We have a lot of experience building some of the world's most popular applications, and we've learned a thing or two about what it takes to build an app. And we've found that it's a pretty difficult process: a lot of your time goes into running infrastructure instead of building the features that make your app your app. There has to be a better way. That better way is using Firebase. We're now up to over 750,000 developers using the product. If you use Firebase, your app's code talks directly to our powerful managed backend services. We take care of security and of scalability, so that you can focus on building the features that your users love.
Today, we're launching FirebaseUI 1.0. It's an open-source library. It has customizable theming, and it works for web, Android, and iOS. So you can go ahead and drop that in, and you'll have all of the UIs that you'll need. Is my app set up correctly? Which events are being captured by the SDK? Are you receiving my events and parameters? We've built the ideal tool to answer all of these questions and address these pain points. App quality leads to better user retention. The better and more stable your app is, the more likely users are to come back, and the more likely your business is to be successful and sustainable. And that's where we come in. So we're really looking forward to getting feedback from the community, as always, to help us continue to refine the product and to work together to help you build a better app. And we want you to be able to spend all of your energy on bringing innovation and creativity, something new, to the world. That's really what we're trying to achieve here: making all the infrastructure pieces simple for you. And I'm really excited for you to engage with Firebase and see how it can make you more successful. All right, let's get back to the code. Hey, gang, want to see something neat? Check out this awesome hidden feature I found in Firebase Analytics. So I'm over here looking at all my reports in the Firebase Analytics dashboard. Here, for instance, I've got my active users for the last 30 days. And while these graphs sure are pretty, I'm thinking it'd be kind of nice if I could get these numbers into Google Sheets or maybe Excel so I could analyze them a little better, right? Well, watch this. I'm going to select my graph here in the Firebase console. It's kind of hard to tell, but you can see by the highlighted text here that my graph has been selected. And then I'll hit Command-C to copy it. And then I'm going to switch to a blank Google spreadsheet and hit Command-V to paste. And look at that: all my values are right there in the spreadsheet for me to analyze. So you can see here, in the leftmost column, I've got the date, and then all the actual numbers are in the columns next to it. Now, you might notice that I seem to have two columns of what looks like the same data, right? I've got monthly active users here, and then right next to it, I've got this monthly users column. And the same goes for my weekly actives, and same for my daily actives. Basically, the first column is for the value that corresponds to the date here on the left. The second column is for the corresponding day in the previous 30-day time period; it's the values that belong to this dotted line here in the graph that I copied. Make sense? Okay. And then I can do the same thing for a bunch of these other graphs. Here I can copy and paste my daily engagement numbers. Let's get these into a new sheet here. And again, you can see I've got my engagement numbers from this timeframe in this first column, and then those same numbers for the previous 30 days in this second column. And better yet, I can jump over to an individual event, like this completed-five-levels event, and copy all these graphs here at the top. And you can see I'll get event counts, user counts, events-per-user counts, and values for every one of the events that I am recording in Firebase Analytics. And this lets me do some pretty nice calculations right here in Google Sheets. For example, let's say our game designer is curious how often people are failing a level in our game.
Well, for starters, I've got my level-start graph here to show when people are starting a level in my game. So first I'm gonna copy and paste these numbers into a new sheet. Let's put them in. Okay, great. And then I'm gonna do the same thing for my level-fail graph, which shows when people have failed a level. So we'll copy from here and paste them right in next to my other numbers. And once I've copied and pasted these values into Google Sheets, I can then calculate my average failure rate per level by dividing this number here by this other one. I'm gonna copy this formula down for all of my dates. Let's give it a percentage format so it looks nice. Maybe we'll add an average at the bottom here; let's take the average of all these numbers. And there we go. Looks like my game has an average failure rate somewhere in the low 30s, which sounds like it's just challenging enough for our players. So our game designer is happy. Now, a couple of disclaimers here. First, this doesn't work on all the graphs I've tried. Some of them just don't seem to copy and paste as well as others, but it does work on a surprising number of them. You'll just kind of have to try them out and see if they work. And second, this will never be a replacement for some of the awesome and sophisticated data analysis capabilities you get by exporting your raw data to BigQuery. And you should totally go watch this video if you wanna find out more. But if all you wanna do is maybe compare two graphs to each other, or calculate some standard deviations or averages for a particular event, this trick can work surprisingly well. So give it a try yourself, have fun with it, and we will see you soon on another episode of Firecasts. We're now in the IoT tent. And first, we're gonna learn all about OpenThread from Jonathan. All right, so OpenThread is an open-source implementation of the Thread networking protocol. Thread is a low-power mesh networking technology that allows IoT devices to talk to each other over a low-power mesh network. So if you're building products that run on batteries that are supposed to last for years, not months, Thread's a great solution for that. What we've done at Nest is take the protocol, make an open-source implementation, put it in our products, and make it available for developers to build into their own products. Awesome. Yeah, so you want me to walk you through the demo here? Yeah, show me what's going on. So all these devices you see on the wall are actually running OpenThread. They're running on our partners' hardware, and they're connected to a single Thread network, one giant mesh network. And one of the benefits of Thread is that it actually uses IP, just the internet protocol. Each device has an IP address, an IPv6 address to be specific. So that makes it really easy for developers to build apps, because it's just the IP that they're used to. If you can ping a web server, you can ping a Thread node. So in this demo, we're actually showing pinging a device over the Thread network. As you see that light blink, the ping is actually going over Wi-Fi from this tablet to this Raspberry Pi, which happens to be on Wi-Fi, and then fanning out to the Thread network. And you can imagine the LED being replaced by sensors or actuators, like door locks and window sensors; we're just simplifying it for this demonstration. It's really cool. That's awesome. Thank you so much. Hey, before we go, actually, if developers want to get started, where do they go? Sure.
It's been publicly available for the last year; we launched it at Google I.O. last year. And you can go to github.com slash openthread slash openthread. There are codelabs available if you're here at Google I.O., but they're also available on the Google Codelabs website for you to try out at home. I see a golden retriever. It's a golden retriever. That's pretty great. So we're in the Android Things room, and I'm here with Ryan and Wayne. Ryan, tell us about some of the stuff that is built on Android Things in this room. Sure. All right. So over here, we have a simple demo that we are using to show how easy it is to go from prototype to production with Android Things. At the top, we have a very simple light that's turned on using an Intel Edison kit. Over here, we're using the same design, but with a custom PCB to make the footprint smaller, so you can fit the whole thing into a smaller form factor, which you can see at the bottom in the candle. Actually, the really cool part about that demo is that any developer can make a circuit board like that. In our talk tomorrow, we're going to show how to actually build that circuit board. And you can actually solder this up in your own workshop at home; that's one of the really cool things about the SOM architecture, that you can do that. Yep. On the right, we have the TensorFlow camera demo. This is also running on Android Things, on a Raspberry Pi 3 with a camera module. And it's actually fairly easy to build as well; all these parts are off the shelf. You'll notice that it's running on battery, completely portable. And the best part is it's actually running offline, completely locally on the device. We're running the TensorFlow Inception model you can get online. Once we download it and install it as an APK here, you don't need to be online at all, so there are no data costs associated with it. So when I press this button, it will take a picture using the camera module located here. Then it'll be processed on the device, and it'll say what it thinks it is using Android text-to-speech. Tell me about the M&Ms over here. So this one was actually built by one of the external developers in our community, Louis: the smile candy dispenser. It's powered by Android Things using a Raspberry Pi 3. Once you press this button, the camera will take a picture of you and send that image through the Cloud Vision API. And if it detects a smile, it'll give you the candy. We're using our relay module, as you can see here, connected to the motor of the dispenser to activate the dispensing. All right, y'all. Well, I think that's all the Android Things that we're going to check out in this booth. Wayne, before we get going, what are some things that developers can do today to play with Android Things? The really cool thing is all these samples here; we've open-sourced all of them. The TensorFlow image recognition, the LED candle: we've released all the source code on GitHub. The schematics for the candle are also available as actual circuit designs, so you can make them yourself. So you can try all these things out. And it's really easy to get started: you use Android Studio to write your code, and it's really easy to get going. So any Android programmer who's written a phone app now has the ability to make IoT apps as well. That's one of the really cool things about Android Things: it takes advantage of all of your existing Android knowledge and allows you to apply it here.
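For a taste of what that looks like in code, here is a minimal sketch of turning on an LED through a GPIO pin with the Android Things developer preview APIs of the time; the pin name "BCM6" is a hypothetical, board-specific value.

```java
import android.app.Activity;
import android.os.Bundle;

import com.google.android.things.pio.Gpio;
import com.google.android.things.pio.PeripheralManagerService;

import java.io.IOException;

public class BlinkActivity extends Activity {
    private Gpio led;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        try {
            // "BCM6" is a hypothetical pin name; the right one depends on the board.
            led = new PeripheralManagerService().openGpio("BCM6");
            led.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
            led.setValue(true);   // turn the LED on
        } catch (IOException e) {
            throw new IllegalStateException("Unable to open GPIO", e);
        }
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        try {
            if (led != null) {
                led.close();      // always release the peripheral
            }
        } catch (IOException ignored) {
            // Closing failed; nothing sensible to do in a sketch.
        }
    }
}
```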
All right, we're now in the Works with Nest room, and I'm here with Jesse, who's going to tell us a little bit about Works with Nest. Yeah, thanks for stopping by. So Works with Nest is the developer program for Nest. We have a bunch of different APIs that let you connect into the Nest ecosystem, and there are a lot of ways that you can connect. You can connect to the thermostat, the camera, the smoke alarm, and then also our demand response programs that we set up with utility companies across the country. That's awesome. So what are some of your favorite integrations in the Works with Nest program? So some of the really cool ones are over here. This is Awair; it's an air quality sensor, and it'll measure the different things in your air. How it integrates with Nest is that it connects to the thermostat and uses the fan to clean up the air in your house when it detects that levels are high. Oh, that's cool. What else? Another one that's really cool, which is a little different, because it's not inside the house but out in the yard, is the Rachio sprinkler. I have that in my house. It manages the water automatically, so I don't have to worry about my yard. With Nest, it shows up in my home report every month, where Nest tells me how much energy I'm using with my thermostat, but then I also have a list of how much water I've used, compared to the previous month. Pretty cool. Cool. So that's some of the stuff that's been around for a little bit. What's next? What are the newest integrations? So some of our new things are with the camera. The camera is now connected to the APIs, and we're doing cool new things there. Originally, you could use the APIs to get motion events and then have your products respond to motion in the house. But lately, we've been developing our image vision, and now we can recognize people. So now Works with Nest products can get these person events and do things when they know that there's a person. One example, not necessarily with people, but a cool integration with the camera, is Chamberlain and garage doors. What they do is, when the garage door opens, they grab a snapshot from the camera and integrate that into the history UI in their app. That's awesome. Totally. OK, so one last question, because we also checked out OpenThread. Is there anything in here that's using Thread? Absolutely. We're super excited about Thread. It's really the next phase of our developer program. The first phase is giving people an easy way to connect to Nest products. And then in phase two, we're going to take some of the core technology that we've developed at Nest to build our Nest products and make it available for developers. So we're working with Yale on this lock. Yale's been making locks for 50, 100 years. They're really good at it, but they're not really a software company. So we've taken some of the software that we use on our Nest products, like Thread, and made it available to them. And it's going to be a really cool lock that's integrated with the ecosystem. And we've actually open-sourced Thread, and we have a booth just a couple of booths down where you can see all about it and figure out where to get the code. Awesome. Is there anything else you'd like to tell all the developers out there? Check out the codelabs. We have an OpenThread codelab. We have one with the camera that integrates with TensorFlow. There's fun stuff that you can do for the next two days. Awesome. Thanks, Jesse. Thank you.
How's everybody doing? You OK? Yeah? That was really bad. How's everybody doing? You OK? That is much better. I'm Paul. And I'm Surma. What we normally do is a show called Supercharged, where we live-code things for people. And at the Polymer Summit last year, we did it live, live in front of people, live. And we thought, let's do it again. And so that's exactly what we're going to do today. But normally, one of us codes and one of us talks. Today, we thought we'd have both of us code. Not at the same time, though. I mean, we could. That would be stupid. We probably shouldn't. We shouldn't. That would be stupid. So Surma's going to code first. Do you want to tell them what we're going to do? All right. So I had this idea. Actually, I exported my Twitter archive, because I wanted to have it and look through it. And it turns out that the file that contains all the tweets I've ever written is about 16 megabytes worth of JSON, which is a big file. And I'm not going to open it, because VS Code doesn't like big files, apparently. But I have the first four tweets in an extra file. So that's basically what we're looking at. And you can see my first tweet, apparently, was in 2008, where I say something as fundamental as, got an account on Twitter. Yay. That's everybody's first. It's the universal first tweet, I think. It's either that or hello world. Like shouting into the empty echo chamber. Nobody cares. And what we wanted to do is: how would you use this file in a web app? Bring that file back, because there's something I noticed here. This isn't an array of objects. Exactly. It's not an array. It's literally just concatenated JSON objects. So one ends here, and the next one starts. There are no commas. In fact, the JSON parser, if you get rid of the selection, has put a red wiggly under there, because it thinks, well, it's not real JSON, right? It's not a JSON file, because there's more than one JSON object in there. So there are going to be some difficulties. So I guess, before we lose more time, let's just get started. And the first thing we always have to do is type really fast, and suddenly we have copyright headers. We like snippets too. So I'm going to add a script tag. I'm going to work on my part of the program, and I'm going to use the new stuff, which is a module called surma.js. That's me. Brilliant. OK, fine. Fine. Fine. That's the game you want to play. I should call it dassurma, because that's my Twitter handle. OK, I like it. Follow me. Wow. Always branding. I love it. So I'm going to use a module, which is a new thing that, A, allows me to be deferred by default, which is something you should always be doing, but also means we get to use the new fancy import, which we're going to use later. So let's write a new file that fetches the tweets.json file. I'm not going to put anything else in here. I'm just going to put this here, call it dassurma.js, and go in here, to the network tab. And if I load this, you can see I just downloaded 16 megabytes of JSON, which is quite a bit. This was really, really fast, because I have a server running locally. The second you're actually on the internet, this is going to take a while. So you would have to wait seconds, if not minutes, before you could do anything with the data. And that's actually a really important point.
The web is a streaming medium, and we actually want to treat it like that, rather than saying, I'm going to hold everything back, and then, ta-da. So what you would usually do, that was the official sound, the official I-received-the-response sound. What you would usually want to do is take the response and then call .json(), which turns the response into JSON. But because, as we have established, it's not actually a JSON object, it's a bunch of JSON objects, this would fail with, there's like trailing data, please take care of this, so I can't use it. The next option that most people are probably aware of is .text(), which would give you a 16-megabyte JavaScript string. And I'm telling you, on mobile, you're not going to have a good time. So we're now going to go into new territory and use something that probably is not as well known, namely that response.body is a stream. Ooh. And what kind of stream? Thank you. I was waiting for your line. Sorry. It's a readable stream, meaning that it's a stream that you can read. OK. You came in for the education. You get it. I'm going to explain it more. So a readable stream is an asynchronous data structure. Some people are going to scoff at this. It's a little bit like a promise, but one that can be resolved multiple times, so every time, you get a new result back. It's a pipe of data, right? Yeah, it's a pipe. It's a pipe. It's a pipe. And you consume data off it, right? And every time, you get the next chunk the network delivered. But because it is from the network, or because it's a stream, once you get that chunk, it's gone. It's been consumed. And therefore, there can only be one reader at a time. And so what we need to do is we need to get a lock for this reader, which you get with getReader, which means you now have a reader object and you're the only person who can read from this stream. And what do you do if you want to get rid of the lock? Is there, like, a getRidOfReader? releaseLock. OK. I knew that. That seems really OK. getReader and releaseLock. We don't need that, though. So we have a reader. And as I said, a reader is an asynchronous data structure, and so I'm going to make use of async functions, because otherwise this is going to get weird. With async functions, we can write code in a linear, imperative way, even though it is asynchronous. So we can do stuff that is very old school and usually very frowned upon, like while(true). Ew. Really? Yes. Oh, this isn't production code, by the way. Yeah, it's not production code. How often are we going to say that? It will work. Hopefully. Hopefully. But there's, yeah. So when you want to read from a stream, you would call your reader and say, read. And since it's asynchronous, we have to await that. It's basically a promise, but with await, it just inlines all the handling, and it makes it much nicer. And what we get from there is a value and a done flag. A little bit of destructuring going on there. All the new features, man. All the new features. So if the stream is done, it's end of stream. Then done is true and we can just return. Which is why a while(true) loop is OK, because that is our cancel condition. This is how we break out of it. I'm still really iffy on it, seriously. Anyway. So what is value? Let's just try out what value actually is. So I now console.log the value thing. I'm going back here, going to open the console, and we can see we just get a whole bunch of Uint8Arrays.
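Roughly, the loop being described looks like this. A minimal sketch, assuming the local tweets.json file from the demo:

```js
// A minimal sketch of reading a response body chunk by chunk.
async function loadTweets() {
  const response = await fetch('tweets.json');
  // response.body is a ReadableStream; getReader() takes the lock.
  const reader = response.body.getReader();
  while (true) {
    // read() is asynchronous: each call resolves with the next chunk.
    const { value, done } = await reader.read();
    if (done) return;   // end of stream: this is the cancel condition
    console.log(value); // a Uint8Array of raw response bytes
  }
}
loadTweets();
```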
This is literally the raw data from the HTTP response, packaged in nice typed arrays in JavaScript. And you can see that they're all pretty much 32 kilobytes big. But that is more coincidence than anything else. So for example, the last one is smaller. There's probably some smaller ones somewhere in the middle. I don't know. But it's a number you cannot rely on. This might be an implementation detail of Chrome or my Go server. It doesn't matter. The number is not really relevant. OK. So now we are receiving these chunks. And they're 32K big, coincidentally. But the problem is really that we have this whole series of JSON objects in my tweet file, and now there could be multiple tweets in one chunk, or there could be one tweet spanning multiple chunks. And that's not really helpful. So what we're going to do is we're going to write a so-called transform stream, which is going to transform our stream in a way that the chunks and the JSON objects align. So basically, for every chunk after the transform, we can be sure it's exactly one JSON object. Got it. So you could pass that to JSON.parse and you would get a JSON object. In the end, we should be able to do that. So instead of just arbitrary splits at 32K, we want it per object. Exactly. So now we're going to use the reference implementation for transform streams. They're going to be in the browser at some point. So a transform stream, to explain, is a readable stream, a stream that you can read, and a writable stream, a stream that you can write, put together into one object, where you put things in at one end, do some transform, and get them out at the other end. But they have not landed yet, so I'm going to have to rely on the reference implementation. So I'm going to import from transformstream.js. And now we have it and can use it. And let's see how to actually use it. So as I said, response.body is a stream. And if you have a transform stream, you can use pipeThrough, which means, here's a transform stream, pipe that data through the transform, and give me the result. So here we would create a new transform stream, and that transform stream takes a transformer, not to be confused with the cars. So what we have to write is a JSON transformer. You sure you don't want to put the new inline there as well? Let's not go three deep. I can give you a variable. So this is going to give us the JSON stream. So as a return, we get the transform stream, which is now a stream of individual JSON objects. So afterwards, we're not going to get the lock on our body, but on our JSON stream. And that should work. So let's take a look at the actual juicy bit, which is the JSON transformer. So, class JSONTransformer. It has a constructor, which we are going to need, but not right now. And a transformer has to have three methods. There's start, which is called at the start. We don't need that. There's flush, which is called at the end. And there is, attention, transform. Oh, that's what we need, right? That's what we need. And in transform, you get two parameters, the chunk that you now have to transform, and the controller. And the controller is something that allows you to control your stream. Let me explain with code. I think code is always better than trying to hand wave all the things. It allows you to do stuff like define what the output for this chunk is. You can set all kinds of different signals, like back pressure, your cache is full. And we don't need any of that. We just need the one bit of it.
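Since the next few minutes build this transformer up piece by piece, here's a rough reconstruction of where it ends up, including the in-string and escape handling added near the end. It's a sketch, not the exact code from their repo, and it tracks a start index instead of resetting the loop counter the way they do on stage:

```js
// TransformStream is the reference implementation imported in the talk.
import { TransformStream } from './transformstream.js';

class JSONTransformer {
  constructor() {
    this.chunks = [];      // byte chunks belonging to the current object
    this.depth = 0;        // current curly-brace nesting depth
    this.inString = false; // are we inside a JSON string?
    this.skipNext = false; // was the previous character a backslash?
    this.decoder = new TextDecoder();
  }
  start() {} // called once at the start; unused here
  flush() {} // called once at the end; unused here

  transform(chunk, controller) {
    let start = 0; // where the current object begins within this chunk
    for (let i = 0; i < chunk.length; i++) {
      // The file is ASCII, so one byte is one character.
      const char = String.fromCharCode(chunk[i]);
      if (this.skipNext) { this.skipNext = false; continue; }
      if (this.inString) {
        if (char === '\\') this.skipNext = true; // skip escaped characters
        else if (char === '"') this.inString = false;
        continue;
      }
      if (char === '"') this.inString = true;
      else if (char === '{') this.depth++;
      else if (char === '}' && --this.depth === 0) {
        // End of one JSON object: keep a view (not a copy) of the bytes
        // up to and including the closing brace, then emit.
        this.chunks.push(chunk.subarray(start, i + 1));
        this.emit(controller);
        start = i + 1; // the next object starts right after the brace
      }
    }
    // Leftover bytes belong to the next, not-yet-complete object.
    if (start < chunk.length) this.chunks.push(chunk.subarray(start));
  }

  emit(controller) {
    // Squash the collected chunks into one string and enqueue it.
    const json = this.chunks.reduce(
      (str, c) => str + this.decoder.decode(c), '');
    controller.enqueue(json);
    this.chunks = [];
  }
}

// Usage: pipe the body through the transform, then lock the reader on
// the result. Every chunk read from it is exactly one JSON object.
async function loadTweets() {
  const response = await fetch('tweets.json');
  const jsonStream = response.body.pipeThrough(
    new TransformStream(new JSONTransformer()));
  const reader = jsonStream.getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) return;
    console.log(JSON.parse(value)); // one tweet at a time
  }
}
```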
I've got a feeling if you Google for Jake Archibald streams, you're probably going to get a really good article. I think it's going to appear. It's just, yeah, well, that's the worry if you say streams too many times. I'm going to stop mentioning the word now. So we take the controller. And with enqueue, we can put something onto the output queue. So just to show it works, I'm just going to not do any transform. Whatever comes in, I'm going to take it and put it into the output. It's like a null transform. So, do nothing. Ideally, everything should work the same. Yes. So it's still doing nothing, just downloading the file. But now we have a point where we can inject our code to do all the transformation-y bits. So how do you do that? The thing that we have to do now is we have to kind of write our own little JSON parser. Really? It's not as complicated as it sounds. OK. So what we will do is we will collect all the chunks that belong to one JSON object. And once we have those, we'll squash them together, get a string, put that in the output, and enqueue it. And then we start over. So in this chunks array, we're going to collect all the chunks that belong to one JSON object. And now we have to figure out, how do we know when we've reached the end of a JSON object? And we do that by counting curly braces. So what we're going to do is, whenever we get a chunk, for byte of chunk, so we're going to loop over all the bytes in our chunk. That could be const. That could be const. Sorry. Sorry. Inadvertent. Stop it. Sorry. Carry on. We're going to convert it into a character, because we know this JSON is all ASCII, so we don't have to worry about UTF-8 Unicode shenanigans. Yeah. OK. And then we're going to switch over whatever character it is. And we care about, it's either an opening brace. Sure. Or? Closing brace. Closing brace. Well done. And we're going to count how deeply nested we are. So whenever we encounter an opening curly brace, we increase the depth. And whenever we encounter a closing one, caps lock is in the way, we decrease it. I mean, this does not feel brittle to me at all. Stop it. I'm cool with this. So when we decrease the depth, we will hopefully, at some point, reach zero, which means we have found the end of a JSON object. So what we're going to do here, we're going to emit. And we're going to talk about this later. If we have reached the end of the chunk without having reached zero, we know that we have to continue into the next chunk, and we have to hold on to the current chunk. I like it a little bit. Yes, I do the dancing programming. Do the chunk. OK, fine. It's going to help people understand. Sure, no, I get it. It's great. So when we've reached the end of a chunk but not emitted it, then we do chunks.push(chunk). Push. This is like when I type. I know. Ah, it's not a good thing. I'm the one that does the typos. He's the one that's normally really good at typing. And it's really annoying to me, because he's pretty good at it. So, the other case: we are in the middle of some chunk, and we have reached zero. We know the start of the chunk is part of the JSON object. The rest is part of the next JSON object. So we're going to split that apart. So we have the tail, a Uint8Array. So that's actually an interesting thing, because all these typed arrays in JavaScript are basically just views onto a chunk of memory, and you can have different views. So we can look at the same memory as a series of floats or a series of ints. But also you can have them at different positions.
So we are going to split it apart by just creating new views onto the same memory. So to be clear, rather than, say, copying this array into two separate arrays, you're just creating two views onto the same memory underneath. And one becomes the end of the object and the next one becomes the start of the next object. Exactly. OK, fine. What I just realized, though, is that for this we need to know the position, the index where we're at. So I'm going to rewrite the for loop to the old-school kind. I was feeling really smooth about that. I know, it was nice, but, you know, i equals 0, i less than length, i plus plus, done. All right, so the tail goes to i plus 1, because at i we have the closing brace, so the next one is where we want to cut. And we have the next object, where we kind of do the same, and we start after. And if we omit the last parameter, it means all the rest of it. So because i is the curly brace, i plus 1 is the next character. Fine. And because after this, we want to continue working on the remainder, I'm just going to get a little bit tricky and just say we're going to overwrite our chunk, because that's the next thing we can work on. Well, this kind of stuff really does my head in. Well, so the tail is, as I said, the last bits of our JSON object, so it goes into the chunks array. So now at this point, we know that the chunks array is our next JSON object, but potentially split into multiple chunks. So we need to convert this into a string. So you've got several Uint8Arrays that need flattening down to a string. Yes. Got it. And sadly, to convert a buffer or a typed array to a string, we need to use a TextDecoder. Sure we do. Sure. So it's a little bit, sometimes I wish this would be simpler, but it's just how it is. Now, OK. And what we can do is we can use this.chunks.reduce. We're going to start with an empty string, and we're going to take the string and the chunk, and we're going to concatenate the string with a decoded version of the chunk. And that is going to be our JSON string. Sure. Is this correct? I think it is. And now we have a JSON string, which we can. Is there a German word for the feeling of sadness you get when something is really convoluted? I think you could say something like Simplizismusmangelverzweiflung, despair at the lack of simplicity. Pardon? Simplizismusmangelverzweiflung. I love it. That's also my favorite, CSS. Tell me what CSS is in German. Wait. It was kaskadierende Stilvorlagen, cascading style sheets. Yes. More stuff should be in German. German always has a word. All right. All right. So we have the JSON string, which we can emit. So we have now actually turned it into a string. We can push that out. And now the only thing left to do is to reset our state. So we have to say this.chunks is now empty, because we're done processing it. That is true. You could also set .length equals 0. Can you? Yes, you can. You can do this? Yes, really? Yes, you can. I did not know length was writable. I'm trusting you. Awkward. Awkward. And now, because on the next loop iteration we want to start at 0, but it's a for loop, so i plus plus is going to happen, we have to set i to minus 1. No. Ew, that's gross. All right. You're not shipping that, are you? Yeah, I am. OK. All right, sure. So, it's a prototype. I'm not going to run this code on my big tweet file. I'm going to try it on the four tweets, because otherwise it's going to take too long. OK, so if this goes right, we're going to see four tweets. No, we're not. Line 13. People are paying attention. I like it. What is on line 13? Oh, byte is not defined. Yes.
So byte is our chunk[i]. See, it's judging you because you changed my for-of to that. And a typo in decode. Yay! So we can now see we have four individual console logs which contain exactly one JSON object each, which means the alignment works. But, and there is always a but. Let me show you why. What if there is an opening brace in a string? See, see. That would not do. If I do that, we would see nothing, because the parser would see, oh, an opening curly brace, an object started, but it's in a string and we didn't know that. So what we have to do, and it's actually not that hard to fix, is we just need to track if we are in a string. Oh, you're kidding me. Oh, come on. So whenever we find a double quote, we're just going to flip inString. Sure. inString. Yeah. And whenever we are in a string, we don't actually care about the braces. Now I'm getting that word, that feeling, the German one, about the sadness. Does it feel less brittle now? Isn't it nice? No, OK. Yeah, no, it feels perfect, mate. Great. This should work now. Yay. Cool. But. Backslash double quote. Backslash double quote. Oh, yeah, he's right. Got that. So I guess if I do this, it wouldn't work. Yeah, that's bad. Let's fix that. Easy, easy. Good shout, though. It's a great shout. So if we find something that is escaped, which we have to escape because it's the escape character. You must. We have to skip the next character. Sure. A name: skipNext. That is fine. Can I revoke your programming license? No. And if we do have to skip the next one, we have to set it back to false. Sure, false, yeah, go on then. Yeah. And continue. Sure, yeah. Yeah. All right. And now the ultimate test is going back to the full file and making DevTools grind to a halt. All right? But it actually scrolls a little, because we are not blocking the main thread too badly. This is actually kind of OK. Kind of important, actually. You've got a 16 meg file that's being handled on the fly. It's being streamed and transformed on the fly. And you've got, what's the most recent one there? Is that, must be, what, German? In German, yeah, OK. I mean, I was wondering how fast it actually is. I mean, yeah, there is processing going on, but it works pretty well. I mean, yeah. All right, and now, right, go on, move over. Right, because normal people, I don't think, would sit there going, oh yes, I am enjoying your JSON coming down and watching it arrive in the console there. Well, I certainly do. I know, right? I mean, know your audience. So we thought the better thing to do might be to actually create some kind of progress dial. And that's my job today. So as always, you do the visual stuff. That's how I roll. So I'm going to make an sc-dial, because it's Supercharged. I think I might just call it aerotwist dial, so my branding is OK. I can't believe you did that. That is not. I wonder if you want to call it dial.js or lewis.js. I want to call it. Or aerotwist.js. I want to call it, stick to the script. aerotwist.js, fine. I approve. I know you do. It's not what I expected from today. OK, so I've got an sc-dial. I'm going to make something called aerotwist.js. And you know what? I have a snippet for custom elements. I don't like writing these out with all the, I can't type them by heart. I just choose not to. With all those lifecycle callbacks already filled out, for Custom Elements v1. What's the German for custom element lifecycle callback functions, Surma? Lebenszyklus-Rückruffunktion. See? Yeah, that would be one.
Right, normally, I'm actually going to call it SCDial. Normally I go with an unnamed class, but today I'm not going to. That's going to be defined as sc-dial, with the class SCDial. And I don't need attributeChangedCallback. I did that just so you have a name for the class, because you want to refer to it later. Yeah, I've got a feeling we're going to need some constants in here and refer to them as static members of the class. So let me see. What I'm going to do, first of all, is I'm actually going to get rid of this console.log here. But I like my log. I know. Well, tough. I'm in control now. I'm the one doing the drawing. Right, now normally you could do something like this with SVG, but today I'm going to use a canvas, because I feel like not a lot of people like the canvas. You talk to people and they're like, you know, I did it once. But I use the canvas. So you're not going to bend over backwards and try to make a div be round? No, I'm going to use a canvas. Canvas is blazingly fast, and I think it's heavily underutilized on the web. And when you use a canvas, you get the context from the canvas, like this. WebGL? No. Oh, just 2D. OK, also. So basically, you can think of the context as your pen that you move around on the canvas, because you give commands like moveTo, lineTo, and you draw a path. See, I told you I was going to do some statics. Ah, constants. Yeah, SIZE. That makes you a good citizen, because you avoid magic numbers. I know, right? I feel really good. this.appendChild, because it's a custom element, I can just call appendChild against it rather than something else, because it's extending HTMLElement. But, but. What? It's a bug that it works in Chrome, which I always thought was hilarious. You are, per spec, not allowed to manipulate the DOM in the constructor of a custom element. You have to do it in the. It works. It's a bug. It works. Move it to the connectedCallback, because that's where you're supposed to do it. Because technically, you cannot rely on the fact that the DOM is already available in the constructor. It could still be floating in the ether of custom elements. It works. And then I probably should remove it, but whatever. All right. Sure. So the other thing that we're probably going to do is, what I want is an API where we can set a percentage value for the dial. As the file is coming down, as we get our 16 megs down, we should set a value, I'm going to say between 0 and 1. So we kind of need a way to put that value in from the outside, right? Exactly, right. So I'm going to write set percentage. Ah, setters. Percentage. So I guess the reason why you're not just using a straight-up property is because we want to tie logic to the moment when somebody changes the value. Exactly, right. So if it is, like, Number.isNaN, I can never get that right, percentage, you could just be like, I don't know. Whoa, you could be like that. Throw new Error. No. See? See, people don't, there you go. Validation, and also helpful error messages. Well done. Hey, I am all over this. Don't worry. I'll tidy it up before it goes onto the GitHub repo, which it will. Yes, our GitHub repo. By the way, it is GoogleChrome/ui-element-samples. All our previous code is on there. This one will get up there at some point today when we're done. Right. And feel free to use it and play around. Yep. So when you set the percentage value, we're going to get a reference to the dial from outside, we're going to say percentage equals 1 or 0.5, and we then want to call draw, which will draw on the canvas.
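Assembled, the element skeleton looks something like this, a sketch following the names used on stage. The finished version lives in GoogleChrome/ui-element-samples:

```js
// A sketch of the element being built here, not the repo's exact code.
class SCDial extends HTMLElement {
  // Constants as static members: good citizenship, no magic numbers.
  static get SIZE() { return 200; }

  constructor() {
    super();
    this._percentage = 0;
    this._canvas = document.createElement('canvas');
    this._canvas.width = this._canvas.height = SCDial.SIZE;
    this._ctx = this._canvas.getContext('2d');
  }

  // Per spec, DOM manipulation belongs here, not in the constructor.
  connectedCallback() {
    this.appendChild(this._canvas);
    this._draw();
  }

  get percentage() { return this._percentage; }

  // A setter rather than a plain property, so logic runs on every change.
  set percentage(value) {
    if (Number.isNaN(parseFloat(value))) {
      throw new Error('percentage must be a number between 0 and 1');
    }
    this._percentage = value;
    this._draw();
  }

  _draw() { /* the drawing itself, sketched a little later */ }
}
customElements.define('sc-dial', SCDial);
```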
I wonder what the draw function does. Oh, I don't know. Let's see. So what we'll do is we will begin. So you always operate against the context. So we'll beginPath. And I always say, if you're going to do something like beginPath, always put in its corresponding, or is it closePath? Always put in its corresponding closePath. Because the opposite of begin is close. Is close. That makes sense. Welcome to the web. I know, right? So I'm going to do, like, a circular dial, right? So I want an arc. So like a normal progress bar, but bent into a circle. Yeah, all the way around. So I love this. So I've got to do x, y, radius. OK, fine. So I'm going to say mid, mid, and mid. Now, I haven't defined mid yet. So that's fine. Zero. Let me guess. It's the middle. Oh, you're so good at this. Fine, fine, fine. 0.4. I'm just guessing an angle, which is in radians, just because I want to check that it draws something. Because when you're working with a canvas, sometimes it draws nothing. And that's fun to debug. Yeah. So, const mid equals SCDial.SIZE. So we're going to go for the size. I'm going to hand-tune this. We're getting some good mileage out of the constant. We're getting some good mileage out of the constant. I know, right? It's all over the place. It's going to be great, this. Don't you also want to do, like, a half? Shush. Shush. See, that didn't draw anything. Do you know why? Because after you've closed the path, you actually have to tell the context to fill the path. Yeah, because the path just defines the path. And you have to say what you want done with it. You can stroke it, or you can fill it, or you can probably do both, I think. Do you know what else you need to do? You actually need to call draw. What a day. There we go. Hey, we've got something drawing, at least. I mean, you're easily excitable. Now, the thing is, that's not really what I wanted, because I want a kind of pie shape. I mean, it makes sense, because you just say, go here, draw an arc, and then go back to your start. Yeah. It's correct. And as always, you made the mistake, not the computer. Thanks. So the fix for this is to actually move your pen to the middle, like you were saying before. Think of it, the context is like a pen. So if you move the pen, as it were, and you say moveTo inside the beginPath, and you say mid, mid, it draws the kind of pie chart. That's good. Oh, which is good. So now that's pretty good. So what else are we going to do? Let's see. Let's call this the outer arc. I mean, I find it a little weird that it kind of starts to the right, because we all learned to read clocks. I feel like. OK, fine. Yeah, that's fair enough. So yeah, he's right. You wouldn't normally go out to the side, would you? If you're going to start progress, you start at top middle. Now, one option here would be to take this start angle, which is 0, and just do, like, minus 90 degrees. But in the interest of showing something else to do with the canvas, I thought I would do it like this. We do a rotate. We rotate the canvas, which is going to rotate the coordinate system. So yeah, the coordinate system basically gets turned. Minus SCDial dot, let's see, I'm such a grown-up, NINETY_DEGREES. Really? A constant for 90 degrees? Look at that. I was like a proper. But a half doesn't get its own constant? No. I mean, no. Return Math.PI times 0.5, right. Didn't we just say sometimes the canvas draws nothing? Yes. I know what this is, because I've seen it before. It's because we're actually rotating around the top left-hand corner.
We're actually rotating around the origin, top left. Oh, so we have this thing here, because we're rotating it out of view. So we're drawing up there. Cool. It's just to help people. So you're basically just going to move the origin. Yes. And then rotate. So yeah, you move the origin with a translate. We move it to the mid, mid, then we rotate, and then we move it back again. It sounds odd, but it will work. It makes sense. But otherwise, well, I don't know how else you'd do it. It is right. So now? There you go. Yes. That's good. Here's another thing. I think what we'll do is we'll do an inner arc while we're here. And then we're going to actually wire it up to your code, which is now legacy code. Ew. We started out at the bleeding edge, and it's already obsolete. You're kidding me. That's, yeah. Right. So Math.PI times 2, because we want a full circle in the middle. I think it went full circle. Ooh, ooh, ooh, ooh, ooh. This will be fun. This won't get anybody upset. What? We'll call it TAU. You flame-bait. Apparently, this has support in this room. Some mathematicians are like, yes, you should use tau, which is the same as 2 times pi. Trolling. Right. So we'll do this.ctx.fillStyle equals, let's do a white. Let me go. I'm going to need a fill style here. So for now, I'll set that back to the black. Now, that doesn't look great, because it's the same radius. I mean, no. But if I do mid times 0.8, look at that. I'm now getting something. Oh, nice. So we get like a little ring thing. We're good. We're good. Now let's wire it up to your stuff. Oh, yeah, let's pipe in the actual loading percentage. So we'll do percentage, which will be a value between 0 and 1, times SCDial.TAU. And the dial will disappear, because the value is 0, which is fine. So actually, this time it's working as intended, even though we don't see anything. Absolutely. So over here in your thing, I'm going to get a reference, dial equals document.querySelector. And this is where we can say this is not production code, because now we're kind of mixing concerns. My module was supposed to take care of loading, and now we're also kind of patching UI stuff in there. But for the sake of showing how to do this, it's valid, I think. Sure. Keep going, keep going, keep typing. Stop. OK, so if it's done, well, we know that dial.percentage is 1. That's fine. That is correct. And for everything else, we actually want to know how many bytes have come through the wire. So we need the total bytes, so let's say bytesTotal. The HTTP protocol can help you with that. Yes, it can. With headers.get, and then we're going to ask for the content-length. And then I'm just going to parseInt on that. Yeah, because it's a string. And if you divide by a string, things are not fun in JavaScript. Right, so we know the total number of bytes, if that header comes through, which I'm relying on. Let bytesCounted be zero. Sure. Now, if we get down here, we're going to add onto bytesCounted. Right, so we can just increase it by the string length, because it's JSON that is ASCII, so the character count is equal to the number of bytes. Because there's no Unicode things. Yeah. Right, and then it's bytesCounted over bytesTotal. Yeah? That should give us a value between zero and one? All right, looks good. Yeah, that's not. I mean, kind of. Sorry, just for the just. We could just say this is intended and ship it. Sure. Do you know what's going on here? The canvas is a state machine, effectively.
Every time we call draw, it's rotating the canvas by another 90 degrees. Oh, so we keep just rotating and drawing. Yeah, that's obviously the drawing. So what we can do here is we can actually ask, before we do the rotation and so forth, we can ask the canvas to save its state, basically, like what's the fill style, everything like that. We can save it. And this is another one of those moments where, if you call .save, you should immediately figure out where you're going to call this.ctx.restore, where you're going to bring it back. Because otherwise, you're going to push something onto a stack, and you're never going to pop it. And that won't be fun for you. Oh, so this will basically revert to the previous state, except all the drawing that has happened in the meantime will persist. So here, we should now go round. That's definitely better. But actually, you probably can't tell. I'm going to zoom in. Oh, that's very pixelated, isn't it? It's really jagged. And the reason is because you also have to clear the canvas. Oh, so the canvas, by default, actually draws smooth, which is why this is surprising. But we keep drawing over and over the smooth edges, so it adds up to not smooth. So we'll do the size, and we'll do the size. So we're going to clear it. This has to be one of the most efficient constants ever created. See? Right. Much smoother. Much smoother. Now, while I'm here, since it's not in the middle of the screen, I'm going to move it to the middle of the screen. And I do like doing this. Style, style. There you go. HTML, body. Let's do it. I do that all the time. Ah, you want to. Oh, are you going to do the master discipline of CSS, vertical centering? Yay. That's good. When people are like, you can't do vertical centering in CSS, you can say, sure you can. Display flex. Align items, center. Justify content, center. Smile a happy smile. Hooray. Right. Next up, I'm going to go back, and I'm going to change this color from black. I was about to say, it looks a little monochrome. All right. rgb 0, 0, 128, there we go. That's a blue color. But let's make it nicer. Let's see, let's do this as a template string. Ooh, stop it, Paul. There we are. Told you I make typos. Let's do, so, we'll do what? this.percent, whoa. That would be interesting. Yeah, that would be weird. Percentage times 255. Oh, so we're going to go from blue and no red to blue and full red. Except we're going from black to pink. Because RGB values have to be rounded. Oh, so once you have decimal points, CSS goes like, no, black. OK. OK. So we're doing all right here, aren't we, mate? That's all right. How are we doing? What else can we do? Oh, we could put an actual number in the middle, to tell you how far we've actually got through the downloading. Let's do that here. So, put label, label, label. See how I'm actually doing comments? Because I know when you come back to canvas code, even just half an hour after this, when we're going to try to upload this, we will have forgotten what it's about. Yeah, it's right after regular expressions, where you're like, really? It's like just face on keyboard, and it still kind of works. Brings out my prettiest faces, that one. Right. OK, we're going to do fillText. So we know we want to do that. We should set the color for that. ctx.fillStyle, yeah. Equals. Let's do a kind of gray color. All right. fillText. I personally would prefer 333, honestly. Really? OK, fine. Fine. You're not normally that bothered by this kind of stuff.
Fine, 333. Slightly darker gray. percentage times 100. And we'll do it at mid, mid, in the middle of the dial. Sure. Sounds good. I mean, I would ship it. Sure. Do you know what? This is actually a great opportunity to talk about the restore. Because if we do the restore before the label. Oh, it's still turned. It's still turned. So we'll do that. That'll put it back where it needs to go. All right. That's a start. And then we can do, let's see. We should probably round this too. Math.round, yeah, I think you're right. People don't care about, like, the 15th decimal place. I know, right? It's tiny. Look at how tiny that is. OK, let's make it bigger. this.ctx.font equals, let's have a look. We would want to do, let's make it one of these template strings. And we'll say, oh, no, wait, SCDial.SIZE. See, it is so useful. Oh, because you'll make it dependent on the size, that is good. A quarter of the size, px Arial. Sure, that's going to be fine. OK, sounds good. Symmetry, I mean. And the thing is, you don't handle negative feedback very well. The thing is, I feel like the bottom left corner of the one is actually perfectly centered. Yeah, it is, actually. It's because the text baseline and the text alignment mean that it's positioned by the bottom left-hand corner, which we can change by saying this dot. I thought we'd have to, like, measure the text ourselves or something. That would be awful, as you can imagine. No, we just do this: textBaseline. I like how one is middle and the other one is center. Yeah, it would be too easy otherwise. So Flexbox is center, center. SVG, yeah, let's not talk about it. There you go. Round and go. But I think what we can do is turn our attention to the label, because we can. I'm going to do this one at 777. Is that OK? All right, I'll let you have it. I mean, OK, let's do this at, like, oh, I don't know, 0.06. I love just guessing numbers. I don't think it, yeah, you can just. Like, number fishing is great fun. You're, like, 0.06. Yeah, why did you stop using, like, constants now? Huh? Huh? I don't know what you mean. All right, percent. That's rubbish. Great stuff. Let's do minus 20 plus 15. Sure. Ooh, close, but not quite. Minus 20 plus 20. Close, but no cigar. There we go. No, I don't like that. Looking good. What about 14? Go on, then. Plus 26. Sure, why not? Why not? I mean, it looks good. I feel pretty good about this. Can we do this with network throttling? Just to have, like, a little bit more time to explain. What? Network throttling. Oh, sorry. More time to explain. Sure, yeah, absolutely. Yeah, because at the moment, this is no throttling. So imagine, then, we were to click on that, and it was regular 4G. Now, this is actually really important, because, as Surma's code showed you, we're not blocking on this stuff coming down before we show anything. And we're actually able to give the user some kind of information. And if you needed to do something with those tweets, like JSON.parsing them, storing them in IDB, something like that, you could be doing that kind of work here, too. We could already be rendering parts of the tweets, because every chunk contains one tweet. So we could start listing things while there's a progress bar at the top. And the main thread is still available, as you can see, because it can draw. Let's do that. In fact, we have just about enough time, I think, to get away with this. So on the connectedCallback, instead of calling draw directly, I'm going to do a requestAnimationFrame to call this.draw.
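Before moving on: assembled, the draw routine they've built looks something like this. It's the body of _draw from the earlier skeleton; the sizes, colors, and offsets are the hand-tuned guesses from the demo, so treat them as placeholders:

```js
// A sketch of the finished draw routine, slotting into the SCDial class.
_draw() {
  const ctx = this._ctx;
  const mid = SCDial.SIZE * 0.5;

  // Clear the previous frame, or the anti-aliased edges stack up.
  ctx.clearRect(0, 0, SCDial.SIZE, SCDial.SIZE);

  // The canvas is a state machine: save before rotating, restore after.
  ctx.save();
  ctx.translate(mid, mid);
  ctx.rotate(-Math.PI * 0.5); // start the dial at twelve o'clock
  ctx.translate(-mid, -mid);

  // Outer arc: the pie-slice progress indicator.
  ctx.beginPath();
  ctx.moveTo(mid, mid); // move the pen to the middle for the pie shape
  ctx.arc(mid, mid, mid, 0, Math.PI * 2 * this._percentage);
  ctx.closePath();
  // Red channel scales with progress; rgb values must be whole numbers.
  ctx.fillStyle = `rgb(${Math.round(this._percentage * 255)}, 0, 128)`;
  ctx.fill();

  // Inner circle, punched out in white, turns the pie into a ring.
  ctx.beginPath();
  ctx.arc(mid, mid, mid * 0.8, 0, Math.PI * 2);
  ctx.closePath();
  ctx.fillStyle = '#FFF';
  ctx.fill();

  // Restore before the label so the text is not drawn rotated.
  ctx.restore();

  // Label: the rounded percentage, centered in the dial.
  ctx.fillStyle = '#333';
  ctx.font = `${SCDial.SIZE * 0.25}px Arial`;
  ctx.textAlign = 'center';
  ctx.textBaseline = 'middle';
  ctx.fillText(`${Math.round(this._percentage * 100)}%`, mid, mid);
}
```

And on the loading side, the wiring is a few lines inside the read loop from earlier (inside an async function, and relying on the server sending a Content-Length header):

```js
// Wiring the dial to the stream: count bytes against Content-Length.
// Uses JSONTransformer and TransformStream from the earlier sketch.
const dial = document.querySelector('sc-dial');
const response = await fetch('tweets.json');
const bytesTotal = parseInt(response.headers.get('content-length'), 10);
let bytesCounted = 0;

const reader = response.body
  .pipeThrough(new TransformStream(new JSONTransformer()))
  .getReader();

while (true) {
  const { value, done } = await reader.read();
  if (done) {
    dial.percentage = 1;
    break;
  }
  // Each value is one JSON object as a string, and it's ASCII,
  // so the character count equals the byte count.
  bytesCounted += value.length;
  dial.percentage = bytesCounted / bytesTotal;
}
```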
And I'm not going to call draw manually anymore. So it's going to go into this kind of busy mode; it's just always going to draw. And it's probably going to break. I'm not going to spend ages explaining why. It's because with requestAnimationFrame, when the callback is called, this stops referring to the class instance and starts referring to window. So the way to fix that is, basically. It's the Lewis bind, as I call it. Yeah. Dot bind. It's weird. But we cover it in every single Supercharged, because I always kept doing it. And since that's drawing every frame now, we should be able to do a recording here. And we should be able to see that this is, if it works, comfortably at 60 FPS. We have loads of headroom in our per-frame budget. And we're getting something to our users really fast. And we are out of time. But just a reminder, we've covered streams and transform streams. We've covered custom elements. We've drawn canvas stuff. We've got you something which is taking 16 megs of JSON. And I think we're in under 200 lines of code or something. Yeah. Let me just check. This is a reminder of just how awesome the web platform can be. Because you've got, what, about 110 lines from me. And you've got about 70 or so from you. So that is about 180 lines of code. And we've got something that I think is reusable and quite interesting. So there you go. Thank you very much for joining us today. We've had a great time. Let's have a little chat about a brand new API that's just coming out on the platform called the Media Session API. I heard about it when I started building this app. It was mentioned to me by a couple of the Chrome engineers. But Francois, who's on my team, wrote a brilliant Google Web Updates post where he explains in complete detail how to set up the Media Session API. So check the notes below. We're going to pop in a link for that. But I will just show you briefly what it actually looks like in the context of the app on the phone here. In fact, because you probably can't see it, let's switch across to the direct screen cam. There you go. That's what it looks like on the phone. So as I start playing a video like this, la, la, la, you see that if I swipe down from the top, we actually get a notification which has an icon here, and it has play and rewind and fast forward buttons. And you get to configure those yourself. In fact, let me go into another one of these videos where I think I've actually set it up to load some custom album art as well. So there's me and Jake. And you can see here now we've got custom album art, which is the picture of Jake. And the previous and next, the fast forward and rewind buttons, are actually set to be skip forward 30 and go back 30. So I should be able to tap that and go forward 30, which you can see there. Oh, it's just skipped it right to the end. Whoops-a-daisy. But I can replay the video. Don't worry. The other thing that it actually does, which I really like, is if I turn the screen off, then on again, you can actually see that the album art is actually the background picture for my phone's lock screen. That is very exciting, isn't it? So let me show you a little bit as well, since we're here. Let me show you a little bit of the code. It's very straightforward. We have a quick check whether we support the Media Session API, which is basically looking for, let me show you actually, it is just simply looking for mediaSession in navigator. And if we have mediaSession in navigator, then we consider ourselves as having the Media Session API.
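Condensed, the whole setup, the detection plus the metadata and handlers described next, looks something like this. A sketch: the title, artwork paths, and 30-second skip offsets are stand-ins for whatever your app uses.

```js
// A sketch of a Media Session setup as a progressive enhancement.
if ('mediaSession' in navigator) {
  const video = document.querySelector('video');

  navigator.mediaSession.metadata = new MediaMetadata({
    title: 'Paul and Jake',     // stand-in title
    album: 'My Media App',      // stand-in album
    artwork: [
      { src: '/artwork-256.png', sizes: '256x256', type: 'image/png' },
      { src: '/artwork-512.png', sizes: '512x512', type: 'image/png' },
    ],
  });

  // Only the handlers you set get buttons in the notification.
  navigator.mediaSession.setActionHandler('play', () => video.play());
  navigator.mediaSession.setActionHandler('pause', () => video.pause());
  navigator.mediaSession.setActionHandler('seekbackward', () => {
    video.currentTime = Math.max(0, video.currentTime - 30);
  });
  navigator.mediaSession.setActionHandler('seekforward', () => {
    video.currentTime = Math.min(video.duration, video.currentTime + 30);
  });
}
```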
And what we do is we say navigator.mediaSession.metadata, and then we create one of these new MediaMetadata thingamajigs, which is very exciting. ESLint doesn't like it. It doesn't think it's a real thing. It is a real thing. You can totally use it. And you give it things like the title, the album, the artwork. I've only set the 512 and the 256. But very much like your manifest files for progressive web apps, you can set as many of these as you need, and the user agent will choose whichever one it thinks makes the most sense for the device that it's on. And it will upscale and downscale as necessary. But I currently am just setting a couple of them. I may need more as time goes by. And then afterwards, after we've set up the metadata, we set some action handlers for things like the play, the pause, the seek backwards, and the seek forwards. The thing to bear in mind here is that any that you set will have their buttons appear in the notification. And there are other ones that you can set as well. I forget which ones they are, but check out Francois' post. He explains the whole kit and caboodle. Any that you don't set won't appear. Any that you do set will appear in the notification. And then somebody can control your stuff from the lock screen or by just dragging down from the top. All very good, isn't it? And very straightforward code to be writing. So a brilliant little progressive enhancement that you can chuck on, and that I have chucked on my media app. Cool, totally. Hi there, iOS developer. Interested in getting started with the Firebase platform in your app? Well, you've come to the right place. There are two main parts to getting the Firebase platform up and running: adding your app to the Firebase console and installing the SDK. Let's go over these one at a time. For starters, let's go to the Firebase console at this URL here. Depending on when you're watching this video, the UI might look slightly different, but the general concept should remain the same. Now, depending on your situation, you might see a blank create-a-new-project screen, or you might see a list of existing projects. Oh, before we go further, let me take a moment to explain the difference between projects and apps. A project consists of one or more apps. Now, all the apps in the same project use the same Firebase database backend. And if you want, you can use features like Firebase Cloud Messaging to talk to all of them at once. You don't have to, but you can, which is sometimes convenient. So if you're a developer that has a cross-platform app, you generally want to put the iOS and Android versions of your app in the same project. That'll give you some nice cross-platform benefits. Your user can access the same data if they switch back and forth between the iOS and Android versions of your app. Things like Dynamic Links will work across both platforms. You can send one notification to all versions of your app, and so on. On the other hand, completely different apps, you should put those in completely different projects. There's nothing gained by cramming them into the same project except tears and heartache, I guess. So if you're working on a cross-platform app and your Android or web team has already created a Firebase project, you should probably select that project and connect your iOS app in there. Otherwise, if you're the first one to be adding Firebase capabilities to your app, you could be the one to create the new project.
In my case, this is the first app associated with the project, so I'm going to create a new project here. I'll give it a name, and there we go. Once you've selected or created a project, you're going to want to connect your client app. I'm going to select the iOS button here, and I'll give it my app's bundle ID. You are eventually going to need to add your App Store ID here if you want features like Firebase Invites or Dynamic Links to work, but you can leave this blank for now and change it later. Now, when you click Continue, your browser should automatically download this GoogleService-Info.plist file for you. Note that it needs to be named this exactly. So if you get that little one in parentheses after the name, like I just did, you're going to need to do a little bit of renaming in Finder. OK, next up, drag the file into your Xcode project, like so. And let me go back to the console and hit Continue here, and it's telling us that this would be a good time to install the Firebase CocoaPods. Now, I'm assuming you know something about CocoaPods, but if you don't, here's a little video for you to check out. It's fun. So I'm going to jump into my project directory here and do a little pod init. We'll open up the Podfile, and I'm going to uncomment this line because I am using Swift, and this line because my app happens to have a base SDK of 8.0, although at the time of this recording, Firebase is supported as far back as 7.0. Next up, let's add some pods. Now, it's important to remember that to keep your app nice and svelte, you should only install the CocoaPods for features that you need. In fact, there is no all-encompassing uber Firebase CocoaPod that installs everything for you. You're going to need to pod install each individual feature, and you can find a full list of the CocoaPods and the features they correspond to over here. Now, for starters, I'm just going to add Firebase/Core, which includes everything needed to get the basics up and running and also enables Firebase Analytics. So now I'll make sure my project is closed and I'll run pod install, and then let's open up the generated workspace. All right, we'll build it and make sure everything compiles OK, and it does. So we can move on to the next step. OK, it looks like the last step here is to add some initialization code. I recommend putting this in your app delegate's didFinishLaunchingWithOptions method. First things first, let's import Firebase in my app delegate. Note that this is usually the only thing I'll ever need to import, no matter what we've installed. We're doing some pretty nifty work behind the scenes to make sure this works properly. And I know this sounds like the exact opposite of my only-pod-install-what-you-need advice from, like, two minutes ago, but trust me here, this makes development a whole lot easier. All right, so next up, we'll add the line FirebaseApp.configure() to make sure Firebase gets set up properly. And that's actually all you need to do. This configure method will take a look at what libraries you have installed and initialize them, grabbing all the appropriate constants from that GoogleService-Info file that you dragged into your Xcode project earlier. So we'll give it a quick run. And if everything is set up and working correctly, you should see a few lines in your console about how Firebase Analytics is up and running. All right, so congratulations. You are now up and running with Firebase. So there are a lot of places you can go from here.
You can add sign-in using Firebase Auth, or get your app talking to the Realtime Database, or start tracking more of your users' usage with Firebase Analytics. You can check out these links to get started, and have a little fun. I think that one of the things that I have applied in my life as I have been evolving is, like, feel the fear and do it anyway. Well, I want to tell the whole story. Basically, I am a developer advocate at Google. And then somebody, you know, the head of Web DevRel, told me, would you like to go to Latin America and do the roadshow? And I said, yes. Anything that I can do to help the region and the developers here to advance, and to enable them to create awesome stuff, actually makes me very, very happy. We've been in São Paulo and Rio, and now we're in Mexico City. We're doing one-day events and meeting fantastic developers. The web is a fantastic way to deliver content. You know, you can get content to users with very small downloads using progressive web apps. My life philosophy, my motto, can I have two mottos? On the web, the first is, keep it simple. The second is, focus on the user. The roadshow is a great way to educate people on how to build fantastic user journeys from top to bottom in 2017 and beyond. Most of the barriers are self-inflicted. You really can do most of the things that people tell you you can't do on your own. So keep pushing for what you want to achieve in life. For us, it's really interesting to actually get out there and understand the local culture as well. Just email or Twitter or something like that, it's not enough. You need to go out there and see there's actually human beings all over the place, and understand both the opportunities they have but also the constraints. There's so many things in life that can be boring. When you actually get out there and you connect to people and you see all the similarities and the things you can do together, there's nothing better than that. Keep on exploring all the time. There's so many things that are terrifying and you don't want to take the risk, but it's almost always worth it. It's great to be back in Mexico City. I've met some really amazing developers here who are just building some really cool things. So it's been an awesome opportunity to connect with folks. Every day when I go into work or start any new project, I always think to myself, no obstacles, only challenges. My life motto is probably passion for your craft. So really believing in what you do and wanting to be the best at it. I can't wait to see what comes out of this, what gets built. This is exactly why we love doing what we do so much: to see these sorts of events and these sorts of passionate developers, and to visit these sorts of amazing cities, too. We hope to come back and visit more countries, and we really want to hear what you're doing, the PWAs that you're making, so get in touch if you have something that you think we should look at. It's the morning of day two at Google I.O. 2017. Hi, I'm Timothy Jordan. I'm your friendly developer advocate on the ground here at Shoreline, and I'm gonna be taking you around all the sandbox areas, just like I was yesterday, so that you can have some of the experience as if you were here with me. So let's go play around in the accessibility tent, which is of course all about giving all of your users a great experience with your service. So check it out. Hi Rob, how you doing? Good, how are you? I'm doing great. So can you help me with the accessibility of my website?
Absolutely, so what we're showing here is the new Audits 2.0 panel in DevTools. Audits 2.0 is basically taking the very popular Lighthouse Chrome extension and integrating it directly into DevTools. So this will run a number of checks on your site for progressive web app stuff, performance, best practices, and especially accessibility. All right, well, let's see what's wrong with it. Yeah, so here we've got a page that has some form elements which are missing their associated labels. And if you want, you can actually go and click on these nodes right here, and that'll take you into the Elements panel. So here we're showing the new Accessibility tab as well, so you can inspect all sorts of information about this node, this little input field right here. That's really cool. Yeah, but it looks like it's not gonna read anything. Right, so right now we don't have a label for this control, so a screen reader is not gonna know what to announce. So we actually do have a label element right down here. If you want, you can control-click on that. Yep, and add an attribute. You can type in for="sub-name". Yep, hit enter. And what that does, now if you click on this input field again, there you go. Now you can see that you've associated the label with the control, it's got a nice name, and a screen reader will announce it and tell the user what they're about to interact with. That's awesome. That was easy, Rob, thanks. Yeah, absolutely. All right, well, we're gonna go look at some more stuff. Thank you so much. Absolutely, happy to help. All right, let's go check out the selfie stick with my friend Austin. How you doing, Austin? Good. So you've got a selfie stick which is designed for people that selfie sticks usually aren't well designed for. Exactly, yeah. So we have a Pixel here running TalkBack, which is the native screen reader on Android, and we have a feature that will give feedback, when it's in camera mode, about what's in the screen. Let me see what it says. Camera. Two faces. Shutter button. Two faces. Double tap to activate. Shutter button is now enabled. Photo taken. Two faces. Just took a selfie, so it's giving audible feedback about what's in the screen, whether the face is centered, and whether it's good for a selfie. All right, there's one more thing that I wanna check out. It's really cool, and it's right over here. Hi Jessica. So this is a spoon for people who have, say, limited hand movement, right? And I'm just gonna go ahead and try and use it, if that's all right. The trick is to try and not dump the beans out. I can't do this. That's really cool. Thank you so much. I'll put that down. Hi everyone. My name is Shubhie Panicker. I'm an engineer working on Chrome. And I'm Philip Walton. I'm an engineer working on the web platform team. Over the last year, we've been part of a metrics team on the web platform, developing a set of new metrics and APIs that are user-centric, in that they capture user-perceived performance. We've developed a framework for thinking about user-perceived performance that we wanna share with you today. And Phil and I are really excited to be here sharing these metrics and APIs with you. In our past lives, we've been web developers, and we understand the pains from gaps in real-world measurement. Before Google, I worked on web frameworks for apps like Search, Photos, G+, et cetera. And before working on the web platform team, I worked on Google Analytics.
So I know a lot about, and I've seen a lot of, the challenges around tracking performance in the browser. So this is the goal of our talk today: to help you answer this question, how fast is my web app? You've certainly asked yourself this. And this may seem like a straightforward question, but the problem is that performance and fast, these are vague words. What does fast mean? In what context? Fast means different things for navigation or clicking, scrolling or animations. So what is performance, and what is fast in these contexts, and fast for whom, exactly? Right, the truth is, performance is hard. We kind of all know this. And for web developers, it's harder than it should be. That's one of the reasons we're talking about this. There's a lot of tips and tricks that you might have heard, and if not implemented or understood in the right context, they can sometimes make things worse. So in this talk, we don't want to give you more of these tips and tricks. We want to talk about a way to think about performance, a framework, a mental model for understanding performance measurement. And then the hope is that once you understand this model, you have a lot more tools at your disposal to solve performance problems in your own app. But before we do that, let's talk about some myths and misconceptions around performance today. So I would say this is probably the most common myth that I hear, some variation of this sentence: I tested my app and it loads in x.xx seconds. The reality is that your app's load time is not a single number. It's the collection of all the load times from every individual user. And the only way to fully represent that is with a distribution, like the histogram you see here. In this chart, the numbers along the x-axis show load times, and the height of the bars on the y-axis shows the relative number of users who experienced a load in that particular bucket. As you can see, while the largest buckets, and the most users, were between maybe one and two seconds, there were many, many users who experienced much longer load times. And it's important not to forget about these users. So this pattern toward the right is often called the long tail. And unfortunately, it's very common in the real world. And this histogram actually illustrates the difference between measuring performance in two very different contexts. And these contexts are measurement in the lab versus measurement in the real world. And by lab, I mean great tools like DevTools, Lighthouse, WebPageTest, and other continuous integration environments you might have set up. Lab is important. It gives you a sense for how your changes are going to behave in the real world. It helps you catch regressions before they hit your live production site. And these tools give you deep insights and breakdowns so you can track down and fix problems. So lab is super important. It is necessary, but lab is not sufficient. Real-world measurement, on the other hand, is messy. Real devices, various network configurations, cache conditions, all of these different conditions for real users are impossible to simulate in the lab. Real user measurement helps you understand what really matters to your users. It helps capture their actual behavior, which may be different from your assumptions or your lab settings. So to really answer the question of how fast is my app, it's important to measure this in the real world. So in our talk today, we will focus on real-world measurement.
So coming back to this myth for a second, there's another reason why the statement is problematic. The question is, when exactly is load? Is an app loaded when the window load event fires? Does that event really actually correspond to when the user thinks the app is loaded? So I'd argue that load is not any one single moment. It's an entire experience, and it can't be represented by just one metric. So to better understand and illustrate what I mean by that, I wanna show you an example. I'm gonna play a video of the YouTube web app loading on a simulated slow network. And I want you to pay attention to how the app loads, and notice that things are kind of coming in one by one. So can we play the video? Okay, so think about how that felt. And now I wanna play a second video, and I want you to pay attention to how you feel watching the second video. Think about the experience. Can we play the second video? So it feels different, doesn't it? I bet some of you were not sure if the video was even playing. And that's kind of the point. When you don't give that feedback to the user, it makes them feel something. So these two videos, as I'm sure you guessed, load in the exact same amount of time, but the first one kind of seems faster. At least it feels nicer, because things come up right away. It's like if you went to a restaurant and you sat down at a table, waited for an hour, and then they brought you your drinks, appetizers, entree, dessert, check, and dinner mint all at the same time. That would kind of feel weird. You would wonder why they waited until the very end. So again, you might look at this and think, okay, well, we should optimize for the first initial render. Get content there as soon as possible. That's what this proved, right? And again, sometimes that's true, but not always. Sometimes when you do that, you can make things worse in some cases, and cause other problems. So I'm gonna play another example, a real-life example from Airbnb's mobile website. And for context, I know personally that the Airbnb engineering team cares deeply about performance and user experience, and they try to make their pages as fast as possible. So one way they do this is they use server-side rendering to deliver all the content in the initial request. And it shows, because the page loads really fast, even on a slow connection. The problem is that on slower devices that take longer to execute JavaScript, the page is rendered, but it's not usable for a couple of seconds. And you can see that in the videos. Can we play the third video? So as you can see, the user here tried to click a few times in the search bar, and nothing was happening. And it wasn't until maybe the sixth click or so that the component finally slid down from the top. And to be clear, this video was from a simulated slow device. It doesn't represent the majority of their users. But Airbnb is committed to providing a good experience for all of their users, and they wanted to fix this, and they care about this, and so they're currently working on a fix for this problem. And I kind of just want to mention, on a personal note, that I'm really happy and glad that Airbnb was willing to let us show this to you. I think it's cool that they want other developers to learn from their experience. So can we go back to the slides?
All of these examples that I just showed illustrate why you shouldn't measure load with just one single metric. Load is an experience, and you need multiple metrics to even begin to capture it. So here's another commonly held misconception: you only need to care about performance at load time. Now, loading is super important, but it's certainly not everything. And historically, we've all fallen into this trap of narrowly focusing on load. Part of it is just our own developer outreach; our tools focus pretty much exclusively on loading. The reality is that there are lots of other interactions that happen long after load: all kinds of clicks, taps, swipes, scrolls. Think of all the time you spend on news sites, in your email, on Twitter, or on Amazon. Load is a really small fraction of this overall user session, and users associate performance with their entire experience. And unfortunately, the worst experiences stick with them the most. So this is a summary of the problems that we've highlighted so far. First, real world metrics are a distribution. They should be seen on a histogram, not as an individual number. Second, load is an experience. It cannot be captured with a single moment or a single metric. Third, interactivity is a crucial part of load, but it's often neglected. And finally, responsiveness is always important to users, way beyond load time. So these are the questions that we want you to ask today, and the questions that we hope we can answer for you as part of this talk. User-perceived performance is important; what are the metrics that accurately reflect it? How can we measure these metrics on real users? How can we interpret these measurements to understand how well our app is doing? And finally, how do we optimize and prevent regressions going forward? So in this segment of the talk, we want to talk about these new metrics and the basic concepts underlying them. We've all used traditional metrics like DOMContentLoaded and window onload to measure load time. The problem is that they don't really correspond to the user's experience of load. They have almost nothing to do with when the user saw pixels on the screen. For example, a CSS style might be hiding the content when DOMContentLoaded fires. And even if the content is rendered, interaction can be blocked; the JavaScript might not be there yet to hook up a critical handler, for example. And these old metrics completely ignore interaction, even though we know that interaction is super important for modern web apps. So what are the key experiences that matter to users and shape their perception? I think it's helpful to frame these as questions that the user might be asking. Is it happening? Did the navigation start successfully? Has the server responded? Is there anything that indicates to the user that it's working? Then, is it useful? Has enough content rendered that the user can actually engage with the page? And once content has rendered, is it usable? Can they interact with it, or is something preventing that interaction from happening? And finally, is it delightful? Are the interactions smooth, natural, free of lag or jank? Is the overall experience good? So now let's look at how these questions map to measurable metrics. Here's an illustration of a page's load progress. The first frame over there is just the blank white screen, before the browser has loaded anything. The second frame represents the first paint metric.
It's the point at which anything is painted to the screen that the user can see, anything different from what the screen looked like before the response. The third frame shows the second metric, first contentful paint. It's when any of the content is painted. And by content, I mean something in the DOM. It doesn't just have to be text; it could be images, or canvas, or SVG, something in the DOM that's painted to the screen. In the fourth frame, you see some more stuff coming in, but it's not quite enough content to be meaningful. And then you get to first meaningful paint in the fifth frame, where the user can actually engage with the content. Enough stuff is rendered that what they came for is here, and they can start consuming it. And then finally, the last metric, time to interactive, is when the page is both meaningfully rendered and usable, meaning it's capable of receiving input and responding in a reasonable amount of time. So Phil said that first meaningful paint is when the page is useful, and the user can engage. This is when the primary content of the page has rendered. But what is primary content? Which elements exactly? Now, not all elements on the page are equal. There are some elements that are important. We call them hero elements. And when these hero elements are rendered, you have arrived at the "is it useful" user moment, and the user can meaningfully engage with the page. So here are some examples to show you what I'm talking about. These are hero elements for some popular sites. For YouTube, we think that on the YouTube watch page, the hero element is likely the thumbnail of the primary video and the play button. For Twitter, it is likely the notifications count and that first tweet. For the weather app, it's probably the primary weather content, even though there might be tons of other stuff on the page. So when these hero elements have rendered, this corresponds with first meaningful paint and the "is it useful" user moment. And you might notice that some of these hero elements are content-based, and some of them are more interactive components. In YouTube, for example, the hero element is rendered when the thumbnail is loaded and the play button is visible, but it's probably not actually usable until the JavaScript that controls the play button has run and enough of the video has buffered to actually be able to start playing. If hero elements are interactive, then not only does it matter when they're rendered, but also when they become usable. However, there are times, as we mentioned, when interactivity can be blocked. So to understand why important elements might be blocked and not interactive, think about a time when you were in a long line somewhere, let's say the grocery checkout or the bank. You're standing in line, and there are one or two customers who are confused or angry, and they hold up the line, causing a long delay. This is what long tasks do on the browser's main thread. These are tasks that run long. They occupy the main thread for a long time, and they basically block all the other tasks in the queue behind them. And scripts are the most common cause of long tasks, with all the work that scripts involve in terms of parsing, compilation, eval-ing, et cetera. So if you've used DevTools, you're familiar with the primary types of work: style, layout, paint, script.
It turns out all of this happens on the main thread, and it also so happens that most interactions, things like taps, clicks, and even animations, typically also need the main thread. So you can see how this can be a problem. A long script is running, hogging the main thread, and the user is trying to interact, and these interactions are basically waiting in the queue. And this manifests to users as delays in clicks, jank in scrolling, or jank in animations. So you might wonder, how long is long? What is long? We define long to be 50 milliseconds. Scripts should be broken into small enough chunks so that even if the browser is busy and a user happens to interact, the browser should be able to finish what it's doing and service that input. And so 50 millisecond chunks will ensure that the RAIL guidelines for responsiveness are always met. Now, you might have heard a lot about 60 fps and 16 milliseconds, and some of you might wonder, why isn't this 16 milliseconds? The reason is, yes, if you are animating, then 16 milliseconds is important. But animation issues are a small subset of responsiveness issues at large on the web today. And if you know you are animating, then yes, you have to share the 16 millisecond budget with the browser. Now, long tasks are the cause of most of the responsiveness issues on the web today, and scripts are by far the most common cause of long tasks. So just to recap, this table shows how each of these metrics maps to the user questions from before. The question "is it happening" maps to the metrics first paint and first contentful paint. "Is it useful" maps to first meaningful paint and the hero element timings. "Is it usable" maps to time to interactive. And the last one, "is it delightful", maps to long tasks, or maybe more accurately, the absence of long tasks. So you might be wondering how metrics like first meaningful paint or time to interactive can work for every app. And you're totally right: one size cannot fit all. We actually spent a lot of time on our metrics team trying to develop these generic, standardized metrics that work for every app. And what we've learned is that it's incredibly hard to do, and that also makes it hard to standardize. That said, there is value in these generic, standardized metrics. These are baseline metrics that work for the majority case, let's say 70% to 80% of apps out there. And we have made such metrics available in our tools; you might see them in Lighthouse, DevTools, WebPageTest. And we are working to consolidate these definitions. Down the road, we expect analytics providers to start surfacing variants of these metrics. The main thing to understand about these out-of-the-box generic metrics is: don't assume that they accurately capture the "is it useful" and "is it usable" moments for your app. Try them out. See how well they work for you. And when it comes to real user measurement, we encourage you to supplement these metrics with your own custom user metrics, or customize these metrics and make them your own. Make sure that they work really well for your app. And we'll show you specific tips for doing that later. So now that we understand and have these metrics, the question is, how do we get them in JavaScript? Because that's what it takes to measure them on real users. Historically, we've used, like we said, metrics like DOMContentLoaded and window load, primarily because they were easy to get in JavaScript.
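("Easy" here really does mean a couple of event listeners; this is a minimal sketch of our own, not a slide from the talk:)

```js
// The traditional load metrics are trivial to capture in the page,
// which is a big part of why they became the default.
window.addEventListener('DOMContentLoaded', () => {
  console.log('DOMContentLoaded at', performance.now().toFixed(0), 'ms');
});
window.addEventListener('load', () => {
  console.log('window load at', performance.now().toFixed(0), 'ms');
});
```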
I assume every web developer here knows how to find out when window load happens or when DOMContentLoaded happens. But these other metrics have traditionally been a lot harder, sometimes impossible, to get in JavaScript. And trying to get them can lead to problems. This code sample shows how you would detect long tasks before these new metrics, and it's kind of a hack. What this code is doing is effectively running a requestAnimationFrame loop, measuring frame after frame after frame, and comparing the timestamp of the current frame to the timestamp of the previous frame. If the difference is longer than 50 milliseconds, it considers it to be a long frame. But there are a lot of problems with this method. I mean, it kind of works, but it adds a lot of overhead. It prevents the browser from ever going idle. It's not great for battery life. And it doesn't even tell you the source of the problem. You might know that there was a long frame, and so you can assume there was a long task, but you don't know what script caused that long task. And this isn't just a hypothetical example. This pull request on the AMP project is basically them taking that code out, because they realized that it was more trouble than it was worth. The number one rule of performance measurement code is that you shouldn't be making your performance worse by trying to figure out how good the performance is. So these hacks show the need for real APIs built into the browser, so the browser can tell us when performance is bad. Web performance APIs are the browser's solution to real world measurement. These are standardized APIs, so they're available in multiple browsers, not just Chrome. And when available, we definitely recommend that you use them. In practice, though, you will use a combination of these APIs as well as your own JavaScript polyfills. And the reason why polyfills are necessary is that the implementation timeline on browsers will vary, and we are asking you to customize and supplement these metrics. So these are the core building blocks, as we see it, for web performance. We have high resolution time, which you might be familiar with from your use of performance.now. PerformanceObserver is an important piece. It replaces the old performance timeline, and it overcomes its limitations: there's no polling, it's a low-overhead API, and it avoids race conditions around a shared buffer. So this is what the usage of PerformanceObserver looks like, and it also happens to be the code that replaces the hack that Phil showed you just a little bit earlier. PerformanceObserver usage is fairly straightforward. You create a PerformanceObserver with a callback, and then you call observe, expressing interest in certain entry types. As entries of that type become available, the callback is invoked asynchronously. And there are many different entry types. Long tasks is what we show in this example, but this could just as well have been resource timing, or navigation timing, or paint timing, which is a new metric we've introduced. This also serves as a really good example of long task usage. You can basically use this code to understand responsiveness issues on your app. The callback is called asynchronously when the main thread is observed to be busy for more than 50 milliseconds at a time.
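(Reconstructed roughly, the PerformanceObserver usage described here looks like this; the logging and variable names are ours, not the exact slide code:)

```js
// Observe long tasks: the callback fires asynchronously whenever the
// main thread is busy for more than 50 ms at a time.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('Long task:', entry.duration.toFixed(0), 'ms, starting at',
        entry.startTime.toFixed(0), 'ms');
  }
});

// Express interest in the 'longtask' entry type; this could just as well
// be 'resource', 'navigation', or 'paint'.
observer.observe({entryTypes: ['longtask']});
```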
And long tasks are available in Chrome stable today, so I encourage you to try it out. So this table shows our recommendation for how you would track these metrics in your applications. And just to reiterate, having these tracked in your applications is what allows you to measure these metrics on your real users, not just in the lab. First paint and first contentful paint can be measured with PerformanceObserver, using the paint entry type. This is available in Chrome Canary today. Long tasks can be measured with PerformanceObserver too, since Chrome 58, which is Chrome stable right now. For hero elements, it's a little bit trickier, because you have to identify what your hero elements are, and you basically have to write some code to figure out when they're visible. And I should mention that along with this talk, I'm going to be publishing an article on developers.google.com very soon; it'll be up when this video goes up. It goes into more detail on how to do all of these things, so you don't have to worry if you're taking notes or whatever. Also, I should mention that we're working on a native API to make this easier, where you can annotate your hero elements, tell the browser what they are, and the browser will tell you when they're loaded or rendered. For first meaningful paint, until we develop a standardized metric, we think that you should use hero element timing as a substitute. The first meaningful paint metric is, like we said, generic; it tries to be one-size-fits-all. Hero element timing is specific to your site, and so it will always be more accurate than first meaningful paint. And finally, for TTI, we actually released a polyfill today. It's on GitHub, and you can go try it out right now. To give an example of what the usage looks like: you import the module in JavaScript, and then you call the getFirstConsistentlyInteractive method, which returns a promise. The promise resolves to the TTI metric value in milliseconds, and once you have that, you can send it to analytics. To give you a sense for what the polyfill does: the getFirstConsistentlyInteractive method takes an options object, so you can configure it for your site. One thing you can do is pass it a lower bound. The polyfill assumes the lower bound is DOMContentLoaded by default, but you can give it a better value for your site. The way this works is, you have the main thread with long tasks and short tasks, you have the network timeline, and then you have your lower bound, which by default is DOMContentLoaded. What the polyfill does is use the resource timing and long task entries to search forward in time for a quiet window of at least five seconds, where there are no long tasks and no more than two network requests in flight. Basically, it's saying: once we get to that quiet window, we think that the app is most likely interactive now, and then it considers the moment of interactivity to be where the last long task ended. So that's a bit of how this polyfill works. Again, you can pass it a custom lower bound for your site, and one example of what you might want to use is the hero element timing. That would be a great example. You also might want to pass the moment all of your event handlers are added, because if your event handlers have not been added yet, the site is probably not interactive yet.
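(Usage looks roughly like this; a sketch based on the polyfill's documented API, and the reporting call assumes Google Analytics' analytics.js, the `ga` function, is loaded:)

```js
// The TTI polyfill is on GitHub (GoogleChromeLabs/tti-polyfill).
import ttiPolyfill from './tti-polyfill.js';

ttiPolyfill.getFirstConsistentlyInteractive().then((tti) => {
  // Resolves to the TTI value in milliseconds; report it to analytics.
  ga('send', 'event', {
    eventCategory: 'Performance Metrics',
    eventAction: 'TTI',
    eventValue: Math.round(tti),
    nonInteraction: true,
  });
});
```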
So Phil showed you how long tasks can push out your time to interactive. But there are lots of other interactions that we're asking you to care about beyond loading, like clicks and taps. And delays in these can cause pretty bad user experiences. So you probably want to know when these important events are delayed. Ideally, there would be a first-class platform API that would answer this question, and we actually are working on such an API. But today, you can use a code sample like the one sketched below to understand the gap. You basically take the difference between event.timeStamp and the current time in your event handler. Now, event.timeStamp is our best guess of when the event was created; it can be the hardware timestamp, our best guess of when you tapped the screen. And this difference will tell you how long the event spent waiting in the queue for the main thread. Here, if that difference is more than 100 milliseconds, we send it to analytics. Now, we haven't shown this here, but you can also correlate this back to your long task observer. You can actually look at which long tasks happened during the time when your event was blocked waiting; those are likely the culprits. So once you've measured these key metrics and sent them to some analytics service, you want to report on them to see how you're doing. That will allow you to better answer the question, is your app fast? This is just one example of a histogram that I threw together from TTI data for an app that I maintain, using the polyfill that we just showed you. And the point is not to look at these numbers or compare them. The main point I want to make is that when you're tracking your performance metrics in your analytics tool, you can drill down by any dimension that your analytics tool provides. So in this case, we can see the difference between performance on desktop versus mobile. You might also want to compare one country to another, or look at geographic locations where maybe network availability is not as great or network speeds are not as high. It's important to know how those differences manifest in the real world, on real users. In cases where you can't show a whole histogram, I recommend using percentile data. You can show the 50th percentile, the median. You can also show things like the 75th percentile and the 90th percentile. These numbers give a much better indication of what the distribution was, and they're much better than averages or one single value. So a really important question is, do performance metrics correlate with business metrics? Again, if you're tracking both your business metrics and your performance metrics in an analytics tool (and this shows the value of tracking this stuff on real users), then you can answer this question. All the research that we've done at Google suggests that good performance is good for business. But the really important thing is: is this true for your users, for your application? Some example questions you might want to answer: Do users who have faster interactivity times buy more stuff? Do users who experience more long tasks during the checkout flow drop off at higher rates? These are important questions, and once you know the answers to them, you can make the business case for investing in performance.
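(Here's that input-delay measurement, sketched out; our reconstruction rather than the exact slide. The selector and the 100 ms threshold are illustrative, event.timeStamp is assumed to be high-resolution as it is in recent Chrome versions, and the reporting again assumes analytics.js:)

```js
// Measure how long a click sat in the queue waiting for the main thread.
document.querySelector('#search-box').addEventListener('click', (event) => {
  // event.timeStamp is the browser's best guess of when the input was
  // created (ideally the hardware timestamp).
  const delay = performance.now() - event.timeStamp;
  if (delay > 100) {
    // The event waited too long; report it.
    ga('send', 'event', {
      eventCategory: 'Performance Metrics',
      eventAction: 'input-delay',
      eventValue: Math.round(delay),
      nonInteraction: true,
    });
  }
});
```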
I hear a lot of developers saying they want to invest in performance, but somebody at the company won't let them or won't prioritize it. This is how you can make it a priority. And finally, we haven't talked about this yet, but you may have been wondering: all of the data we've been showing is for real users who made it to interactivity. And you probably know some users don't make it there. Some users get frustrated with a slow load experience, and they leave. It's important to know when that happens, because if it happens 90% of the time, the data that you have will not be very representative. You can't know what the TTI value would have been for one of those users, but you can measure how often this happens. And perhaps more importantly, you can measure how long they stayed before they left. So we've discussed a lot of specific metrics and APIs, and we've shown you code samples. Now we want to back up a little bit and provide some higher-level guidance on how to best leverage these metrics and APIs. One great thing about everything we've introduced today is that all of these are user-centric metrics and APIs, so by definition, improving them will improve your users' experience. The first piece of wisdom is: drive down first paint and first contentful paint. All of the traditional wisdom for fast loads applies here. Remove those render-blocking scripts from head. Identify the minimum set of styles you need, and inline them in head. You might have heard of the app shell pattern, which helps improve user-perceived performance. The idea there is to very quickly render the header and any sidebars. Now, first paint and first contentful paint are important, but they are certainly not sufficient. It's really important to improve your overall load time. It's not enough to be off to a good start in a race; it's really important to make it past the finish line. And time to interactive is the finish line for loading, for interactive apps. So more specifically, minimize the time between first meaningful paint and time to interactive. We saw in the Airbnb demo how important it was for users to be able to interact with that search box. Now, to shorten your time to interactive, identify the primary interaction for your users. Don't make assumptions here. Do they tend to browse, or do they tend to interact with a certain element right away? Then figure out what critical JavaScript is needed to power that interaction, and make that JavaScript available right away. One common culprit we've seen is large, monolithic JavaScript bundles. So splitting up JS, like code splitting, will take you a long way here. And the PRPL pattern fits in here, specifically the P and R of PRPL. Ideally, ship less JavaScript. But if not, at least defer the JavaScript. There's tons of JavaScript that the user is never going to need: all those pages that they're not going to visit, all the features that they're not going to interact with. If there's a widget in the footer, below the fold, that they're unlikely to interact with, defer all of that JavaScript. The third thing we have is: reduce long tasks. Cracking down on long tasks will really help responsiveness on your app overall. However, if you really need to prioritize, at least think about the long tasks that are in the way of those really critical interactions.
On load, it's long tasks that are pushing out time to interactive, or long tasks that are in the way of the checkout flow or other important interactions for your app. Scripts are by far the biggest culprits here, so breaking up scripts will certainly help. And it's not just about breaking up scripts on initial load. Scripts that load on single-page app navigations, like going from the Facebook home page to the profile page, or clicking around, like the checkout on Amazon or the compose button in Gmail: all of this JavaScript needs to be broken up so it doesn't cause responsiveness issues. And the final thing we have for you today is holding third parties accountable. Ads and social widgets are known to cause the majority of long tasks, and they can undermine all of your hard work on performance. You might have done a ton of work to split out all your code carefully, but then you embed a social plugin or an ad, and they undo all of that work. They get in the way of critical interactions. To get an idea of this, we're actually doing a partnership with Soasta, a major analytics company. They're doing a bunch of case studies, and some preliminary data has come in. They picked a couple of their customers' sites that had third-party content. On the first site, they found that 93% of long tasks were because of ads. On the second site, they found 62% of long tasks, about evenly split between ads and social widgets. Now, the long tasks API actually gives you enough attribution to implicate these third-party iframes, as sketched below. So we encourage you to use the long tasks API and find out what damage these third parties are doing on your apps. And once you've optimized your app, you obviously want to make sure that you don't regress and go back to being slow. You don't want to put a bunch of work into this and then have it all be for nothing when one new release turns everything bad. So it's critical that you have a plan for preventing regressions. This is a workflow that I promote. You start off with writing code: you implement a feature, fix a bug, improve the user experience in some way. And then before you release it, you test it in the lab. I assume lots of people do this: you run it through Lighthouse, you run it through DevTools, and make sure that it's not slower than your previous release. And then once you release it to your users, you also want to validate that it is fast for the users you released it to. You can't just test in one place; these things complement each other. You should be testing both in the lab and in the real world. As for automation ideas: the best way to prevent regressions is to automate this process. You're probably going to slack on it a little bit if you don't have it built into the release process and automated. So Lighthouse runs on CI, and there's actually a talk tomorrow afternoon by Eric Bidelman and Brendan Kenny that goes into how to do this. I recommend checking that out if you want to learn how to run Lighthouse on CI. If you're using Google Analytics, you can set up custom alerts that trigger when some condition is met. So for example, you could get an alert if the number of long tasks per user suddenly spikes; maybe a third party you were using changed their JavaScript file, and things got worse without you knowing. This is a good way of finding out about that stuff. So, getting back to the original question: how fast is your web app?
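(Before the wrap-up, here's the third-party attribution sketch promised above. The field names follow the Long Tasks spec's TaskAttributionTiming; the logging is our own illustration, and the attribution granularity is exactly what the speakers say they're working to improve:)

```js
// Each 'longtask' entry carries attribution objects pointing at the
// container, for example an ad or social-widget iframe, doing the work.
const thirdPartyObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    for (const attribution of entry.attribution) {
      console.log(
          `Long task (${entry.duration.toFixed(0)} ms) attributed to`,
          attribution.containerType,   // e.g. 'iframe'
          attribution.containerSrc);   // e.g. the embedded frame's src URL
    }
  }
});
thirdPartyObserver.observe({entryTypes: ['longtask']});
```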
In this talk, I hope we've given you enough of a framework to think about performance in a big-picture, user-centric way. I also hope we've given you the specific tools, metrics, and APIs that you need to answer this question for yourself. We know the situation isn't perfect. We know we have more work to do, and Shubhie is working on this, leading our efforts here at Google on the standards side, so she can talk about some of the things that are coming down the road. And so this is our final slide, and I just want to say that yes, we know there are gaps, and there are a number of APIs that we're working on. We'd love to have a first-class API for hero element timing. The idea there is that you can annotate the elements that matter most for your sites, and then the browser can put those times on the performance timeline. Second, we are working on improving long tasks, mostly by improving attribution. We really want to tell you which scripts are causing problems, with more detailed breakdowns, so you can actually take action right away and fix those issues. Third, we want to have a real API for input latency, so you don't have to go through all those workarounds that we showed you with event.timeStamp. Ideally, for the important interactions in your app, you should be able to know how delayed they were, which long tasks were in the way, and when the next render happened. And then there are other inputs that we haven't even touched on that are in our backlog, things like scrolling and composited animations. And finally, I just want to leave you with this: we've said a lot today, but we really want this to be a two-way dialogue. We want to hear from you. We want to hear about your frustrations. Don't be quiet about those gaps in measurement and those frustrations with performance. Try out these APIs and polyfills, and please file bugs on the spec repos on GitHub. That's actually the best way to report issues and make feature requests. And if you're working with analytics, whether it's a different team or a third party, push on your analytics providers to adopt these new metrics. Ask them for histograms like the ones Phil showed you; we are pushing on analytics providers on our end, too. Star the Chromium bugs on performance. This is actually a signal we use for prioritization internally, and we need these signals to make a case for working on measurement. And finally, as Phil said, we have all the links in the article that he will publish shortly, and they will also be linked from the video. So thank you, and this is how you can get ahold of us, and these mailing lists are how you can submit feedback.

Hey, everyone, Sam here. In this web series, we solve common web problems with standards. These techniques are part of the web platform and work with any framework or library. Okay, let's go. If you're animating some elements, whether you're using CSS, the Web Animations API, or libraries that just use requestAnimationFrame, make sure you help your browser stay speedy by letting it use your 3D graphics card. Sam, what are you talking about? When you look at a website, it's made up of layers stacked on top of each other, and your graphics card won't need to redraw a layer when you change only certain CSS properties, specifically opacity and transform. These two properties let you move stuff around and change how visible it is.
You can animate width, color, or other CSS properties, but do it sparingly, and not in response to user action, where you want to be snappy. To keep things fast, if you have elements that move around a lot, you also want to set the will-change: transform or will-change: opacity CSS property. This is a hint to the browser that this element should be a layer all the time. Otherwise, when an element gets upgraded or downgraded to or from a layer, you'll incur a cost: the browser has to redraw that specific part of the page. But if you give everything in your page a layer, this might bog down your browser as it struggles to composite them all together. This is a complicated topic, and you should check out more documentation to learn more about it. And most importantly, your site doesn't need to be perfect. It won't crush your users' experience to make a few mistakes here and there, but it's good to keep on top of it. Check Chrome's rendering section inside Developer Tools once in a while, just to see what's happening and how your site's doing. This was layers, the standard way. See you on the next tip.

The only thing that evolves faster than technology is our expectations. We want everything better, easier, now. Suddenly, downloading an app feels like it takes forever. And in many parts of the world, data is still at a premium, with one megabyte costing up to 5% of a monthly wage. Let's face it, though: until now, the alternative to native apps hasn't been great. Progressive web apps can now deliver mobile web experiences with a native-looking feel, offering features like real-time push notifications; adding a site to your home screen, so you can easily jump back to it with a single tap, even when you're offline; plus the ability to make quick payments on the go, all from your browser. This is the next generation of the mobile web. So what are we waiting for? Let's go and build something great.

I'm Wojtek Kaliciński. This is Android Tool Time. And let's talk a bit about the Espresso test recorder and how it can help with adding UI tests to your app. But first, a short explanation for those unfamiliar with Espresso. Espresso is a testing framework designed to provide a fluent API for writing concise and reliable UI tests. However, it is often the case that developers are reluctant to add UI tests to their apps, or simply don't have time to learn the framework. This is where the Espresso test recorder comes in. It lets you create and add UI tests to an existing app in an interactive way. You may have previously seen the beta version of this feature, but in Android Studio 2.3, we're promoting it to stable with a few enhancements. To get started with the test recorder, click on Record Espresso Test under the Run menu. The device selection dialog pops up, and after you make your choice, the test recorder runs your app in debug mode. Simply progress through your app's UI as a regular user would, by clicking buttons, swiping, and typing into input fields, and all those actions will appear in the test recorder window. You can also click here to add an assertion to your test at any time during recording, which will trigger the test recorder to dump the current view hierarchy. To select the view you want to assert on, click on the screenshot that appears in the recorder window, and choose the assertion type: view exists, view doesn't exist, or checking that the view contains the specified text.
When you've finished recording your test, the test recorder generates the equivalent test code to run your actions and assertions, and puts it in a new file in your project's instrumentation test folder. It also checks whether your build file contains the required Espresso dependencies and adds them if needed. When you look at the source file that the Espresso test recorder created, you will see that it's perfectly normal, human-readable code. So if you need to further customize your tests, or alter them when your app changes, you can simply open the file again and make the alterations you need. As you can see, the Espresso test recorder is very simple to use, but it does come with some limitations. As of Android Studio 2.3, only the few most common assertions are available through the recorder UI. So if you need anything more complicated than that, you will need to edit the generated code by hand. Also, at this stage, the test recorder cannot handle all situations where additional synchronization is needed to deal with delays and async operations in your app. I highly recommend getting familiar with the Espresso idling resource API and using that in your tests to signal when a long-running operation happens. For advanced users who want to tweak some aspects of test code generation, there's a settings page for the test recorder in Android Studio preferences. Here you can change the maximum view hierarchy depth that will be used for view identification, and whether app data should be cleared every time you record a new test. The Espresso test recorder is a great way to start adding tests to your app, whether you want to learn Espresso by examining the generated code, or simply to quickly build a test suite which you can customize later. We look forward to your feedback on our social channels, and happy testing.

Consider the simple URL. A few years ago, these were pretty straightforward. You clicked on one, and nine times out of ten, you went to a web page. Then things changed. People started using their mobile devices for, well, everything, and these devices in turn started supporting the idea of deep links. Click on one of these deep links, and it could take you not just anywhere on the web, but anywhere in an app as well. So you could use a deep link to point directly to a specific restaurant inside a reservation app, or give your new customers a personalized welcome based on the link that brought them to your app in the first place. At least, that's how they worked in theory. In practice, deep linking had issues. The same link wouldn't necessarily work on an iOS or Android device, and they behaved very differently, or didn't work at all, for users who didn't have your app installed. And for people who did install your app through a deep link, all of that great link info was typically lost during the installation process, leaving your personalized warm welcome out in the cold. So while deep links were great in theory, their uses were a little more limited in practice. Enter Firebase Dynamic Links. Firebase Dynamic Links are deep links that work the way you want them to. You can create one single link that behaves one way on iOS, another on Android, and even a third in a desktop browser, and it will take you to a place that's appropriate to that platform. You can also set up dynamic links to change their behavior depending on whether or not your user has your app installed.
For users who don't have your app installed, maybe you send them to your website, maybe you take them to the Play Store, or maybe you show them an interstitial describing the benefits of your app before you take them to the app store, for a smoother transition. More importantly, these links can survive the app store installation process. So if your user installs your app after clicking on a dynamic link, all of that information is still available to you when your user opens up your app for the first time. So what does this mean? It means you can use Dynamic Links the way you've always wanted to use deep links. You can use them in marketing campaigns, from email to social media to banner ads to, heck, even QR codes. And in addition to install attribution tracking, you know, the kind that lets you know which campaigns are getting you the highest-quality users, you can also give your users a customized first-time experience based on the campaign that brought them there. So if a user installs your music app because you showed them an ad for classical music, you can make sure your app takes them right to Chopin's latest hits when they first open it up. Dynamic Links are great for sharing, too. Your users can use them to share recipes, links to their favorite level in your game, or even coupon codes. In fact, Dynamic Links are the technology that powers Firebase Invites. And because Dynamic Links are a Firebase product, you can see their stats directly in the Firebase console. Find out how many people clicked on a link, or use Firebase Analytics to find out which of your users first opened your app through a particular link. To find out more about Dynamic Links, check out the documentation here, give them a try, and deep link away.

Okay, Google, what's the temperature like at Mount Everest? The temperature there is minus 14. Ooh, I better pack a jacket. Oh, hi, I'm Wayne Piekarski, and today I'm gonna talk about the Google Assistant and how you can develop your own actions to be a part of this new ecosystem. At Google, we've been providing assistance to users for years across many of our products, but we think there's much more we can do to help people get things done right when they need it, in a conversational way. And that's why we're building the Google Assistant. The Google Assistant can help users get things done throughout their day, whether they're at home or on the go. And it powers devices like, for example, Google Home, a voice-activated speaker. To better serve user requests, the Google Assistant needs to work well with an ecosystem of everyone's favorite services. Actions on Google allows you, as a developer, to integrate your services with the Google Assistant. And that is what we're gonna explain how to do in this video. Conversation actions enable you to fulfill a user's request directly via a two-way dialogue. Users don't need to pre-enable skills or install new apps to interact with any actions you build. When a user asks for your action by name, we'll connect them with you immediately. Let's first go through a detailed example of a user interacting with a conversation action. Think about something as simple as helping a user choose what to have for dinner, based on their mood and the ingredients they have around. Let's call this action Personal Chef. The user first needs to invoke your action with something like, okay Google, let me talk to Personal Chef. The Assistant will then introduce your action, and now the user is talking to you directly.
From this point onwards, you get to interact with the user and have a conversation. Okay, Google, let me talk to Personal Chef. Sure, here's Personal Chef. Hi, I'm your personal chef. What are you in the mood for? Well, it's kind of cold outside, so I'd like something to warm me up, like a hot soup, and I want it fast. All right, what protein would you like to use? I have some chicken and also some canned tomatoes. Okay, well, I think you should try the chicken tomato soup recipe that I found on example.com. Hmm, sounds good to me. So this is a pretty rich interaction. Think about all the sentences I spoke and how the action needs to extract the meaning out of them. How would you implement this? If you're an expert in the area of natural language processing, you can use the Conversation API, which allows you to process the raw strings containing the spoken text from the user. You can then use the Actions SDK, which includes all the tools and libraries you need to build the actions. However, if you don't want to process the user's transcribed speech yourself, you can use one of the tools that have integrated with Actions on Google. One of these tools is API.ai, which provides an intuitive graphical user interface to create conversational interfaces, and it does the heavy lifting in terms of managing conversational state and filling out slots and forms. This means you'll no longer need to process the raw strings; API.ai can do this for you. To handle a conversation, you use the API.ai developer console to create an intent. This is where you define the information you need from the user. For our example, finding a kitchen recipe, this would be the type of food, the ingredients, the temperature, and the cooking time. You then specify example sentences. API.ai parses these sentences and uses them to train its machine learning algorithm to process other possible sentences from your users. You don't have to write regular expressions or a parser. You can also manually set what the acceptable values are for each piece of information. Once this is done, API.ai uses these definitions to extract meaning out of spoken sentences. The user can provide information naturally, out of order, all at once, or in pieces. The action can ask follow-up questions as needed. Pretty neat, right? Once you've set up everything in the API.ai console, you can test it out immediately with example sentences. Then you can test your project with the web simulator, preview it on Google Home, or deploy the full project to Google, all from within API.ai. Next, you can connect an optional webhook to your intent to allow it to interact with a backend server. When all the details you need are filled in, your webhook is called with the appropriate details provided as JSON data. You don't need to worry about parsing strings or dealing with responding back with follow-up questions for the user. You can also develop the webhook using the language and hosting platform of your choice; it's just an HTTP callback. So API.ai makes this really simple. It's easy to get started, and you can have a prototype working in just a few minutes. You should check out our screencast video, where we show all the steps to make this happen. So the Google Assistant is the next big opportunity for developers. By developing actions on Google, you'll get cutting-edge experience in natural conversation interfaces and be ready to actively participate in the emerging space of AI-first computing.
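(To make the webhook part concrete, here's a minimal sketch of what such an HTTP callback might look like, assuming the API.ai v1 request and response JSON shapes, result.parameters in, speech and displayText out. The route, the parameter names, and the recipe lookup are hypothetical:)

```js
// A toy API.ai fulfillment webhook using Express.
const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical stand-in for a real recipe lookup.
function lookupRecipe(protein) {
  return { name: `${protein} tomato soup`, url: 'example.com' };
}

app.post('/personal-chef', (req, res) => {
  // API.ai posts the matched intent and extracted parameters as JSON.
  const params = req.body.result.parameters;
  const recipe = lookupRecipe(params.protein);
  res.json({
    speech: `I think you should try the ${recipe.name} recipe I found on ${recipe.url}.`,
    displayText: `Try the ${recipe.name} recipe from ${recipe.url}.`,
  });
});

app.listen(8080);
```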
In addition, you'll be able to help shape the platform and grow your audience on all the devices and in all the contexts where the Assistant will be available in the future. And thanks to conversational interface building tools like API.ai, as well as Google's unique understanding of the user's interests and context, you'll be able to create frictionless, intelligent experiences for people that engage with the Google Assistant. You can find out more about Actions on Google by reading the documentation at developers.google.com/actions. We also have an Actions on Google developer community on Google Plus, so you can ask questions and share your ideas with everyone. We look forward to seeing what you build, and I'll see you next time.

Hey, everyone. So I actually have a sixth sense for donuts. I know where they are, and I'm sensing they're right over there, and all I have to do is use Android Pay to get one. Shall we do it? Yes, we should. Hey, how you doing? Great, how about yourself? I'm doing fantastic. So what do I do here? Just tap your phone with the Android Pay loyalty card on it, and we'll get you a donut. There it is. The green one, clearly. Thank you so much. Thanks, guys. Okay, so Dennis and I clearly have been using Android Pay a few more times. Dennis, tell me about what just happened there when I tapped my phone. So we're actually doing Tap for Treats. You go to the stations, you tap through NFC, and there's a loyalty card set up so you can collect points every time you do a tap. So you go to the stations, tap, and we'll give you a nice donut and some other treats. After you get four taps, you can come back to the station later on in the day, and you can get a free Android bot. So this is something that people are doing all around the festival today? Correct. And I can see, as I've been passing by throughout the day, that it's also very, very popular, and I don't think it's just the donuts; I think it's also pretty easy. Yeah, it's super easy to do. It literally takes seconds to recognize through the NFC tap. People obviously love Android bots as well, so that's probably another big incentive for why people are doing it. Oh, and wait, there's more than just donuts, right? There is more than just donuts. You actually have a custom bot that you can get once you tap four times. Come to this terminal, and you can get a nice free custom bot that's specific to Android Pay at I.O. 2017. That's really cool. Yeah, here you go. Oh, really? Thank you. All right, thanks. All right, my hands are full. I'm gonna take care of this, and we'll get to something else.

Welcome. Thanks for joining us this afternoon. My name is Megan Lindsay, and I'm the product manager for WebVR at Google. Today, I'm here to talk to you about WebVR. I'll show you the opportunities it opens up for web developers, how it's gonna benefit the VR ecosystem as a whole, and what others are already doing with it. Then my colleague, Brandon Jones, will demonstrate how easy it is to build a cross-device WebVR app by doing it right here on stage. WebVR enables web developers to build fantastic cross-platform, cross-device VR experiences. Here's a quick overview of what WebVR is all about, and an introduction to our recently released WebVR Experiments site. VR should be accessible to everyone, because it has the potential to let everyone explore, play, and create in amazing new ways. But right now, VR is pretty complicated.
To make awesome VR stuff, developers might have to learn a new language, and then spend a bunch more time to make that stuff work on multiple headsets. And then, when we wanna play with their awesome VR stuff, we've gotta have the right headset. VR should be easier, so developers can make something quickly and share it with everyone, no matter what device they're on. Kind of like how easy it is to share stuff on the web, but with VR. Well, that's the idea behind WebVR. It's VR on the web, for everyone. Here's how it works. Say you're in a browser like Chrome, and you come across a WebVR experience. You just tap the link, put on a headset, and boom, you're in VR. Developers can build WebVR things the same way they build web things, with JavaScript. And since it all works in a browser, it's easy to make it work for all kinds of VR devices, whether it's someone using their phone, their computer, or their entire room. Developers are already building and sharing awesome stuff with WebVR. We've started showcasing their work on a site called WebVR Experiments. It gives you a glimpse into the kind of stuff that's possible. You can play simple games, see the world in a new way, explore interactive stories, play with a friend, or lots of friends. Each experiment comes with open source code to help others make new experiments, and developers can submit what they make. All of this is an effort to make VR more accessible, so anyone can build and everyone can play with awesome VR stuff. So come and start playing at webvrexperiments.com. I wanna tell you why we at Google care about WebVR and why we're investing in it. As Clay Bavor said in the keynote yesterday, immersive computing is going to change how we play, work, live, and learn. We're at the start of the next computing revolution. Many of you have seen the technology adoption curve before. 2017 is shaping up to be a pivotal year for VR, where it's moving beyond innovators to early adopters. This is a time of opportunity. However, one of the largest barriers to even broader VR adoption and more user engagement today is content. Content is absolutely critical to the success of any new ecosystem. Giving users great, diverse, and plentiful things to choose from will keep them coming back. I believe that the open web is exactly what virtual reality needs to take it to the next level, and WebVR is the first step along that path. WebVR opens up VR to the largest developer platform in the world. You, web developers, can build for VR for the first time. The web's an open ecosystem that we at Google strongly believe in and support, where developers from around the world work together to innovate in a standardized, interoperable way. The web isn't controlled by any one company, and it's unique in providing access to content from any device, through any web browser. There are no walled gardens here. What this means is that WebVR simultaneously decreases the barrier to entry and extends the reach of your content. Using WebVR, you can start developing for VR with gradual investments, by progressively enhancing your existing websites. You can light up your site with VR when an immersive experience adds something special, from breaking 360 news on the ground to exploring your next home. With WebVR, you can build your experience just once to reach all VR headsets and the billions of mobile and desktop users, giving you access to the broadest audience possible.
And you gain all the benefits of the web by making your content searchable, linkable, and low-friction, with no installation required. Sharing is as easy as a link. The potential of VR goes well beyond gaming. What kinds of VR content just make sense to do with WebVR? Your imagination's the limit, though I believe the first wave of WebVR content will be the use cases that are already first and best on the web: ephemeral content found primarily through search and social media, short-form media content, and important, but perhaps less frequent, tasks where you may just not want to keep an app around. So let's take a look at some of the things that others are already doing with WebVR. Matterport has created technology that allows capturing real-world spaces in 3D to view them virtually, for industries such as real estate; travel and hospitality; and architecture, engineering, and construction. Matterport customers like Sotheby's, Home Visit, and Mansion Global have scanned nearly half a million places and make these available to their users with Matterport's web player. The web player lets the user navigate through the 3D virtual space on their phone or desktop. Before WebVR, users were required to download a separate app to view the full VR experience. This created a lot of friction and resulted in significant user drop-off. But now, with WebVR, this friction has been eliminated. The user can step right into the home they are looking at, directly from the website. And when the user exits VR, they're still on the original website, rather than in a separate app. Matterport supports WebVR for Daydream View, and Cardboard support is coming soon. With over a million scenes created and posted by their community, Sketchfab is the world's largest platform to publish, share, and discover 3D content online. With WebVR, any Sketchfab model can be viewed and manipulated in your VR headset. Content creators and enthusiasts can use Sketchfab to share or embed models anywhere on the web, enabling them to be explored either on a 2D screen or in VR. Powster creates custom experiences for movies and music, helping with discovery of major entertainment products. With the rise of virtual reality, Powster used WebVR for the broadest audience reach and created experiences focused on movie websites, showtimes, and ticketing. Here's a look at what Powster has done recently. Movie studios saw over five times more movie theaters selected inside VR than on the regular websites. Audiences viewed the 3D trailer and the 3D gallery images, and they converted to seeing the movies in 3D, rather than in 2D, in the actual theaters. And finally, from filmmaker Christopher Nolan comes the epic action thriller Dunkirk, opening worldwide this July. Here's a preview of what you'll see in Dunkirk. We shall fight on the beaches. We shall fight on the landing grounds. We shall fight in the fields and in the streets. We shall never surrender. PG-13. Experience it in IMAX July 21st. Just as the film offers a first-person perspective, Warner Brothers wanted technology to offer a deep, immersive perspective on just what happened at Dunkirk. They brought this vision to life as one of the first collaborative VR experiences on the web, showing the depth of soldiers' camaraderie through a cooperative experience between two people. Working together to survive the evacuation, each player will become both the rescuer and the victim. Here's a taste of what's coming for Experience Dunkirk.
Experience Dunkirk will be releasing in June and will be open to everyone, supporting 2D devices as well as VR headsets. A teaser of the experience is live today, so check it out. Other areas where we're seeing particular interest in WebVR include news, e-commerce, interactive VR films, education, art, and custom business solutions. We are eager to see what web developers use WebVR for next. So WebVR content's arriving, and WebVR browser support is already here. In Chrome for Android, we've released WebVR support as an origin trial for Daydream View and Google Cardboard. Our friends at Mozilla, Microsoft, Oculus, and Samsung have all released or announced coming support for WebVR, bringing it to Samsung Gear VR, Windows Mixed Reality, Oculus Rift, and HTC Vive. In Chrome, we're continuing to improve and extend our WebVR support. In our latest release of Chrome for Android, currently in beta, we've significantly improved performance, making it more consistent and stable overall, and making it easier to reach target frame rates by adjusting rendering settings. We've also released WebVR support for Chrome Custom Tabs, enabling you to enhance your native Android app with your WebVR content, too. Looking forward, we have support for desktop headsets in development. And we're bringing great WebVR content right to the Daydream home screen. Stay tuned for more on this soon. We've talked about progressive enhancement of VR content and how this is a superpower unique to WebVR. Let's dig a little deeper into what that actually means and how others have solved it. Weather.com recently released an interactive WebVR experience called The Birth of a Tornado. They applied progressive enhancement and responsive design principles to ensure their experience can be used on any device, by optimizing the interaction model for the device being used. On desktop, you drag with your mouse to change the viewpoint and click to interact. On tablet, you change the viewpoint by dragging with your finger and tapping to interact. On mobile, the phone's accelerometer is used to provide a magic window into the VR experience. For Google Cardboard, head movement is used to gaze and target, and tapping the button selects. And Daydream View uses the controller for interaction. Birth of a Tornado also works with Samsung Gear VR and the HTC Vive. The model used in Birth of a Tornado can be applied to many WebVR experiences, and we have a library to help make this easy that Brandon will show you a bit later. This is just one way to support cross-device experiences. Another example is Dance Tonight. Some of you may have experienced the Dance Tonight project at I.O. yesterday evening, also built in WebVR. Dance Tonight is an ever-changing VR experience made by LCD Soundsystem and their fans. It's made entirely from VR motion capture recordings of fans dancing to a new song by the band. Another special thing about the project is that it works across devices, but plays to their individual strengths. On desktop and mobile, you get to be in the audience. On Daydream, you're on stage. And in room scale, you're a performer. If you didn't catch this in person, it'll be available online this summer. Your input choices and supported devices may differ for your WebVR project, though we recommend starting with a goal of universal access as a best practice, and just seeing how far you can go. While WebVR content is still best experienced immersively in a VR headset, most people have still never tried immersive VR at all.
WebVR content will be their very first hint of what they're missing. The great WebVR content that you create will be the reason a new user decides to pick up their very first VR headset. I hope you're as excited about WebVR's potential as I am. Now I'd like to introduce Brandon Jones. Brandon started WebVR as his 20% project several years ago and co-authored the spec. Today, he's going to build a cross-device WebVR app for you live on stage. Please welcome Brandon. Thank you, Megan. So Megan talked about the principles of progressive enhancement, that is, making pages that can be used on desktop and mobile devices as well as across multiple VR devices. That can seem very intimidating, but the right tools can make it fairly simple. It all starts with creating some great WebGL content. WebGL is an API for rendering 3D graphics in the browser and is supported across all platforms today. There are many great WebGL tools and frameworks out there to help you bring your ideas to life. From there, turning your WebGL page into an immersive WebVR experience can be as easy as adding a few lines of code. To show you what we mean, we're going to build a quick WebVR experience on stage today. The app that we're going to be building will be a 360-degree photo viewer, which is a great fit for WebVR. These types of photos are easy to create, with many cameras available that capture them, and they provide a fun experience that you don't get from traditional photos. Best of all, they can easily be viewed in 2D in the browser with click-and-drag or magic window controls, while VR can optionally be used to provide an enhanced viewing experience for users with the right hardware. 360-degree photos also represent a class of content that's difficult to get users to install a native app for. Because of the content's simplicity, the overhead of an install is probably enough to discourage most users, given that they likely only expect to spend a few seconds looking at each image. It's very likely that most users would never get past the app store link. Ideally, they could fluidly step into VR and out of VR with very little overhead, view the images quickly, securely, and then move on without having to uninstall anything afterwards. This sort of ephemeral experience is what WebVR excels at. So now we're going to switch over to the code and actually build the experience. Now, can we get the laptop up on screen? Thank you. We're starting out here with some boilerplate code. We're using three.js. It's not the only framework that you could use for creating WebVR and WebGL experiences, but it is a fairly common one. There are also frameworks that are expressly for WebVR content, such as A-Frame or React VR. But because three.js has fairly wide developer acceptance already, we're using that today as our example. Now, the boilerplate that we're starting here with is fairly simple, so I'm not going to cover it in detail. This is the type of thing that you would see in an introductory three.js tutorial. And what it produces for us is a black screen. Well, that's OK. That's a great starting point. Thank you. That was awesome. So because we're creating a 360-photo gallery viewer, we need images. Now, normally these would come from, of course, a database, a CMS of some sort, but we're just going to hard-code them in for the sake of example. And then we need a way to view them. Now, because it's a 360 image, the method that we're going to use is to create a gigantic sphere, in this case about 500 meters in radius.
Invert it, and that's what the negative scale here is doing: inverting it on the x-axis so that all of the faces point inward. Then keep the camera for the scene at the very center of that. That way, when you look around, you're seeing this sphere all around you that's practically at infinity. Next, we need to put our image on the inside of that sphere using the three.js MeshBasicMaterial. This loads up what's known as a texture in WebGL and will apply it to the inside of that sphere. And it just so happens that three.js's default coordinate systems work out really well for equirectangular images, which is the format that most 360 cameras spit out. Finally, we need to combine the geometry and the material together into a three.js Mesh, which is the basic primitive that it uses to render, and add it to the scene. So once we've done that, we can switch back to the browser and see that we now have a photo. Unfortunately, there's no interaction, so we can't tell that it's a 360 photo. We'll fix that by pulling in what's known as the WebVR polyfill. The WebVR polyfill is a JavaScript implementation of the WebVR API that's targeted primarily at mobile devices. It uses their accelerometers to provide basic head tracking used for a Cardboard-style experience. It also happens to provide us with a basic emulated click-and-drag mode on desktop that we can use to get basic functionality on this desktop computer. Now, in order to make our application responsive to the WebVR polyfill, we have to add a three.js extension called VRControls. We'll attach this to the camera, and then this makes it so that any head motion that happens on your headset is automatically applied to the camera itself. In order to make sure that it keeps updating with the head motion, we also need to add an update function to the animation loop. Once those two elements are in place, we now have basic click-and-drag functionality, and we can see that we actually now have a complete photo viewer for a single photo. But that's not terribly interesting. We want a gallery. So the next thing that we're going to do is provide a 2D version of the thumbnail gallery that allows us to switch between images. We'll loop through the image gallery that we had loaded up previously, load a texture for each, and then pass them to this addToGallery2D function that we're about to define. Here we're going to use a little bit of basic HTML manipulation to create a container div and add a simple class to it. I'll leave the CSS as an exercise to the viewer. Append the image that's associated with our texture to that container element, and then add a click handler. When we click on this thumbnail, we're going to swap out the texture on our gigantic viewer sphere with the texture that's associated with the thumbnail. And this will give us the basics of iterating through the gallery. Now, I should note that this is actually a terrible practice, because normally you would want to use smaller images for the thumbnails to not impact loading time. But because we're doing everything locally here, and for the sake of time, I'm skipping over that. But you can see here that as we click on each of the images, we can cycle through the various items in the gallery, and they're all viewable. At this point, the 2D site is done. We've done everything that we need to work both on desktop and mobile. But we're here for WebVR. So let's figure out how we allow people to dive into VR from here.
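For reference, here is a condensed sketch of roughly what the viewer looks like at this point in the demo. This is reconstructed from the description rather than Brandon's exact code; it assumes the era's three.js API plus the WebVR polyfill and the VRControls extension, and the image path is a placeholder.

```js
// 360 photo viewer: an inverted sphere with the photo on the inside.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(
    75, window.innerWidth / window.innerHeight, 0.1, 1000);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var geometry = new THREE.SphereGeometry(500, 64, 32); // ~500m radius
geometry.scale(-1, 1, 1); // invert on x so all the faces point inward

var texture = new THREE.TextureLoader().load('photos/example.jpg');
var sphere = new THREE.Mesh(
    geometry, new THREE.MeshBasicMaterial({ map: texture }));
scene.add(sphere);

// VRControls applies headset / accelerometer pose (or the polyfill's
// click-and-drag emulation on desktop) to the camera every frame.
var controls = new THREE.VRControls(camera);

function animate() {
  requestAnimationFrame(animate);
  controls.update();
  renderer.render(scene, camera);
}
animate();
```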
The next thing that we're going to pull in is a utility called WebVR UI. This is a library created by the Google Creative Lab that provides a button that advertises WebVR support to your users. It will also communicate to them if they don't have WebVR support. In order to add that to our application, we need to go in, create an instance of the WebVR UI button, append it to the DOM, and then, in this case, we're going to ask it what VR device it's associated with and cache that off for later use. If we switch over here now, you can see that we now have the button that normally would tell us that we can go into VR. But because we're on the desktop site, we can't actually go in yet. That is, we don't have the hardware connected, so we can't go into VR here. We would be able to on mobile, and I'll switch over there in just a second. Now, even if we could go in because we had the correct hardware, we haven't actually wired up any of the VR rendering yet. We'll do that with another three.js utility called VREffect. This makes it so that the content that would normally go through the renderer and show up on screen will actually be rendered twice for a stereo view, using the correct parameters that it's going to query from the WebVR API. We also need to update the animation loop to make sure that we can properly handle when we're in VR mode versus non-VR mode. We do this by asking the Enter VR UI if the user has clicked the button and if it's presenting. And if so, we'll render the scene using the VREffect. Otherwise, we'll render using the standard three.js renderer. And then the last thing that we have to do is make sure that our standard requestAnimationFrame is actually using a VR-specific variant if it's available. This makes sure that if we're on a desktop device where the VR headset runs at a higher frame rate than your average monitor, like 75 or 90 hertz, we're running at the same frame rate. Otherwise, the user will experience a lot of stuttering in VR and come away possibly sick. So we will update that. And at this point, we'll need to switch over to our Android device to see the rest of the experience. All right, great. So you can see on Android, we now have that nice magic window interaction mode, where we're able to spin around and see the 360 photo without going into VR at all. This is great if you just want to showcase, one, that there's 360 content here, because the user's natural hand motion will give them a hint that there's more to see. And two, it just gives people a preview if they're, say, sitting on a bus and maybe don't necessarily want to blindfold themselves. However, once we hit the Enter VR button, we can now switch into VR mode. And at the moment, we're configured to use a Cardboard device. And you can see that we would also get the nice stereo view. Now, these images aren't stereo, but you could render a stereo view of the scene, and it would work correctly in any Cardboard device. So we've now created a basic WebVR-enabled 360 image viewer. But once again, on the VR side, we've only done it for a single image. And that's not great, especially when you're using mobile VR. You don't want to force the user to put the phone into a headset and take it out repeatedly in order to navigate between different elements in the scene. So ideally, we'd like to take the 2D gallery that we've created here and pull it into 3D to allow the users to select between the different thumbnails there.
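Condensed, the VR-entry wiring just described looks roughly like this (method names are as best I recall from webvr-ui and the three.js VREffect of that era, so treat them as approximate):

```js
var effect = new THREE.VREffect(renderer); // renders a stereo pair in VR
var enterVR = new webvrui.EnterVRButton(renderer.domElement, {});
document.body.appendChild(enterVR.domElement);

var vrDisplay = null;
enterVR.getVRDisplay().then(function (display) {
  vrDisplay = display; // cached for the animation loop below
});

function animate() {
  // Use the headset's requestAnimationFrame when we have one, so we run at
  // the display's native rate (75 or 90 Hz) instead of the monitor's.
  (vrDisplay || window).requestAnimationFrame(animate);
  controls.update();
  if (enterVR.isPresenting()) {
    effect.render(scene, camera);   // stereo view inside the headset
  } else {
    renderer.render(scene, camera); // regular 2D / magic window view
  }
}
animate();
```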
Now, this gets a little bit more complicated than the DOM version, because we are dealing with 3D, so a lot more math is going to be involved. But the basics are still pretty simple. We're going to be using spheres once again, just like the larger viewer, to represent the individual thumbnails. They're just going to be much, much smaller this time. And then down where we're looping through our gallery, we're just going to add items to the 3D gallery as well as the 2D one. That looks like this, where now we start to get into a lot of math, because it is 3D. Trig is kind of par for the course. But skipping over the exact details of what we're doing here: once again, we're creating the texture to go along with the image. We're associating it with each of our thumbnail spheres. And then the positioning code here just puts them in a semicircle around the user's waist, somewhere that's kind of unobtrusive but easy to reach. So if we switch back over to the Android, we should now see, yeah, we have this nice semicircle of the same thumbnails once again. Now, this is cool, but that's probably not the experience that you want to leave your users with most of the time, because we're doubling up on the thumbnails in the non-VR version. So over in the code, we'll just add one more line to say, if we're not in VR mode, let's just hide the gallery. Makes things a little bit cleaner. So next up, we've got the thumbnails, but we don't have a way to interact with them. And we're going to add that by using a library called RayInput. RayInput is the library that was used by the weather.com example that Megan was talking about earlier. And what it does is provide a single unified input model for Cardboard, Daydream, or higher-end desktop experiences with six-degree-of-freedom controllers. In all cases, it gives you a cursor and a ray that are based off of whatever the user's capabilities are, and it uses that to cast into the scene, find an object, and allow you to click on it. So to start off with, oops, went a little too far, to start off with, we have to instantiate the RayInput. We provide it the camera so that it knows where we're looking. We set the size; that's just a little bit of bookkeeping. And then we add the meshes that are associated with it into the scene. This is so that we get the cursor that's associated with it and the ray itself. Like VRControls, we also have to update this every animation frame to make sure that it stays in sync with the controller movement, in this case. We also want to make sure that we know when we're actually selecting each of the thumbnails. So we're going to modify their opacity whenever the cursor is over the top of them. We'll start them out with a lower opacity, and then we'll have some event handling here that makes their opacity higher as the cursor hovers over them. One bit that I skipped here: we do need to loop through all of the thumbnails in our gallery and let RayInput know that they are selectable. Otherwise, it will also try to select the larger sphere in the back, and that doesn't do us any good. And then finally, the last piece is that we say, when we have clicked with whatever the primary input is for our control mechanism, we're going to do the same thing that we did for the 2D gallery, which is swap out the texture from that thumbnail onto the larger sphere. Now let's see how that looks on mobile. Well, let's save it first. Then we'll see how it looks on mobile.
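For reference, the RayInput wiring just described, condensed. The event and method names are from my reading of the ray-input library and may not match it exactly; thumbnails, sphere, and textureFor are placeholders standing in for the demo's own variables:

```js
var rayInput = new RayInput(camera);
rayInput.setSize(renderer.getSize());
scene.add(rayInput.getMesh()); // adds the cursor and ray visuals

thumbnails.forEach(function (thumb) {
  rayInput.add(thumb.mesh);          // only thumbnails are selectable
  thumb.mesh.material.transparent = true;
  thumb.mesh.material.opacity = 0.5; // dimmed until hovered
});

rayInput.on('rayover', function (mesh) { mesh.material.opacity = 1.0; });
rayInput.on('rayout',  function (mesh) { mesh.material.opacity = 0.5; });
rayInput.on('raydown', function (mesh) {
  sphere.material.map = textureFor(mesh); // swap the big sphere's photo
});

// And, like VRControls, rayInput.update() goes in the animation loop.
```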
OK, so you can see, because I skipped over this, we no longer have the image spheres in 2D. But if we jump into the VR mode, and there we go, we now have a nice cursor that can swivel around and select the different spheres based on our gaze. This is, once again, Cardboard mode. And if we click, it switches the thumbnails. So now we have a fully functioning gallery that the user does not have to leave VR for in order to switch through images. Now, to demonstrate that this works correctly with more complex input mechanisms as well, we're going to come out here. Well, that's not what I wanted. Sync this up with a Daydream device. And then, the normal part of the Daydream entry flow is that we have to sync up the controller, which takes just a moment. And then we should be back into our same experience. Now you can see a basic ray-based selection cursor that does not depend on the movement of my head, but can still be used to do basic selection from the gallery. So that's it. In about 150 lines of code, we've created an experience that works on desktop, mobile, and VR, both Cardboard VR and Daydream VR. And if we were to put that on some of the larger desktop systems, it would even work with an Oculus Rift or a Vive. So let's switch back to the slides. OK, so to summarize some of the recommendations that we covered during the development: at this early stage, we should be focusing on apps that can be used in 2D, with VR as an enhancement, while the VR ecosystem is still growing. We should also strive to allow users to stay in VR for as long as possible, as frequently switching between the 2D and the VR modes can get a little tiring. Finally, there's a variety of input methods across VR devices, and using a library like RayInput helps normalize that into a single interaction model that's common between all modes. So now I'm going to turn it back over to Megan, who's going to tell you a little bit more about the future of Chrome and VR. Thank you, Brandon. Everything up to this point has been about bringing VR to the web. Now I want to talk about bringing the web to VR. Today, when you encounter a WebVR link, you drop your phone in the Daydream View, and then when you're done, you take the headset back off. Soon, you won't have to take it back off to continue your browsing. As we announced in the Daydream keynote this morning, we're bringing the full Chrome browser and the entire web to VR, for Daydream View first. You can use the Daydream controller to navigate regular web pages and follow links. And for WebVR experiences, you get transported into fully immersive worlds. You'll be able to watch videos in a large-screen, theater-like experience. Plus, Chrome in VR is the same app that you use for browsing in 2D. It shares all of your tabs, bookmarks, and history. You don't have to re-log into websites in VR; things just work. VR browsing is coming to Chrome for Android later this year. So what's next? Take a look at our WebVR developer portal, with some great tutorials and case studies and the helper libraries that Brandon showed us earlier. Check out the full set of WebVR Experiments online and consider submitting your own. You can also try out some of the WebVR experiments here in person at I.O. in the experiments area. Thank you so much for joining us today. And I can't wait to see what you come up with.

Firebase Auth helps you identify your users and keep their data secure. Firebase supports lots of different ways for your users to authenticate.
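As a rough illustration of what a couple of those flows look like with the Firebase JavaScript SDK of that era (the email and password here are assumed to come from your own form):

```js
// Email/password sign-up:
firebase.auth().createUserWithEmailAndPassword(email, password);

// Or a federated provider, such as Google:
firebase.auth().signInWithPopup(new firebase.auth.GoogleAuthProvider());

// Either way, the result surfaces through the same callback:
firebase.auth().onAuthStateChanged(function (user) {
  if (user) {
    console.log('signed in as', user.uid); // stable ID across providers
  }
});
```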
If your users want to authenticate with their email address, you can build that for them. Firebase Auth has built-in functionality for third-party providers such as Facebook, Twitter, GitHub, and Google. It can also integrate with your existing account system if you have one. You're given the choice about how to present login to the user. You can build your own interface, or you can take advantage of our open-source UI, which is fully customizable and incorporates years of Google's experience in building simple sign-in UX. No matter which one you use, once a user authenticates, three things happen. Information about the user is returned to the device via callbacks. This allows you to personalize your app's user experience for that specific user. The user information contains a unique ID, which is guaranteed to be distinct across all providers and never changes for a specific authenticated user. This unique ID is used to identify your user and what parts of your backend system they're authorized to access. Firebase will also manage your user's session, so that users will remain logged in after the browser or application restarts. And of course, it works on Android, iOS, and the web. That's Firebase Auth, allowing you to focus on your users and not the sign-in infrastructure to support them. Thank you for joining us today.

India has come a long way, as I just mentioned. Today, India is the second largest country in the world in terms of number of developers. Soon, it's going to be number one. What we want to invest in is actually training the faculty from your colleges. Your potential is so great, and what Google is doing to help catalyze that innovation makes this a really exciting time for these campuses. We are really trying to provide the best possible experience to teachers in these faculty hubs, because the first step to training 2 million developers is to train the teachers that are going to teach those 2 million. Industry right now demands an updated curriculum for developing 2 million Android developers. Working in a technical university, you can contribute hugely to developing those 2 million app developers. So we're excited that all the raw materials are there to create an innovation revolution in India. I really think the students are going to make some great things, and I can't wait to see what comes out. There's a lot of potential in India, and we need to take it forward. With Google, we can provide rich opportunities to all. That is the essence of this Google program, which I have seen. This is a good move, and this program will definitely be useful to the students, because app development is going to rule the world for the next few years, really. I welcome you all to this Android development environment. An app is basically a solution, or a medium by which you get a solution to a lot more users. You have to figure out what kind of benefit the person is going to get by using an app. I feel, personally, the phone is not a phone. It is something that can change people's lives. There's a lot of potential in India, and we need to take it forward to get everyone to start thinking about Android and developing for Android. We're at the cusp of a revolution. We encourage you to continue learning, continue developing, and now go build some great apps. You have the talent, and that is the need. Bring it on.

LinkedIn is the premier social network for professionals.
We are consistently one of the top apps on Google Play. My name is Pradeepra Dash. I'm the engineering manager for the LinkedIn infrastructure team for the flagship apps. We have a million-plus reviews. We are consistently four-plus stars. Our users are really appreciative of how stable the app is and how it really helps them bring their professional profile forward. My name is Drew Hannay. I'm on the Android infrastructure team at LinkedIn. My team is responsible for the overall health of our Android application. We take care of releases to Google Play, our testing pipeline for the app, and our build pipeline. One of the tools that my team has been excited about is the APK Analyzer tool, because we spend a lot of time paying attention to the size of our app, and it used to require some expertise to figure out what was causing your app to be a certain size. But now that we have the APK Analyzer tool, we've been able to expand that knowledge throughout the entire team. LinkedIn was an early adopter of Gradle, so adopting Gradle for Android was a pretty natural fit for us, because we can share a lot of our custom plug-in logic that we've built, and we can also get consistent builds across developer machines and our build service. One thing that's really benefited us is the Lint system, which has a lot of built-in checks for common problems. And we've really appreciated being able to write our own custom Lint checks for LinkedIn's specific internal libraries. With so many developers checking into the code base, one thing that helps when onboarding someone new is having a consistent code style throughout the code base. Android Studio lets us define a custom code style that each developer can add and automatically have their code formatted in the LinkedIn style. One of the things that we really appreciate is the open nature of the platform, where the feedback that we have given back to the developer community has not only been accepted, but is also being worked upon. In the last couple of years, my team has become way faster, much more agile, and able to code more easily and quickly in Android Studio.

This is Ask Firebase, a show where we answer all sorts of Firebase questions on whatever medium you ask questions on. We take those questions. We give you answers. It's a show. Let's get into it. Ask Firebase. Take one, Mark. Hey, folks. My name's Abe Haskins. I'm a Firebase engineer here in San Francisco, and I'm here to talk to you about Firebase for Unity. If you haven't already seen it, I did a Getting Started video for Firebase in Unity. It'll be linked below. But in the meantime, I want to talk to you about some of the great questions we've gotten since we released the Firebase SDK into general availability a few weeks ago. Also unrelated, but if you want to call me Abe, that's great. You want to call me Abe. You want to call me literally anything. I'm OK with it. As long as it starts with an A and has some letters after it, we're all good. Let's dive in. Hey, a**. That's not nice. Let's get started with the first question from, my laptop battery is dead. All right, first question. On our last YouTube video about Unity, $7.01 asked, can I use this to build out WebGL games that I've developed in Unity? And the answer is no, you really can't. Since the Firebase SDK for Unity is built on the Android SDK and the iOS SDK, we don't have the ability to build out for WebGL or PlayStation 3 or some of these other platforms that Unity supports. You'll find that some of the features do work.
And even in the editor, they're stubbed out, so you'll be able to use them and test your code and compile it and everything like that. But you won't get the same functionality you would if you built out the game for the device. Thanks for the question. Next question. On the Firebase mailing list, DeepPixel asked, why does my app report that libapp.so cannot be found? This is actually a great question, and it's related to another issue we've gotten on GitHub from Secura777. He said, why is FirebaseApp not able to be found? These issues are both related to the timing of when Unity imports the dependencies it requires to run the Firebase SDK. In some environments, you're just gonna luck out and you're not gonna have to deal with this, but it just depends on your specific game and your specific operating system. So what you can do is, one, make sure you're on the latest version of the Unity plugin. We've pushed some changes that should help this for a lot of people. And two, you can go into the Assets menu and go to Play Services Resolver, Android Resolver, Settings, and there's an Enable Auto-Resolution checkbox. This is just gonna do some extra things to help you get those dependencies imported and help control that timing so everything's gonna work well for you. All right, and if you do that, your issue should be solved. Thanks a ton, Secura and DeepPixel. Next question. The next question is one we've been getting a ton. A lot of people want to upload different assets for their game into Firebase Cloud Storage. They wanna upload images and text, all of these great things, movies, et cetera, et cetera, all those things you need for your game, you know, you know the stuff. They wanna upload those into Cloud Storage and then retrieve them in their game and just use them in the easiest possible way. The Firebase Unity SDK is really good when you're dealing with these storage assets, because you have a lot of control. You can download them as streams, or you can download them as a byte array. But you don't have to really do all of that if you're not interested in dealing with those more complex flows. The absolute easiest way to get an asset out of Cloud Storage rendered or consumed in your game in some way is to use Unity's WWW class and the GetDownloadUrlAsync method that we offer. With that method, you'll just get a normal URL. It's a public URL that you can consume and pull down just like any other URL. And the WWW class in Unity makes it really, really simple to take that URL, download it, and turn it into a material, turn it into an audio clip, even a movie. So any of those assets you upload to Cloud Storage, or your users upload, if they're uploading profile icons or anything like that, just pull them down with WWW and you'll be good to go. If you want to find out more about the WWW class or how it works with the GetDownloadUrlAsync method, you can check out our documentation or the Unity documentation. Both are great resources for this. Thanks a ton for that question, everyone who asked it. I appreciate it. Next question. You guys have no idea how much I spend on conditioner to get this volume. And the final question, the big one we all want to hear about: Cloud Functions. I'm sure you heard at Google Cloud Next we announced this awesome thing, Cloud Functions. How does that work with Unity? What can we do with these two things put together? And the answer is you can do, like, anything.
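To make that concrete, here is a minimal sketch of the kind of thing he means: a Cloud Function, written in JavaScript and deployed server-side, that reacts when a game writes a score into the Realtime Database. The paths and names are made up, and it uses the 2017-era firebase-functions event API:

```js
const functions = require('firebase-functions');

// Runs whenever /scores/{uid} changes; the Unity client just writes data.
exports.flagHighScore = functions.database.ref('/scores/{uid}')
  .onWrite(event => {
    const score = event.data.val();
    if (score !== null && score > 9000) {
      // Write back into the database; the game sees it like any other change.
      return event.data.ref.root
        .child('highScores').child(event.params.uid).set(score);
    }
    return null;
  });
```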
If you check out the repo we have, we have a functions samples repo, and this isn't specific to Unity. It's just our general functions samples repo. It has 26 different examples of things that functions can do. And every single one of those you could do with the Firebase SDK and Unity with Cloud Functions. And that's because Cloud Functions ties into your Realtime Database. It ties into Analytics and Storage and all these other Firebase services that are supported in Unity, and it lets you execute code and change things way back on our cloud so you don't even have to think about it. So if you want a game that interacts with an external API, or you want a game that has some custom authentication that you've brewed up in your awesome game development shack, you can do this with Cloud Functions, and you don't have to worry about scaling or anything like that. So you can absolutely use Cloud Functions with Unity, and it's highly, highly recommended. So go check out that repository. It's got a ton of different samples, and every single one will work great with Unity. Thanks for the question. All right, everyone, thanks for watching. That's gonna wrap up this Ask Firebase featuring Unity. If you have any more questions, obviously, you can leave them in the YouTube comments below, post them on Stack Overflow, tweet them on Twitter with the hashtag AskFirebase, or literally reach out to us in any way. Tweet at me, tweet at Firebase. Shout it at the top of your lungs near a Google office. We will hear it. We will answer your questions. We'll see you next time.

Oh, hi. I'm just building a paper prototype here in the design tent, and I'm gonna get this design reviewed in a few minutes, but first let's take a look around, shall we? All right. Hey, Liam, how are things? Good, how about you? Doing great. I'm gonna get a design review. I'm a little nervous, but we'll save that for a few minutes from now. Before I get there, I kinda wanna know what's going on with these posters. It looks like you had a design sprint. Sure, so these posters are kind of showcasing the work that we did at the Dimensions sprint up in SF. It was a three-day event where Google brought a bunch of designers and developers together to kind of ideate and come up with some solutions around these five categories. Cool. Can we see some of this stuff? What is this? Sure. So this one is called Bay Window, and basically the idea is to keep two people connected to each other. It's a live wallpaper. Oh, I get it. Bay, like bae. Okay. Yeah, exactly. So you basically pair with your partner, and then you can upload photos directly to their live wallpaper. And if you go a while without connecting, then you kind of see these two circles, like, start to drift away a little. Oh, that's not good. All right. What's another one of your favorites up here? I think this one is probably one of my most favorites out there. It's around self-activation. And the idea here is just kind of a community where you can share experiences and get some support. It's made to kind of serve the needs of people who are looking for a safe place to talk about experiences they've had, kind of document their own progress of working through those at their own pace, and just keep track of how they're doing. So it kind of allows you to, like, track how you're feeling each day, some thoughts or experiences you might have throughout the day, and just how you're making progress. It's a beautiful design too. All right, let's look at one more. Sure.
So this is one that I helped out with, actually. It's around productivity. So the idea here is that a lot of people, especially if you work in a consultancy setting, have a lot of different tasks and project managers and tools that they use. You might spend a lot of time, like, jumping between those to figure out what you're supposed to be doing for the day. So this is kind of an idea that uses Android's ability to share data between apps to collect all the tasks that are assigned to you and then allow you to kind of make your schedule for the day, regardless of where that work is coming from. That's cool. And is this actually, like, a working prototype here? Yeah, working-ish. So you kind of just triage the things that you want to add to your list by, like, swiping in one direction or the other. That's fun. So you make it, like, all the way through your list, and then you kind of have your list here. And if you want to, you can reorganize some of them just based on the order that you want to knock them out in. It's a really beautiful design. This is very inspiring. It's very material, too. Yeah, so a big focus of Dimensions was also just what makes Android unique as a platform and trying to solve for these categories based around those ideas. That's awesome. All right, Liam, thank you so much. I think I'm gonna go find Yasmin around here somewhere. Okay. Hi. It's going well. Yasmin, these are my friends. Hi, friends. So, I'm a little nervous. It's my first design review in a long time. I made a, okay, this is my app. It turns out if you spend a few minutes every day writing down what you're grateful for, you live a more stress-free life. So I figured maybe I could build an app that's a gratitude journal, right? And this is what I'm thinking. This is the home screen. And when you bring up the app, it just shows, like, something that you're grateful for, which is rated based on a thing I'll say later. And so it just, like, lists that. So you can kind of remember the things that you did. And then you go into this list view. And this is, like, your journal. It's chronological, and it's super simple. It just shows the things you're grateful for and the day. And if you double-tap it, like, you add hearts to it. And you can add as many as you want. So, like, as you go through your journal, you can say, oh yeah, I remember that, and I love that. And the more hearts it has, the more it shows up on your home screen. Oh, so it's like you can see the memories that are most important to you and bring you the most joy. You can reinforce them. And then if you hit the plus icon, you go over and you just type in what you're grateful for. And then it just automatically puts it in your journal. So you just go in any time and say, you know what, I'm grateful for this thing. Yeah, no, that sounds awesome, and I love it. No, it's simple. That's the best way to start. Can you tell me about the other navigation that you have going on here? So these are just the buttons. And I'm not really sure what to do here, because I put three of them in each place, but maybe instead it's, like, a button that expands into the options. But these are just for getting to, like, the list view and the home. And this is a plus for, that's actually a little house. I know, it's hard to see, a little house. And these are, like, three lines indicating the list, and that's just a plus for, like, adding. Yeah, definitely.
So one of the things that you might want to consider doing is using a floating action button as the main component, which you kind of have going on right here, but instead you move it out of the bottom navigation and add it to the top so it's always floating there. One of the things with action buttons is a lot of people think, all right, let's add them everywhere, but it should be tied to an action, so you shouldn't use it for navigation, mostly for something like creating a post. One thing that you could add is, as your journal expanded later on, you would tap the add button for adding a new thing that you're grateful for, and potentially maybe then you could add one where it's, like, photos. So maybe you want to do a text one or a photos one or a video one, so that can give you quick access to adding a new journal entry. Awesome. One potential thing to think of. I mean, I guess, now that I'm thinking about it and you say that, maybe I get rid of the navigation buttons, and the way you get between this and the list is you just scroll down. Yeah, you could, you could, because there isn't going to be a whole lot, at least in the beginning. One of the things you might want to consider adding is some form of search, because you'll probably want to search through all your memories and remember what made you happy. Okay, is there anything else about this app you want to tell me? No. I love that you started off with sketching first. That's one of the things I really encourage everyone to do, because then you're not married to this solution. You can go on and think, hmm, maybe I didn't really like how this is working. Let's think of some other iterations, and you haven't spent time coding or designing it within Sketch or anything. So this is a great way to start. Awesome, well, thank you so much. All right, well, thank you. That was really painless. I'm happy. You're good, you're good, all right, good. We try to make it easy for everyone, so yeah. All right, well, that's the design booth, everybody. I'm going to go start on my app.

Hi, everyone, thanks for coming. I hope everyone's having a really great Google I.O. My name is Justin Fagnani, and I'm the tech lead for the Polymer Tools team, which means that right now we are going to talk all about developer tooling for web components. So the Polymer Tools team maintains a whole suite of tools for web components developers and users, from linters to test runners to an entire build system. So obviously we're going to talk a lot about tools, but tools aren't used in isolation. Tools are used on things, and they're used to accomplish certain tasks. So we really can't talk about tools in isolation either. We have to talk about all three: tools, tasks, and things. Things like web components, obviously, and Polymer, but also progressive web apps, service workers, testing, the PRPL pattern, bundling, offline, ES6 compilation, HTTP/2 push, and something that's maybe the hardest task for some developers sometimes: just getting started. So that was a lot of concepts and words I just flashed up on the screen, and that's not even the complete list. So what does it mean that we have all these capabilities and responsibilities to deal with? Well, on the positive side, it means that the web is now a capable application platform. We can build amazing and truly native-app-like user experiences. But all these capabilities also mean that there's a lot of complexity to manage, and this can be pretty challenging.
So this is where tools can help. Tools can help make this manageable, and they can help guide you towards harnessing these features effectively and easily. And so on the Polymer Tools team, we develop our tools with a few goals in mind. First, we need to meet the unique needs of web components users and developers. And when I say that, I do mean web components developers, not just Polymer developers. Our tools are designed to work with all web components. So these needs include things like linting in a world of extensible HTML, documenting all of an element's API surface, like attributes and events and styling, or bundling HTML together for production. Then we wanna take advantage of the new features the web platform has to help deliver a great user experience and a great developer experience: features like service workers, HTML imports, HTTP/2, the template tag, and so on. Next, we wanna enable and automate the best practices that the community and our teams have been putting together to develop and deploy great apps. Our tools should guide you down the best path by default. And finally, we wanna make all of this extremely easy. Our goal is to make our tools as powerful as necessary, but as easy to use as possible. Anyone should be able to create a fast-loading, HTTP/2-pushing, service-worker-enabled, offline-capable progressive web app with web components that delights their users, with a few simple commands. And that's where the Polymer tools come in. So now let's do a quick overview of the tools we maintain, how they relate to each other, and some of the tasks that you need to handle as a web developer. Here's a little diagram I drew of how our tools are put together. Our tools are organized around a set of core libraries that are task-specific: things like initialization, linting, local serving, building. And powering many of these libraries is a static analysis system built specifically for the web, called the Polymer Analyzer. You'll notice that these tools are laid out here roughly chronologically, in the order that you're gonna need them during development. And this matches your development cycle, from getting started, into your editing and running and testing cycle, and then through building for production. So we can take a tour of these functions and the tasks they handle, going through the development process step by step. Let's start with the Polymer CLI, since most of our libraries are integrated and brought to you via the CLI. The CLI is our multi-tool for web components and progressive web apps. It has a number of commands to help you through all phases of development. And we put everything together into one tool to make these commands easy to discover and easy to use together. And I'm very, very happy to say today that we released version 1.0 of the Polymer CLI earlier this week, after a year of development. This was a lot of hard work put in by a great team of people, and we think it really helps web developers. Thanks. Okay, so the CLI can be installed from npm by running npm install -g polymer-cli, and this gives you the global polymer command. From there you can run polymer help and get a list of all the commands that are supported. So I'm gonna go through the most important of these commands, but before I do that, I wanna go over a couple of key concepts that we're gonna refer back to as we go along: things like the Polymer Analyzer that I mentioned, the App Shell Architecture, and the PRPL pattern. The first is the Analyzer.
So I mentioned earlier that our core libraries are powered by the Polymer Analyzer. The Analyzer is an engine that can parse and understand HTML, JavaScript, and CSS, and it follows the imports between these files to determine your entire project's dependency graph. And by doing that, it knows exactly what you use and where you use it. We then use this information for linting, building, generating docs, and so on. Next is the App Shell Architecture. So this is a pattern for structuring single-page apps so that they're well organized, fast, and can be loaded incrementally. Within our tools we break this down into three special types of files. The first is a single entry point, which is the initial file that's loaded by the browser. The entry point bootstraps your application, and in client-side routed applications it's typically loaded from many different URLs, maybe /home, /profile, or a blog post URL. And it has to be very, very small, because this file might not be cached well, since it's served from all these different URLs. The entry point then loads the shell, and the shell contains your common application logic: maybe shared UI components like headers and footers and menus, but also the router. And when the shell loads, the router boots up, looks at the URL, and decides what view to load. And then we have fragments. Fragments are usually these lazily loaded views, but they can also be other lazily loaded components or libraries. Together these pieces let you build an app where you load just the files needed to render the page the user is currently looking at. And last we have the PRPL pattern. This is a pattern that the Polymer team developed as a way to describe how to serve and render lightning-fast progressive web apps. PRPL stands for: push your critical resources, render your initial route, then in the background pre-cache the remaining routes, and finally, as the user navigates your app, lazily load the routes they go to on demand, either from the cache or from the network. And if we put all of these concepts together, we can look at an example app structure. So here we have the app shell up at top, which lazily imports a couple of views, which then import the rest of their dependencies. And some of our tools need to understand a little bit about this structure, so we let you describe it pretty simply in a file we call polymer.json. And we have fields for the entry point, for the shell, and for the fragments. And then when we talk about PRPL loading, what we mean is that when a user visits a particular view, you wanna push all of the resources that view depends on, these ones highlighted in pink here, and that way they load as fast as possible. And then in the background you wanna pre-cache the remaining views and their dependencies. Okay, so that's the end of the key concepts. Now let's get into the actual tools. And let's start off with the init command. I'm gonna go to a little demo here and pray to the demo gods. Okay, so switch to my computer. All right, hello. All right, so I'm gonna start off with an empty directory here, and we're gonna initialize a new application with the polymer init command. So you see we get a menu here of different kinds of templates. We have the element template, the application template, Polymer Starter Kit, and a template called shop. We're gonna choose the Polymer 2 Starter Kit, which is a really good starting point. That's why it's named Starter Kit.
And you probably didn't see it there, it was quick, but it created some files, and now it's installing all the dependencies with Bower. And this is somehow working over the network, so I'm really excited. All right, so there you go. It installed everything. We can look at the, yeah, yay for the network. So we can look at the files on disk here. It's kind of hard to navigate there, so we'll open it up in my editor, and we can see that it gave us, let's see, a source directory. This looks like we have some elements. It gave us an index.html. This is the entry point I was talking about. It just sets up some metadata, and we have our polyfills, and it imports the app and creates the app. You can see we also create a manifest for you. This is required for progressive web apps, so by default we're creating a progressive web app for you. We even have a service worker down here, and then, as I mentioned, the polymer.json file where we describe our fragments. So looking at the files on disk is mildly interesting, but what you probably care about is looking at how this template actually looks in a browser. So we're gonna use our local dev server with the polymer serve command. And that's gonna start up a server for us. We're gonna take this URL here and copy it into our location bar, and here we have an app. So this is the Polymer Starter Kit. You can see it has a number of different views. These are lazily loaded views, so when I click on them, they're being fetched over the network. And if we open up dev tools, we can see here that the app is responsive, that's nice, the menu turns into a drawer. And we can go into the Elements panel and introspect what DOM the app created when it booted up. So here's the main application element. So one nice thing about web components being native to the platform is that Chrome's dev tools now become a web component developer tool. Because Chrome understands web components, the dev tools understand them too. So we can see here, if we drill into the application element, that we can see the shadow root. So now we can see how the internals of our custom elements are put together and how they compose. And we can also interact with these elements in the JavaScript console. So $0 will get us the last selected element and print it on the console. We can see that this element is not just an unknown HTML element or a div or something like that. If we look at the constructor, we can see that it's actually the custom element class defined in our application. And because this element, oops, there we go, because this element has an API that actually lives on its DOM node, we can interact with it. So notice how we can change pages here. We go to view one and view two. That's driven off of a property on the element. And we can go ahead and set that to view two. And when I hit enter here, this is actually gonna change pages and change to view two. So dev tools now gives you a way to interact with the structure of your app as custom elements. It can be very powerful. All right, so that's init. And let's go back to the slides. Slides, there we go. So that's init. You saw the built-in templates that it has, but we also allow you to install third-party templates. So this is based on the Yeoman generator system, and any npm package whose name starts with generator-polymer-init that's installed locally will show up in that menu there. And we have some really nice templates that are available on npm. One that has a custom build so you can use Gulp. Another that doesn't even use Polymer, it just shows you how to use vanilla web components.
One that shows you how to use Redux with Polymer. And another one that shows you how to use the Vaadin elements. And then you can, of course, publish your own. All right, next let's move on to linting. So linting is done with the polymer lint command. And I have another demo for you, if we switch back over to my laptop. Laptop, there we go. Awesome, thanks. So we can run the polymer lint command, and we can see here that, well, we didn't ship you a template that has lint errors. So that's good. Yeah, yay. So we can go ahead and make some errors and see what the linter does. But seeing lint warnings on the command line isn't that exciting. What I'd rather do is show you how we've integrated the linter and the Polymer Analyzer into the IDE with our set of IDE plugins. So here I've opened up the project, and I'm gonna go into one of the elements that we have. This is the main application element. And so we can look around, and one of the features that our IDE plugin has is hover-over documentation, not only for JavaScript but for HTML. So here I'm hovering over dom-module, which is a built-in element. And we can see we have nicely formatted docs with syntax-highlighted samples. So that's dom-module. We can come down to the main template for the app and see if we can look at app-header-layout and get the documentation for that. So that's really, really useful when you're using elements for the first time. And one thing you might wanna do when you're editing is use a new element. So we're gonna use one of our favorite elements here, paper-button. And hopefully you can see that on the screen, but we have a green squiggly for a warning there, and it tells us the element paper-button is not defined. And the plugin knows this because we haven't imported that element yet. So we need to go fix that. We're gonna go up to the top of the file here where our imports are. And I'm just gonna copy this paper-icon-button import and change it to paper-button. All right. Oh, but you see we have a warning here too that says it's unable to find the file, because I didn't remove this last paper-icon-button. So let's fix that. And hopefully you noticed how fast that was. The Polymer Analyzer and the IDE plugins are built to incrementally analyze your files. So we're not reanalyzing the whole project here, just the file we edited. And now if we go down to where paper-button was used, we can see that there's no warning. And if we hover over it, we get some documentation. So this will tell you how to use the element. And we can see that there are some attributes here. So we might wanna use one of those attributes, like raised. All right, so if we start typing raised, we see that we get documentation for the attributes, including the data type. And if we hit return, we get the code completion for that too. So this is a really nice productivity booster for when you're writing HTML templates. Okay, so you're using this new element, paper-button. You might be curious about how paper-button works. And so we've also added jump-to-definition. If you hit F12, we're gonna be brought into the paper-button definition here. This is actually in our third-party dependency folder. And this even works for attributes. So if I highlight raised and hit F12, we're gonna go to the definition of that attribute within the element. So this brings a really advanced level of IDE integration even to HTML template editing. And this also works with all web components, not just Polymer.
If you use JSDoc in your web components, they too will show up in your IDE this way. Okay, and that's the IDE demo. Back to the slides. All right, so you saw some of the features there. We lint HTML and JavaScript and produce errors and warnings. We have hover-over documentation, code completion, and jump-to-definition. For the warnings that we produce, we have undefined elements, properties, and attributes. We'll tell you about invalid binding syntax and invalid HTML imports. We also have rule sets, which we've introduced recently. So we have a Polymer 1 rule set and we have a Polymer 2 rule set. We've also added this Polymer 2 hybrid rule set. So on the Polymer team, we tried to make it very, very easy to upgrade from Polymer 1 to Polymer 2. And as part of making that transition easy, we invented this thing called hybrid elements. And these are elements that work in either Polymer 1 or Polymer 2. And the way they do this is by using that subset of features which is available in both versions. So this linter rule set right here will warn if you're using anything that's only available in Polymer 1, or vice versa. So this should be a big help when you're upgrading from Polymer 1 to Polymer 2. The Polymer IDE plugins are right now available for VS Code and Atom. And you can install them via the normal methods for installing extensions. And that does it for the linter. Okay, let's move on to serving. So we have a development server built into the CLI. We do this because when you're developing from a local file system, loading things from a file URL hits a lot of security roadblocks. So most people need at least a static file server. And we added a couple of conveniences on top of that. The first is for reusable components. So HTML imports work a lot like ES6 modules, where you have to import by path. And the way we do this is we import by a relative path that reaches up out of your package and down into a sibling package that's your dependency. And so if you have a URL that reaches up out of your package, that path might not actually exist on the server. So what we do in polyserve is remap everything to look like siblings, so that you can access your dependencies without having to do a build step. The other convenience we add is for applications. So if you have a client-side routed application with nice URLs, you might have some URL with a long path that the client can handle but that doesn't actually exist on the server. So rather than sending a 404, what we do is send back that entry point file that you set up in your polymer.json, and that gives the file a chance to boot up, load the router, handle the URL, and try to render that URL if it can. And finally, we've added auto ES6 compilation. So all the current evergreen browsers support ES6, but we also support some older browsers that don't in turn support ES6, like IE11. So what we do is sniff the UA string on the server, and if we detect that your browser does not support ES6, we'll automatically, on the fly, compile all of your JavaScript to ES5 using a very standard preset. And if this isn't working for you for some reason, because Chrome DevTools can do some funky things and pretend to be another browser, you can always set a flag to either use the auto compilation, always compile, or never compile. All right, so that's the server. Next, let's move on to testing. Everybody should be testing. So we built a test framework into the CLI, and you run this by running polymer test.
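As a concrete reference for the HTML test suites he's about to describe, a minimal suite might look something like this (my-element and its greeting attribute are hypothetical, and the paths assume a Bower-style layout):

```html
<html>
<head>
  <script src="../bower_components/webcomponentsjs/webcomponents-lite.js"></script>
  <script src="../bower_components/web-component-tester/browser.js"></script>
  <link rel="import" href="../src/my-element.html">
</head>
<body>
  <test-fixture id="basic">
    <template>
      <my-element greeting="hello"></my-element>
    </template>
  </test-fixture>
  <script>
    suite('my-element', function () {
      test('attribute reflects to property', function () {
        var el = fixture('basic'); // stamps the template, returns the element
        assert.equal(el.greeting, 'hello');
      });
    });
  </script>
</body>
</html>
```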
And this uses one of the oldest tools that we have on the Polymer Tools team, Web Component Tester. And this is a series of helpers that are designed to make testing web components more convenient. So we bundle in popular libraries like Mocha, Chai, Lodash, and a few others, just to make it convenient to write tests. And then we also have this concept in Web Component Tester of HTML test suites. So because custom elements live in the DOM, they're often created by actually being in the markup of your application. And so it's very natural to write tests where you want to write actual markup to test how your element operates, right? And so we have HTML test suites, and then on top of that we've added a declarative test fixture helper called, appropriately enough, test-fixture. So here we can look at what an example test suite looks like, and you can see that at the top, we set up our environment. We import Web Component Tester, and we import the element that we want to test. And then next we have a test fixture, and this has a template with an element in it, and here we want to test that an attribute sets a property in that element, so our markup contains the attribute. And then when we run our test logic, we call this fixture function, which looks up the fixture by ID, stamps it into the document, and then returns your reference to the content of that fixture. And then finally, you can run whatever test logic you need. Here we're testing that an attribute set on the element actually reflects to a property. So this makes it very, very easy to write web component tests. You can launch it with just polymer test, which will try to find every browser on your system and run the tests in all of them, or you can choose one browser with the -l option. And this will get the test runner started, launch a browser window which combines all the results of all your test suites into one window, and then give you a fairly standard report where, yay, all your tests pass. And hopefully you have more than the two which I have here. Okay, so Web Component Tester is actually built on the same web server as polymer serve. So this means you get those same conveniences, like packages being mapped to siblings, or the entry point fallback routing for applications, and auto ES6 compilation, so that it's easy to test on IE11. Okay, now let's get to building for production. This is the largest or most complex part of the CLI that we offer. And you do this by running the polymer build command. But before we get into the details of build, I want to talk about some principles that we have in the build system. The overarching principle here is that builds are optional optimizations on your project. We want your project to work as it is, as source on disk. And this is so that you can have a really fast edit-refresh cycle without building. So one reason why we do this is because we want to get out of the way. We want this to be fast. But another reason is because you might be using other tools that require building in order to run your project: something like a compiler like Babel or TypeScript, or maybe Sass or some other processor. We don't want to have any kind of conflict there, so you'll be able to use those, and our tools don't have to run. Also, something that's kind of unique with our tools is that we don't build based on the file locations on your system. We don't build based on globs or some complex configuration. We build based on the dependency graph.
So once we're able to find the entry point of your application, we can find all the files that we need to process in the build. And our build system has a bunch of built-in optimizations. We want these to be easy-to-use best practices for progressive web apps. So we have minification, bundling, compilation and then fancier stuff like service worker generation or push manifest generation. And our build system forms a pipeline, somewhat like what you would use in Gulp. In fact, we let you use our build system from Gulp — there's a rough sketch of that at the end of this section. The way it works is that the files in your project come through into the analyzer, and that actually discovers all of the files that are part of your project. It then splits them out into your first-party sources and your third-party dependencies, just in case you want to process those differently. And then it feeds them into an HTML splitter. We like to write scripts and styles inline on the Polymer team, so this lets us extract those out into separate files so that they can be processed per file type. Then we go into those per-file-type optimizations, we rejoin the split files that we created, and finally we feed them to global optimizations like bundling or service worker generation. And we control all of this with the build section of our polymer.json file. Here you're able to specify one to many different builds that will be created when you run the build command. And we have options for all of the different optimizations here, and they're pretty simple — you can just turn them on and off. Next up, I'm going to run through all these options and show you what they do. But while I do that, I want to show you why they matter. Why is it important to run these optimizations? What do they do for you? In order to do that, we're going to measure. We're going to measure how long it takes to load after we've applied these different optimizations. And of course, for that, we need a test subject. So we're going to use the Shop example application. This is a demo built by the Polymer team in order to show off and prove out these best practices that we determined are good for PWAs. Shop makes a great example because it's a full-fledged e-commerce site. It has a home page, product listing pages, product detail pages. It even has a shopping cart and a complete checkout flow. It pretty much does everything but actually ship you the shirt. And so we're going to measure things in Chrome DevTools. I want to mention something that's really, really important if you're using DevTools for performance measurements: your development workstation, even your notebook, is likely a lot more powerful and has a much better internet connection than your users' devices. For most applications these days, users are mostly coming from mobile devices, often on pretty bad networks. So it's really, really important that if you're going to measure in DevTools, you turn on network and CPU throttling. Here I've chosen the Regular 3G profile, which gives us 100 milliseconds of latency and 750 kilobits per second of download speed. And then, very importantly, I turned on CPU throttling — a 5x slowdown here. This is important because your mobile devices just can't parse and execute JavaScript nearly as fast as your laptop or desktop. So once we've set up our test, before we start diving into numbers, we need a comparison point.
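As promised, here's a rough sketch of driving this same pipeline from Gulp — assuming the polymer-build, merge-stream, gulp-if and gulp-uglify packages, with the stream names reflecting my reading of the polymer-build API rather than anything authoritative:

```js
// gulpfile.js — a sketch of the analyzer → split → per-type optimize → rejoin pipeline
const gulp = require('gulp');
const gulpif = require('gulp-if');
const uglify = require('gulp-uglify');
const mergeStream = require('merge-stream');
const {PolymerProject, HtmlSplitter} = require('polymer-build');

gulp.task('build', () => {
  const project = new PolymerProject(require('./polymer.json'));
  const splitter = new HtmlSplitter();

  // sources() and dependencies() are the first-party / third-party streams
  // discovered by the analyzer, as described above.
  return mergeStream(project.sources(), project.dependencies())
      .pipe(splitter.split())           // pull inline scripts/styles into separate files
      .pipe(gulpif(/\.js$/, uglify()))  // a per-file-type optimization
      .pipe(splitter.rejoin())          // put the inline content back
      .pipe(gulp.dest('build/'));
});
```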
So I took the Shop application, and I actually deoptimized it a little bit. I made it so that it eagerly imports everything — it's no longer lazily importing the views. So this is kind of like a naive structure for building an app. And it doesn't have any minification, and it doesn't have a service worker. So I ran this through DevTools, and I got some numbers. We have 5.9 seconds to first meaningful paint on the initial visit, and 4 seconds to first meaningful paint on a reload. So we're going to apply some optimizations one by one and see how well they do against this baseline. The first one we're going to do is minification, because pretty much every app does minification. You would be kind of crazy to go to production without doing it. So this is kind of like the standard optimization. It's great because it makes all files smaller. And in our system that splits apart HTML, we also minify inline scripts and styles too. And the theme I'm going to go with here is that we try to make all of these optimizations as easy as possible to apply. So we have a couple of options that you can add to your polymer.json which will turn minification on for the different languages. And if we turn minification on, we can see that we've brought our initial-load first meaningful paint time down to 4.4 seconds, which is 25% better than what we started with. So this is a no-brainer. That's a big advantage, and we should definitely do it. OK, next we're going to look at an optimization we call insert prefetch links. This adds <link rel="prefetch"> tags for all the entry points and fragments in your application. The benefit of this is that when you're about to load a view and it has a bunch of dependencies that need to be loaded, this tells the browser upfront the entire list of what it's going to load, so it can download them all in parallel. This avoids a lot of the round trips that would otherwise be required to load a resource, parse it, find an import, load another resource, and so on. So you can get some of the benefits of HTTP/2 push by doing this — not all of them, but some. This is good for environments where maybe your server doesn't support push. And again, this is very easy to apply: I just set the insert prefetch links option to true. Then we can measure the results, and you see we've gone from 4.4 seconds down to 3.1 seconds. That's another 30% improvement. So this is a great optimization that's easy to apply to your project. Next, let's look at service worker prefetch. Service workers are a really, really powerful tool. They're essentially a background worker, a network proxy and a programmable cache. But with all that power comes very little guidance and structure on how to use them. Coding one of these by hand would be very complicated and tedious. So we auto-generate one for you. And the one we generate does two things. First, it pre-caches all the dependencies of your application in the background, so you get very fast transitions when your users navigate. And second, it makes your application automatically offline-capable. The way we do that is with the same entry-point fallback routing that we do on our dev server. If the user goes to a URL while they're offline that's not in the cache, instead of responding with a 404, the service worker sends the entry point, which allows the router to boot up, handle that URL and render the view you went to. Again, we want to make this as easy as possible: you just put addServiceWorker: true in your polymer.json.
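Pulling the options so far together, a build entry in polymer.json looks roughly like this — a minimal sketch, with option names as I understand the build configuration and a hypothetical shell path:

```json
{
  "entrypoint": "index.html",
  "shell": "src/shop-app.html",
  "builds": [{
    "name": "bundled",
    "js": {"minify": true},
    "css": {"minify": true},
    "html": {"minify": true},
    "insertPrefetchLinks": true,
    "addServiceWorker": true
  }]
}
```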
And if you do that, you see that, well, our first meaningful paint number didn't really change. It went from 3.1 to 3.2. That's basically in the noise — it's the same. But look at what happened to the first meaningful paint on a repeat visit. That went from 2.6 seconds down to 0.9 seconds, which is a 65% improvement. That's huge. And this is a benefit for some of your most important customers. These are your repeat visitors or your signed-in users — people who like your app, people who maybe have installed it to the home screen and are tapping on the icon and expecting a native-app-like fast load experience. Maybe faster than a native app, actually, sometimes. Next, let's take a look at lazy loading. Lazy loading is not so much a tool as a technique. But I want to talk about it here because of how much work we've put into our tools to support it. The idea with lazy loading is that you only import what the current view needs to render, and you import everything else on demand as you need it. In Polymer, we have a couple of APIs to do this. One is the importHref function, which adds a new HTML import to your document — there's a sketch of it at the end of this section. The other is the thing we've released recently called lazy imports, where you can declaratively describe the lazy structure of your application. And if we apply this on top of our other optimizations, you see that we've now brought the first meaningful paint number down to 1.9 seconds. That's another 40% improvement over the previous step. And the first meaningful paint on the repeat visit stays at 0.9 — lightning fast. So that's good. And finally, I want to talk about bundling. Here's where we've taken care to play well with lazy imports: we do smart bundling. Bundling normally merges all your dependencies into one bundle. And then the more advanced tools, like us and webpack — there we go, I was promised a shout-out — will actually bundle things into multiple bundles if they can detect the lazy structure of your application. So our bundler is lazy-import-aware. It generates multiple bundles depending on the lazy structure of your application. The way it works is by analyzing the dependency graph. What we do is look, for every file, at the combination of entry points that require that file to load. Then each unique set of entry points that we discover becomes a bundle. And so now we get very fine-grained bundling that works well with your lazy import structure, no matter what you're lazily importing. The only problem with this is that it's possible to create too many bundles and have a negative impact on your performance. So what we've added on top of this is an idea called bundle strategies. A strategy takes a bundle manifest, modifies it and returns a new bundle manifest. The one we've included in the CLI has a heuristic where it says: if any bundle is required by more than, say, two entry points, combine those into a shared bundle. So you get one bundle per view plus one shared bundle. And this ends up being a pretty good option. We also make this incredibly easy to use: you simply set the bundle property to true in your polymer.json. And if we apply this on top of all the other optimizations, we get our first meaningful paint number down to 1.5 seconds, which is really fast on a 3G network with a slow CPU. And we still keep that blazing fast 0.9-second repeat visit time.
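Here's that importHref sketch — a minimal example of loading a view on demand from a route-change callback. The view file name and error handler are hypothetical; the (href, onload, onerror, async) signature is how I read Polymer's documented helper:

```js
// Inside a Polymer element: load the view for the new route on demand.
_routePageChanged(page) {
  const url = this.resolveUrl(`shop-${page}.html`);
  Polymer.importHref(
      url,
      null,                            // onload: the view's definition registers itself
      () => this._showNetworkError(),  // onerror: e.g. offline and not yet cached
      true);                           // async: don't block rendering on this import
}
```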
So altogether, we've had a 75% reduction in first meaningful paint time — four times faster — all for just setting a bunch of options to true. OK, so next, I want to talk about compilation. It's not really an optimization, but it's required for older browsers. With custom elements, we have this interesting situation where custom elements have to be ES6 classes, but we support some browsers that don't support ES6. So on one hand, they have to be ES6. And on the other hand, they can't be ES6. Let me show you why this is true. When you write a custom element, you extend HTMLElement. This is a built-in class, very similar to Array or Map. And when you extend a built-in, you have to have a real constructor with a real super() call so that the system can initialize that built-in object properly. But when you compile a constructor to ES5, you don't end up with a super() call. You end up with something like this, where you're calling the super constructor like a function. And if you try to do that in a browser that actually has the native built-in, it'll throw an error. So again, we have this thing where custom elements have to be ES6, but you can't run ES6 on IE11. So we try to take care of this for you in our tools. What we recommend is that everybody write their elements — and distribute their elements to Bower and npm or wherever — as ES6. And then you compile them, only if necessary, at the application level. This is because the app is the place where you know what browsers you need to support and what environment capabilities you're targeting, so the app is the thing that knows what compilation needs to happen. And the Polymer tools help you do this because, since we're based on the dependency graph of your project, we can compile all the JavaScript that's reachable from your application. So you can create two builds, one that's ES5 and one that's ES6. And if you have a smart server, you can serve ES6 to all the modern browsers and ES5 to browsers like IE11. But not everybody has a smart server, so we've made it possible to produce universal builds that you might deploy to a static file server like GitHub Pages. We do this with something we call the ES5 adapter, which patches up the custom elements environment on browsers that natively support it and forces them to accept ES5 subclasses of HTMLElement. And again, we want to make this extremely easy to do, so all you have to do in your build configuration is set compile to true for JavaScript. OK, so that's what build does. It does minification, bundling, compilation, all this stuff. And even though we've tried to make each individual item here as easy as possible to turn on or off, it's still a lot of things to understand, keep in your head and configure. So we want to make this even easier. And so recently, we introduced build presets. We include what we think are the three most common and useful presets. We have es5-bundled for your older browsers, or for the universal build. We have es6-bundled for newer browsers where maybe your network or your server doesn't support HTTP/2 push. And we have es6-unbundled for that full PRPL, incremental-serving setup. And this is what happens to your polymer.json file when you use presets: you can go from specifying each one of these options individually to just specifying the preset. And again, the theme here is we want to make this as easy as possible.
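Concretely, the presets collapse the builds section down to something like this — a sketch that matches my reading of the documented preset names:

```json
{
  "builds": [
    {"preset": "es5-bundled"},
    {"preset": "es6-bundled"},
    {"preset": "es6-unbundled"}
  ]
}
```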
It shouldn't be difficult to build an incredibly fast app, even for emerging markets. OK, so that takes us through our whole development cycle, from getting started all the way through building for production. Or does it? You can't just build for production, right? You have to actually put your app into production. And so we're adding a new step to the tool chain now, which is targeted at deploying your app. I'm happy to announce that we have a new initiative that we're calling PRPL in a Box. And this is a series of PRPL reference servers. They're smart servers that can do differential serving — serve ES6 to the browsers that support it and ES5 to those that don't — and use HTTP/2 push depending on whether the browser supports it. We're building out configuration for these servers that's generated by the CLI from your dependency graph. And we have initial versions for Node, which works well on App Engine, and also for Firebase. The Node version is the first one we're releasing. It's in a preview state, but you can look at it now. It works for Node and on App Engine. You can find it in the Polymer organization on GitHub, as prpl-server-node. And you can also install it right now from npm: just npm install prpl-server. This server is CDN- and edge-cache-friendly. So even though it serves different resources to different browsers, they all exist at different URLs, so they can be cached aggressively at the edge cache. And it's also designed to work with HTTP/2 push proxies. This is what App Engine does: it takes an HTTP/1 server, and if you specify certain headers in your response, it'll automatically turn those into HTTP/2 pushes. All right, so that really brings us to covering the complete development cycle, all the way from getting started through editing and testing to production. And that's basically my talk. If there's any one thing I want you to take away from it, it's that the Polymer tools make it really easy to start, develop and build with web components and Polymer, so that you end up with a fast, PRPL-enabled, offline-capable progressive web app. And that does it for me. Thank you, everyone, for coming out. I'm going to be in the mobile web sandbox, which I think is right over there. If anybody wants to ask questions, please come by and say hi. I'd love to hear from you. And you can find me on Twitter at Justin Fagnani. Thanks a lot. Hello, and welcome to this week's episode of Hashtag Ask Firebase. You've got a lot of questions. We've got a lot of answers. Why don't we get started? Doug, start us off. This one is from Jordan on YouTube, who asks: how come when I make changes in Remote Config, they sometimes don't show up right away in my app? Oh, all right. That is a very good question, Jordan. So the answer, basically, is that the Remote Config library does a whole bunch of caching on the client — by default up to 12 hours — before it goes out and hits our servers to get new data. And this is generally a way to keep the service free. It kind of helps make sure that your clients don't accidentally DDoS our service. And it turns out I have an entire video all about it that you should check out right here. We'll just link to it here. All right. Great question. Thank you very much. Let's move on. Next question is from Casey Atwell42 on Twitter, who asks: hey, Ask Firebase, why in my app does it say UIViewController is not convertible to UserStatusViewController when I try to run it?
Oh, OK. You know what? I've actually seen this problem before. So the issue, Casey, is that on line 96 of your UserStatusViewController, you forgot to add a question mark. It's a common thing. It happens all the time. But that was easy, huh? I hope they're all this easy. All right, let's move on. Next question. This one is from Alex on YouTube — and come to think of it, a whole bunch of users on Quora and our Firebase Talk group and Stack Overflow — who all want to know: hey, hashtag Ask Firebase, how can I send a notification in response to some data changing in my Realtime Database? That's a great question. You used to have to set up your own server to do something like that. It was kind of hard. But now, with Cloud Functions for Firebase, you can set this up on Google Cloud servers. You write code, deploy it, trigger on a database write, and then you can send FCM messages to your users — as simple as that. There's a rough sketch of such a function below. And where would a user want to go if they wanted to find out more about Cloud Functions? They should go to the Firebase page at firebase.google.com/docs/functions. All right, should we put a link here? Here's a link — the link in the description below. Great question. Thank you very much, everybody who asked. Let's move on to another. Next question. This one is from Marie Waller, also on YouTube, who asks: hey, hashtag Ask Firebase, in Picasso's Girl Before a Mirror, what is the mirror supposed to represent? Oh, OK, great question. So I know some people think that the mirror, the reflection, is sort of supposed to be the girl's self-perception — you know, the way she perceives herself versus the way the world around her sees her. There's another camp of developers who thinks it might represent her blossoming femininity. But if you check out the Firebase FAQ section, we actually have a section all about this. You'll see that the answer is — hang on, I've got it right here — all right: mortality. It's mortality. Thank you very much for the question, Marie. Let's move on. This next question comes to us through our Firebase Google group, from a user who asks: hey, hashtag Ask Firebase, I want to record the names of the elder gods in Firebase. Ooh. What's the best way to do that? Doug, you're kind of an expert on this. Why don't you take this one? Yeah, so you might think that storing the names of the great old ones would be done with just a bunch of strings, so it would be a candidate for a typical NoSQL database. But it turns out, storing the symbols for the name of Eihort, the god of the labyrinth, on a normal hard drive will cause all physical storage within half a league to transform into carnivorous flies that will consume the flesh of our data center engineers until naught but bones and teeth remain. So don't do that. Yeah. Use Cloud Storage for Firebase instead. Yeah, that's really what it's made for. Yeah. Awesome question. Let's move on. Next question. This question is from Kelly S on Twitter, who asks: hey, hashtag AskFirebase, I have an app where I'm using Cloud Functions for Firebase to read in an image from Cloud Storage for Firebase and then asynchronously apply some machine learning to identify the content of those images. Now my question is: can you teach me how to Dougie? Teach me, teach me how to Dougie. Well, I'll be. This really seems like a better question for Doug. Doug? Next question. OK, this last question is from several of you, who have asked: hey, AskFirebase, whatever happened to David East?
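Back to that Cloud Functions answer for a second — a minimal sketch of the trigger-on-write, send-FCM pattern, in the pre-1.0 functions SDK style of the time. The database path and topic name are hypothetical:

```js
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

// Fires whenever /messages/{messageId} is written in the Realtime Database.
exports.notifyOnNewMessage = functions.database.ref('/messages/{messageId}')
    .onWrite(event => {
      const message = event.data.val();
      if (!message) return null;  // the write was a delete; nothing to announce
      // Send an FCM notification to everyone subscribed to the topic.
      return admin.messaging().sendToTopic('new-messages', {
        notification: {
          title: 'New message',
          body: message.text,
        },
      });
    });
```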
Well, David is on assignment right now, but he's assured us that he'll be re— What was that? I don't have much time. You have to help me. They've got me trapped in... Do whatever you have to to get us back. It's madness here. It's all just one repository. Right now. Shh, guys. Go. Sorry, we had some technical difficulties there. Anyway, David is off on assignment, but he's assured us that he's really interested in doing more AskFirebase episodes with you just as soon as time allows. And he's definitely not being held captive in some re-education facility until he learns to pronounce "jif" correctly. Don't you mean "gif"? That's all the time we have for today. I'm Todd Kerpelman, the only host there has ever been, for this episode of AskFirebase, asking you to keep sending in those questions with the AskFirebase hashtag. And we'll see you soon. The Firebase Notifications console lets you re-engage your users quickly and easily. With it, you can manage and send notifications to your users with no additional coding required. Messages can be addressed to single devices, Firebase Cloud Messaging topics, or devices that you select using powerful analytics tools. So, for example, you can send a message to all of your users who have made an in-app purchase, giving them a special offer and allowing you to re-engage with them. The Firebase Notifications console integrates with analytics, so you can measure the effectiveness of your messages and explore insights based on your users' activities. So you can grow your application by easily engaging your users through the Firebase Notifications console. Welcome to Coffee with a Googler. Today I'm going to be chatting with Shanaia King-Roberson, and we're going to be taking a look at a new course called Firebase in a Weekend. It comes from Google and Udacity working together. And if you're not familiar with Udacity, they're the experts in online training. We're going to have a whole lot of fun chatting about it, and I hope you have a whole lot of fun watching it. So Shanaia, welcome to Coffee with a Googler. Thank you for having me. I know you've been really busy with getting lots of courses out, new Udacity stuff. But you have a very special course to tell us about today. Could you enlighten us? I'm really excited about it. Yeah, we've been really busy. We're actually launching a new Firebase course. You know Firebase, right? Kind of. I've heard of it. Oh my gosh. Firebase is our mobile back-end service. It allows anyone to quickly create a mobile back-end for an Android or iOS app — and web as well. And web, can't forget web. Yeah. This course is going to focus initially on Android and iOS. And so we're really, really excited about it. Cool. Now, this course, it's called Firebase in a Weekend. Oh yeah, this is actually a little bit different from any of the other Udacity courses that we've created. There's a lot that goes into mobile development — a ton of different aspects. This course is called Firebase in a Weekend because we want new, intermediate, advanced developers — anyone — to be able to get up and running on Firebase as a mobile back-end in one weekend. In one weekend. Yeah, you want to take a look? Sure. Let's dive into the Firebase Realtime Database structure. All data is stored as JSON objects. You can think of the database as a cloud-hosted JSON tree. This means that your entire database is stored as a single JSON object. I was just joking earlier. Of course, I'm familiar with Firebase.
And it's a suite of technologies. Some of it's really to help you develop better apps, and some of it's to grow your apps and then to earn from them. And I know in this weekend, you're focusing on the development of the app, so you get that foundation. Right. And there's a bunch of technologies there. Which ones does the course actually cover? So it covers authentication, which is really exciting. It covers the Realtime Database, security — and what we're really excited about is, like you just said, getting all of the foundations set up. Because Firebase is, like you said, a suite of options. But what's really exciting is we actually teach you to get all those foundations set up, ready to go, so you can really just focus on the really fun things: the UI design, the engagement, the earning — all of those different things. Sweet. So it's a suite — sweet. Right. So typically with Udacity, you do this by building an app. I assume there's an app that's going to be built in this? Oh, yeah. Absolutely. So we're really excited, because we are actually teaching people to build one of our favorite demo apps, which is called Friendly Chat. Friendly Chat. Friendly Chat. It is a messaging app. And so by the end of it, you're able to see messages going back and forth. You're able to authenticate users. Multiple people are able to chat with each other. We also teach you a new library, called FirebaseUI Auth, which makes authentication UI really easy and really simple on both Android and iOS. So you have all the foundations up and running, and you're chatting with your friends, learning how to use Remote Config, Messaging, all of those different things right off the bat. OK. You mentioned Remote Config. And that's one of my favorites, right? Yeah. With Remote Config, it's very, very simple for you to just change your UI on the fly, because it's a server-side variable. Yeah. So instead of you needing to deploy a new code base, if you need to make changes or tweaks, you can do it by reading these variables out of the cloud. It's mind-blowing. It's so much better for developers. It's much faster and simpler. And so what we really try to do with the course is teach it end to end, hand-holding. There's a video instructor, so that people feel as though they're interacting with a real person teaching them how to build this app, rather than just reading the documentation. Right. And one of the things that I really like about how Udacity does it is that it's a very Socratic method. Right. It's not like your typical video training, where you sit back and you watch a screencast for two hours and then hopefully you got it all. How would you explain it? Yeah. So Udacity is actually really good for engagement. You watch short chunks of video — two to three minutes, maybe five tops — but then it's immediately followed by an assessment. What we want to do is make sure that in every node, you get a chunk of information. You really learn that piece; you can really embody that piece before you move forward. OK. And then you're able to move back and forth, rather than trying to scroll through a two-hour video to find that one sentence you're looking for. Everything is broken up into nodes, which is really helpful. Yeah. And personally, for me — because I don't have great retention — when it's short like that and then it's challenged with a little bit of a quiz, it forces me to think. Right.
And I might only spend half an hour learning in a particular day if I'm doing something like this, rather than three or four hours. But I retain so much more in that time, because it's much more intense, much more concentrated. And I still remember some of it. Right. And you're super hands-on with it, because you're actually building — you know, you're using GitHub, you're learning the different things, you're looking at the diffs. You're seeing all of these different changes, and they're small changes, so you really understand and see the progression from start to finish, rather than just trying a bunch of things out and wondering, does this really fit together with this? It takes you in a linear order, so you get a well-rounded view of what's happening. So the obvious next question is: where do we get started? How do I get this course? How do I learn it? And how much is it going to cost me? Absolutely. So developers can access the course at no cost to them. OK. They can just go to udacity.com/google and click on Firebase in a Weekend. So the entire course, no cost. No cost to them. Wow. So I can learn all of these things in a weekend. Absolutely. Wow. I've got to go try it out. And there's one other thing that they will learn, and that's how not to cook a burger. Yes. Veggie burger. A veggie burger. Yes, of course. And you'll have to watch the course to find out. Absolutely. So thank you so much, Shanaia. Of course, thanks for having me. This has been so much fun. I'm really looking forward to seeing the finished product. I've seen little bits of it, and I taught some of it. But I just can't wait to see the finished product. And developers, I think, are going to love it. We're really proud of it. Thank you so much, Laurence. Thank you. I've been playing games all my life. It's my passion. I also learned how to program computers. And then in 2001, we started the first video games company for mobile phones in Spain, called Microjocs. In 2013, my studio was acquired by a big company. Some of the guys and myself, we decided that we should do something fresh, something new. And we founded Omnidrone. Titan Brawl is a real-time strategy game. You could consider it a MOBA — a multiplayer online battle arena — but especially designed for mobile devices. The game is what it is today thanks to the Early Access program. We changed many things from the learnings, from the community. Since we launched the game on Early Access, we've got more than 2 million installs on Android devices. We started in the Early Access program back at the very beginning of it. The difference between the Early Access program and a traditional soft launch is that the users are actively giving the team feedback. So you don't only check the metrics you have — the players also provide possible solutions. So you end up making the game players want to play. Not having the public ratings, but having the constructive feedback, was very good. Early Access was a great opportunity for an indie developer, someone starting out, and very key for us, honestly. When we started with the Early Access program, we approached it in different stages. The idea was, at the beginning, to focus on the engagement of the game. Once we sorted that out, we focused on the retention of the game. And finally, we focused on monetization, to have a valid product for the market. With Early Access, we managed to improve our retention by 41%, engagement by 50% and monetization by 20%.
From the very beginning of the program till the worldwide launch of the game. I feel very happy working in the video games industry, because it has been my passion since I was a child. And it's really inspiring that, through Omnidrone, we have a real chance to shape the new era of video games. Hello. How is everyone? Hi. My name is Seth Thompson, and I am a product manager on the V8 team in Chrome. So V8 is an engine — it's the engine that runs JavaScript in Chrome. And our mission is quite simple: we want to speed up real-world performance for modern JavaScript, and we want to enable developers to build a faster future web. And there are two parts of this mission that are important. The first is that the JavaScript that V8 is optimized for is the JavaScript that you as developers are actually writing — JavaScript that includes new language features as they get introduced, new patterns of application development, new idioms. And the second is that as we as an engine participate in the TC39 standards committee, develop tools and give guidance, all of this goes towards a faster future web. So we'll talk about all parts of that in a little bit. But first, I wanted to start with some fundamentals of a JavaScript engine. Specifically, V8 is a just-in-time compiler, or a JIT. What this means is that when JavaScript is sent to the browser, the browser has to execute that code immediately, as it receives it. And in order to guarantee maximal performance, the engine wants to transform this JavaScript into machine code — native code. But because it's doing all of this as soon as you load the page, it needs to do it just in time, at runtime. And there are some fundamental trade-offs at play here. One of the things I'd like to do is shed some more light on what we mean when we say that an engine runs JavaScript fast, because there are a lot of different ways to run JavaScript. So the first fundamental trade-off is that, in general, the more optimization an engine performs, the faster the machine code it generates — so the faster that code can potentially run — but the longer the initial delay. Because remember, all of this compilation and optimization happens after you load the page, when the browser sees the JavaScript for the first time. So there's a trade-off there: the peak speed once the program is running versus the initial delay when it gets started, or startup. And the second trade-off is that, in general, in a JIT, the more optimizations an engine performs, the more memory the engine consumes. So any time someone says that their engine is five times faster or 5% faster, you should ask: faster in what dimension? How does that number translate into a position in this trade-off space? So let's examine this in a little more depth. Here are these constraints as I've laid them out. Generally, an engine can have fast startup or high peak performance — and specifically, an engine can make decisions about executing a particular function at this granularity. It can immediately run it, or it can make optimizations and run it faster, but pay the cost of making those optimizations up front. And the second is that an engine can have a low memory footprint. You could think of an interpreter, which has a very low memory footprint, but that comes at a cost in max speed as well. So memory and speed are also a trade-off to make.
So let's say that I wrote a web page, and all it did was run one function of JavaScript: foo. Now, we don't know exactly what foo is doing here, but I would be willing to bet that a JavaScript interpreter — which you might have a visceral sense of as something slow — can execute one function much faster than an optimizing JavaScript compiler, which takes the foo function, looks at it, turns it into native code, and then performs multiple optimization passes over that native code before it can even execute it. And all of this is happening when you load a page or start executing a JavaScript file. So to put that into context on our little chart here: if you knew that you were just executing one function, you would probably want to optimize for fast startup, not peak performance, because you're only running this function once. The time it takes to make the function fast for repeated runs is wasted if you only ever pay it off with a single execution. So what if this exact same function is run 10,000 times? Does the trade-off change at all? In other words, if we know that we have to make foo fast and it's going to be used 10,000 times, then in this case it's worth taking that initial startup delay to optimize our native code for foo, because we'll amortize the cost of startup over the next 10,000 executions. So in this case, for a code pattern like this, you want to optimize your compilation of the foo function for peak performance. And if this is a desktop browser, you can rest assured that there's enough memory to compute lots of these optimization passes. But what if this exact same code is run on a low-memory mobile device — let's say an Android device with low RAM? Well, if taking the memory to compute multiple optimization passes and generate a lot of machine code is going to be the difference between your device being under memory pressure and closing a bunch of background tabs, in that case you might want to actually sacrifice the peak performance of executing this JavaScript for a low memory footprint, so you can keep more tabs open in your browser. So I would argue that on a mobile device, although you would ideally like peak performance here, you have this other constraint, which is that you'd prefer low memory usage. And finally, what if that same code is in a file on a server, run by Node.js? Well, in this case, your server only starts up once, and then it keeps running on whatever machine is receiving requests from your users. So in this case, you don't really care about the startup cost at all. You'd like the engine to take as long as it needs to optimize this function, because once the Node app is up and running, you want each of those requests to be served as fast as possible. But again, if this is an IoT device — maybe you're running Node on something that's also memory-constrained — well, here you might have to sacrifice some of that peak performance for a low memory footprint. So the reason I go through all of these examples is to say that the same single function of JavaScript has many different optimal ways of being executed, depending on the context: the device it's running on, whether it's running on the server or the client side, how much memory there is.
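A toy illustration of the two extremes — the function body and call counts here are of course arbitrary:

```js
function foo() { /* some work */ }

// Run once: just interpret it. Any time spent optimizing is pure overhead,
// because the startup delay is the entire cost.
foo();

// Run hot: pay the up-front optimization cost once, then amortize it over
// thousands of fast native-code executions.
for (let i = 0; i < 10000; i++) {
  foo();
}
```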
So as an engine — as Chrome developers, as we're developing V8 — we want to put together an engine that spans that entire trade-off space and is able to use heuristics to know whether it should be tuned for fast startup, peak performance or low memory. So over the last year — and actually beyond that, for the past two to three years — V8 has been working on an entirely new execution pipeline. What this means is that the compilers V8 previously used have been completely replaced. So let me quickly walk you through the history, just to give you a sense of how much machinery there is behind a JavaScript engine, how many moving parts there are. In 2008, V8 started with a simple code generator, which generated semi-optimized machine code. In 2010, we added an optimizing compiler. Now remember, an optimizing compiler is the one that takes more time to start up because it's computing optimization passes, but then generates code that, when run multiple times, is very fast. In 2015, we realized that our first optimizing compiler wasn't extensible enough. It didn't support the full JavaScript language, and we knew we needed something for new patterns of JavaScript — things like asm.js, eventually WebAssembly. So we created a second optimizing compiler. Then we added an interpreter. I'll talk about all of these things in a second. And finally, we get to the present day, where we've removed that first code generator and Crankshaft, the first optimizing compiler, from V8 entirely. So today, we have two parts of our engine that are completely new. What are those parts? Well, the first part of the all-new V8 is TurboFan. TurboFan is an optimizing compiler. We've been working on it for over three years. As an optimizing compiler, it's designed to squeeze the most possible performance out of the machine code it generates. It's also designed to be extensible from the beginning. So we were able to implement all of ES2015 — the newest JavaScript features — in TurboFan, as well as follow-on features from ES2016 and ES2017. And TurboFan supports the entire language. So JavaScript primitives like try, catch and finally can be optimized for peak performance, where historically they weren't. What all of this means for you as a developer is that the new V8 has fewer performance cliffs. You're less likely to run a single function, have it be fast, make a change, and suddenly wonder why it's slow. And that's because the engine now supports a more diverse set of workloads. So just to recap: TurboFan is optimized for peak performance and multiple optimization passes, even if that takes memory. But we've also added Ignition, because we know we need to serve the use cases where the initial execution of the code has to be fast and the memory footprint low. So Ignition is an interpreter. And contrary to popular belief, an interpreter is not necessarily slow if it's used at the right time. Ignition compiles JavaScript to a bytecode, which it then runs. And we've noticed that it's particularly beneficial for loading heavy pages fast. And it's integrated with TurboFan to make adaptive optimization simpler. This means that if we start executing a function with Ignition, we can watch to see whether it's used often, and use a heuristic to say that we should probably send that function to TurboFan and optimize it for max performance. So Ignition is optimized for that other end of the spectrum.
And when you put these two things together, you end up with an all-new V8, which can target multiple places along the spectrum of low memory, high memory, fast startup and peak performance — depending on the workload, depending on the heuristics that fire as we execute your code, depending on the device your code runs on, depending on whether it's embedded in Node or embedded in Chrome. So this means that real-world JavaScript is faster. The engine can run with a much lower memory footprint. There are fewer performance cliffs. It's a more well-rounded engine. And it's better tuned for Node.js than our previous configuration. Finally, there's a third new part of V8, and that's what we're calling Orinoco. Orinoco is a new, mostly parallel and concurrent, compacting garbage collector. Our previous garbage collector was not always parallel, and Orinoco expands our ability to perform garbage collection across multiple threads, to make for shorter pauses when we're cleaning up the memory of an application. So all of these things come together into this new package. But I've mentioned many of the different dimensions on which you can compare the performance of JavaScript, so I want to talk a bit about how we benchmark JavaScript — how we tell whether we're getting faster or not at real code. V8 has started measuring the performance of real page loads. We have a system which can record user actions, so we can set up a benchmark that loads a page, scrolls through it, potentially watches a video or reads a news article. And we can then run benchmarks against all of these simulations. And we're happy that after optimizing for these real-world web pages, we saw a 20 to 35% improvement on the Speedometer benchmark, depending on the platform, over the course of the last year. What this means is that by optimizing real web pages, we were able to deliver improvements on a benchmark like Speedometer. But not all benchmarks are good. In fact, if we could choose, we would always just run against real web pages. The reason we use something like Speedometer is because it runs in multiple browsers, so you can compare between engines. So here's the performance over the last year on Speedometer. And one of the downsides of these traditional benchmarks — not real-world simulations, but the benchmarks that you would run in a browser tab to compare engines — is that they're not always emblematic of the types of JavaScript that you're writing. At the very beginning, we talked about the different ways that you could run code: on a server, on a low-memory device, and so on. And the Octane benchmark was tuned only to exercise the peak performance of a compiler. So we believe that chasing Octane — optimizing in particular for Octane — led engines down a path where they over-optimized for peak performance and under-optimized for things like low memory usage and fast startup. So this year, we announced that we've retired Octane, because we felt that it wasn't yielding the right decisions for engine optimizations. And I mentioned Speedometer earlier as something that better approximates real-world websites. The reason it approximates them better is because it includes applications — a TodoMVC application, to be specific — implementing the same to-do app across many frameworks. So Speedometer includes Angular, and it includes React.
And what we've done is work with WebKit, who originally implemented Speedometer, to add even more frameworks. So I'm excited to announce that Speedometer 2 has just been committed to the WebKit code base. And it expands the frameworks it tests. It now tests Angular 2 rather than Angular 1. It adds Preact, Vue.js, Inferno. It adds ES2015 code. It uses code built with bundlers like webpack. And it updates all of these frameworks to their latest versions. So while no benchmark is a perfect approximation of real-world code, we hope that Speedometer 2 will be a better way to compare engines across browsers. And those are the frameworks, tools and bundlers that are included. So Speedometer 2 is coming soon to WebKit, and you will be able to find it on browserbench.org. Now, one of the things I mentioned in that last section, when I was talking about important parts of a performance story, is ES2015. ES2015 and the newer features are the latest versions of JavaScript. ES2015 features are things like promises and the rest and spread operators. Array iteration becomes a lot easier with ES2015. And when ES2015 was initially implemented, there was a slowdown on ES2015 code. This is because engines take a long time to optimize particular code patterns to make them fast. So when ES2015 was first introduced, it was actually a lot slower than ES5 code. Well, over the last year, we've been using a tool called six-speed, which compares ES2015 code to the transpiled version — the ES5 equivalents of accomplishing the same action. So an arrow function is compared to an anonymous function. And we've been using this tool to optimize away the biggest performance differences between ES2015 features and their transpiled equivalents. So we worked on optimizing for-of. Now, using the for-of keyword is as fast as writing a simple JavaScript loop with var. We worked on improving Object.assign — Object.assign shows up everywhere, in React and Redux code especially. We worked on improving iteration and destructuring. And we also improved the performance of spread calls. And by doing this, we drastically decreased the slowdown of native ES2015 relative to ES5. You can see here that over the past roughly six months, we went from the average ES2015 code being almost three times slower than ES5 code to the present, where we've almost reached parity. What this means is that there are fewer and fewer reasons not to use ES2015 natively when you can — when your users' browsers support it, or in Node on the server. I also wanted to highlight a couple of language features in particular, which got special attention because they're so useful but had a lot of performance left on the table. Generators are now two and a half times faster. And async/await, which is a very useful idiom for turning promise-based, then-style code into something that looks a little more synchronous, is four and a half times faster than it was six months ago. That is a big deal. Now, underlying all of this is our promise implementation. And for a while, native promises were actually slower than promises that came from a library like Bluebird. Well, I'm also happy to announce that over the past year, we've improved promise speed by four times. So native promises are now something you can include in real-world code without worrying about their performance impact.
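For reference, here are two of those features in their native forms — the surrounding names (items, handle, the URL) are just illustrative:

```js
// for-of: now about as fast in V8 as the classic indexed loop with var.
for (const item of items) {
  handle(item);
}

// async/await: then-style promise code rewritten to read synchronously.
// This idiom is what got the ~4.5x speedup, on top of 4x-faster native promises.
async function loadUser(url) {
  const response = await fetch(url);
  return response.json();
}
```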
So I've talked a lot about language features and the different places you can run JavaScript. And one of those environments is Node.js. Node.js is obviously a server-side environment, and Node embeds the V8 engine. So over the past year, we've actually invested a lot more in the Node community than we ever had previously. V8 is now represented on the Node core technical committee, and you can find us on GitHub, on the Node repository, working through issues that come up under the V8 engine label. And these are things like regressions — somebody notices that their Node code slowed down for some reason. We're working with the Node team on releases, making sure that as soon as a new version of V8 is available, it can get upstreamed into Node.js as fast as possible and tested for release. And we've also worked on performance optimizations specifically for Node. So in addition to exposing JavaScript features, Node also has a rather large standard library, and some of the APIs in that standard library — things like buffers — have needed specific performance optimizations. So we worked on a faster instanceof, we fixed a buffer.length regression, and we've worked on supporting long argument lists in Node. And in general, we've made sure that let and const are as fast as their var equivalents. We also know that certain libraries are used throughout the Node ecosystem, such as the through library, which is used for creating stream primitives. We spent time optimizing it to make sure that streams in Node are fast. And all of this is summarized by what we call the AcmeAir benchmark, which is a benchmark that starts up a Node server and tests sending thousands of requests to it. It involves a database; it's a big app. That benchmark showed a 10% improvement when we launched TurboFan and Ignition. So it goes to show that the improvements we made towards making our engine more well-rounded did in fact yield faster Node performance as well. We're really excited about that. And V8 is part of Chrome. When you're debugging JavaScript in Chrome, you can use DevTools. So over the past year and a half or two, we've been working on making DevTools support Node.js. And I'm happy to announce that it's now easier than ever. I'm actually going to show a little demo of where we are right now. So, you know, traditionally when you're writing a Node application, it's difficult to debug things. It's not quite as simple as it is client-side, where you can pop open the inspector and navigate around your web page. For this demo, I'd like to go through a new command-line interface called emoj. It's a really cool open-source program — I just found it. When you run it and type something, any sentence — say, hello — it comes back with a bunch of emoji that correspond to the sentence. And for this demo, I want to figure out exactly how it's implemented. So to debug any Node application with DevTools, all you have to do is pass a flag: node --inspect. What this does is open up a debug port that can communicate with the DevTools instance, and you can debug in DevTools. Now, previously you had to paste a relatively long URL into Chrome for this to work. But now, if you go to chrome://inspect, you can see that Chrome will automatically detect any running Node instances on your computer. Yes. So let's go ahead right there and click Inspect.
We get a dedicated window that opens up, and here we can see our CLI. If I look into my sources, I see the file that I just executed, and it's right here. What's really powerful about this is that I can come in and use any of the debugging features that have been introduced over the last six or so months. One of those that's really powerful is inline breakpoints. So here what I'm going to do is set a breakpoint on this line of code, and this is the code that fetches those emoji from the server it uses — I think it's doing some machine learning. And what I really want to know is what the server is returning, and I think it's returning an array right around here, in the middle of the line. If I just broke on this line the normal way, it would break at the beginning of the line, and if I advanced, the fetch would have already completed. What I really want to do is break right here. So with inline breakpoints, which are a really cool feature of DevTools, I can do just that. And now I can use this feature to debug Node code too. So with that breakpoint in place, let me try this again. I'll run hello, and I get something back — I get paused on a breakpoint — and I'll just show a couple of quick new DevTools features. One of them is that this call stack supports asynchronous code execution. You can see here that it actually traces execution through a variety of async functions. It can trace promises being resolved. But I want to come down to the scope here, and I can see that this array variable — the array that the server returns — is actually 10 emoji long. And so now I can see: OK, what this code is doing is slicing that array and only returning the top seven results. So relatively easily, just by passing the --inspect flag and then opening up Chrome and connecting to Node, I can jump into the execution of a Node program and use all of the DevTools features to debug it. I was interested in the debugger today, but the JavaScript CPU profiler is available for Node, the memory profiler is available, and the console is available. Let's see if I can see process.versions.v8. Yes, this is Node; it's running this version of V8. So all of this is immensely useful, and it's just one of the new features we have for debugging Node with DevTools. I won't demo it now, but I'll briefly say that we also have a new dedicated window for Node. So you can actually close Chrome, open this Node debugger, add in the port, and it will always stay connected to whatever Node instance you're running, even if you're running multiple Node scripts at the same time. So yes, that's exciting stuff. Okay, let's go back to the slides briefly. In addition to the inline breakpoints I just talked about and this integration of Node and DevTools, the V8 team has worked with the DevTools team to support a number of really useful features for writing JavaScript applications. One of the themes of Google I.O. this year on the web track has been optimizing the performance of progressive web apps and JavaScript by simply shipping less code. If you're using a bundler like browserify or webpack and you're requiring many modules from npm, it's very easy to end up in a situation where your app bundles, ships across the network, parses and starts up way more JavaScript than you actually need — simply because the bundler included an entire library, let's say, even if you only used one function from it.
So I'd like to do another demo here and show how DevTools can help you find this situation and fix it, if you're working on an app with lots of dependencies. I'll briefly show you that I've got an application here, and I'm going to serve it with a little server that watches my code, sends it to browserify and recompiles it — fairly basic stuff. And I can show you that I'm requiring lodash as a library, and a bunch of other dependencies. So here's the app. It's a GitHub repo name formatter. If I type in something like hello io 2017, it'll give me a little slug that I can use on GitHub. It's just kind of useful. Let's say I wanted to look at the performance of this. Now, browserify turned my JavaScript into a single file, a bundle, and it's actually quite large — about 6,000 lines. I could go and manually figure out exactly which part of those 6,000 lines I actually use. But instead, I'm going to use a feature called DevTools Coverage. You might be familiar with coverage from test coverage: you run a coverage tool to figure out whether you've tested all the parts of your code. Well, this type of coverage tool is a little bit different. Instead, it measures, of the JavaScript the browser saw, how much was actually executed — and how much was just dead weight, or dependencies that weren't used. So you can press Escape to bring up the drawer in DevTools, and we go under the Coverage tab right here. This is new, so you'll have to do this in Chrome Canary. And if it's not down here, it'll be under More Tools, then Coverage. This panel allows us to load our page and check which JavaScript is actually executed. So what we have to do is hit record, and then refresh. We can type in something, and now — if we isolate just the bundle file — you can see that we've only used about a third of the JavaScript we're shipping. What's more, this tool allows you to actually look at the source and see which functions were called and which weren't. So although there's some green here, which is code that we ran, there's also a lot of red. And I actually know exactly what it is. All of this red code is functions from the lodash library that I didn't use, because I'm really only using one function. This is the function that's actually turning the string hello Google I.O. into what they call kebab case. So I'm only using one lodash function here. And I happen to know that, rather than loading the entire lodash library, I can load just kebabCase — I'll show a sketch of that trim after the demo. In other words, I can cut my dependencies down to exactly what I need. And in this case, when I save, my process will recompile my bundle. If we record and reload — actually, I think I need to clear this first, okay? — we can see that now our bundle is only 1,000 lines rather than 6,000. And the percentage of code that's loaded but not actually executed is now much, much smaller. So this was a trivial toy example, but you can imagine running this on a very big code base, where it's not easy to know which dependencies are used and which aren't, or which parts of dependencies are used. And this should help you make sure that anything you're shipping to a client for a particular page or particular route is just what's needed to execute the functionality that you need. So that is code coverage in DevTools. Let's go ahead and go back to the slides.
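Here's that dependency trim in code form — a minimal sketch; the input string is just an example, and the single-function package relies on lodash publishing per-method modules on npm:

```js
// Before: browserify pulls the entire lodash library into the bundle,
// even though only one helper is ever called.
const _ = require('lodash');
console.log(_.kebabCase('Hello Google IO 2017')); // "hello-google-io-2017"

// After: depend on just the one function, and the bundle shrinks dramatically.
const kebabCase = require('lodash.kebabcase');
console.log(kebabCase('Hello Google IO 2017')); // same result, far less code
```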
If you are familiar with the performance panel in DevTools, the performance panel is another way to instrument and investigate the performance of your app. And you can go to the DevTools documentation and find a number of really good tutorials for using the DevTools performance panel. A couple of other things we've added include line-by-line profiling. I think that was the first of these features. In addition to seeing the coverage of which code was executed, you can record a performance profile, look back at sources, and in the gutter see the time that each function took to execute. You can use that to identify a particular bottleneck in your source. We've got code coverage, which was just launched, and async debugging. I showed a little bit of this in the Node demo. Async debugging has gotten a lot simpler more recently. So V8 and DevTools are constantly working to make sure that debugging, and making fast applications in the first place, is easier and easier. There's one last thing that I wanted to touch on. And this was mainly a talk about JavaScript. But WebAssembly is a really exciting new technology that V8 has recently added support for. So WebAssembly, if you're not familiar with it, is a new language for the web. And specifically, it's a low-level language designed to execute at near-native speed. So WebAssembly is ideally suited for a library that you might otherwise write in C if you were in a different environment than the web. So for the first time, you can use WebAssembly to compile C and C++ programs and run desktop-class applications, like a graphics-intensive game or a video editor, that previously would have been constrained by the performance of a dynamic language like JavaScript. So we're excited to announce that WebAssembly is supported in Chrome. And I think one of the most amazing parts about WebAssembly is that it's not just a Chrome technology. In fact, when I see this slide, I think the thing I'm most excited about are the other browsers here. So WebAssembly has also launched in Firefox, and it's currently in preview in technical builds of Edge and WebKit. So WebAssembly is poised to become a cross-browser solution for running native code. And it's the first new language that we've introduced on the web that has this sort of cross-browser support. It doesn't require any plugins, and it uses the regular web platform APIs that you're all familiar with. And we launched it in Chrome 57. And as I mentioned, you can compile C and C++ to WebAssembly with Emscripten. (A minimal loading sketch follows at the end of this passage.) Already, we're beginning to see some very incredible demos. This is a demo of the Unreal Engine running in a web browser. And we also have on the WebAssembly website a Unity game as well as a bunch of community projects that have already started being created. Just the other day, I saw a video editor running with real-time video effects, and it was running at a very smooth 30 frames per second. So if you're interested to learn more about WebAssembly, I encourage you to check out the I.O. recording of a talk that I believe happened yesterday by Alex Danilo. And coming soon for WebAssembly, we will have more startup performance optimizations and some more features that enable things like multithreading and more advanced native code features. So to summarize, or to pull this all together, if there's one thing to take away from this talk, it's that the V8 engine, and really JavaScript engines in general, need to be well-rounded engines. They need to be able to run lots of different types of code fast in lots of different environments.
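As promised, a minimal loading sketch using the standard WebAssembly JavaScript API; the lib.wasm file and its add export are assumptions for illustration.

```js
// Fetch a compiled module (for example, one produced by Emscripten)
// and call into it. lib.wasm and its exported add() are hypothetical.
fetch('lib.wasm')
  .then(response => response.arrayBuffer())
  .then(bytes => WebAssembly.instantiate(bytes))
  .then(({ instance }) => {
    // Exported WebAssembly functions are callable like ordinary JS functions.
    console.log(instance.exports.add(2, 3)); // 5
  });
```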
The constraints of different environments might change the types of code that need to run. So V8 today, with the Ignition interpreter and the TurboFan optimizing compiler, is well equipped to run code at both ends of both of these spectrums. And I think with WebAssembly, this diagram can be expanded a bit. WebAssembly works alongside JavaScript and will allow developers to push that peak performance angle even farther. So WebAssembly just expands the possibilities for the kinds of code an engine can run in the browser. So that's my talk for today. If you are interested in some of the things you heard today and care about the types of optimizations we're doing, or even the language features that JavaScript offers, I encourage you to go take this survey. We put it up at bit.ly/v8-lang-survey. And this gives you a bunch of upcoming features or proposals and asks you to rate how exciting they are to you. And this will help us decide what we come on stage and talk about next year. So that's all for today. Thank you very much. And my name is Seth Thompson. Hey there, we're in the AR and VR tent right now. And I'm going to play with some of the coolest, latest stuff from Tango and Daydream. Want to come along? OK. See, that's the Earth. I don't know much about this. But my friend, Aetan, over there, maybe you can come explain it. All right, so here we are at the Tango booth at Google I.O. And we're going to show off a product that we announced today called Expeditions AR. And this is all about helping kids in schools learn in a different way, in a more immersive and intuitive way. So we're actually going to bring things to them, in front of them, compositing virtual objects into the real world that enable them to just see the sun and the earth and the moon and some other things too. So Brandon over here is going to help us out with that. That's awesome. Let's check it out. Yeah, so what we're looking at here is we can actually see the earth that's been placed in our actual physical space. So we map the room, we understand where we're at, and we can actually allow students to all experience the earth together and look around and find the five different hurricanes that are around the earth. We can do things like switch to the moon. And you can see actually the approximate location here where Neil Armstrong first landed on the moon. Pretty awesome. Pretty awesome. We can also see the sun. And can we actually go around this side? So we can see, if we go around this side, the different layers inside the sun and how hot they are. That is crazy. Here we can see the inner planets. And students can actually go and explore and see earth and its relative size to Mars or Venus. We can also see other planets like Jupiter or Saturn. And then my favorite is actually looking at the asteroid belt, because it seems like you're flying through the asteroids. It's pretty fun. And then finally, on our exploration of the solar system, you can see dwarf planets, such as Haumea or Pluto. And Haumea is really interesting because it's this kind of smashed-looking planet, something that would be really hard to get a sense of if you weren't able to see it placed in your physical space and compared to the other planets. So that's Expeditions AR. Thank you so much. That is really cool. I'm just going to play with this for a little while. You guys don't mind, do you? Let's go back to the asteroid belt, because that one's the best one.
So you can kind of feel like you're flying through space. Take a look. Oh. There it goes. Whoa. Well, and what's really cool here is that we're all playing in this same space together. Yeah, this is great. Actually, there was an attendee just earlier that was asking, why would we want to bring this into the classroom and not just have them work on their laptops? But what's nice is, instead of being heads down and exploring this on your own, or passively learning, you can actually go and move around the asteroid belt. You can move around Jupiter or Saturn. And you can do that not just by yourself, but with the teacher or with the other students. And that's really the power of Tango, being able to have this shared space and use it as this educational tool. Awesome. Thank you so much. Thank you. All right, let's go check out some more cool stuff. Here, we're showing off a partnership with Gap, where they want to allow people to see what clothes might look like on mannequins, but in their home. So you can bring kind of their whole product catalog to your home. And so what we've got here is a bunch of options. Maybe I'm looking for a dress. So I drop it into the world, select a size. I'm a size five. Oh, I'm sorry. I'm sorry, Timothy. So we've got a size six mannequin here. And you can change the size of the dress. So you can see how an extra small would look, a small would look, or a medium. But sometimes I want to see what a medium would look like next to an extra small. So I can press the plus icon. And now I have two mannequins in space. And you can see one of them has a medium, which is a little bit looser, more drapey. And the other one has an extra small. And you can get right up close to them. You can see the texture of the fabric. And they actually do a very realistic physics simulation, which is pretty cool, to get the drapiness correct. So this is just one way to bring shopping into your home. That's cool. Can I? I think what's really neat is you could walk around it like you do in a store, which feels a lot different than just pressing buttons on a web page. Yeah, it gives you a much better sense of scale and how the garment is actually going to look on someone who's roughly your body size. So here we have a storytelling experience where we're bringing some characters from the Wizard of Oz into the space. And we're doing some special things. We're really working on showing off how well we can composite them into the space and light them relative to the space. You'll see kind of what I mean in a minute, but you can take pictures with the lion or the scarecrow or the tin man. And yeah, it's a really interesting and social AR experience. Awesome. Well, maybe we can show everybody what you see. Well, I just kind of stand over there. Sure. OK. Here, let me hand you the microphone. Awesome. All right. So here we can have you take a picture with a character. So here you're kind of behind the lion. You can stand next to him. We can see that the lion is here. And so now I will take a picture. Ready? Smile. Yeah, do a flex. Nice. That's a good picture. Cool. So we've got a few pictures of you with the friendly lion. So this is kind of what the intro experience looks like. So if you want, you can even hold it. And you can kind of look around. And you'll just get taken through each character. So the mode we used earlier was just so you could take selfies. But this actually shows the characters doing something. You can see him writing on the actual wall there.
So that's the kind of registration that we have to the environment in this scene, which is really great. Poor Toto. Yeah. Tornadoes are unfortunate. That's awesome. All right, well, I think we have one more Tango experience to check out. How are you? I'm good. How are you, Vivek? I'm good too. Thanks. So here I'm going to demonstrate the Constructor app. It's built on the hardware and software stack provided by Tango. So this is one of our new phones, being released by ASUS. It's coming around this summer. So as you know, the Tango phones have a depth sensor for sensing the 3D environment. It has a fisheye camera, which is used for tracking. And with those in combination, we are able to accurately get the 3D position of the phone as you move around. And so let me demonstrate one of the apps that we've created for this purpose. So what we're seeing right now is the raw output of the depth camera. And as you move around, you can sort of make out the objects. But what it's basically doing is that each pixel is not a color value, but rather how far that particular object that you're seeing is from the camera. And we have this recording mode where, once we start the recording, what it does is it takes all of these depth images and combines them, sort of like stitching them together in 3D space, to create a mesh. And you can sort of walk around, painting the surfaces. That's really cool. It was very easy. Yeah, it's as easy as using a regular camera. And once you stop the recording, it'll do a little bit of processing. And then what you have is the 3D mesh of the environment that you just scanned. So you can imagine if you spend a little bit more time, let's say 10 minutes or so, comprehensively scanning the environment, then you can get much better models. And I'll show you an example of some of the models that we have created before. So here's a two-bedroom apartment that was scanned in about 15 to 20 minutes. Just walk around using the same process that we just did for a few seconds. And voila. That is really cool. Yeah, that's pretty exciting stuff. So let's look at another model that we scanned. We can also use this for scanning outdoor environments. So now remember that the camera, the depth camera, works using IR light, infrared light. And since there is a lot of ambient infrared light in bright sunlight, that would interfere with it. But if it's a cloudy day, then you can reasonably scan outdoor environments. And here is somebody who scanned the outside of their house. And you can see they did a really good job in scanning it. And the result is also quite impressive. Since this is also a Daydream-ready phone, we have this Daydream button here. So once you click on that, you can pair it and put it in one of these Daydream headsets and look at these things in VR. That's, in a nutshell, what our app can do. That's perfect. Thank you so much. You're welcome. OK, so we're going to go check out some Daydream here. We're here in the AR and VR tent still. And now we're going to look at Daydream. I'm here with Brian. Brian, how are you doing? I'm doing excellent. So you're going to help me experience some Daydream. Yeah, that's absolutely right. So Daydream, it's our mobile virtual reality platform. What we're going to be looking at is Virtual Virtual Reality. It's the experience that Clay talked about during the keynote where you go further and further into VR from inside VR, just like Inception. That's awesome. I've got to try this.
It's pretty rad. OK, so all I need to do is put on the headset, right? Yep. I'll get you set up here. And in just a moment, you'll be going into VR inside of VR. OK, now while you're setting that up, Brian's actually going to grab his phone, outside the kind of headset experience, to give you an idea of what it feels like, or what I'm experiencing with the headset. Sound good? All right. And here's your Daydream controller. While I appreciate your selection, I must insist you choose one of the three test objects. Now, pick it up and stick it to me, quickly. That was really fun. How'd you like it? A little strange. The content of it, it was strange. If you butter toast in VR, you butter toast in real life. Is that how it works? And off comes the headset. I'm sorry, what were you saying? I was just saying, if you butter toast in real life, you butter toast in VR, it's all confusing. I don't know what it means. It's crazy. Well, and the headsets just kept coming and you went deeper and deeper into VR? Exactly. The actual experience is about two to three hours long. And you just continue going further and further into virtual reality. There's a couple of different things that are going on here where, at the very beginning, you choose between a potted plant, a fish, and a ball. And no matter what choice you make, you're always wrong. So it's part of that illusion of choice that virtual reality can give you, and a lot of the things that the game designers chose to do to give you that illusion of choice. That's awesome. That was a lot of fun, Brian. Thank you so much. Yeah, thanks so much. My name is Ewa Gasperowicz, and I'm a developer programs engineer here at Google. And this is Jeff Posnick. Hi, everybody. I'm also on the developer relations team at Google. Jeff is one of the creators and main contributors to the Workbox library. Today, we are going to share with you our story about transforming the womentechmakers.com website into a progressive web app. But before I get to it, let me tell you what brought the two of us together on that project. Some time ago, I was talking with a friend of mine, Megan, at work. And I knew she was involved with the Women Techmakers program, so I asked her, how was it going? And she told me with a lot of enthusiasm in her voice, Ewa, it's going great. The program is growing, and it's amazing because it's so community-based. There is so much going on, but actually, sometimes it's also challenging. It's hard to keep everyone in the loop. Given that everyone is going mobile these days, we're thinking whether we should have an app for it. And it made me think, if an app was the way to go, how about a progressive web app? It also made me personally curious. I knew quite well by then how to write a progressive web app from scratch. But transforming an existing live site is an entirely different story. And I know some of you are bothered about it as well, because you approached me during this conference already and asked about it. So I'm very happy that I'll be able today to share this story with you. Luckily for me, this is also where Jeff comes into the picture. Around that time, he was working on the Workbox library. When he learned about my migration plans, he approached me and he said, listen, Ewa, me and my team are working on a set of tools that would make your migration a breeze. Would you like to use them? And of course, I said yes. And that's how me and Women Techmakers became guinea pigs for the Workbox project. So that's what we settled on.
We settled that Women Techmakers would get a progressive web app, that Jeff would provide us with tools, and that I would get my discovery process about how it feels to migrate a website, which I can share with all of you today. And this is exactly what we're going to do today. Our intention is to give you some insight and some tooling so that you can attempt migrations in your own projects in the future with confidence. OK, let's start. First, I would like us to look into the decision process behind how we decided to go for a progressive web app. But in order for you to understand our decisions, you need to know more about Women Techmakers. Women Techmakers is a global Google program for women in technology. It is actually a great pleasure to talk about it here at I.O., because this is where all of this started. Six years ago, here at I.O., the first Women Techmakers event took place. It was an event to bring support and community to women who attended the conference. And I'm very happy to say that this support yielded results and our numbers grew. In the old times, around 2013, only 8% of the people attending the conference were women. And this year, we aimed to have 25% of us attending. And I'm happy to say that we made it. Since then, Women Techmakers has grown from a humble once-per-year event to now hosting hundreds of events for over 70,000 women worldwide. There is the Women Techmakers Scholars Program for college students, a membership to help women get the career support they need, and many more initiatives. It is now part of the global movement for women in technology. It's not focused on Silicon Valley only, or even the US only. It supports women in 160 countries. That's a lot of diverse environments. Because of that, Women Techmakers had to get creative about how to support such growth at scale. When Women Techmakers was entering new markets, it also started targeting new audiences. And these new audiences often meant more users on mobile devices and on less reliable internet connections. The program leaders wanted to make sure they have the infrastructure that supports this diverse environment. And it seemed that a progressive web app might be just what they need. A progressive web app is a web application that uses modern web capabilities to deliver an app-like experience to users. In particular, it should be reliable, which means it should load instantly even in uncertain network conditions. It should be fast, which means it should react quickly to users' actions on all kinds of devices. And it should be engaging, which means it should feel like a natural app on the device, providing an immersive user experience. You can see how those features corresponded with the Women Techmakers goals. By making the site reliable and fast, we would improve the experience for users on mobile and on slow internet connections. And by making the app always just one click away for users on their mobile phones, it would help to keep users engaged and involved even more in the community. It would allow us to improve user experience and stay frugal at the same time. We could keep the current infrastructure, have a single code base, and support only one platform going forward. So that was the decision. We would migrate the site to a progressive web app. So how did we do it? How did we approach our migration? Well, this is the process we roughly followed. First of all, we needed to understand, deep in detail, the current state of our site. In order to do that, we used Lighthouse. How many of you know what Lighthouse is or use it?
Yeah, you've been to the mobile web tent, hopefully. OK, so Lighthouse is a Chrome extension that all of you can install in your browser, and it allows you to measure how close your web app is to being a progressive web app. When you run your website through Lighthouse, it gives you back a report and a score that summarizes the state of the app for you. When we started the process, our score was about 45. And our goal was to get as close to 100 as possible without changing how the site looked or worked in general. That's why we called it a migration. We didn't want to alter the site too much. We just wanted to make it smoother in different environments. If you're curious about your own site's scores, you can try out Lighthouse in the web tent. And if it goes green, it means you have an amazing, amazing website. So here's how our Lighthouse report looked. You see the score there. But apart from that, you also see a list of features or traits that the website should have in order to be considered progressive. The green ones mean we're doing well on those. And the red ones mean there is room for improvement. And one cool thing about this list is that each of those is actually an expandable section. So you can click on it and see more information about the particular feature, and also find links that lead you to resources that tell you how to actually implement that feature or get better at it. And this makes it a super cool tool for people that are less experienced with progressive web apps. Because as a matter of fact, you don't even need to know what features you should implement in order to get progressive. Lighthouse will simply tell you. As you can see, our Lighthouse score was 45. And we were presented with a list of options. So what do we do next? Well, we needed to prioritize. Because we didn't have all the resources in the world and we didn't have all the time in the world, we needed to make sure we focused on the most important things first. And this is why I put a Women Techmakers logo there. Because that's the moment where you take the technology you have, like a Lighthouse report, and bring it back to the organization or to business people. Because those are the folks that know best what's good for the users. An important thing to remember about a progressive web app is that it's not a monolithic technology that you just drop on your site. It's actually a set of modular solutions that you can usually implement independently of each other. So you can take that list and just pick and choose and decide what would make the biggest difference first, and then iterate on it. So you don't need to be scared that you need to do everything in one go. You can start with smaller changes and take it from there. In our case, we decided to focus on two areas: on offline, and on the app-like experience. Offline obviously would improve the performance on slow internet connections. And that would improve how the app works not only for users in countries where connectivity is scarce, but also in environments like big conferences. You've probably encountered this problem, where you can't really access a website at a conference. And Women Techmakers is a very event-driven initiative. So it was important to add that. Secondly, we decided to add the app-like experience, which means you can now install the app to the home screen of your mobile device. We hoped that this would drive engagement with users over time, and that it would make it easier to keep everyone in the loop.
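An aside on tooling: besides the Chrome extension described above, Lighthouse also ships as a command-line tool on npm, which is handy if you want to script your audits rather than click through the extension. Typical usage (the URL is a placeholder):

```
npm install -g lighthouse
lighthouse https://www.example.com --view   # runs the audits, then opens the HTML report
```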
Once we had our priorities, we needed to get to know our tools. In just a moment, Jeff will take you through all the ins and outs of the Workbox library, and he will show how easy our migration became with this library. Once Jeff is done, I will tell you a little bit about how we actually implemented the whole process. I'm going to share some challenges we tackled during the implementation phase and share some lessons learned. So I hope you will find that part really interesting. Finally, at the end of the process, it is important to measure what you achieved. Of course, first of all, we wanted to know if our Lighthouse score was 100. This is a measurement you do for yourself, to know how well you did on the migration, but it's also important for your stakeholders, because it gives a nice, concise, and numerical way of presenting the progress you made on your app. While this number can bring a lot of satisfaction, especially if it turns green and lights up the lighthouse out there, the second type of measurement I put out there, Google Analytics, is even more important to me, because I think it's very important to check afterwards how your users reacted to the changes you introduced. After all, it's the users that are the ultimate judge of your changes. And this also gives you a nice base to start this process again. Once more, a progressive web app can be developed gradually and iteratively over time. So you can just start the whole process again, follow this path, and iterate as many times as needed to achieve the optimal user experience. Now, you know more about the business case we wanted to solve and about the process we followed. So how about we get a little bit more technical? Jeff, are you ready? Thanks, Ewa. So before we detail the specific tools that we used, I wanted to provide some background on a web platform technology called Service Workers. A Service Worker is code that you write in JavaScript, and it acts kind of like a proxy server sitting in between your web app and the network. It could intercept network requests, and it could return a response from some cache that it chooses, or from the network, or it could implement custom logic that mixes and matches where it gets the response from. Now, with the right Service Worker in place, we could create a web app that loads almost instantly and works even when you're offline. But if you're not familiar with Service Workers, it could be kind of hard to picture exactly what's going on. So an analogy might help. You can imagine that a Service Worker is like an air traffic controller. And think about your web app's requests and responses as airplanes that are taking off and landing. So let's see how that analogy plays out when your web app makes an HTTP request. So all right, we have this airplane representing a request that's taken off. We're making a GET request for this image. And now the Service Worker is in control. It gets to decide the route while your request is in flight. And for a URL that we've never seen before, it ends up just going out to the network. It'll receive a response just like it normally does. That response would go back to your page. But the really cool thing is that the Service Worker could decide, hey, I actually want to save a copy of that response for use later in a cache. And that's great. So our web app gets a response. And we have a cached copy that's good to go. So the next time a request takes off, same URL that we saw before, the Service Worker's like, hey, I know this URL.
I could go straight to the cache. I could give the page a response from that cache, bypass the network completely, and everything looks the same from the perspective of the page. So that's really cool. And that's really what the Service Worker is doing in a nutshell. It's routing where your request will go and applying some intelligent logic there. But many people would be reluctant to write code for an air traffic control system from scratch. And I don't blame you. While the Service Worker API isn't quite as complex as that, there are some subtle issues that you could easily run into. And problematic Service Worker code could lead to things like delayed updates or missing content in your web app, just like buggy air traffic control code could lead to flight delays or even worse. So the Women Techmakers Service Worker implementation uses a brand new set of tools that we're happy to announce today. And it's called Workbox. And you can find out all about it at workboxjs.org. Happy to see people taking pictures. I'll give you a chance for that. So I'd like to walk you all through some of the common Service Worker use cases and show how Workbox helps you avoid subtle problems that you might bump into if you build everything from scratch. So first up, most Service Worker implementations start by adding all the critical assets that they need to a cache, making sure that they'll be available for later reuse. And this is referred to as pre-caching. So here's some basic Service Worker code that waits for a given version of a Service Worker to be installed, and then pre-caches a list of URLs. And this is the sort of thing you might just find as some sample on the internet and copy and paste and maybe even deploy right away. But there are some pitfalls that become apparent as you release newer versions of your web app or add in additional assets. So one thing is you need to remember to manually bump that version variable each time you change anything, or else new assets might not get pre-cached. You also need to keep updating that array of URLs to reflect your assets' current file names. And that's particularly tricky when your URLs contain versioned assets, like you see there with the JavaScript and the CSS files. And if your site could be accessed via slash or via index.html, you need to have cache entries for both. Forgetting to do any of that could lead to a Service Worker that continues serving stale content or just doesn't have a fully populated cache. So instead of going it alone, you could sidestep those pitfalls by using Workbox. It integrates into your existing build process, whether you're using webpack, npm scripts, Gulp, or, in the Women Techmakers case, Grunt. So let's take a quick look at how Workbox implements pre-caching. There's a single method, precache, that takes in a list of files and all of their revision information. We call this information the pre-cache manifest. You can see our source Service Worker file here, which just has a pretty much empty pre-cache manifest, not doing very much. The nice thing is, though, in this source file, we don't have to hard-code a really hard-to-maintain list of URLs or anything like that. We could keep it empty. And the goal of the build process is to figure out what should go into that manifest. So our final Service Worker file will have that empty manifest replaced with a list of URLs, along with versioning information about each URL.
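The slide code isn't in the transcript, so here is a hedged reconstruction of the hand-rolled pattern with the pitfalls he lists; file names and URLs are placeholders.

```js
// Hand-rolled pre-caching: CACHE_VERSION must be bumped by hand on every
// release, and the URL list (hashed file names, plus both '/' and
// '/index.html') has to be maintained manually.
const CACHE_VERSION = 'v1';
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_VERSION).then(cache =>
      cache.addAll(['/', '/index.html', '/app.abc123.js', '/styles.def456.css'])
    )
  );
});
```

And a Workbox-style source Service Worker, sketched against the 1.x API announced in this talk, where the empty array is the manifest placeholder that the build step fills in:

```js
importScripts('workbox-sw.prod.v1.0.0.js'); // exact file name depends on the release
const workboxSW = new self.WorkboxSW();
workboxSW.precache([]); // replaced at build time with URLs plus revision info
```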
Workbox ensures that all of your pre-cached files are available offline, using the revision info in that manifest to keep them all up to date. All right, let's take a closer look at the build process that we're using to get that final Service Worker file. And here we're using a method called injectManifest, which is part of the workbox-build module. We pass in the source Service Worker file, and that's the one that has that empty manifest. And we tell it where to write the destination file. And that will have the fully populated manifest ready for use. The nice thing is we could use wildcard patterns to tell Workbox which files we want to be pre-cached. So whenever we add or rename one of our files, we don't have to remember to manually update the list of URLs. And there's also no longer a need to increment a version variable. Workbox handles versioning for us via the revision details in the manifest. We can also tell Workbox that our site responds to requests for the forward slash with the contents of index.html, so we don't end up having to pre-cache two separate entries. OK, so that's only one part of the picture. In addition to pre-caching, it's really common to use runtime caching strategies for resources that are either used infrequently or just too large to pre-cache. You might, for instance, use runtime caching to handle requests for images that aren't needed for every section of your site. Here's some boilerplate code that implements a runtime caching strategy. It fetches a response from the network, but before returning it, it saves a copy of the response in the cache. It's the sort of thing, again, you might copy and paste from a sample on the internet. And it's very similar to what we saw illustrated earlier in that air traffic controller example. But the subtlety here is that while your code will add entries to the cache, there's actually no code that's going to clean up those entries when they're no longer needed. So think back to those airplanes delivering the image files to the cache. They're going to keep landing in the cache with more and more images. And our air traffic controller isn't going to do anything to stop them from piling up. In practice, this is the sort of code that will lead to cached responses that build up over time, wasting storage space on your users' devices. That's only one half of the picture, though, with runtime caching. Once you've defined your caching strategy, you need to tell your Service Worker when to use that strategy. That's called routing. And here's some boilerplate code that checks whether it's a request for a URL ending in .png, and if so, uses that runtime caching strategy that we just described. And for a very basic web app, that might be fine. But things get out of hand pretty quickly if you need to implement different runtime caching strategies for different types of resources. You end up chaining them all together in a big if/else block. And that just really doesn't scale very well. All right, so let's see how we're using Workbox on the Women Techmakers site to handle runtime caching. So there are a number of features that lead to clearer, more concise runtime caching code, as you can see there. First, we have a built-in router. And this takes care of responding to requests when certain criteria are met. And here, we're using a regular expression as the criteria for what triggers our route. Workbox has built-in support for common caching strategies. So we don't have to write, or more likely copy, our own response logic.
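Putting those pieces together, here is a hedged sketch of what the build step and a runtime route might look like with the 1.x-era APIs being described. The paths, glob patterns, cache name, and entry limit are illustrative, and this is adapted to a plain Node script rather than the Grunt setup the site actually used.

```js
// build.js - populate the pre-cache manifest at build time.
const workboxBuild = require('workbox-build');

workboxBuild.injectManifest({
  swSrc: 'src/service-worker.js',   // the source file with the empty precache([])
  swDest: 'dist/service-worker.js', // output with the fully populated manifest
  globDirectory: 'dist',
  globPatterns: ['**/*.{html,js,css}'], // wildcards, so renamed files get picked up
});
```

And in the Service Worker itself, a route plus a built-in strategy replaces the hand-written if/else chain:

```js
// Runtime caching: any request ending in .png is served cache-first,
// and old entries are expired instead of piling up forever.
workboxSW.router.registerRoute(
  /\.png$/,
  workboxSW.strategies.cacheFirst({
    cacheName: 'images',
    cacheExpiration: { maxEntries: 50 },
  })
);
```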
They're ready to use right out of the box. But Workbox goes beyond the basics, allowing us to customize the built-in strategies with powerful options, like specifying an expiration policy for a given cache. Workbox will take care of cleaning up old entries automatically, instead of them being saved indefinitely on your users' devices. So going back to our airplane analogy, our air traffic controller knows how to clear out the previous planes to make room for new ones. And so let's just take a look at the impact of adding Workbox to the site. As you can see in the screenshot of the DevTools Network panel, even if a user's device is completely offline, all the responses we need come from the Service Worker, giving us a progressive web app that loads in under a second. You no longer have to build a native app to get this kind of speed, reliability, and network independence. So that's great, but we don't have to stop there. Workbox also offers a number of built-in features that go beyond caching. And I'd like to highlight a couple of them that the Women Techmakers site is using. First, Workbox makes it easy to add in support for offline Google Analytics. All it takes to turn it on is a single line of code. Once enabled, Google Analytics requests that are made when the network is unavailable will be automatically queued up on your users' devices and replayed when the network comes back. This means that the Women Techmakers team won't lose out on valuable insights when users access their PWA while they're offline. Workbox also helps you follow user experience best practices. So using a cache-first strategy means that your PWA could load almost instantly. But it also means that your users will see previously cached content on their next visit, even if you've deployed an update to your site. So a really common UX pattern to follow is displaying a little toast message, like you see at the bottom of the screen there, letting your users know that if they refresh the page, they'll see the latest content. And Workbox makes it easy to follow this UX pattern by broadcasting a message to your page when there's an update made to one of the caches that it maintains. And this message includes important context, like the URL of the resource that was updated. And this gives you the flexibility of ignoring updates to less important assets, like some random CSS file, while prompting the user to refresh when something critical is updated, like the site's main HTML. (There's a rough sketch of both of these features at the end of this passage.) So that's just a small overview of what Workbox can do. We hope that you find Workbox equally useful when you're building your own progressive web apps. It's available for use today. We're 1.0. And examples can be found at workboxjs.org. I really want to offer a special thanks to Ewa and the Women Techmakers team for being an early adopter of the library and for offering tons of valuable feedback along the way. So thank you for that. All right. Back to implementation. But first of all, thank you, Jeff, for walking us through the library. And thanks for implementing it in the first place. It saved me a ton of time. I actually think it made me delete more code than I added in the first place. OK. Implementation. The first takeaway I wanted to share with you from the implementation process is that going for a progressive web app is a great audit opportunity.
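As promised, a rough sketch of those two features. The offline analytics entry point, the broadcast channel name, and the message shape all vary by Workbox version, so treat every identifier here as an assumption rather than the site's actual code.

```js
// In the Service Worker: the offline analytics one-liner. Analytics
// requests made while offline are queued and replayed once online.
workboxSW.googleAnalytics.initialize(); // namespace assumed for the 1.x API
```

And on the page, listening for cache-update broadcasts so the refresh toast only appears for critical resources (BroadcastChannel itself is a standard API):

```js
const channel = new BroadcastChannel('cache-updates'); // channel name assumed
channel.addEventListener('message', event => {
  const payload = event.data && event.data.payload; // message shape assumed
  if (payload && payload.updatedUrl && payload.updatedUrl.endsWith('.html')) {
    showRefreshToast(); // hypothetical UI helper: "Refresh for the latest content"
  }
});
```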
It's a great audit opportunity not only because Lighthouse will list all of your performance sins anyway, but because when you're planning to implement offline, especially if it involves caching some part of the website on users' devices, you really need to be respectful of users' resources, like bandwidth or storage space. And you don't want to push a bloated website and unnecessary assets into users' caches, because it's a waste. Making the site lean and resource-friendly should be a priority. And it makes it more usable to all users, not only the ones using the offline mode. And often, it's really not very hard. Usually, you can find some easy fixes and pick the low-hanging fruit to start with. That's what we did with Women Techmakers. Let's look at the example. This is the network panel for the Women Techmakers site. What I did here, I just sorted all the assets the old page was using by size. And only by looking at the very top of that list, you can easily spot easy targets for optimization. The bigger the file, the higher the chance you can optimize. Here, you can see the two biggest files by far are base.js, which was part of the YouTube API JavaScript, and the header, which is this nice big hero image on the home page. So can we optimize those? Let's start with the image. It covers the full header area of the page, so it should be at least as big as the viewport. But it doesn't need to be bigger than that. So if the viewport is smaller, you can make the image smaller. What I did here, I just created two more versions of that image, a medium-sized one and a small one. I added a few breakpoints in my CSS so that it uses the appropriate image for the appropriate viewport. And look at the stats. It allowed me to save 21% on the overall image load on that page with, what, nine lines of code if you count brackets, right? So imagine: this was the gain just from one image. Imagine what would happen if you did it for more of your images on the page. It's a really easy fix. Now, the YouTube API. Over the lifespan of your page, the libraries and the resources you're using might change. New ones come up, some become obsolete, and so on. In this case, everything that we were achieving through the YouTube API is now possible to do with an iframe when it's configured properly. So by embedding YouTube videos on the page with an iframe instead of the YouTube API, we could just delete that file. And suddenly, we gained back 400 kilobytes of the overall page weight. And if we hadn't made this decision to go for a progressive web app, we probably wouldn't even have spotted that this opportunity was there. So when you go for a progressive web app, it's a good moment to stop and think again about the resources you're using. Similar thing with the lodash library. We were using only some basic functionality of the library, so we were able to replace lodash with lodash core. And this brought us from 24 kilobytes to four kilobytes. You might say that 20 kilobytes is not that big, but this really adds up across your whole page. And again, here I changed only five characters in my old code, and that's the gain, right? So it's really easy to get that. (The exact change is sketched after this passage.) Now, once we ensured that we're not pushing too-big resources to the user, it would be good to also make sure we don't push things twice if we don't need to. For that, we need to leverage browser caching. And this is different caching from Service Worker caching. It's just the regular browser cache. Every browser has it these days.
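As promised, the five-character change; lodash really does ship a core build at this module path, though the surrounding code is illustrative.

```js
// Before (about 24 KB minified): the full lodash build.
// const _ = require('lodash');

// After (about 4 KB): lodash's core build, which covers the basic
// functionality that was actually in use. Five characters: "/core".
const _ = require('lodash/core');
```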
So in the old site, what was happening: with every build of the site, we would version files, or cache-bust, if you will, by the version, like the build number, or by a timestamp in some cases. And this is kind of uncool, because every time we made a new build of the site, the file would get a new name, even though the file itself might not have changed. That would prevent the browser from caching it, because the browser doesn't know that under a different name it's still the same content. So instead, we switched to content-based caching, as you see in the lower example. Now, we make a hash of the content, and we embed it in the file name, which means the file will get a new address only if the actual content changed. (There's a small sketch of this after this passage.) So this allowed us to push fewer resources to the users. And remember, Women Techmakers is a community. A lot of people come back to the page over and over, over the lifespan of their contact with Women Techmakers. OK. So a lot of gains of this type are really findable in your Lighthouse report. If you dig deep, you will find a lot of good practices and good hints embedded in that report. So just by following the red color, you can find a lot of optimizations that would make the site better for all of your users. Now, the second takeaway, and it's going to be quite a journey, so bear with me on that one, is about rethinking your site's resources, but not from a performance perspective this time. Let's say your site is nice and clean, and it's time to think about what to cache and when, so that users can use your site offline. We call it a caching strategy. And for coming up with the right caching strategy, you really need to understand your site's resources, like images, media, and content. Let's look at the example. This is the Women Techmakers site as loaded online. It's a rich visual experience, lots of images, lots of graphical effects. It's really a beautiful site. What would happen if we saved all of the images from all of this site to the user's cache? It would just become really crowded. Imagine the user wants to, you know, browse a lot of websites on their device. If each of the websites downloaded all possible images from the whole app, that would make it really inefficient. So the question is, are the images really necessary? Maybe they're not at the core of the site. Well, that's how it looks without images. It's kind of ugly, but apart from being ugly, the site is also unusable. Like, you can't even click to go to a different page, which means it's entirely broken. This means no images is a no-go, and all images is a no-go. You need to find a middle ground somewhere. So how do you find a middle ground? Well, I started to think about images by the function they have on the website. Let's go color by color. The yellow ones are images that are for navigation, and images that allow the user to perform some action. And this means they're super important. If they're not there, the user cannot really use the website; they cannot perform the action. They have absolute priority. Now, the red ones are the ones that in our case were related to branding, but in your case it might be different. Those are images that you, for some reason, put priority on. These might be images that create the connection between your app and the user, for example something that allows your user to understand what type of app they're using. So these are your priority images. By contrast, the blue ones are purely decorative.
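As promised, a small sketch of content-based cache busting: derive the file name from a hash of the file's bytes. A minimal Node version of the idea (file names and hash length are arbitrary choices, not the site's actual build code):

```js
const crypto = require('crypto');
const fs = require('fs');

// app.js -> app.1a2b3c4d.js: the name changes only when the bytes change,
// so an untouched file keeps its browser cache entry across builds.
function hashedName(file) {
  const contents = fs.readFileSync(file);
  const hash = crypto.createHash('md5').update(contents).digest('hex').slice(0, 8);
  return file.replace(/(\.\w+)$/, '.' + hash + '$1');
}

console.log(hashedName('app.js'));
```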
The decorative ones are cool, they make the website look nice, but if they're missing, it's not the end of the world. OK, now the green ones are kind of funky, because I call them informative images. What I mean by this is that apart from adding to the visual side of the story, they also convey some message. In this case, they tell you which companies' content is featured. So I call them informative because apart from being visual, they also convey a message. Now, what would happen if we applied a different caching strategy to each type of image? The inline ones, the navigation and action images, are super important. So what I did, I just inlined them, because they're small, they're icons. I encoded them as SVG and put them directly in my HTML. This means I don't even need to think about a caching strategy, because if the HTML is there, the images are there, and I'm done. Easy fix. Now, branding and priority images, I pre-cache. And you saw that with Workbox, pre-caching is pretty straightforward. Those are images that I always put in the user's cache, because they're really important for my user's experience with the app. The blue ones, I cache at runtime. This means that when the user enters the particular part of the site, I cache them on a kind of best-effort basis. So there is no guarantee they'll be in the cache when the user re-enters that site in offline mode. I also put a limit on them, so that they don't build up forever. And this means that they might be in the cache and serve their purpose, or they might be missing. What happens if they're missing? Well, the site is slightly less visually attractive, but it's not the end of the world. The site is still usable, the content is there, you can check your events, and so on. Now, informative images are super cool, because they show you the full power of the Service Worker. What I wanted to do here: when the image is not in the cache, I wanted to serve some kind of placeholder that tells the user, well, there was supposed to be an image, but it's unavailable. But I also wanted to convey the message. So I used the alt tag of the images to read which company it referred to, and rendered a replacement image in the Service Worker. And that's the cool thing, that a Service Worker is just a JavaScript file. So you can do all kinds of crazy rendering in there. Here, I'm creating an SVG with the offline icon, and I attach the name of the company. So I don't really store anything: those images were never in my cache. I just render them on the fly, and they still serve the purpose on the website. (A sketch of this trick follows at the end of this passage.) So those are the four caching strategies I used, with the help of the Workbox library, on Women Techmakers. And the truth is, your sites, your projects might be different, and you might need different caching strategies. But this should show you the direction you can go in. With the flexibility of the Service Worker, you can find a strategy that is best suited to your users' needs. Now, let's say we implemented all this offline capability and progressiveness in our app. What I wanted to tell you from my experience with this migration is that these things really influence other parts of your app as well. And you need to remember that, because if you forget, then you'll get in trouble in those other parts of your app. Remember the little toast message that Jeff showed you, for when there was a new version of the app? This is the type of influence on the UX I have in mind. You really need to think about how going offline influences your users' UX, and respond accordingly.
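As promised, a hedged sketch of the placeholder trick, rendered inside the Service Worker's fetch handler. The URL pattern and the markup are illustrative, and since the image's alt text isn't visible to the Service Worker, this version derives the company name from the URL instead:

```js
self.addEventListener('fetch', event => {
  if (!/\/logos\//.test(event.request.url)) return; // placeholder path pattern
  event.respondWith(
    caches.match(event.request).then(cached => {
      if (cached) return cached; // a runtime-cached copy, if one exists
      // Otherwise fabricate an SVG on the fly; nothing is ever stored.
      const name = decodeURIComponent(
        event.request.url.split('/').pop().replace(/\.\w+$/, '')
      );
      const svg =
        '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">' +
        '<rect width="100%" height="100%" fill="#eee"/>' +
        '<text x="50%" y="55%" text-anchor="middle">' + name + ' (offline)</text>' +
        '</svg>';
      return new Response(svg, { headers: { 'Content-Type': 'image/svg+xml' } });
    })
  );
});
```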
One example of that UX influence: there might be a form on your site that you actually cannot submit while you're in offline mode. So we need to remember now that, OK, it's not enough to dump everything into the user's cache. I also need to remember to implement something to tell the user, oh, sorry, you can't do this while in offline mode. The second thing is to measure your impact. Some of the interactions with the user will now happen in offline mode, so you need to add offline analytics tracking. And you saw in the Workbox part that it's just one line of code, but it's your task to remember to put that line in your Service Worker and not forget about it. Finally, developing offline-first can change your developer workflow. And here are some hints that I found useful during development. First of all, it is useful to have two different Service Workers for different environments. We separated our development Service Worker from the production Service Worker, because in development, we didn't want anything cached; you want to be able to refresh your page and see your changes. So we just had an empty Service Worker, like an empty file. Because a Service Worker is a progressive enhancement, it does no harm that it doesn't do anything. It's just transparent to all the requests, right? And then in production, we had a fully fledged Service Worker that does all the caching. So that's the solution we used for separating environments. (A small sketch of this follows at the end of this passage.) Now, when you're developing offline interactions, working with the Service Worker, you often want to see how your page would look for a user that enters the site for the first time. And if you keep refreshing, that's not that easily achievable. So for that, the best solution is to use incognito mode. Also, when you make some mistakes and you really mess up all your caches and you don't know what's happening again, just go incognito, start from scratch, keep calm, and go incognito. Incognito is your friend. And finally, I just wanted to reiterate that you should really use build tools to version your files. Trying to do it by hand is just asking for trouble. And because the Workbox build process is just an npm module, you can usually integrate it with whatever workflow you have there. So really use tools, in order to avoid manually fiddling with the cache entries. What happens if there is no Service Worker in the user's browser? The cool thing is, there is no problem. All of the stuff we discussed here is progressive enhancement, which means users that do have Service Worker available will get some more features, some more robust behaviors like the offline experience. But even the users that don't have it will get a lot of gains from your progressive web app, because of all those other fixes you did on the way, because of the better regular caching, better performance, a leaner website, and so on. So I really encourage you to consider a progressive web app for your web projects, because in the end, it will increase satisfaction for all of your user base. Did we implement everything we wanted? Well, no. There's always more. And there are improvements that did not make the bar. Here is an example of a few of those that we plan to implement in the future. But as I told you, this process can go on and on. And as long as Natalie, the boss of Women Techmakers, allows us to work on it, maybe we'll get them implemented in the next iteration.
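As promised, one simple way to wire up that environment split is at registration time; the file names and the localhost check are assumptions for illustration, not the project's actual setup.

```js
// Register a no-op Service Worker in development so nothing gets cached,
// and the real Workbox-built one everywhere else.
if ('serviceWorker' in navigator) {
  const swUrl = location.hostname === 'localhost'
    ? '/sw-noop.js' // an empty file: transparent to every request
    : '/sw.js';     // the production Service Worker with all the caching
  navigator.serviceWorker.register(swUrl);
}
```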
All right. So let's recap a bit. I told you today a little bit about Women Techmakers. I told you about the migration we attempted. I told you how the process looked for us, so that you know where to start on your own. Jeff walked you through the tools, the Workbox library, and we shared some lessons learned around performance, about resource prioritization, and about the development workflow. I hope you enjoyed that. Now it's your turn. Go and start your own progressive migrations. Yeah. So what's next? Where are some good places to go? We've put together a lot of really great PWA guidance on developers.google.com. We hope you check that out. Hopefully everybody's inspired to use Lighthouse, either in person in the sandbox, or you can just go and get the extension for yourself. You can find out more about that at developers.google.com as well. And I hope that folks try Workbox and give us feedback. We're really looking to talk to more developers about it as well, so you can find us in the sandbox in the libraries and frameworks section. Some of us who worked on it are here as well. It wasn't just me. And workboxjs.org is the site for that. Yeah, and remember to join the Women Techmakers movement, and remember that all genders are welcome in the movement. So join us. And thank you. All right, guys. Thanks a lot. And see you at the concert. Hi, Timothy Jordan here, your friendly developer advocate taking you around Google I.O. 2017. We're now in the Maps and Mobile Web tent. And I'm here with Chris Wilson, who's going to show us how Lighthouse works. All right, thank you, Timothy. So Lighthouse is a tool that we've created to help you build really good web experiences, to help you improve your web experiences. I'm going to actually enter in a URL right here. This is the media player application that we showed off in the Web State of the Union talk. We also have a talk about it tomorrow, in the future of audio and video. And it's going to run a series of tests here to test a bunch of different things: whether it has a manifest, whether it has a service worker installed, if it's going to work when it's offline, how it works in different network environments, how long it takes for the user to get interactive access to the page, all that kind of stuff. And something went hideously wrong. So let's take a look at the report. So this is kind of cheating, because this particular application gets 100%. It does super well, right? I mean, we hand-coded this. Paul Lewis is an expert. He knows what he's doing. He has a manifest on there. He's got a service worker. It works offline. There are still things to be done, like performance. You can always try to do better at performance. There's a little bit, a tiny bit, we could do in accessibility. Maybe some best practices and passive listeners, that kind of stuff. But really, an awesome score, right? I had to use this site, because all of mine are horribly bad. Everything. I know the lowest score I've seen today: it's my own site. Well, I would say I'm not going to tell anybody, but you just told everybody. So now, this is a social contract to fix that. It's all open source, so you could go fix my site? Maybe? I don't think that's going to happen. Chris, thanks so much. Thank you. So this is Paul. And Paul's going to tell us a little bit about what's going on with AMP. And I have a fancy question to ask. Then ask your fancy question. My fancy question is the overlap between AMP and PWA. When are we going to see it, and how well does it work? Yeah, that's a good reason why our stalls are so close together.
And I just talked about it this morning, how to combine those two in a really, really nice way. And there are three development models that really work well, because what we're saying is that you want to start fast, and you want to stay fast. So you can build an AMP page that is also a PWA. You can actually do that. But you can also lead from an AMP page into a progressive web app, and use your AMP content as a data source in that progressive web app. All right. So that makes a lot of sense. Do you have something you could show me? Actually, yes. Yeah. Let's go ahead and show it to the camera here. Yeah. So if you take a look here, this is actually using live data from the Guardian, who graciously offered us their RSS feeds to do this. And so this is a demo app that I built a few days ago. And as you can see, it's a fairly smooth navigation. This is actually a progressive web app. So I can navigate between categories. And then if I click on one of those links, here we go. Let's try that. It opens the actual AMP article. So this article that you see right here, as you can see, there was a transition going on here. And this article itself is all AMP. So you have a progressive web app shell that just does the navigation. But then you use AMP as a data source in your app. That's totally awesome. Hey, thanks for showing us that, Paul. Absolutely, you're welcome. What is this going on over here? It looks cool and live and data. Yeah, it's the I.O. Transport Tracker, and this is the second year we've been running it. And basically we're here. Show me what it is up here. Close. So there's a few components to it. There's an Android app that we put onto each of the shuttles running to the different hotels around the I.O. conference. Those Android apps are using the Firebase API to report their locations as they travel to each of their stops. And then this is just a Google map on a web page that is subscribed to that same Firebase database. And funnily enough, each time the bus moves, the bus icons on here move. And so it allows us to see exactly where all the buses are. That's really awesome. Yeah. This is us right here, right? Yeah, and you can see some of the buses parked there. And it's all hooked up with GTFS data as well. All right, so that is the Maps and Mobile Web tent. And I think it's time to get on and see some more stuff here at Google I.O. 2017. We all know from experience that people love to share things about themselves, such as photos, videos, and GIFs that express their feelings. So what do you do to let them store and share these files through your app? That's where Firebase Storage can help. Our Storage API lets you upload your users' files to our cloud so they can be shared with anyone else. And if you have specific rules for sharing files with certain users, you can protect this content for users logged in with Firebase Authentication. Security, of course, is our first concern. All transfers are performed over a secure connection. Also, all transfers with our API are robust and will automatically resume in case the connection is broken. This is essential for transferring large files over slow or unreliable mobile connections. And finally, our storage, backed by Google Cloud Storage, scales to petabytes (that's billions of photos) to meet your app's needs, so you will never be out of space when you need it. So give your users space to share their lives with Firebase Storage, available right now for iOS, Android, and web applications.
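For a feel of the API being described, here is a minimal web upload sketch with the Firebase JavaScript SDK; the storage path and the file variable are placeholders, and it assumes the app was already initialized with firebase.initializeApp().

```js
// Upload a user's file; the SDK performs the transfer over a secure
// connection and resumes automatically if the connection drops.
const storageRef = firebase.storage().ref('user-uploads/photo.jpg');
const task = storageRef.put(file); // file: a Blob or File from an <input>

task.on('state_changed',
  snapshot => {
    const pct = (snapshot.bytesTransferred / snapshot.totalBytes) * 100;
    console.log('Upload is ' + pct.toFixed(0) + '% done');
  },
  error => console.error('Upload failed', error),
  () => console.log('Upload complete'));
```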
What is this going on over here? It looks cool and live, with lots of data. Yeah, it's the I.O. Transport Tracker, and this is the second year we've been running it. Basically, we're here. Let me show you what it is up here. So there are a few components to it. There's an Android app that we put onto each of the shuttles running to the different hotels around the I.O. conference. Those Android apps use the Firebase API to report their locations as they travel to each of their stops. And then this is just a Google map on a web page that is subscribed to that same Firebase database. And funnily enough, each time a bus moves, the bus icons on here move. And so it allows us to see exactly where all the buses are. That's really awesome. Yeah. This is us right here, right? Yeah, and some of the buses you can see parked there. And it's all hooked up with GTFS data as well. All right, so that is the Maps and Mobile Web tent. And I think it's time to get on and see some more stuff here at Google I.O. 2017.

We all know from experience that people love to share things about themselves, such as photos, videos, and GIFs that express their feelings. So what do you do to let them store and share these files through your app? That's where Firebase Storage can help. Our Storage API lets you upload your users' files to our cloud so they can be shared with anyone else. And if you have specific rules for sharing files with certain users, you can protect this content for users logged in with Firebase Authentication. Security, of course, is our first concern. All transfers are performed over a secure connection. Also, all transfers with our API are robust and will automatically resume in case the connection is broken. This is essential for transferring large files over slow or unreliable mobile connections. And finally, our storage, backed by Google Cloud Storage, scales to petabytes. That's billions of photos to meet your app's needs, so you will never be out of space when you need it. So give your users space to share their lives with Firebase Storage, available right now for iOS, Android, and web applications. And to learn more about Firebase Storage, check out the documentation available right here.
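For a feel of what the upload side looks like on the web, here's a minimal sketch using the namespaced Firebase JS SDK, assuming the app is initialized and the user is signed in with Firebase Authentication; the photos/ path and the uploadPhoto name are just illustrative:

```
// Upload a user-selected file and log a shareable download URL.
function uploadPhoto(file) {
  var path = 'photos/' + firebase.auth().currentUser.uid + '/' + file.name;
  var task = firebase.storage().ref(path).put(file); // returns an UploadTask

  task.on('state_changed',
    function (snap) {
      // Progress events; transfers resume if the connection drops.
      console.log(snap.bytesTransferred + ' of ' + snap.totalBytes + ' bytes');
    },
    function (err) { console.error('Upload failed:', err); },
    function () {
      task.snapshot.ref.getDownloadURL().then(function (url) {
        console.log('Share this URL:', url);
      });
    });
}
```

Keying the path on the signed-in user's ID makes it straightforward to write Storage security rules that only let each user write to their own folder.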
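And going back to the Transport Tracker for a moment: the web side of that setup is essentially a Google map subscribed to a Firebase Realtime Database. A rough sketch, assuming the Maps and Firebase SDKs are loaded and that each shuttle's Android app writes a {lat, lng} record under a buses/ path (the path and data shape are assumptions):

```
// Show a map and keep one marker per bus in sync with the database.
var map = new google.maps.Map(document.getElementById('map'), {
  center: {lat: 37.4267, lng: -122.0806}, // roughly Shoreline Amphitheatre
  zoom: 13
});

var markers = {};
var buses = firebase.database().ref('buses');

buses.on('child_added', function (snap) {
  // A new shuttle showed up: drop a marker for it.
  markers[snap.key] = new google.maps.Marker({map: map, position: snap.val()});
});

buses.on('child_changed', function (snap) {
  // Each time a bus reports a new location, its marker moves.
  markers[snap.key].setPosition(snap.val());
});
```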
And I'm sitting here with two gentlemen from the Android Studio team. We've got Jamal and James. Jamal, would you start by telling us all the latest stuff going on in Android Studio? Absolutely. So I'm really excited about a couple of new features. First of all, as you heard in the keynote, we launched the bundling in of Kotlin, which is a new language that enables you to develop apps on Android. The second cool feature is around performance tools. We've launched a whole new suite of tools around CPU performance, memory performance, and network performance. So I'm really excited about those as well. And thirdly, we added the Google Play Store to the Android emulator, which allows you to do end-to-end testing, so you can test out things like in-app purchasing or the download experience, and have updated Google Play services. So this is a cool thing that I'm looking forward to. Awesome. I assume you've been talking to developers around the festival today. What's the reaction so far? Yeah, developers are super excited about Kotlin. The thing about Kotlin is it's been around for about five or six years, and a lot of developers have actually been using it for Android. So they're really excited to hear an official announcement and endorsement from Google about using it. Now, we took a look at the profiler earlier. And I mean, it's really exciting to see the ability to do that level of debugging live while the app is running on the emulator. What are some of the cool things that you've been able to discover as you've been testing that feature? Yeah, I think one of the cool things is that a lot of developers didn't realize their app was using a lot of background data or a lot of active data. So having a visualization to see what the traffic looks like coming in and out, and understanding the packet sizes and the latency of those various packets, gives you a better understanding of how to utilize the network. Because what we find is that turning on the radio actually impacts battery life. So a better performing networking stack actually means better battery life. Awesome. And James, what's going on in your world? Well, with Android Studio 3.0, we are also making available for the first time the tooling for Instant Apps. So we're really excited about that, because as you heard in the keynote, Instant Apps is now GA, and any developer can now go build an Instant App and publish it on the Play Store. And the tooling that you've built into Android Studio makes it really easy for developers to get up and running with either a new app or an existing app, right? Can you tell me more about that? Yeah, that's right. If you have an existing app, we'll start by talking about the build system. From the ground up in the build system, we've introduced a new type of module, called a feature module, to allow you to share code between your Instant App and your regular app. So that's a key foundational piece for you to start refactoring your existing app and to build your Instant App (there's a rough sketch of those build files below). And then if you're building a brand new application, we've also modified and enhanced the built-in templates so you can easily include Instant App support. Awesome. So I'd like to ask a general question, if I may. Android Studio has been out a while, but it hasn't been out forever. But I think the advantages that it gives Android developers are huge. What are some of the coolest stories you've heard since Android Studio has been available? Well, you know, Android Studio is now at version 3.0; we announced that today. One thing we actually launched last year was the layout editor, along with ConstraintLayout. A lot of developers had been hand-jamming XML files. When we launched at I.O. last year, we showed one developer the new tool: it took him about 15 minutes to build a layout the old way, and about five minutes with the new tool. He was amazed. He was like, I couldn't believe it. So we're really looking forward to making developers more efficient at developing apps, and this is some of the work we've been doing. That's awesome. Have you heard any cool stories from developers around Android Studio? Yeah, actually, one of the cool things I've heard is about the APK Analyzer, which we introduced in version 2.3, I think it was. We've heard stories where developers start using the APK Analyzer and realize that they've been bundling all these .so files for different architectures that they really shouldn't be. As a result, they were able to significantly reduce their APK sizes, speeding up downloads for their users and just enhancing performance in general. That's awesome. All right, guys, is there anything else you want to tell developers before we get going? Download Android Studio 3.0 and try it out. All right, thanks, y'all.
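For anyone curious what the feature-module setup James describes looks like in build files, here's a rough sketch of the Android Studio 3.0-era Gradle configuration; the module names (base, app, instantapp) are placeholders, and plenty of real-world detail is omitted:

```
// base/build.gradle: the shared feature module
apply plugin: 'com.android.feature'
android {
    baseFeature true // this module holds the code shared by both delivery forms
}

// app/build.gradle: the regular, installable app built from the same code
apply plugin: 'com.android.application'
dependencies {
    implementation project(':base')
}

// instantapp/build.gradle: the Instant App wrapper
apply plugin: 'com.android.instantapp'
dependencies {
    implementation project(':base')
}
```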