Hello, and welcome to Google I.O. 2017, our annual developer festival. I'm Timothy Jordan, developer advocate at Google, and I'll be touring the I.O. venue throughout the next three days, exploring the sandboxes, interviewing Googlers, and giving you eyes on the ground. That's right, even though you're joining remotely, this year you'll get an in-depth look at everything happening at Shoreline, not just the sessions. You can follow along on any of the live stream channels on Google.com slash I.O. Google I.O. is an outdoor developer festival hosting 7,200 attendees at Shoreline Amphitheater, along with millions of viewers on the live stream. That's you. And thousands of developers at more than 450 local I.O. extended events across 80 countries. We have 14 content tracks with over 150 breakout sessions, all live streamed on Google.com slash I.O. There's also over 70 codelabs live to get you up and running with our latest APIs today at g.co slash I.O. slash codelabs. But before you get to any of that, let's review a handful of the announcements you just heard. Smart Reply, available in Inbox by Gmail and Allo, saves you time by suggesting quick responses to your messages. It utilizes machine learning to give you better responses the more you use it, and it already drives 12% of replies in Inbox on mobile. Starting today, Smart Reply is coming to Gmail for Android and iOS too. We're excited to announce that our second generation Tensor Processing Units are coming to Google Compute Engine as Cloud TPUs, where you can connect them to virtual machines of all shapes and sizes and mix and match them with other types of hardware, including Skylake CPUs and NVIDIA GPUs. You can program these Cloud TPUs with TensorFlow, the most popular open source machine learning framework on GitHub, and we're introducing high-level APIs which will make it easier to train machine learning models on CPUs, GPUs, or Cloud TPUs with only minimal code changes. Many top researchers don't have access to anywhere near as much compute power as they need. To help as many researchers as we can and further accelerate the pace of open machine learning research, we will make 1,000 Cloud TPUs available at no cost to ML researchers via the TensorFlow Research Cloud. Android O, coming later this year, will bring more fluid experiences to your smaller screen as well as improvements to battery life and security. With picture-in-picture, you can seamlessly do two tasks simultaneously, and smart text selection improves copy and paste by using machine learning to recognize entities on the screen. Google Play Protect is Google's comprehensive security service for Android, which provides powerful new protections and greater visibility into your device security. Play Protect is built into every device with Google Play, is always updating and automatically takes action to keep your data and device safe. We also have an early preview of a new initiative for entry-level Android devices that internally we call Android Go. The goal is to get computing into the hands of more people by creating a great smartphone experience on all Android devices with one gigabyte or less of memory. Android Go is designed with features relevant for people who have limited data connectivity and speak multiple languages. 
Of course, way more was covered in the Google keynote including new ways to share with Google Photos including photo books, new ways that Google Assistant can help you do even more, investments in the core technologies that enable VR and AR and in platforms that make them accessible to more people. Have questions about IO17? Tweet them starting today through May 19th using hashtag IO17Request. A team of Googlers will be on site chasing down answers for you. Make sure to also follow the conversation on hashtag IO17 and on the Google Developers blog. Make sure to tune in to the Developer Keynote at 1 p.m. Pacific Time and I'll see you right here on the live stream between all the sessions. This is Google IO 2017. I'm Sarah from the Google Developer Certification Team. Last year we launched the Associate Android Developer Certification at IO. Now we're adding two certifications for mobile web developers. Why mobile? Mobile now accounts for over half of all web traffic. Users expect their small screen experiences to be as quick and intuitive as those on the desktop. But making the mobile web fast and easy takes some special skills. How can you prove you've learned them? We've created two new certifications to help developers get recognized for their knowledge and skill. Introducing the mobile site certification and the mobile web specialist certification. One focuses on sites and the other on web apps. Let's talk about mobile sites. What happens to your beautiful site if it takes too long to load? 53% of mobile visitors will leave a page if it takes more than three seconds to load. But the average mobile page loads in 22 seconds. Making this even one second faster increases conversion rates by up to 27%. Google believes in the mobile web and so do our customers. That's why we've created the Google mobile site certification to help site owners find the best talent. Passing this exam demonstrates you have the knowledge for building high performing mobile sites. It also highlights your understanding of best practices and current browser technologies. To pass, you'll need to be proficient across mobile site design, UX best practices and site speed optimization. This certification is especially useful for developers working in-house, for agencies, or for clients. To prepare, use the online study guide or e-learning course. Both are free. Once certified, you can promote your certificate on your Google Partners public profile and social media. What if you're developing mobile web apps? Developing applications requires even more specialized skills than sites, so we have a certification for that. The mobile web specialist certification shows you can build quality web apps including progressive web apps. You take this exam by solving a series of coding problems. We'll test your skills in many in-demand areas including responsive design, accessibility and progressive web application development. This certification is especially useful for developers looking to move up in their careers. It will prepare you to tackle a wide range of challenges. We also provide a study guide and a range of courses to help you prepare. With multiple certifications, how do you know which one to take? Are you building mobile sites and need to demonstrate you have the knowledge to do it? Take the mobile site certification exam. Need to show that you have the skills to build a mobile web app? Take the mobile web specialist exam. Visit our certification page on the Google Developers website to learn more about our programs. 
Get the study guides, get ready and let's go. Okay Google, what's the temperature like at Mount Everest? The temperature there is minus 14. Ooh, I better pack a jacket. Oh hi, I'm Wayne Pekarski and today I'm going to talk about the Google Assistant and how you can develop your own actions to be a part of this new ecosystem. At Google, we've been providing assistance to users for years across many of our products but we think there's much more we can do to help people get things done right when they need it in a conversational way. And that's why we're building the Google Assistant. The Google Assistant can help users get things done throughout their day whether they're at home or on the go. And it powers devices like, for example, the Google Home, a voice-activated speaker. To better serve user requests, the Google Assistant needs to work well with an ecosystem of everyone's favorite services. Actions on Google allows you as a developer to integrate your services with the Google Assistant and that is what we're going to explain how to do in this video. Conversation actions enable you to fulfill a user's request directly via a two-way dialogue. Users don't need to pre-enable skills or install new apps to interact with any actions you build. When a user asks for your action by name, we'll connect them with you immediately. Let's first go through a detailed example of a user interacting with a conversation action. Think about something as simple as helping a user choose what to have for dinner based on their mood and the ingredients they have around. Let's call this action personal chef. The user first needs to invoke your action with something like, OK Google, let me talk to personal chef. The Assistant will then introduce your action and now the user is talking to you directly. From this point onwards, you get to interact with the user and have a conversation. OK Google, let me talk to personal chef. Sure, here's personal chef. Hi, I'm your personal chef. What are you in the mood for? Well, it's kind of cold outside so I'd like something to warm me up, like a hot soup, and I want it fast. Alright, what protein would you like to use? I have some chicken and also some canned tomatoes. OK, well I think you should try the chicken tomato soup recipe I found on example.com. Hmm, sounds good to me. So this is a pretty rich interaction. Think about all the sentences I spoke and how the action needs to extract the meaning out of this. How would you implement this? If you're an expert in the area of natural language processing, use the conversation API which allows you to process the raw strings that contain the spoken text from the user. You can then use the actions SDK that includes all the tools and libraries you need to build the actions. However, if you don't want to process the user's transcribed speech yourself, you can use one of the tools that have integrated with actions on Google. One of these tools is API.ai which provides an intuitive graphical user interface to create conversational interfaces and it does the heavy lifting in terms of managing conversational state and filling out slots and forms. This means you'll no longer need to process the raw strings. API.ai can do this for you. To handle a conversation, use the API.ai developer console to create an intent. This is where you define the information you need from the user. For our example, finding a kitchen recipe, this would be the type of food, the ingredients, the temperature, and the cooking time. 
You then specify example sentences. API.ai parses these sentences and uses them to train its machine learning algorithm to process other possible sentences from your users. You don't have to write regular expressions or a parser. You can also manually set what the acceptable values are for each piece of information. Once this is done, API.ai uses these definitions to extract meaning out of spoken sentences. The user can provide information naturally, out of order, all at once, or in pieces. The action can ask follow-up questions as needed. Pretty neat, right? Once you've set up everything in the API.ai console, you can then test it out immediately with example sentences. Then, you can test your project with the web simulator, preview it on Google Home, or deploy the full project to Google, all from within API.ai. Next, you can connect up an optional webhook to your intent to allow it to interact with a back-end server. When all the details you need are filled in, your webhook is called with the appropriate details provided as JSON data. You don't need to worry about parsing strings or dealing with responding back with follow-up questions for the user. You can also develop the webhook using the language and hosting platform of your choice. It's just an HTTP callback. So API.ai makes this really simple. It's easy to get started, and you can have a prototype working in just a few minutes. You should check out our screencast video where we show all the steps to make this happen. So the Google Assistant is the next big opportunity for developers. By developing actions on Google, you'll get cutting-edge experience in natural conversation interfaces, and be ready to actively participate in the emerging space of AI-first computing. In addition, you'll be able to help shape the platform and grow your audience in all the devices and contexts where the Assistant will be available in the future. And thanks to conversational interface building tools like API.ai, as well as Google's unique understanding of the user's interests and contexts, you'll be able to create frictionless, intelligent experiences for people that engage with the Google Assistant. You can find out more about actions on Google by reading the documentation at developers.google.com. We also have an actions on Google developer community on Google+, so you can ask questions and share your ideas with everyone. We look forward to seeing what you build, and I'll see you next time. Hi, I'm Nandini from Google's Conversation Design Team, here to give you some tips on how to design your own voice and chat UIs using actions on Google. Before we dive in, let's have a conversation about conversation. Consider this, all human inventions start as ideas. By definition, conversation is the exchange of ideas by spoken words. And by definition, civilization is the most advanced stage of human social development. It's the tangible expression of our common understanding and values, which is expressed through language, and language is molded and refined by conversations. A conversation is a contract between two participants with a mutual investment in the outcome, but all of that is really hard to codify. Building natural human-to-computer conversations is hard, but that's because human-to-human conversations are only deceptively easy. 
People are not going to change how they converse anytime soon, so the key to closing that gap between modern interfaces and thousands of years of evolution is to use what we know to be true about human-to-human conversation to teach our computers to talk to humans and not the other way around. So the key to building a good voice interface is not to fall into the trap of simply converting a GUI into a VUI. Obviously, I can't teach you an entire design discipline in a few minutes, but I can give you five pro tips to set you up for success. Let's design a simple number guessing game along the way. Here we go. Number one, leverage your brand and give yourself a persona. I don't mean a caricature or a mascot necessarily, but you can do that too to make it even more accessible. A persona is more than that. It's the consistent character captured by the voice and interactive experience. It's the face of that experience for the user. First, list the core attributes of your brand and what you stand for. Come up with the corresponding attributes that can be conveyed through design elements and, of course, the voice dialogue itself. For example, if your brand is known for speed, something we at Google are known for, some attributes of the design might be to be intuitive and data-driven, since both of those elements cut out steps for the user. Some voice attributes for the actual dialogue wording might be engaging or apt or approachable, since those also tighten up the dialogue by removing ambiguity or making it easier for the user to have confidence in the interaction. Write a short style guide covering things like pace, tone, energy level, vocal attributes, and the overall impression that you're shooting for. Try to create a simple biosketch of a character that might embody all of these attributes. Give it a name if you want. Also, there's a practical reason for creating a persona as well. It's a good grounding mechanism for you long-term. Designers and developers will come and go, or multiple people could work on it at once. It'll give everyone something to fall back on for consistency. Finally, don't forget to identify yourself as a separate entity from the Google Assistant. That means greet the user. Number two, think outside the box, literally. It's tempting to draw out a conversation path visually and plug in the dialogue, and then dive right into the code or start stringing together blocks of context to write a working agent, and then back into the experience iteratively. We don't recommend this. You can, but I promise you it'll save time and give you a much richer experience to map out the core conversation paths ahead of time. This doesn't mean just the so-called happy path, and it doesn't mean just error paths either. Instead, write out your core experiences like you would a screenplay. This can be as scrappy as acting it out and documenting it on paper, or create an interactive prototype you tweak and play with until you're ready to start coding. And then, when you draw out your initial vision, keep it at a high level, where the boxes represent entire dialogues or user intents, but leave out the individual wording you'll use in the interaction. Number three, context. Here are just some types of context you can consider and infuse into a conversation to make it more meaningful. Where is the user? What are they doing? What type of device are they interacting on? How is the experience influenced over time? What is the user's frame of mind in relation to what they're trying to do? 
Try to cater to their intent, not to a specific feature. Number four, speech recognition technology isn't perfect, but it's getting better all the time. So for the most part, you might want to treat that as a black box that'll continue to improve. You have to, of course, be aware of its limitations, but try to step back and look at the interaction from the user's perspective when something goes wrong. You don't have to try to steer the user back to the original question if they don't get recognized immediately. There are so many reasons they might not have been. People hardly ever say nonsense. Try to take those so-called errors and make them into another meaningful turn in the dialogue. Finally, I leave you with a challenge. This new world of conversation design for machines opens up a great deal of opportunity that hasn't existed before for us to use technology to advance our lives. Sure, as you get started, create some games, but I urge you to think bigger eventually. Help give someone access to information or technology that they couldn't use before because of a physical, mental, or an economic disadvantage. We're excited to help you do that with Actions on Google. Check out the description for some resources, and we can't wait to see what you create. We're bringing together a really talented group of designers and developers to collaborate and innovate and generate exciting ideas for what can be done on the Android platform. Within this context of a sprint, I really think that the distinction between designer and developer is blurred. It's all about problem-solving and getting it done. If you're not working with designers as an engineer, you're making a mistake to begin with. I feel like every designer or engineer brings a perspective to a project, and that's what's nice about working with a designer. You think like an engineer, and a designer comes and brings a perspective. In this group, I'm happy to work with super talented designers. It helps me understand Android better because I see how they're thinking about it. It kind of kills my prejudgments about the product, and I start thinking about it from a fresh perspective. We focus in on generating a broad range of ideas that are really innovative and far-reaching, and then prototype and pull together concepts and prototypes to demonstrate and create a vision for what those concepts could be. I never was really familiar with the idea of using this type of process for ideation, and it's impressive to see the degree of precision that Kai specifically has introduced in the way that she's run this process. It's been really enjoyable to see specifically how one exercise leads to the next and the next and the next, and how that can actually effectively yield good ideas. We're a very small company, so effectively we're doing similar things all the time, but we often rush straight to the solution. It was nice to see some structure around the process. So it starts here and you don't get to fix it straight away. It's like define the problem, then go to this step, then go to this step. And that kind of structure, though it seems kind of burdensome, it actually improved the overall thing we came up with. When you're in a company, you tend to think about how the company does something and the things that you've learned in the past, and then here is just kind of like a blank canvas again. You get to start new and then rediscover the things that kind of work in a workflow or in a much more creative space. 
I mean, here we try to build something in three days, which is insane. This is the first time we've actually been in a sprint just doing Android, too. I would love to bring some more of the Android sprinting back to our company. I think design sprints facilitate interdisciplinarity, interoperability, and all of the kind of amazing things that can happen from a good collaboration. One of the really exciting features of Android is that it's a very open platform. Anyone can come and write their own apps and create their own concepts. We want to bring that opportunity of openness to the design community and inspire designers to generate concepts and ideas and design really cool apps that leverage the openness of the platform. I was impressed with the openness of Android. It's definitely a unique thing that you might not find on iOS or other systems. In our app in particular, there's things that we definitely couldn't have done on iOS that are actually really useful. And it is nice because the app can organically come with you into the rest of your life. I think for me one of the big awesome parts of it was that I was able to begin to learn Android, which is something that I've always wanted to do as a prototyper. I just would like to get to know Android better and this is like a really big jump start. One of the few constraints that they put on us here at the Design Sprint was like to sort of like come up with something that's like unique to the Android ecosystem. And as we were going through all of our, you know, crazy ideas and all of the myriad ideas that we had come up with, we actually abandoned some of them that were cool because they were kind of like something that could feasibly be built on any platform. And I think that giving us that constraint infused our other ideas with more creative solutions and I think what we came up with is like so simple and so delightful and only available on Android. I am Ashok Kumar. I used to play a lot of computer games. I was completely fascinated by it. I started to feel like I should prepare such games. I just wanted to learn how these applications react to humans. Somewhere the dots were not getting connected. So I started reading blogs on all the possible online resources. Luckily from GDG Bangalore, I got an invitation for Google Mobile India 2015. If I attend this competition, I could get a sponsorship for a Nanodegree. I was completely excited. I figured out that's what I required to connect the dots. It's like being really in front of our teacher. It enabled me to develop a production-ready application and convert my idea into reality, something that helps education and make the world a better place. Now I am feeling relaxed. When we want to learn something, it's not very comfortable for girls. Generally, they don't go out and learn with other colleagues. They don't prefer learning with someone. The reason I go to Udacity is because a friend of mine told me about how wonderful the course is, even if you know nothing about Android. The courses are designed so that you don't have to have very much detailed experience in Java or Android. So when I started the course, I was very scared that maybe it will be too technical for me to understand, because I don't know very much about Android or Java in very much depth. How to have some logic into a program and how to code in general. But the instructors, like my favorite, Catherine, she makes it so interesting and so normal. 
You don't feel like that you are learning something and it's a good way to learn online. Whatever I feel like learning today, I could just go and then search for it and there will be a course available for that. I was in HR. I was working as a recruiter and I was completely unaware as to what I am supposed to do. I used to feel that I just don't want to be a recruiter. I want to be on that other side, talking technology and talking about gadgets and how technology is changing our day to day lives. One of our relatives challenged me that you are a girl and engineering requires a lot of designs and drawings. I wanted to prove him wrong and that was the time when I heard about Udacity. I started taking some online courses in Java and Android. Very basic things. It is a little difficult but it's just that I don't want to feel that sense of regret. I just don't want that. So Udacity has actually been a saviour in my life. The quality of the projects is very good. After completing those, you feel like you have conquered something. Actually, I finished my run at V yesterday and so today it's just going to be the celebration. I couldn't be happier to be here to see the launch of the Android Skilling Program. There's going to be so many new great Android developers here in India. Good morning everyone. Thank you so much for coming. Some of you know I'm from Delhi. Always fun coming back and meeting all of you. We can scale up developers and scale up mobile developer training to help make India a global leader in mobile app development. Having the universities team up with us in the Skilling Program is going to be a huge opportunity to make a huge difference. Finally, we've launched it. It's been a year since we first introduced this program. Two million developers. I think that it's a really achievable goal and I think that it would do a lot for improving the environment in the country in terms of hands-on programming. So I think it's great. It's a massive number. The possibilities are immense. India will be the largest developer base globally and just to get everyone to start thinking about Android and developing for Android, we're at the cusp of a revolution. Let's do something big. More games, more users, more success. Yes. Developing a successful app isn't easy. To reach a broad audience, you'll need to consider your iOS, Android and mobile web users. And to build for these platforms, you'll need a back-end server to store data and support the apps. Of course, you want to get your users logged in, hopefully lots of users, which means your back-end will have to scale. Then after you've solved your scaling problems, you'll have to find more ways to spread the word to get new users, but have you found a way to measure all this activity? And, oh no, your app is crashing and causing servers to melt down and you haven't even made a dime yet. Don't you wish this could be easier? This is why we built Firebase. It has all the tools you need to build a successful app. It helps you reach new users, keep them engaged, scale up to meet that demand, in addition to getting paid. From the beginning, with Firebase, you'll have Test Lab and Crash Reporting to prevent and diagnose errors in your app. Your back-end infrastructure problems are solved with our Realtime Database, file storage, and hosting solutions. Acquiring new users is easy with Invites, AdWords, and Dynamic Links. And, using the authentication component, you can get those users logged in with minimal friction. 
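As a rough illustration of that last point, here is a minimal sketch using the Firebase JavaScript SDK of that era; the project configuration values are placeholders, and exact method names may differ slightly between SDK versions.

```ts
import * as firebase from 'firebase';

// Hypothetical project configuration, normally copied from the Firebase console.
firebase.initializeApp({
  apiKey: '<your-api-key>',
  authDomain: '<your-project>.firebaseapp.com',
  databaseURL: 'https://<your-project>.firebaseio.com',
});

// Low-friction sign-in: try a Google popup, and fall back to anonymous auth.
const provider = new firebase.auth.GoogleAuthProvider();
firebase.auth()
  .signInWithPopup(provider)
  .catch(() => firebase.auth().signInAnonymously())
  .then(() => {
    // Realtime Database: listen for changes, then write a value that syncs live.
    const ref = firebase.database().ref('greetings/latest');
    ref.on('value', snapshot => console.log('Latest greeting:', snapshot.val()));
    return ref.set('Hello from Firebase!');
  });
```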
Once installed, you can keep your users engaged with notifications, cloud messaging, and app indexing. Then, with Remote Config, you'll have the freedom to experiment with new features and optimize the user experience in real-time. And, of course, you can earn money with the same AdMob component that's been monetizing great apps for years. Last, but certainly not least, our all-new Analytics component, designed uniquely for Firebase, brings insight into how well these components are working for you and your users. With Firebase Analytics, you can measure and optimize your advertising campaigns, discover who your most valuable users are, and understand exactly how they are using your app. Now, all these components work great on their own and provide a solid infrastructure to build out your app, but they work even better when combined in creative ways. So let Firebase handle the details of your app's backend infrastructure, user engagement and monetization, while you spend more time building the apps your users will love. To get started right now with Firebase on Android, iOS, or the web, follow these links for more information. Then, to manage and monitor your apps connected to Firebase, there's a web console to view crashes, set up experiments, track analytics, and a whole lot more. And to learn more about Firebase and all of its components, you can read the documentation right here. We can't wait to see what you build. Thank you for joining us here today. India has come a long way, as I just mentioned. Today, India is the second largest country in the world in terms of number of developers. Soon, it's going to be number one. What we want to invest in is actually training the faculty from your colleges. The potential is so great, and what Google is doing to help catalyze that innovation is really an exciting time for these campuses. We are really trying to provide the best possible experience to teachers in these faculty hubs, because the first step to training 2 million developers is to train the teachers that are going to teach those 2 million. Industry, as of now, demands a lot of updated curriculum for developing 2 million Android developers. Working in a technical university, we can contribute hugely to developing those 1 million app developers. So we're excited that all the raw materials are there to create an innovation revolution in India. I really think the students are going to make some great things, and I can't wait to see what comes out. There's a lot of potential in India, and we need to take it forward. With Google, we can provide rich opportunities to all. That is the essence of the Google program, which I have seen. This is a good move, and this program will definitely be useful to the students, because app development is going to rule the world for the next few years, really. 
Hello and welcome to Google I.O. 2017. I'm Timothy Jordan, and I'll be here between sessions to guide you around the in-person experiences at the festival. 
That's right, even though you're joining remotely, this year you'll get an in-depth look at everything happening on the ground. Have a question about IO17? Tweet it our way today through the 19th using hashtag IO17Request and a team of Googlers will be on-site tracking down answers for you. You just saw our Developer Keynote where Jason Titus led you through our investments in tools and services for developers. That's you. It's our goal to simplify repetitive tasks like dealing with user login, analytics, or synchronizing real-time data. We're providing tools to make it easier to solve these and other everyday problems in simple and powerful ways, and we want to help you build amazing new experiences with machine learning, VR, and voice-enabled interactions. Let's review a handful of the announcements you just heard along those lines. Android is officially supporting the Kotlin programming language, in addition to the Java language and C++. Kotlin is a brilliantly designed, mature, production-ready language that we believe will make Android development faster and more fun. Android Studio 3.0 Canary is our new preview that includes three major features to accelerate the development flow: a new suite of app-performance profiling tools to quickly diagnose performance issues, support for the Kotlin programming language, and increased Gradle build speeds for large-sized app projects. With Firebase, we're providing more insights to understand app performance through a new product, Firebase Performance Monitoring. We're also introducing integration between hosting and cloud functions, adding support for phone number authentication and improving analytics. Oh, and we've also started open-sourcing our SDKs. We've introduced new innovations for you to make it easy for your users to pay for services with the Google Payment API, to build profitable businesses with a completely redesigned AdMob, and to grow a user base with universal app campaigns. There are several powerful new features and reports in the Play Console to help you improve your app's performance, manage releases with confidence, reach a global audience, and grow your business. Android Instant Apps is a new way to run Android apps without requiring installation. Now anyone can build and publish an Instant App. There are also more than 50 new experiences available for users to try out from a variety of brands such as Jet, New York Times, Vimeo, and Zillow. And finally, we're adding two new certifications for web developers, the mobile site certification and the mobile web specialist certification. Those are some of the highlights. Check out our Google Developers blog for a more in-depth recap of this afternoon's announcements. We have 14 content tracks with over 140 breakout sessions all livestreamed, and in between them all, I'll be your all-access pass with sandbox tours, interviews, and even a peek or two at the parties. Tune in to the livestream on Google.com slash IO, catch me in between sessions on any of the livestream channels, and follow the conversation on hashtag IO17. This is Google IO 2017. Good afternoon, everybody. Thanks for coming to IO 2017. Yes, we got a great one to start. I'm so excited to kick this off. My name is Taylor Savage. I'm a product manager here at Google on the Chrome web platform team. Now, we work on a number of different web developer-facing products on this Chrome web platform team, and today I'm here to talk about one of our biggest, which is the Polymer Project. 
So the Polymer Library is a small JavaScript library that makes it easier to build web components. Now, building big complex things out of smaller, less complex things is a pretty fundamental part of any engineering discipline, and it's certainly fundamental to web development. It's hard to imagine building any kind of large, complex website or web app of any meaningful scale without some way of encapsulating and reusing logical and interface components. In the past, though, the web platform itself has never really provided us developers with a clear way to build components. We've had HTML elements baked into the platform, but we haven't had a platform-provided way to actually design and build our own components. That method has really been nowhere to be found. So as developers, we've always had to kind of drag along our own component model in JavaScript. So along with our application code that actually ran our applications, we've had to ship an invented abstraction in order to build any sort of meaningfully large application or to be able to reuse components in any sort of meaningful way. Now we have web components. Web components provide us with a very flexible, a very low-level, but a fundamentally standardized way for building components on the web. They give us encapsulation and reusability and interoperability, all things that a good component model needs, but baked directly into the web platform itself. Now we've been excited about the prospect of web components for a few years now, and we've been working hard on the Polymer project to make it easier to take advantage of the promise of web components today. But this year, this I.O. is by far the most exciting moment that we've ever had throughout the entire history of web components. Because as of this year, Safari now natively supports web components. Yeah, this is incredibly exciting. With Safari 10, the Safari browser shipped native platform support for custom elements and Shadow DOM, the two most critical web component APIs. Chrome supports these APIs. Opera supports these APIs. We've been on stage at I.O. promising web components for a few years now, but now they're here. They're actually here. So with these browsers, there are over a billion active mobile devices in the world, in our pockets right now, in all of your pockets right now, that natively support web components with their browsers. Web components are no longer some distant future. Web components aren't really even that cutting edge anymore. Web components are just a reality of today's web. A massively powerful new tool in our toolbox as web developers, and really a deeply fundamental change to the way the web platform itself works that makes it a much more capable development platform to build the types of experiences that all of our users expect today. So our mission on the Polymer project on the Polymer team is not just to build useful tools for web development, but to work to fundamentally move the web platform itself forward, and to make this future of web development more possible today. So the Polymer project as a whole started a few years back with a directive from some engineers on the Chrome team, and they said, we need to put together a team of web developers that can live in the future, on the web platform, not as it is today, but as it will be, and then kind of report back to the present to help us, the web platform team on Chrome, know what direction to go. 
So the Polymer project in this way was kind of designed as a laboratory for future web platform features that we wanted to bake directly into browsers. To find out what worked, to find out what didn't work, to provide feedback back to browser implementers and spec writers, and to do whatever we could to then bridge the gap between the web platform as it existed today and the new platform features that developers were going to be able to take advantage of in the near future. So fortunately, we are not alone on the Polymer project in this mission. We have friends also looking towards a web development future. In fact, we have a lot of friends looking forward to a web development future. The open source web ecosystem is a dazzling place. There is a ton of innovation, as everybody here knows, happening all the time when it comes to web development. React has brought functional reactive programming for UIs into the mainstream. Preact has made this possible at a fraction of the size. Vue takes a very bottom-up and modular approach to framework design. Svelte looks to start with a framework and then compile it all away. There's so much exciting stuff happening all the time in the web development open source ecosystem. It's an awesome place to work. There are luminaries charting new paths, trying new things, guiding the way all the time. So on the Polymer project, we look at our mission, we look at our contribution to this open source ecosystem through the lens of being part of the web platform team here at Google. So the Polymer team itself is really half web developers and half web platform implementers. And what we quickly realized was that web platform users, web developers, framework authors have ways that they like building web apps. And web platform implementers, so browser engineers who spend their days optimizing platform features, building consensus around new platform features and APIs across different browser vendors, web platform implementers have ways that they think web apps should be built in order to be fast, in order to take advantage of all the things that this web platform does well and avoid the things that it doesn't do well. And these two worlds, how web developers like to build and how browser implementers like to build, sometimes don't agree. And sometimes they loudly disagree. So before web components, in order to build an application of any meaningful scale or size on the web, you needed to use a large JavaScript framework. It was an absolute requirement. And JavaScript frameworks can be really amazing. They allow for all sorts of different mental models to think about how you're going to build your app. They help developers avoid foot guns that are in the platform. They grow large communities and ecosystems in and of themselves in their own rights. But fundamentally, as web implementers, we see the fact that you can't really build apps with just the platform as a flaw of the platform itself. So with the Polymer project, we are pushing to make it possible to build and deploy apps that are closer to the web platform and use that platform the way implementers intend it to be used rather than substituting a separate platform on top. And we think, of course, that there can be some very real benefits to using the platform directly to build an app: a better interaction model, shallower learning curves, better performance, better usability and interoperability, faster load times, and ultimately a better user experience. 
So we sum up this mission in our slogan and rallying cry, which is to use the platform. There are two properties of the web platform that make it great. One is it's standards-based, and two is that it's open and distributed. Anyone can look at the specs and build a browser, the specs themselves are open source, and no single entity can decide what makes up the web. So what we end up with is a number of different browser vendors in this collaborative competition with each other, this cooperation, each looking to provide a great user experience to its users in its own right and each building off the same kind of general set of specs, but inevitably providing its own slight interpretation of what this web platform is. And so the end result is a web platform that moves very slowly and very methodically. Now there are benefits to a slow-moving methodical platform, but today more than ever, we need that platform to change and to adapt. The original web platform was designed for documents, and then it allowed you to style those documents and then maybe it allowed you to script some things in those documents, but the platform itself wasn't fundamentally constructed around the idea of building applications. It lacked very basic things you need to build an application. It lacked proper encapsulation. It lacked a component model. It lacked an efficient way to load resources. And frameworks do an amazing job of making it possible to build apps that can survive on this web platform. They bring along the things you might need to build an app. They provide encapsulation. They provide a component model. They create a layer of abstraction that lets developers do things that the low-level web platform hasn't let them do. On the Polymer project, though, we have our web platform hats on. We have our browser implementer hats on. We want the platform itself to be rich and lush and powerful. We want the code you write to be eminently reusable. We want it to be more maintainable. We want it to be lighter weight and faster. We want it to be possible to build higher quality applications closer to this platform itself. And for your applications built close to this platform to directly benefit from all the hard work that's going on all the time to make the platform better and faster. So on the Polymer project, again, with our web platform hats on, we don't want to find a way to work around the web. We want to terraform the web itself. We want to make it easier for developers to take advantage of what the platform does well and actually change what the web doesn't do well. And that is what we mean by use the platform. Now, of course, use the platform doesn't necessarily mean use only the platform and nothing else. That's not reasonable. There's this wonderful symbiosis that exists between the platform and the JavaScript community. And there's plenty that other frameworks can do that the platform can't provide, that Polymer can't provide. There's plenty that libraries and tools can do that wouldn't necessarily make sense in the platform or that you couldn't reasonably build cross-browser consensus around in order to get baked directly into the platform. So with use the platform, we're just trying to go from the bottom up. We're trying to help raise the tide of the platform so that developers write just the code that's absolutely unique to their application and let the platform itself handle more and more of the glue. 
So the ultimate success for us on the Polymer Project would not be to have everyone in the world using Polymer. The ultimate success for us would be for the platform itself to be powerful and extensible enough that Polymer and our tools would no longer have to exist at all. So with web components shipping broadly, along with other new powerful platform features like HTTP/2, like Service Worker, use the platform is not some hypothetical vision. It's a very real strategy that major companies and major projects are using today. So we get asked a lot, who is using Polymer? And the answer is a lot of people. So Comcast, the largest media company in the world, uses Polymer to share components across multiple different Xfinity properties that they own. USA Today uses Polymer to build rich interactive experiences like they had for the Olympics and the election. ING, the largest online bank in Europe, uses Polymer to share components across the dozens of different applications they have built for dozens of different markets. Net-a-Porter, the largest luxury fashion e-commerce company, uses Polymer to share rich components to keep a consistent design across multiple properties and pages. BBVA, one of the largest banks in Spain and Latin America, uses Polymer to share components to be able to quickly build new applications. Coca-Cola, the largest beverage company in the world, uses Polymer components to ensure consistent branding across all their different digital signage. Electronic Arts, one of the largest game companies in the world, uses Polymer components to quickly spin up beautiful new sites for all the new games they're coming out with. And General Electric uses Polymer for Predix, which is their industrial Internet of Things platform, to make it easy to build all kinds of great data visualization applications on top of the Predix platform. So companies around the world are using Polymer to build applications, and companies around the world are also using Polymer and the Polymer App Toolbox to build lightning-fast progressive web applications that use less data and load fast. Sites like Jumia Travel and Konga, the two largest e-commerce platforms in Nigeria, have production progressive web apps built using Polymer. Wego, one of the largest travel booking sites in the Middle East and Asia, has a blazing fast progressive web app built with Polymer. And Ola Cabs, the largest ride-hailing company in India, just launched a progressive web app using Polymer. And of course at Google, we like to use the platform as well. So there are over 700 Google projects that use Polymer. In fact, Polymer is one of the largest front-end dependencies used here at Google. We use it in production on products like Chrome and YouTube Gaming and Play Music. The new Google Earth just launched using Polymer. And the new YouTube redesign is built using Polymer, one of the biggest web properties in the world making a bet on web components. So we're seeing huge momentum in Polymer and web components, but this is only the beginning. We've been working on a ton of new stuff as well, which we're extremely excited to share with you today. So here to talk about some of the latest projects that we're working on on the Polymer team is Wendy Gitzberg, the product manager of the Polymer project. Thanks, Taylor. The first version of the Polymer library was released about three years ago, and it was one of those, you go live in the future, report back and tell us what you found, proof-of-concept projects that Taylor was talking about before. 
It was a proof-of-concept that you could take these nascent web APIs and wire them together in such a way that you could build encapsulated, reusable, platform-based components. And we hacked through that for a while, explicitly as a non-production-grade developer preview, but that didn't really seem to stop anyone. We saw that developers were still picking it up and trying it and using it, and they really liked it. And even some large companies, like Comcast, immediately took to this idea of having platform-based components. So we took the next logical step, and we decided to move out of a proof-of-concept and take Polymer into the big leagues, to graduate a little bit as a production-quality web components library with Polymer 1.0. And since we launched 1.0, we've seen a great deal of the advancement and traction that Taylor was talking about before, and Polymer 1.0 came a really long way on its own. We've seen flourishing production web apps all the way from Southeast Asia to North America. And with these web components APIs supported widely across browsers, we have the perfect opportunity to realize this initial vision even further. So we've taken the best of the Polymer library and the best the platform has to offer and fused them together to form the next major version of the Polymer library. I'm proud to announce Polymer 2.0. Polymer 2.0 is based on web platform standards that are natively supported on over one billion mobile devices. Because of advancements in the platform, Polymer 2.0 is one quarter the size of 1.0. And all of this makes web components built with Polymer 2.0 interoperable by default, meaning you can use them more easily in popular JavaScript frameworks. Here to talk more about Polymer 2.0's main goals and how we achieve them is one of the tech leads on our core Polymer team, Kevin Schauff. Thank you, Wendy. So on behalf of the entire Polymer engineering team, we are really excited to get this new release out to all of you. So I'm going to give a brief introduction of Polymer 2.0 by talking about three key goals that we organized ourselves around for this release. So first is aligning to the latest platform features that are shipping across browsers today, then delivering on the promise of web components interoperability, and then providing a smooth migration path for our existing customers to Polymer 2.0. So in the last couple of years since Polymer 1.0 shipped, all of the web browser vendors got together and agreed on a new set of web standards for defining web components. And in the modern browser, defining a new web component is as simple as writing an ES6 class that extends from HTMLElement. Now, you may not be familiar with it, but HTMLElement is actually the built-in base class that all elements in the browser extend from. And now, using ES6 classes, we can do that very idiomatically. That's how simple it is to define a new element that extends the browser. And then by registering that element with the custom elements API, we can associate that class with an HTML tag name. And then anytime the browser sees an instance of that element, either in static markup or created dynamically, say via a framework, it's going to create an instance of our class. It's going to run the constructor. And it's here, and in a few other lifecycle callbacks provided by custom elements, that we can implement the unique behavior of our element. And that's where Shadow DOM comes in. So Shadow DOM allows us to create a scoped subtree of DOM that's completely hidden from the outside. 
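To make that concrete, here is a minimal sketch of the pattern Kevin is describing, using only the native APIs; the element name and its contents are hypothetical.

```ts
// A hypothetical <hello-card> element built with nothing but native APIs.
class HelloCard extends HTMLElement {
  constructor() {
    super();
    // Attach a shadow root: a scoped subtree whose DOM and styles are
    // hidden from, and protected against, the surrounding page.
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>p { color: teal; }</style>
      <p>Hello from a web component!</p>
    `;
  }
}

// Registering the class associates it with a tag name; from now on the
// browser constructs a HelloCard for every <hello-card> it encounters,
// whether in static markup or created dynamically by a framework.
customElements.define('hello-card', HelloCard);
```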
All of the styling and DOM are completely encapsulated and self-contained. And this is the real key to ensuring that web components are interoperable and can run in any environment. And what's awesome is that the code that I had up there on that previous slide requires no library code at all. So those three or four lines of JavaScript we can actually just paste into an inspector plugged into any one of your phones right now, hit Enter, and the browser knows how to construct and render that element, that component. Web components are native, right? So you may be wondering, if the browser is becoming so capable, if we have this built-in component model, why do we need Polymer at all? So Polymer's goal in life is to fill in a few key features that we find useful when building custom elements. Things like being able to render the contents of the Shadow DOM from a template, keeping the elements in that template in sync with the element state via data binding, a little bit of declarative sugar for things like custom event handlers. And then Polymer elements also keep their property and attribute APIs in sync so that you can send initial configuration down in markup and then interact with your element after that via properties in JavaScript. So this is a sampling of the features that we provide in Polymer. In Polymer 1.0, we provided these as kind of an opaque function call that kind of hid a lot of the beauty, the elegance of web components. So now that ES6 classes are native in the browser, we're actually, in Polymer 2.0, moving to a new syntax where we're layering the Polymer features onto that native, elegant syntax in the browser. So if you're just getting started with web components, I actually really recommend you take this little snippet of code and play with it in the inspector in the browser. You can just get a feel for the web APIs right there. And then as soon as you're ready to start layering in some of the functionality that Polymer provides, it's as simple as changing the base class. So in Polymer 2.0, to use the features of Polymer, all you have to do is extend from Polymer.Element instead of HTMLElement and then provide a little bit of metadata on your class to start taking advantage of those features. And because we're aligning to the ES6 class syntax, you get things like native inheritance for free. So you can extend an element that you've created or found on the internet, change some of its implementation details, maybe return a different template that I want to render, and define a new element that has some extra information or functionality. So to recap, Polymer 2.0 is aligning to these new platform features that are shipping across all of the web browsers: ES6 classes, custom elements, Shadow DOM, and CSS custom properties. So I didn't really show custom properties in action, but this is very analogous to how custom elements allow us to extend HTML. CSS custom properties are a new native browser feature that are actually shipping across all browsers now. As of Edge 15, which shipped a couple of weeks ago, we can now use CSS custom properties to define a custom styling API for our custom elements and for that Shadow DOM. And of course, for the remaining browsers, which are mostly users on desktop, right? So most of our mobile browsers, with Chrome and Safari, all have all of these features natively. For the remaining desktop browsers, we'll continue to provide a set of high quality production polyfills that you can use to target all of these features. 
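As a rough sketch of the base-class swap Kevin describes, here is a hypothetical element written against the Polymer 2.0 class API; the element and property names are invented, and in Polymer 2.0 the element's template would typically live in a matching `<dom-module>`.

```ts
// Polymer is assumed to be loaded globally, as it typically was in 2.0-era apps.
declare const Polymer: { Element: new () => HTMLElement };

// The same kind of element as before, but layering Polymer 2.0's features
// onto the native class syntax simply by swapping the base class.
class UserCard extends Polymer.Element {
  // Tag name Polymer associates with this class (and with its <dom-module>).
  static get is() { return 'user-card'; }

  // Declared properties get attribute deserialization and data binding.
  static get properties() {
    return {
      name: { type: String, value: 'developer' }
    };
  }
}

customElements.define(UserCard.is, UserCard);
```

A styling hook built on CSS custom properties would then live in the element's template, for example `color: var(--user-card-color, teal);`, which consumers can override from outside the shadow tree.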
And the result of all of this is a dramatic reduction in code payload. On top of using these platform features, we're also shipping Polymer 2.0 with an improved polyfill loader that detects the capabilities of the client and only loads the polyfills necessary for that individual browser. And the result is this really nice reduction in payload: as the capabilities of the browser go up, the amount of code that you need to load and run on the client to use all of Polymer's features goes down. And if we compare this to Polymer 1.0, you can see this is a huge improvement, especially for the most capable browsers, where we're able to use Polymer's features in a fraction of the size. And this slide right here is really the promise of Web Components in action, right? Aligning to the latest platform features. So I'll move to the next goal. This is around interoperability, and interoperability is a feature of Web Components that we promised from the beginning: building your components around a set of standard APIs that the browser implements means that you can use them in any environment. So we've always known that Polymer elements could be used in static markup from the server or inside of other Polymer elements. But the promise has really been that you could use Polymer elements here, in an Angular component, for example, or even in some of the hottest new JavaScript frameworks out there, like Preact. But unfortunately, in Polymer 1.0, the realities of polyfilling these browser features kind of got in the way. When we set out to ship Polymer 1.0, we found that the mobile browsers of the day, back in 2014, weren't capable enough for us to robustly polyfill the entire Shadow DOM spec. And so we made a conscious decision, really reluctantly, to ship a proprietary API for interacting with Polymer elements in the DOM that mimicked a lot of the Shadow DOM features, so that you could use those features on those underpowered devices. But unfortunately, this obviously broke that promise of interoperability. So I'm really excited to say that things are different now. With all of the mobile browsers that we're all using now shipping Shadow DOM, and now being able to leverage Shadow DOM, the importance of the polyfills is slowly fading away. And so in Polymer 2.0, we've taken the opportunity to rewrite the Shadow DOM polyfill to be completely transparent, so that any framework that's rendering DOM is effectively using Shadow DOM. And the result of this is that elements built with Polymer 2.0 will now be able to be rendered by all of the JavaScript libraries out there today. And as you know, in the JavaScript community, the frameworks that everyone's going to be using tomorrow are probably going to be totally different. But the key point to take away is that elements built with Polymer 2.0 and web components will still be compatible with them, right? The promise of web components. All right, so last I want to touch on the migration path that we've charted for Polymer 2.0. As Taylor mentioned, there are a ton of really awesome companies making huge strategic commitments to Polymer and web components, and we're really excited to get this release out to all of you. But we also know how scary a big breaking change can be to the foundation of your tech stack, something like the component model that you're building everything from, right?
And so we spent a ton of time and effort on the Polymer team to craft a really nice migration path from Polymer 1 to Polymer 2. So on the left here, I've got an example of the Polymer 1.0 syntax, and if you're not familiar with it, that's fine. All I want you to do is look at the right and get a feel for the amount of work that will be required to port from Polymer 1.0 to Polymer 2.0. All right, everybody spot the difference? That's right, there isn't any. Because Polymer 2.0 is shipping with a backward-compatibility layer that allows elements written with the Polymer 1.0 API to still target Polymer 2.0 and all of the native browser features that we're taking advantage of. And so for the vast majority of Polymer 1.0 code out there, the only changes to migrate are some minor changes to the template to align to the new Shadow DOM spec, and that's really mechanical. So for users getting started with Polymer 2, we'll have a really nice set of options for getting started. A lot of users will be able to just start right away with the ES6 class syntax and get all the benefits of using native classes in the browser, like inheritance and native super calls. On the other hand, a lot of users that have existing Polymer 1.0 code will be excited to port that to 2.0 as quickly as possible, and for those users, they can simply load the compatibility layer, make those minor changes to the template, and be off and running, and then migrate to ES6 classes over time as they see fit. And then last, we're also shipping with what we're calling a hybrid pattern, which allows users to port elements to a common subset of features in Polymer 1.0 and 2.0, and then those elements can actually run in applications targeting either Polymer 1.0 or 2.0. And this will be really useful for organizations shipping a lot of components to a lot of production applications at the same time, because it means they won't have to freeze their application development while they're porting their whole element set. I'll also mention that we've dogfooded this migration path with all of the elements that the Polymer team maintains, something like 75 of them, and then a whole bunch of our demo applications. We're also hearing really good things from the community about the migration from Polymer 1.0 to 2.0. So that's Polymer 2.0 in a nutshell. We're aligning to the latest platform features, we're delivering on the promise of web-components interoperability, and we're providing that smooth migration path for people to get up and running with it today. So with that, I'll hand it back to Wendy to talk about all the other awesome stuff that's happening on the project. Thank you. All right. Thanks, Kevin. So the Polymer project isn't just about the Polymer library. It's made up of a bunch of things that the Polymer team is working on. Web development nowadays can be unnecessarily challenging, so we bring a whole suite of tools, libraries, and patterns to support you. We have our elements, a collection of about 100 web components made by our team. This includes Material Design UI components, Firebase integration pieces, data storage elements, and much more. To make development with web components as streamlined and accessible as possible, we have a full suite of web-component tooling that works whether you're using Polymer or any other sort of web components. And to help you learn best practices for loading web components both reliably and at scale, we've created a few helpful patterns.
Most recently, we've developed the purple pattern, which focuses on fast-first load emphasizing time to interactive and reliability in all network conditions, including usability offline. And with the Polymer library is the foundation for building components and tools and patterns to help along the way. We've also gone that next step and built the Polymer app toolbox. The Polymer app toolbox is a set of tools, templates, and off-the-shelf components to help you build web component-based progressive web apps more easily. So that's quite a lot under the Polymer project umbrella. And over the past year, we've advanced each portion of the library and the other projects to pack in more power and help you build for the future of the Polymer app. So the Polymer library exists to help you build encapsulated reusable components. We know this. So, of course, the Polymer team uses it ourselves to power all of the elements that we've made. And as Kevin mentioned, we've taken all of our elements, about 100 of them, and ported them over to hybrid mode so they can be used in projects built with Polymer 1.0 or 2.0. And so, sure, there are a bunch of elements that we've made, including source, off-the-shelf, high-quality web components that you can use today. And if you want to find any elements to use, they're all conveniently located in the same place. On webcomponents.org. Webcomponents.org launched just a few months ago, and it's a consolidated home for high-quality, open-source web components of all types made with plain JavaScript or library-based. And since it's released just a few months ago, we've seen an overwhelming wave of organic growth on the site and the community around it. As of now, there are almost 1,000 elements up there to choose from. And besides that large selection, there's a rich feature set on the site that makes it an awesome place to search for, try, and vet components before downloading. So there's automatically generated documentation across the entire site that gives a consistent look and feel no matter which component you're looking at. There's also inline editable demos. So if you watch here, if you're looking at this game card component, you can click, see that CSS animation inline. You can even edit the attributes down there and change the rank to an ace of five. It updates real-time, pop in another one, unrevealed, and the card flips over. It's pretty awesome. You don't have to download a thing. This works right there on the page. We also have collections. If you want to make it even easier to find different elements that are related, this is the Google Cast Collection. So if you want to add Cast Integration to any of your projects, it's as easy as finding this collection and popping one of those elements in. There's also a couple from IBM Research and other developers in the community who have collected really great components together. And lastly, if you really want to vet and see how popular a component is before you download, here's a super helpful one if you want to add emoji range to any of your projects, because, of course, we all want to do that and we've got 120 stars on GitHub. You can check out Forks, Active Maintainers, and see the health of an element in the open-source community before downloading. And lastly, I just wanted to show some super cool elements that I really like that you can install, import, and insert directly into your code. On the top left, you can check out that game card. It's got those beautiful CSS animations and resizable SVG graphics. 
The queens and the jacks look awesome. There's that sign-here element with a really cool ink texture for the signature. And there are way, way more. And if any of these look cool to you, just remember that this is a tiny sampling of what's on webcomponents.org, and there are many more useful, interoperable components. Anybody can get up there and publish an element, so anybody here, if you want to go home tonight and do that, it's open to you. And a big reason I really love webcomponents.org is because of that community-driven variety that only continues to grow and flourish, including web components of all types, because Polymer is just one way to build web components. It's that same philosophy which inspires our tooling team's commitment to building tools for all web components, not just those built with Polymer. Because it's 2017, and where would a web developer be without her trusty tools? I know that a lot of developers, including myself, have a love-hate relationship with tooling, so we make sure to keep our tools as straightforward as possible, so they really empower you to be better while you're developing. Two of our most popular tooling projects are our IDE plugins and our CLI. And yes, they work for Polymer and vanilla web components. I'm not going to go into too much detail on these, because we have a whole talk about this tomorrow. But real quick, our IDE plugins are awesome for Polymer and web component development. We have them for Atom and VS Code, and they include features like type-ahead completion, real-time linting, real-time error checking, and even inline documentation, so you can check out docs as you're typing to get reminded of different properties. We also have the Polymer CLI, which you can think of as your web component multi-tool, or your Swiss Army knife. It's simple, it's smart, and it's got just a handful of really powerful commands that make building with web components smooth and efficient. So by using the Polymer CLI and all of its commands, you can get a web app template with a responsive design, client-side routing, and an automatically generated service worker, with little-to-no effort on your part; just a few command-line scripts. So our CLI is built around the concept of taking all of the best practices of progressive web apps with web components and making them as simple and straightforward and accessible to achieve as possible. So I wanted to dive into one of our newest best practices. How many of you are familiar with the purple pattern? Oh, awesome. Awesome. So the purple pattern was designed by the Polymer team to promote efficient, scalable loading in client-side web apps. We introduced it last year, actually Kevin did, on the stage at I.O., and we've seen it really take off since then. You'll hear a bunch of folks talking about it at I.O., including Addy Osmani and Alex Russell, and it's pretty apt that Alex Russell talks about it. Some of you might be familiar with Alex Russell, or @slightlylate on Twitter, but if not, know that he, one, cares a lot about web performance, and two, is very active about that on Twitter. He's also an engineer on the Chrome team who sits just a few feet away from us, and us being on the same team does not make us immune to that. It actually makes us a really conveniently located audience. So in fact, it was one of our many performance discussions with Alex that led to the creation of the purple pattern, so you all have him to thank for that. And even though we read it as purple, it has nothing to do with the color.
It's just a really convenient acronym highlighting key loading techniques, and you can think of it a lot like AJAX, in that it doesn't specify one specific API that you must use; it's more of a roadmap for how you should be building and loading. The first P stands for push: proactively delivering all of the resources necessary to satisfy your users' initial request, and you can do that using HTTP/2 server push or preload. R is for render: render only the requested route, because the most important thing when loading is getting that initial critical view rendered and interactive. The next P stands for pre-cache: pre-cache the resources a user might want to access later, and you can do this using a service worker, which will keep you one step ahead of a user as they interact with your app. And L is for lazy load, which is how you'll be loading your code or your resources, whether that's from the cache or by making an additional network request. So this might seem like a lot to do, but with the power of the Polymer CLI, and mostly just the inherently granular, declarative, and low-overhead nature of web components, we've done much of this grunt work and automated it away. So now getting started with purple is as simple as choosing a purple-ready app template, running a CLI command or two to let us analyze your dependency graph and generate a manifest, and then following the instructions to deploy to a serving environment, like Firebase or App Engine. That's it. And if you want to learn more about purple, you can see it in action, both in demos and in production today. Purple is actually being used in the most high-profile PWA on the web, Twitter Lite, Twitter's new progressive web experience. And Twitter Lite isn't even built with Polymer; it's just using these purple principles in action. But if you want to build purple experiences faster, we've got you covered with the Polymer App Toolbox and our Polymer App Toolbox sample apps. Polymer News is a full-featured, open-source progressive web app built using the Polymer App Toolbox. Along with a demo and all of the code, there's full documentation outlining our design decisions, architecture choices, and even advanced features, like integration with AMP or how to include ads; all of that is laid out for you online. But that's not all our apps team has been working on. We tasked a few of the engineers with building something awesome out of the newest APIs. We challenged them to deliver a mobile web experience that you would only expect from native and, of course, to do it in a fraction of the size. And they did. And they called it Cheese. So let's take a look at the demo. When Cheese opens up, it looks like a native app, but I can assure you it is a full web experience. It's got that jumping-up-and-down button that says "say cheese" at the bottom, encouraging you to either upload a photo or take a photo. So we're going to upload a photo. And the photo pops right onto the screen, and you can see these beautiful CSS animations scanning the image for facial expressions and for her face. And then it jazzes up the photo with emoji, popping on eyes and silly hats; it even gets your expression and replaces it with an emoji. It's pretty cool, yeah? And then you can even swipe through and find all of the other great things we've added. And when you find one you like, well, let's go all the way back; I think I like that first one a little bit. You can even move the emoji.
So maybe we want to take the cheese off of Valjean's head and put it on a Frankie's because he looks pretty good with it. And then you can download the photo and share it later. And as you can see, our team, our family, our friends had a blast using cheese and Google's Vision APIs to jazz up our photos with emoji. You can check out Kevin's kids looking really great there. My dad blending in with some sheep in the top left. And it was really fun to use cheese but Frankie, Valjean and Keanu who built it had such a great time building it because they got to use all the latest web technologies. They built a Polymer 2 class-based syntax progressive web app using purple pattern with H2 push on Firebase. All of that earned them 100 out of 100 on Lighthouse which means it has offline support. And it uses Google's machine learning technology through the Vision API to power all of that finding your expression, finding your eyes, your nose and even cooler it's got some great accessibility features. So it uses Google AI to read your image, figure out what's in it and add an alt tag right on there so you can use voiceover and listen to what you took a photo of. It's pretty crazy. Here's the URL for cheese cheese.polymer-project.org We'd love to see your selfies whether you're here at I.O. or watching somewhere around the world and then you can share them with us on Twitter. See if you can come up with any crazy combinations. So that's everything. That's the Polymer 2.0 library built for platform-based interoperable web component development. Web components.org and our house-made Polymer elements all ported to hybrid mode, a suite of web component tooling with our CLI and our IDE plugins, the purple pattern for scalable, efficient loading and the app toolbox and app toolbox reference implementation, Polymer News. This is our mission in action, bringing the future of web development into the present by enabling web developers to more easily lean on everything new and powerful the web platform has to offer. We are committed as a team to researching and developing better ways to leverage all of these new platform features. We're going to continue to advance our library and support developers and companies large and small that depend on us and we'll focus on spreading news and best practices to make sure that developers can build with the platform symbiotically for the long term. The future is here and with all of the recent and revolutionary platform advancements right now is a really awesome time to be a web developer. So we hope you enjoyed hearing all about the latest releases from Chrome's Polymer project. There will be many, many more ways to learn about us all throughout I.O. In fact, this is our largest presence at I.O. yet. If you want to learn more, yeah. If you want to learn more about 2.0 and Alucard, you can check out Monica's talk bright and early 8.30 tomorrow on stage one. If you want to learn more about tools, you can check out Justin's talk tomorrow right here on this stage at 3.30. We've got refreshed brand new code labs. We've got onboarding material on our site. We're going to be in the web sandbox and at office hours. You can come talk to members of the team. Most excitingly, you'll have to make it to the after hours event tonight to check out the Polymon battle stage. If you were at CDS, if you played Santa Tracker, if you're at the Polymer Summit, you'll know about this. But it's a progressive web app where you can battle with your team. 
And if you want to see even more about Polymer after I.O. is said and done, there's a full event dedicated to it, the Polymer Summit. And we're extremely excited to bring it back for a third time in Copenhagen. The summit will be in Copenhagen on August 22nd and 23rd, and this time we want to hear from you, the community. So if you want to talk on stage at a Google developer event, now is your chance. Go to this URL, g.co slash Polymer Summit 2017, to hear more about tickets, which are free as always, and updates. Hope to see you all there. That's all for us. If you have questions or just want to say hi, feel free to stop by. Thanks. Welcome. This is our first certification summit. You guys and ladies are among the first certified Android developers. The developer base here is growing very fast, on its way to becoming the largest developer base in the world. The interesting point is that India is a mobile-first market; however, the percentage of developers developing for mobile is relatively low. So we're trying to really supercharge that. India is one of the emerging markets, with 80% smartphone growth expected through 2019. You guys are Android certified developers. And just imagine that you are going to reach this many people with the applications that you're going to develop. They're not trying to solve for the entire world. They're trying to solve for their own users. You are, at the end of the day, developing a product, and not just for yourself; you're developing for a consumer. So I'm going to talk to you guys about what's new with Android O. Any of you guys use some of the Firebase 2.0 features? Yes. It's about recognition, about getting a job. It's about growing your career. But there are bigger forces at play. I feel that development, mobile development, Android, can make a difference actually in the world. Fixing problems in one's own community, whether it's water, education, environment. But we want to support you in connecting to communities and creating change in the world. Firebase makes authentication easy for end users and developers. Most applications need to know the identity of a user so they can provide a customized experience and keep their data secure. Firebase supports lots of different ways for your users to authenticate. If your users want to authenticate with their email address, you can build that for them. Firebase Auth has built-in functionality for third-party providers such as Facebook, Twitter, GitHub and Google. It can also integrate with your existing account system if you have one. You're given the choice about how to present login to the user: you can build your own interface, or you can take advantage of our open-source UI, which is fully customizable and incorporates years of Google's experience in building simple sign-in UX. No matter which one you use, once a user authenticates, three things happen. Information about the user is returned to the device via callbacks; this allows you to personalize your app's user experience for that specific user. The user information contains a unique ID which is guaranteed to be distinct across all providers and never changes for a specific authenticated user; this unique ID is used to identify your user and what parts of your backend system they're authorized to access. Firebase will also manage your user's session, so that users will remain logged in after the browser or application restarts. And of course it works on Android, iOS and the web.
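As a rough illustration of that flow on Android, here is a minimal Kotlin sketch; the helper function and the error handling are hypothetical, while signInWithEmailAndPassword and the returned user's uid are part of the Firebase Auth SDK.

```kotlin
// Minimal sketch of email/password sign-in with Firebase Auth; the surrounding
// function is hypothetical, the SDK calls are the ones described above.
import com.google.firebase.auth.FirebaseAuth

fun signIn(email: String, password: String) {
    val auth = FirebaseAuth.getInstance()
    auth.signInWithEmailAndPassword(email, password)
        .addOnCompleteListener { task ->
            if (task.isSuccessful) {
                // Information about the user comes back to the device via this callback.
                val uid = auth.currentUser?.uid  // stable ID, distinct across providers
                // personalize the UI and authorize backend access with uid
            } else {
                // sign-in failed; surface an error to the user
            }
        }
}
```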
That's Firebase Auth, allowing you to focus on your users and not the sign-in infrastructure needed to support them. Did you know that the average user has 36 apps on their device, doesn't use three quarters of them most of the time, and of those, about one third have only ever been used once? Well, what if that's your app? You've done the research, you've written the code, you've performed the testing, you've perfected the design, you've gotten the installs, and then nothing. So how do you prevent this? App Indexing helps you re-engage with your users through tight integration with Google Search. As well as appearing in search results, it surfaces your app through autocomplete and Now on Tap. All you have to do is get your app in the index, and when users search for the content that's already in your app, they'll be able to see your app directly in the search results and launch it right from there. It's as easy as that, but how does it work? If your app and site have similar content, you associate them with each other. Then your app can receive incoming links from search: on Android these are achieved using standard Android App Links, and on iOS using standard iOS universal links. When a user searches for your content, they can then find your app, and if they have the app installed, the result will link directly to it. When the app launches, it sees the address of the indexed content and decides which screen to load to show it. It's really as easy as that. You can also use the App Indexing SDK to submit content to the search engine based on how people use your app; when people use your app, your search position can be improved. With App Indexing you get into the index, putting your app into Google Search and allowing you to re-engage your users. So, you've built an amazing mobile app that your users are going to love, but you want to get it into people's hands and let them see just how awesome it is. Well, AdWords helps you do this, putting ads for your app in front of billions of people that use Search, YouTube, Google Play and more. You can quickly set up an ad campaign to reach the type of users that might be interested in your app. You only pay if the user clicks on that ad, and you can set the budget and acquisition costs that you're comfortable with. But how do you know you're reaching the right users? Maybe some will install your app and forget about it, while others will make it part of their daily lives. Firebase Analytics helps you tell the difference. You can define events that happen in your app that you consider to be important, such as reaching the first level of your game, purchasing a fancy new pair of sunglasses, or returning every morning to check out new products. You can tell AdWords which of these events are most important to you, and then AdWords will display ads to people who are more likely to complete these important actions in the future. You can also build audiences, which are specific segments of users, and have AdWords display your ads to them. For example, imagine that you have a group of users who are very active and have added a product to their cart, but haven't purchased yet. Well, you can use Firebase to create an audience of just these people and then use AdWords to give them specific ads and encourage them to come back to your app and take action. Understanding your users and engaging with them at just the right time and in the right way will help you build loyal users for your app. Firebase and AdWords, working together to help you grow your user base. Get started today; your new users are waiting.
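Here's a minimal sketch of what defining one of those important events might look like with the Firebase Analytics SDK on Android; the event name and parameter are made up for illustration.

```kotlin
// Log a custom event that AdWords can later optimize toward; "level_reached"
// and its "level" parameter are hypothetical names.
import android.content.Context
import android.os.Bundle
import com.google.firebase.analytics.FirebaseAnalytics

fun logLevelReached(context: Context, level: Long) {
    val params = Bundle().apply {
        putLong("level", level)
    }
    FirebaseAnalytics.getInstance(context).logEvent("level_reached", params)
}
```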
Android Instant Apps make it possible for users to access your app without having to install it first. Imagine users opening your app just by clicking on a link in an email or a text message. We've recently made Android Instant Apps available to all Android developers. To take full advantage of this, we have some best practices to help you make your Instant App's user experience as great as that of your installed app, or maybe even better. You can find all this and more at the URL in the description below. It's important to keep in mind that by enabling your app to run instantly without installation, you're not creating another, additional app. Think of Instant Apps as another way to use the app your users already know, just without installation. By adding the ability to access your app directly from a link, a search result or another app, it's much easier for users to engage with your app and get excited about it. If they decide to keep your app on their device permanently, they can then install it right from within the Instant App. The ability to launch an app without having to install it provides an enormous opportunity. For a long time, app developers have focused on the number of app installations as a proxy for the metrics their business really cares about; without installation, users simply weren't able to engage with the developer's offerings at all. Removing this barrier to entry enables you to think about the metrics your business really cares about. Your audience is now just one tap away from engaging with your service. Your Instant App is just another mode your app can run in, so don't branch your UI or make any unnecessary changes to the layout, the interface, design or experience of your Instant App. The transition from instant to installed mode after installation should be as smooth and seamless as possible. Your users should have a rich and full app experience even if they haven't installed your app. Rather than thinking of Instant Apps as a limiting factor on what your audience can do, think of them as an opportunity to get users to your functionality quicker and a way to foster your relationship with them. Avoid prompting your users to install the app when they're in the middle of a task; they'll be much more inclined to place your app onto their device permanently after it has had the opportunity to prove its usefulness. Refrain from bouncing them back and forth between your Instant App and your mobile web offerings. As you can probably tell by now, Instant Apps are all about removing friction for your users and getting them closer to your functionality. Think about ways you can remove friction for your users. For example, wait to ask users to make an account and sign in until the value of doing so becomes apparent. Asking users to create an account after installation seems like a small additional ask when they've already gone through the app installation flow and are only just getting started; however, when they're coming from a link looking for specific content or functionality, being asked to register can feel very disruptive. Make sure to use available APIs to make your and your users' lives easier; using Google Smart Lock, for example, makes signing up and signing in a much simpler and more straightforward experience. In summary, we really think that Instant Apps will unlock a lot of opportunity to engage your audience more directly.
Users will be able to focus on what it is they want to accomplish, rather than having to spend time maintaining and updating apps on their phone. That's what we're working toward. Everything I talked about here, and much more, you can find on g.co. Thanks for watching. At this time, please find your seat. Our session will begin soon. All right. Hi. Good afternoon. If you're just coming in, please get settled quietly. We have to get started to stay on schedule. So hi, I'm Mike Cleron. I'm on the Android team. We're going to be talking today in a lot of detail about some big changes to how we recommend that you build Android applications. But before we go into a lot of the details, I thought it'd be helpful to step back a bit and frame the discussion by talking about where we came from and where we're going. So let's do that. To start, Android has always been based on some strong principles. Not surprisingly, those were best expressed by Dianne Hackborn; she's the one who actually architected most of the Android framework. She wrote a post about this a few months back, and I have some excerpts from that here. If you look at our core primitives, activity, broadcast receiver, service, content provider, you might reasonably think that those constitute an application framework. But that's not the right way to see them. These classes are actually contracts between the application and the operating system. They represent the minimal amount of information necessary so the OS knows what's going on inside your app and can manage it properly. So as an example, if your app is running in the background but it's also exposing data to another application through a content provider, the OS needs to know that, and the content provider is the mechanism that tells us, so that we can keep you alive. So we think of these core classes as really being like the fundamental laws of physics for Android, hence the illustration: that is the cover of the manuscript where Isaac Newton first presented the basic laws of motion. Now, fundamental laws are a good thing. I use a shorthand when talking about this; I say Android has good bones, even though people look at me funny after I say that. But what I mean by that is that Android is based on a small, stable, cohesive set of core primitives, and that allows a common programming model across a really incredibly diverse range of devices, from wearables to phones to tablets to TVs to cars and more. This model also gives application developers the freedom to choose whatever framework they want inside their application as their internal framework. So that means that we on the Android team don't have to get involved in debates about whether MVC is better than MVP, or whether MVP is better than MVVM. You guys can pick whatever makes sense to you. Now, that's a pretty good story if you are in the operating system business, like, say, me. But if you're in the app development business, like, say, all of you, that's really only chapter one of the story. And the reason for that is because, while strong fundamentals and freedom of choice are good things, we know that in your day-to-day jobs, and we know this because you told us, you want more from us. So I'm going to abuse my analogy a bit here. We can all appreciate the simple elegance of Newton's laws of motion. But if your job is to land a rover on Mars, you don't want to come to work each day and start with only f equals ma and derive everything from first principles.
So we've been talking to developers both inside and outside of Google and taking a hard look at the app development experience, and we've realized a couple of things. First, there are peaks and valleys: some aspects of app development are better served by our APIs than others. For example, we think RecyclerView is at the better end of that spectrum. With RecyclerView, we didn't say, hey, we give you events and you can draw stuff, and in between you have a Turing-complete language, so good luck with everything else. On the other hand, maybe activity and fragment lifecycles belong down in that dark, shadowy place, because there I think too much of it is indeed left as an exercise for the reader. And we want to fix that. So as we thought about this, we realized a good solution has some key properties. First, we have to solve the right problems. This is going to be a sustained effort for us on Android, but for the first cut we want to make sure that we are going after the problems that every developer faces, the things that are hard to do right now. Again, app lifecycles are a really good example: if you don't get that right in your app, nothing is going to work on top of it. And that's true for your app, but that's also true for the frameworks we're trying to build; we have to get that right before we can do anything else. Second, we have to play well with others. We know that you all have huge investments in your existing code bases, and we can't create a model where the first thing we say to you is throw all that out and start over. So we're trying to create APIs that you can adopt a little bit at a time and that also interoperate well with other libraries or other frameworks. Third, we want to be more opinionated. We want to take a stronger, clearer stance on how to build an Android app the right way, at least as we see it. Now, this is all still optional, and if you already have something that works for you, then great. But developers are telling us that they want more guidance on how apps should be built. And by the way, we're not changing any of the laws of physics here, we're just layering some higher-level constructs on top, because, after all, F is going to equal ma whether you believe it should or not. Next, it needs to scale. We want solutions that are industrial-strength and that will scale to the real-world requirements of real-world applications. We don't want to build something that's awesome for hello world but then collapses the first time it bumps into the messy complexities of reality. And finally, reach. For this problem, for making it easier for you to write Android applications the right way, what we think is the right way, we want to use libraries like the support library wherever possible, rather than adding new APIs to the platform, because that lets our solution reach older versions of the OS as well. Okay, so that's the background of what we're trying to accomplish and why we're here. Now I'd like to introduce Yigit, toolkit engineer extraordinaire, and he's going to walk you through what we actually built. Thank you. Hello, everybody. Okay, that was the background. What are we shipping today? The very first thing we are shipping is an architecture guide on developer.android.com. Now, for years you've been asking us for our opinion on how we think an application should be built, and this is that guide.
So we believe that it's very good and covers lots of application cases, and even if you have an architecture that you are comfortable with, you can keep it, but you can probably learn something from this guide. Second, we are shipping a new set of libraries that we call architecture components. These are more fundamental components that you can build your application on top of. The first thing is lifecycles. This is the biggest developer complaint we have: lifecycles are hard, lifecycles are hard. So we said, okay, we should solve this problem, and that's the first thing this set of components addresses. The second one is lifecycle-aware observables, which we'll go into in detail later; these are basically things that can do something based on the lifecycle. Third, we are going to introduce a lightweight ViewModel, which is our effort to take that code out of your activities and fragments and put it somewhere else where you can easily test it. Last but not least, we are going to introduce a new object-mapping library for SQLite. And all of this is available for you today on maven.google.com. Okay, let's talk about lifecycles. What's hard about lifecycles? Why do we hear so many complaints about them? Let's go through an example. We have an activity where we want to show the location of the device on the screen, so you will write something like this: you create a location listener in the onCreate method, you initialize it with the context, and you have a callback that it calls with the new location whenever the location changes. Now, if you have ever written an Android application, you know that this code is never enough. You also need to go ahead and override onStart and tell it to start, and override onStop and tell it to stop. You always need to do this babysitting for these components, but this is acceptable. This is a simple example. This looks all right. But then your product manager comes and asks you to track the location only for users who are enrolled, and your developer says, sure, that's an easy change: I'm going to change this method to first call this utility method, which probably makes a web service call to check the user settings, and then, if the user is enrolled, we want to start the location listener. It looks like a very simple change, and you would think this would work, but let's look at what happens in that activity's lifecycle. So our activity was created. Okay, on start, check if the user's status is enrolled. Then, meanwhile, the user wants to rotate the device, and rotation means a configuration change, which means Android is going to recreate that activity. So on stop, we knew about this and we said, okay, location manager, stop. And then the new activity came, and it also goes through the same thing. Looks all right, except, do you remember this call we made before? It decided to come back: hey, the user's enrolled. And then what did we do? We said, okay, then start. Now you realize the bug: we called start after onStop was called, which means our activity will live forever. We are going to observe the location forever. The battery will drain. We will have sad users. This is a situation we want to get rid of, right? We want to put an end to this. So we said, okay, we need to acknowledge that, as Mike mentioned, we cannot change the laws, but we can make it easier to deal with these things.
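Before looking at the fix, here is roughly that racy pattern sketched in Kotlin; MyLocationListener and Util are hypothetical stand-ins for the helpers on the slides, and the bug is that the asynchronous enrollment check can complete after onStop has already run.

```kotlin
// Roughly the buggy pattern just described; MyLocationListener and Util are
// hypothetical helpers, not real library classes.
import android.content.Context
import android.location.Location
import android.os.Bundle
import android.support.v7.app.AppCompatActivity

object Util {
    // Pretend web-service call that reports back asynchronously.
    fun checkUserStatus(callback: (enrolled: Boolean) -> Unit) { /* ... */ }
}

class MyLocationListener(context: Context, private val callback: (Location) -> Unit) {
    fun start() { /* begin listening to the system location manager */ }
    fun stop() { /* stop listening */ }
}

class LocationActivity : AppCompatActivity() {

    private lateinit var listener: MyLocationListener

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        listener = MyLocationListener(this) { location -> updateUi(location) }
    }

    override fun onStart() {
        super.onStart()
        // The bug: by the time this async callback fires, onStop() may already
        // have run, so start() is called on a stopped activity and leaks it.
        Util.checkUserStatus { enrolled ->
            if (enrolled) listener.start()
        }
    }

    override fun onStop() {
        super.onStop()
        listener.stop()
    }

    private fun updateUi(location: Location) { /* show the location */ }
}
```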
So we decided to introduce a new interface called LifecycleOwner. This is a thing with a lifecycle: it's your activity, it's your fragment, or maybe you have your own UI framework; whatever container you have there is a lifecycle owner. And we have these lifecycle observers, which are the things that care about the lifecycle, like the location listener we had: it cares about the lifecycle, it wants to stop itself if the lifecycle is not active. So we said, okay, we will acknowledge this, and we have lifecycle observers. I will go through our activity. Now we make our activity extend the LifecycleActivity class. This is a temporary class until these components reach 1.0; then everything in the support library will implement this LifecycleOwner interface. Inside our activity, when we initialize our location listener, we are going to tell it: this is the lifecycle you care about. And that's all we will do. The rest is the same: it calls back, we update the UI. So how can we change our location listener to take advantage of this lifecycle? Oh, and we do the same thing for the user status check as well. Okay. So there is some boilerplate code here to get the fields; it doesn't really matter. But we have this enable method, which gets called if the user is enrolled. Inside this enable method, we now want to start listening to location only if the activity is started. And now you can do this: you can ask, what is my current state? Which is amazing; we didn't have this API until now. But now you can. So, okay, that was a simple change. But we also need to get notified: what if we get enrolled while the activity is on the back stack, and the user then comes back to the activity? Now we should actually start the location manager. For this, we want to observe that lifecycle. To do that, we implement this interface, which allows us to write these methods: you can annotate a method saying that if onStart happens, call this method, and the new components will take care of calling it. So if you are already enabled, you connect when onStart happens, and on stop, you disconnect. And last but not least, if the activity is destroyed, there is nothing more you want to do with that activity, so you can unregister. So now you might be asking yourself, well, you just moved those onStart and onStop methods from the activity into this location manager, so how is this better? Like, I'm sure if you look at your code today, your activity onStart and onStop methods are at least 20, 30 lines of code. We want them to be zero lines of code. If we go back to the activity, I want to point out something: look, in onCreate, we initialize these components, and that's all we did. We didn't override onStop or onStart; we don't override anything for these components. We want to introduce the idea of a lifecycle-aware component: it's a component that can be handed a lifecycle and do the right things. It can take care of itself, so that in your activity, you can just initialize it and forget about it. You know that it's not going to leak.
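Here's a rough sketch of that self-sufficient, lifecycle-aware component, using the android.arch.lifecycle annotations from this preview release; the listener class and its connect and disconnect internals are hypothetical.

```kotlin
// Sketch of a lifecycle-aware location listener; the activity just hands it a
// Lifecycle in onCreate and never overrides onStart/onStop for it again.
import android.arch.lifecycle.Lifecycle
import android.arch.lifecycle.LifecycleObserver
import android.arch.lifecycle.OnLifecycleEvent
import android.location.Location

class MyLocationListener(
    private val lifecycle: Lifecycle,
    private val callback: (Location) -> Unit
) : LifecycleObserver {

    private var enabled = false

    init {
        // The owner hands us its lifecycle once, then forgets about us.
        lifecycle.addObserver(this)
    }

    // Called when the async enrollment check says the user opted in.
    fun enable() {
        enabled = true
        // New in these components: we can ask what state the owner is in,
        // so a late callback can no longer start us after onStop.
        if (lifecycle.currentState.isAtLeast(Lifecycle.State.STARTED)) {
            connect()
        }
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_START)
    fun onStart() {
        if (enabled) connect()
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_STOP)
    fun onStop() {
        disconnect()
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_DESTROY)
    fun cleanup() {
        lifecycle.removeObserver(this)
    }

    private fun connect() { /* start listening to the system location manager */ }
    private fun disconnect() { /* stop listening */ }
}
```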
Now, of course, that was partly just moving the complexity from the activity into the location manager, and it still needs to deal with the lifecycle. We said, okay, we want to do that, but we want more: we want a very convenient way to handle this common case. It's very common that your activity or fragment observes some data, and whenever that data changes, it wants to refresh itself; it happens in basically almost every single UI. And we want to share resources across multiple fragments or activities. The location of the device is the same from fragment to fragment; if you have two fragments, you shouldn't need to create two listeners to listen to the same location. So we created this new LiveData class. Let's look at that. LiveData is a data holder. It just holds data; it's like an observable. But the tricky thing about LiveData is that it is lifecycle-aware: it understands lifecycles, and because it understands lifecycles, it automatically manages everything. If you are observing a LiveData, you don't need to unsubscribe; the right things will happen at the right times. So if that location listener were a LiveData, and a singleton, because the location is singleton, we could write the code like this: get the instance, start observing. And when you observe, you say, this is my lifecycle. This is all you need to do. Before, on Android, if you wrote code like this, observing something without ever unsubscribing, everyone would give like a minus 2 to that code review. Now you can do this. This is safe. Nothing ever leaks. So if we want to change our location listener to use this API, we get rid of a lot of the unnecessary things. All we need is a context to connect. But we say: this is a LiveData, a LiveData of a location. We have an onActive callback, and the other one is onInactive, which means you don't have any observers that are active. Now, at this point, you're probably asking yourself, what is an active observer? We define an active observer as an observer that's in the started or resumed state, which is, like, an activity the user is currently seeing. So if you have an observer on the back stack, it doesn't count as active, and there's no reason to keep watching the location for it. So inside our connect method, all we need to do is, whenever the system location manager sends us a new location, call setValue on ourselves. Then the LiveData knows which observers are active and delivers the data to those observers. Or if one of the observers was on the back stack and then becomes visible again, LiveData takes care of sending the latest data back to that observer. And because one LiveData can serve all of these observers, we don't need multiple instances. So if we look at LiveData: it is a lifecycle-aware observable. It has very simple start and stop semantics; it doesn't matter how many observers you have or what state they are in, we merge all of it into one lifecycle. It doesn't have any references to activities or fragments inside it, but it works with both of them. And it's also really simple. And if you know about this infamous fragment transaction exception, we guarantee that your observer will never, ever be called in a state where you cannot run a fragment transaction. So this is very, very specifically designed to work well with your activities and fragments.
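As a minimal sketch of that singleton, LiveData-based location listener (the class itself is hypothetical; onActive, onInactive and setValue are the LiveData hooks just described):

```kotlin
// Hypothetical LocationLiveData built on android.arch.lifecycle.LiveData.
import android.arch.lifecycle.LiveData
import android.content.Context
import android.location.Location

class LocationLiveData private constructor(context: Context) : LiveData<Location>() {

    companion object {
        // One shared instance: the device's location is the same for every observer.
        @Volatile private var instance: LocationLiveData? = null

        fun get(context: Context): LocationLiveData =
            instance ?: synchronized(this) {
                instance ?: LocationLiveData(context.applicationContext).also { instance = it }
            }
    }

    override fun onActive() {
        // At least one observer is started or resumed: connect to the
        // system location manager and start receiving updates.
    }

    override fun onInactive() {
        // No active observers remain: disconnect and stop draining the battery.
    }

    // Called by the (omitted) system callback whenever a new location arrives.
    private fun onNewLocation(location: Location) {
        value = location  // setValue: delivered only to active observers
    }
}
```

Observing it from an activity or fragment is then just LocationLiveData.get(this).observe(this, Observer { location -> updateUi(location) }), with no unsubscribe call anywhere.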
Okay, let's think about configuration changes. Now, that example was easy because the location is global, but most of your data is not. So say we have an activity where we show a user profile, and we implemented a web service that can return the data as a LiveData, which we can safely observe without risking leaking our activity. This all looks nice: you will never leak this activity, and it will work very well, except, what happens if the user rotates the device? Let's look at the lifecycle. We start fetching the user, and then, while you are fetching the user, the user decides, I want to rotate the phone, and the activity is destroyed. Luckily we don't leak it, which is great. But then the new activity starts, which makes the same call. Now, this is okay, but not great. What do we want? We want to actually retain that data, right? We're already making that request; why make it again? So we want our graph to be the same: if the new activity comes, we should be able to give it back the same ViewModel, which is a new class called ViewModel. So we are introducing this new class, very specifically for this: you should put the data from your activities into the ViewModel and make the activities data-free. So if we want to change this activity, we create this new class, and it extends ViewModel. And in the ViewModel, all we do is, inside the getUser method, if this is the first call, get it from the web service; otherwise, return the existing value. Now it's super simple. And inside our activity, we get rid of all that code. We say: get the ViewModel provider of this. So each activity or fragment has a ViewModel provider that you can obtain, and that ViewModel provider knows how to do the right thing. The first time you make this call, it will give you an instance; when the rotated activity comes back, it's going to reconnect to the same ViewModel. And then the rest of the code is the same. So if you look at the lifecycle, this is what we wanted: the new activity starts, it reconnects. And when the activity is finished, like when we don't have anything more to do with that activity, it calls the one method on the ViewModel class, onCleared, so you can clean up. So it's very simple. So if you look at ViewModels: they hold the data for the activity, and they survive configuration changes. They should never, ever reference views, because they outlive the activity, so you cannot reference back to the activity; that's why you use things like LiveData or RxJava or data binding to connect the UI to the ViewModel.
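Here's a minimal sketch of that ViewModel, assuming a hypothetical User type and Webservice; ViewModel, ViewModelProviders and onCleared are the android.arch.lifecycle pieces described above.

```kotlin
// Sketch of a ViewModel that survives rotation; User and Webservice are
// hypothetical stand-ins for your own model and backend client.
import android.arch.lifecycle.LiveData
import android.arch.lifecycle.MutableLiveData
import android.arch.lifecycle.ViewModel

data class User(val id: String, val name: String)

object Webservice {
    // Stub: a real implementation would fire an async request and post the result.
    fun loadUser(userId: String): LiveData<User> = MutableLiveData<User>()
}

class UserViewModel : ViewModel() {

    private var user: LiveData<User>? = null

    fun getUser(userId: String): LiveData<User> {
        // Only fetch on the first call; the re-created activity after a rotation
        // reconnects to this same ViewModel and reuses the same LiveData.
        return user ?: Webservice.loadUser(userId).also { user = it }
    }

    override fun onCleared() {
        // The activity is finished for good (not just rotating); cancel work here.
    }
}

// In the activity (a LifecycleOwner, e.g. LifecycleActivity in this preview):
//   val model = ViewModelProviders.of(this).get(UserViewModel::class.java)
//   model.getUser(userId).observe(this, Observer { user -> /* update the UI */ })
```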
Now, another big topic is persistence. We know that to write a good, responsive Android app, you need to save data on disk. If you come to Android, there are these three major APIs: one of them is content providers, which is for talking between processes; in reality it has nothing to do with persistence. The other one is shared preferences, which you can put only very little data into. And the last one is SQLite, which is something we have been shipping since Android 1. So you know you need to use SQLite if you want to save a lot of data. And so you go to developer.android.com, and this is the very first "saving your data" slide. This is so confusing. This is very sad. So we said, okay, we want to fix this. Look at what this code is trying to say: I want to select these three columns with this constraint, and I want to order them like this. It's actually a very, very simple SQL query, but you need to write all of this code, and this code doesn't even show where you define all those constants. So what do we really want? We want to get rid of that boilerplate code. When you're writing Java, if you make a typo, the compiler catches it, and that works very well; we want the same compile-time verification for our queries. We still want to use SQLite, because on every single Android device it's a proven technology; we know it works very well. So: we don't want the boilerplate code, and we want compile-time verification. So we came up with Room, which is an object-mapping library for SQLite. If we look at this query, we said, okay, let's move it into SQL. We have this Feed object, which we want to save in the database, and we want to put that query inside an interface: we create the FeedDao. DAO stands for data access object; in databases, it's usually a best practice to put your database access behind these interfaces. Then we just need to tell Room, this is a DAO; tell Room, this is an entity. And finally we have a database class which says: this is my database, and I have these data access objects. This is all you write. Once you write that, you can get an implementation of it from Room. It's very similar to how you use Retrofit or Dagger: you define the interfaces, we provide the implementation. Now, once Room knows this is a DAO, we can have these shortcut methods, like insert these items or delete these items; as long as you can read it and it makes sense, Room will understand it. But the most important part of Room is that it understands your SQL. All those constants I mentioned, the ones we defined to get compile-time guarantees: Room actually gives all of these for free. So when Room sees this query, it says, okay, you are selecting these three columns from this table where the title looks like this keyword. Where is this keyword coming from? Well, it's coming from the function parameters; makes sense. And what do you want to return? You want to return a list of feeds, and then Room goes and checks: do the columns being returned match the object the user wants to return? And once they match, it says, okay, I can generate this code. You can even say select star, like you don't need to list the columns; Room really, really understands your query. You can even join 10 tables and it will still work. But what if you make a typo? Say, instead of writing feed_table, you wrote feeds. Now, if this happens, Room is going to give you an error at compile time. So it goes and verifies your query against the schema you have defined, and it tells you if something is wrong. And that's not the only thing it does. Say your query is correct and you want to fetch the id and title: this is a valid query, but if you want to return it as a single String, that doesn't make sense, and it's going to give you a compile-time error again. And there's a very nice way to fix this: in Room, you can create any Java class, it doesn't need to be annotated, there's nothing special about that POJO, and tell Room to return it. As long as whatever the query returns matches what you want to return, Room will write the code for you. And observability, which is very important, right? If you have a query like this, and you're showing lists, you obviously want to get notified when the data changes. In Room, if you want to do this, all you have to do is tell it to return a LiveData, and it will do it for you. Because it knows your query, it knows which tables affect it, so it can let you know when that query's result changes. And this is the part where all of these architecture components work well together: Room already knows about LiveData. So in your ViewModel, all you will write is: from the database, call this query. That's all you will do; whenever the data changes, your UI will get a new update, and it only happens if the UI is visible. Last but not least, Room also supports RxJava 2. Okay, if you look at Room in a nutshell: it takes care of the boilerplate code for you; it has full SQLite support, you can just write SQL, there are no builders; it verifies your queries at compile time; it incentivizes best practices, which helps you with testing and migrations; and it's also observable out of the box.
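A rough sketch of those Room pieces, using the android.arch.persistence.room annotations from the preview; the Feed entity and the query are illustrative, and the LiveData return type is what makes the query observable.

```kotlin
// Entity, DAO and database classes for the Feed example; names are illustrative.
import android.arch.lifecycle.LiveData
import android.arch.persistence.room.Dao
import android.arch.persistence.room.Database
import android.arch.persistence.room.Entity
import android.arch.persistence.room.Insert
import android.arch.persistence.room.PrimaryKey
import android.arch.persistence.room.Query
import android.arch.persistence.room.RoomDatabase

@Entity(tableName = "feed_table")
data class Feed(
    @PrimaryKey val id: Long,
    val title: String,
    val url: String
)

@Dao
interface FeedDao {
    // Room checks this SQL against the schema at compile time, and because it
    // returns a LiveData, observers are notified whenever the table changes.
    @Query("SELECT id, title, url FROM feed_table WHERE title LIKE :keyword ORDER BY title")
    fun findByTitle(keyword: String): LiveData<List<Feed>>

    @Insert
    fun insertAll(feeds: List<Feed>)
}

@Database(entities = arrayOf(Feed::class), version = 1)
abstract class FeedDatabase : RoomDatabase() {
    abstract fun feedDao(): FeedDao
}
```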
Okay, architecture, our last topic today. It's where we all started, right? And now you might be asking yourselves, what has changed in 2017 that you are talking about architecture? Well, actually, nothing has changed. We've been talking about this topic a lot; Adam Powell and I gave a lot of talks on this topic, and there's even a talk from 2010 which I watched as a developer. So this is a topic we have been talking about, but what was missing was a well-defined reference architecture, and that's what we are shipping today. If you go to developer.android.com today, after the session, there's a section about how to architect an Android application. By the way, this is a reference guide; this is not your religious book. We believe this is a very good way to write applications, but you don't have to follow all of it line by line. So I'm going to briefly go through this architecture, but if you get lost, don't worry, we have all of this documented on developer.android.com with sample applications. So we think that applications are composed of four main things: there are the UI controllers, the ViewModels, a repository, and the data sources. Let's look at these in detail. UI controllers are your activities, fragments, custom views. They have a very simple task: they observe the fields of the ViewModel and update themselves. And they have one more responsibility: whenever the user takes an action on the UI, they understand the action and call the ViewModel to express whatever the user wanted to do. If we go to the ViewModel: the ViewModel is the one which prepares the data for the UI and holds on to it. This is where the data for the UI lives, and it knows how to get that data. Usually it exposes LiveData, if you're using observables, or data binding observables. It survives configuration changes; that's why we put the data into the ViewModels. And it's also the gateway: your UI controller only ever talks to the ViewModel to reach the rest of the application. And what's the repository? Now, the ViewModel serves as a data store for your UI controller, right? The repository serves as a data store for your whole application. It's the complete data model for the app, and it provides this data through simple APIs to the rest of the application. For example, you can have a user repository where you pass a user ID and it returns you a LiveData of a user. How it gets the data, you don't care; it's the repository's job. So how does it do that? Fetching, syncing, looking at the database, talking to your Retrofit backend: that's the repository's job. And last but not least, we have our data sources, like your REST API client, where you might be using Retrofit; or your SQLite storage, where you might be using Room or another ORM, it doesn't really matter; or you might be talking to content providers from other processes. These are the things we call data sources. And we think that all of these layers can discover each other through a dependency injection system; we recommend using Dagger, but we also realize that understanding dependency injection is not trivial, it's a more complex topic, and sometimes it might be overkill, so you can also use a service locator if you feel more comfortable with it. So let's go through a concrete example. Let's say we have a UI that shows a user profile, and we have data sources: we save the user to the database, and we can also get it from the network. How do we connect these two things? Well, we said we first need a user repository. The user repository knows it should check the database, and if the data isn't there, make a web request and also update the database along the way. It doesn't matter how it does it; what matters is that it knows how to produce a LiveData of a user, or an observable, doesn't matter.
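Here's a sketch of that repository layer; UserRepository, Webservice and UserDao are hypothetical stand-ins for your own backend client and Room DAO.

```kotlin
// Repository layer sketch: callers ask for a LiveData<User> and don't care
// whether it came from the database or the network.
import android.arch.lifecycle.LiveData
import java.util.concurrent.Executor

data class User(val id: String, val name: String)

interface Webservice {
    fun fetchUser(userId: String): User          // blocking call, run off the main thread
}

interface UserDao {
    fun load(userId: String): LiveData<User>     // e.g. a Room @Query returning LiveData
    fun save(user: User)
}

class UserRepository(
    private val webservice: Webservice,
    private val userDao: UserDao,
    private val executor: Executor
) {
    fun getUser(userId: String): LiveData<User> {
        // Hand back the database copy immediately and refresh it in the background.
        refreshUser(userId)
        return userDao.load(userId)   // Room keeps this LiveData up to date
    }

    private fun refreshUser(userId: String) {
        executor.execute {
            val user = webservice.fetchUser(userId)
            userDao.save(user)        // saving triggers the LiveData above
        }
    }
}
```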
Now let's go back: the data for the UI lives in the ViewModel, so we create this ProfileViewModel, which talks to the repository to get this information, and then the actual fragment gets the data from the ViewModel. So if the fragment comes back, the LiveData will be there in the ProfileViewModel, but if the fragment disappears completely, we will get rid of the ViewModel and the data can be garbage collected. Now, if you notice, every single component only talks to the one right below it, which helps you scale your application. It also has a great side benefit, which is testing. You're testing, right? So let's say you want to test your UI. People say UI testing is hard, and yes, it's harder, but it's usually hard because you put all of your code into that activity. Now we said put most of it into the ViewModel, and you know that the UI only talks to the ViewModel, so you can get rid of the other two: you only need to create a fake ViewModel to test your UI. Testing your UI becomes super easy with Espresso, and we have a sample app on GitHub where you can check how we test the UI. The same thing is valid for ViewModels: if you want to test the ViewModel, you know it only talks to the repositories, so you replace those with a mock repository and it works. You will test your ViewModels on your host machine, on the JVM. And last but not least, you can test the repository the same way: you just mock the data sources, and you can easily test your repositories as JUnit tests. Now, I know this has been a lot of information; we have two sessions tomorrow and lots of documentation, but now I want to call our product manager Lukas to talk about what to do next. As was just said, we covered a lot of ground, and actually we glossed over a lot of detail while we did that, but luckily you don't have to remember everything that you just heard. We have a lot of material for you to check out at developer.android.com slash arch, and that link should start working in 21 minutes; we wanted to give you guys a chance to kind of blog and tweet about this before anybody else, so we held it back. So yeah, we made having good documentation and samples a priority from the beginning of this project, since providing good guidance is really one of the major goals. You're going to find in-depth documentation that's written from the perspective of an app developer, and you're going to find really meaty sample apps that show how to build a real app. Just as an example of how much work went into this, we have a GitHub browser sample app that probably has better test coverage than many real-world apps, written by that guy. And of course we have the guide to app architecture, which internally we called the opinionated guide for a while, and we think that label still applies. But even if you're not planning to use our recommended architecture, we think people should check out the guide; it has principles that we think apply to all apps on Android. And you're probably asking yourself, you know, what's the impact of this going to be on me, am I going to have to change the way that I'm doing everything? If you're starting a new project, or if you have an existing app where you want to improve the core architecture, then yeah, we recommend taking a look at this stuff. It's still a preview, we won't be hitting 1.0 for a few months, but we think it's definitely ready for you guys to check out and use in projects. But if you're happy with what you have, you don't need to use it. So in the spirit of "be together, not the same," we're not
dictating what everyone has to use if you're happy with your app architecture you can keep it if you're happy with your existing ORM you don't have to use room architecture components are designed to work well together but they do work perfectly fine stand alone and mixing and matching applies not only to architecture components but also third-party libraries so waiting for the slide to come up so yeah so you can kind of use what you have and start to integrate architecture components where they make sense so for example if you're happy with rx java but you really like the life cycle aware components stuff that you just showed so you have these kind of like self-sufficient components you can use live data together with rx java so you can get all the power of rx java operators and now it's life cycle safe so kind of the best of both worlds and we've got additional integrations to come we're definitely looking at a lot of stuff internally that would be nice if it were kind of like self-sufficient and life cycle aware and if you're a library developer we really recommend checking out life cycles and life cycle observer because we think there is a really bright future and a lot of potential in making libraries and components that are life cycle aware by default but before you go do that we have a lot more for you at iO this year we have two more talks one on life cycles that's even more in depth than what we just showed tomorrow morning we have another one on room and persistence and going a little bit beyond room starting at 12.30 tomorrow and we'll be we'll have people who are kind of well versed in architecture components in the sandbox for all the iO and we also have code labs which we're pretty happy with and there's more to come so we think we've just scratched the surface of ways that we can improve the experience of using Android frameworks and we're looking to applying this approach in other areas as well so some things are already in the works and we're also interested in hearing from you on kind of what else you'd like to see so come by talk to us tell us what you like, what you don't and stay tuned because we're really excited about the future of Android development thank you so for example you can send a message to all of your users who have made an in-app purchase giving them a special offer allowing you to re-engage with them the Firebase notifications console integrates with analytics so you can measure the effectiveness of your messages and explore insights based on your users activities so you can grow your application by easily engaging your users through the Firebase notifications console so you can share your experience that people love to share things about themselves such as photos, videos and gifs that express their feelings so what do you do to let them store and share these files through your app that's where Firebase storage can help our storage API lets you upload your users files to our cloud so they can be shared with anyone else and if you have specific rules for sharing files with certain users you can use the intent for users logged in with Firebase authentication security of course is our first concern all transfers are performed over a secure connection also all transfers with our API are robust and will automatically resume in case the connection is broken this is essential for transferring large files over slow or unreliable mobile connections and finally our storage backed by Google cloud storage that's a petabytes that's billions of photos to meet your 
app's needs so you will never be out of space when you need it so give your users space to share their lives with Firebase storage available right now for iOS, Android and web applications and to learn more about Firebase storage check out the documentation available right here analytics we all know they're important to building a successful app which is why there are many different kinds of analytics tools for app developers to use there are in-app behavioral analytics which measure who your users are what they're doing and so on and then you've got attribution analytics which you can use to measure the effectiveness of your advertising and other growth campaigns not to mention push notification analytics and crash reporting but quite often this work is being done by completely different analytics libraries which means you've got reports living in various tools across the web and trying to understand trends across these different reports much less get them to talk to each other it's always easy that's why we've created Firebase Analytics Firebase Analytics is built from the ground up to provide all the data that mobile app developers need in one easy place and it starts by giving you free and unlimited logging and reporting that's right, no quotas no sampling and no paid tier to worry about simply by installing the Firebase SDK Analytics automatically starts providing insight into your app you receive demographic information on who your users are how regularly they visit your app how much time they've spent using it and how much money they've spent in your app but not all apps are alike and you can get detailed information about what your users are up to by logging events specific to your app these can include common events that Firebase Analytics has already defined like when your users add an item to their cart and there's also support for custom events you create yourself like when a user completes a workout in your fitness app or when they take a selfie in your photo app geez it's not just about seeing what your users are doing it's also about discovering who your users are so in addition to demographic information you can also discover how your different groups of users behave by setting custom user properties have a music app and want to find out whether your classical music fans are browsing more albums than your jazz fusion fans that's the kind of data you can easily break out thanks to custom user properties and Firebase Analytics doesn't just measure what's happening inside your app it lets you combine your behavioral reporting what your users are doing with attribution reporting or what growth campaigns are bringing people to your app in the first place so if you want to know which ad campaigns are bringing you the users who spend the most money or are sharing the app with their friends or have unlocked the last level in your game and are ready for the sequel you can do all of that in Firebase Analytics but don't stop there once you have all this information you can take action on it using Firebase Analytics audiences Firebase Analytics gives you the power to measure the groups of users or audiences out of just about anything you can measure in your app want to target users in Brazil who have visited the sports section of your in-app store it's as easy as a few clicks in a Firebase console once your app has built up this audience you can send them notifications using Firebase notifications or you can modify their in-app experience using Firebase Remote Config or you can target 
them through AdWords Google's ad platform and then because that impact can be measured with Google Analytics you can confirm you're getting the outcomes you expect Firebase Analytics already comes with a dashboard that lets you view answers for common questions but if you need more specialized analysis you can export all of your data into BigQuery Google's data warehouse in the cloud where you can run super fast SQL queries to slice and dice this data however you'd like you can even combine it with other analytics data that you might be capturing and this is just the tip of the iceberg of what Firebase Analytics can do for you if you want more check out our documentation here and give Firebase Analytics a try Hello everyone and welcome to the 2017 TensorFlow Developer Summit and I'm delighted to see all of you here today Today we are excited to announce TensorFlow 1.0 TensorFlow's philosophy has always been to give you the power to do whatever you want but also make it easy and this makes it even easier We really were hoping to build a machine learning platform for everyone in the world that was fast flexible and production ready The point of TensorFlow is to figure out how can we give this back to the community and be able to use TensorFlow to further whether it's the research or the production needs It's how we express our ideas and it's the piece of software our engineers and scientists spend most of the time interacting with it So TensorFlow is a really exciting tool it's something that will let you take the confusing world of TensorFlow and start to dive into it It's just a really amazing time to be an AI researcher One of the projects that we've been working on is using deep learning for retinal imaging Can we use deep learning and reinforcement learning to generate compelling media But this is just the beginning The TensorFlow community is truly global We want to see all the amazing things that you guys can do with TensorFlow Thank you very much Good morning Berlin It is an absolute pleasure to be here with you Here we go We're live We have a lot of experience building some of the world's most popular applications and we've learned a thing or two about what it takes to build an app and we found that it's a pretty difficult process A lot of your time goes into running infrastructure instead of building the features that make your app your app There has to be a better way That better way is using Firebase We're now up to over 750,000 developers using the product If you use Firebase we're going to be able to use the code We take care of security and scalability so that you can focus on building the features that your users love Today we're launching Firebase UI 1.0 It's an open source library It has customized theming and it works for web and android and iOS So you can go ahead and drop that in and you'll have all of the UIs that you'll need Is my app set up correctly? Are you ready SDK? Are you receiving my events and parameters? 
We've built something, the ideal tool to answer all of these questions and these pain points App quality leads to better user retention Better your app is and the more stable it is the more likely for users to come back and for your business to be successful and sustainable And that's where we come in So we're really looking forward to get the feedback from the community and a product and to work together to help you build a better app And we want you to be able to spend all of your energy on bringing innovation and creativity something new to the world That's really what we're trying to achieve here is making all the infrastructure pieces simple for you And I'm really excited for you to engage with Firebase and see how it can make you more successful Alright, let's get back to the code Hey gang, want to see something neat? Check out this awesome hidden feature I found in Firebase Analytics So I'm over here looking at all my reports in the Firebase Analytics Dashboard Here for instance, I've got my active users for the last 30 days and while these graphs sure are pretty I'm thinking it'd be kind of nice if I could get these numbers into like Google Sheets or maybe Excel so I could analyze them a little better Right? Well, watch this I'm going to select my graph here in the Firebase console It's kind of hard to tell but you can see by like here that my graph has been selected and then I'll hit command C to copy it and then I'm going to switch to a blank Google spreadsheet and hit command V to paste and look at that all my values are right there in the spreadsheet for me to analyze So you can see here on the left most column I've got the date and then all the actual numbers are in the columns next to it Now you might notice that I seem to have two columns of what looks like the same data right? I've got monthly active users here and then right next to it I've got this monthly users column and then the same goes for my weekly actives and same for my daily actives and so basically that first column is for the value that corresponds to the date here on the left the second column is basically for that corresponding date in the previous 30-day time period basically it's the values that belong to this dotted line here in the graph that I copied Make sense? 
Okay and then I can do the same thing for a bunch of these other graphs and copy and paste my daily engagement numbers let's get these into a new sheet here and again you can see I've got my engagement numbers from this time frame and this first column and then those same numbers for the previous 30 days in this second column and better yet I can jump over to an individual event like this completed five levels event and copy all these graphs here at the top and you can see I'll get event counts, user counts, event per user counts and values for every one of my events that I am recording in Firebase Analytics and this lets me do some pretty nice calculations right here in Google Sheets for example let's say our game designer is curious how often people are failing a level in our game well for starters I've got my level start graph here to show when people are starting a level in my game so first I'm going to copy and paste these numbers into a new sheet so put them in, okay great and then I'm going to do the same thing for my level fail graph and that will show when people have failed a level so we'll copy from here and we'll paste them right in next to my other numbers and once I've copied and pasted these values into Google Sheets I can then calculate my average failure rate per game stat by dividing this number here by this other one I'm going to copy this formula down for all of my dates let's give it a percentage format so it looks nice maybe we'll add an average at the bottom here let's do average for all these numbers and there we go looks like my game has an average failure rate somewhere in the low 30s which sounds like it's just challenging enough for our players so our game designer is happy now a couple of disclaimers here first this doesn't work on all the graphs I've tried some of them just don't seem to copy and paste as well as others but it does work on a surprising number of them you'll just kind of have to try them out and see if they work and second this will never be a replacement for the awesome and sophisticated data analysis capabilities you get by exporting your raw data to BigQuery and you should totally go watch this video if you want to find out more but if all you want to do is maybe compare two graphs to each other or calculate some standard deviations or averages on a particular event this trick can work surprisingly well so give it a try yourself have fun with it and we will see you soon on another episode of Firecasts we are welcome to our session I'm Xu we are engineers from the mobile vision team mobile vision team is about providing you with the greatest and latest computer vision algorithm that are on privately on your device with low latency and no internet access required our API also work both for Android and iOS now let's take a look at how it works application that demonstrate how to use the mobile vision face barcode and text API first the application scan barcode from a paper ad bring the user directly to a product page then you use the face API to try virtually the sunglasses on his face last he use the text API to scan credit card real quickly to do the payments now let's take a look at the application that is quite popular we have 125 million 30 day active users this number is contributed through more than 15,000 applications in our community we see a lot of very interesting use cases for example on our face API we see people use the face API for photo correction type of application like face blur detection for barcode API we see applications 
that use it for things like wedding registries, as well as apps that track 2D barcodes in space to place 3D objects, in an AR-like application. For the text API, we see applications that do quick payment processing, kind of like what we showed in the video, as well as apps that help blind users see their surroundings. After today's talk you can join us; we have a very active community as well, to build awesome applications. The mobile vision API consists of four major components: the common utility API and three specialized detection APIs. The common utility API provides infrastructure and building blocks that help you construct a streaming pipeline. With the face API you can also understand landmarks like eyes, as well as facial classification, like are you smiling or is your eye closed. With the barcode API you can detect 1D and 2D barcodes in multiple formats, in different orientations, at the same time. The text API can detect text in Latin-based languages. The mobile vision API works both on static images as well as in a pipeline. The easiest use case is to use it on static images: first you create a detector, then you provide it with an image, and it will run the detection algorithm and generate detection results. This is what a pipeline looks like: the camera source, which uses the camera API internally, streams the camera frames to the detector; the detector then runs the detection algorithm to generate detection results; after that, it hands over the detection results to a processor. A processor is the first step of post-processing: it's responsible for discarding, merging, or delivering the detected items to their associated trackers. A tracker is an event-based object listener; it notifies you about a tracked item over time. The camera source, detector and processor are provided by the mobile vision API; the only portion you need to worry about is the tracker. The tracker is the piece of code you write that implements your business logic. Now that you understand the basic concepts, let's take a deep dive into the barcode API. The barcode API sounds boring, but it's surprisingly awesome if you think about it: barcodes are everywhere, they track everything; the airplane ticket you purchased is tracked by a barcode. The barcode API for Android is provided by Google Play services. You declare your dependency in your Gradle file, and also, for runtime, in your manifest file. The first time the mobile vision API runs on your device, it needs to download some additional vision models, but only the first time; once the download has finished, no internet is required from then on. In this slide we'll talk about how it's used in the static image case. It's three steps. First you instantiate a barcode detector using a builder pattern; the builder allows you to specify what kind of barcode formats you are interested in. In this example we are specifying QR code and UPC-A, so it will ignore any other formats. Next you provide it with the image, and it will run the detection algorithm and give you the detected barcodes. After you have the barcodes, you can access their properties. Every single barcode has some common properties, like the raw value (what is the encoded barcode value), the corner points (where is this barcode located in the image), as well as the barcode format (is it a UPC-A, is it a QR code). 2D barcodes can actually contain structured data, and we parse that for you. What you want to do is check the value format; in this particular case it's phone, and there are also other value formats like contact, address and others. Once it's parsed, you can access it through the objects that we provide. So that is how you do static image detection.
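As a rough sketch (not the speakers' actual slide code), static-image barcode detection with the Play services mobile vision API looks roughly like this; the context and bitmap are assumed to be available.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.barcode.Barcode;
import com.google.android.gms.vision.barcode.BarcodeDetector;

public class BarcodeScanner {
    public static void scan(Context context, Bitmap bitmap) {
        BarcodeDetector detector = new BarcodeDetector.Builder(context)
                .setBarcodeFormats(Barcode.QR_CODE | Barcode.UPC_A)   // ignore other formats
                .build();
        if (!detector.isOperational()) {
            return;   // vision models are still downloading on first use
        }
        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<Barcode> barcodes = detector.detect(frame);
        for (int i = 0; i < barcodes.size(); i++) {
            Barcode barcode = barcodes.valueAt(i);
            String raw = barcode.rawValue;                              // the encoded value
            android.graphics.Point[] corners = barcode.cornerPoints;   // location in the image
            if (barcode.valueFormat == Barcode.PHONE) {                 // parsed structured data
                String number = barcode.phone.number;
            }
        }
        detector.release();
    }
}
```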
Now let's expand that to a pipeline. Remember we talked before about the four steps of the pipeline: the camera source, the detector, the processor, and then the tracker. The first thing you want to do is instantiate a barcode detector, just like we talked about before. Next you instantiate a camera source and you provide the detector to it, so when the camera starts it will automatically deliver the camera frames to the detector to run detection. Next you instantiate a tracker. Remember, we said the tracker is where you put your business logic; it's an event-based listener, and these are the few methods you can override. onNewItem is called when a detected barcode is seen for the first time by the pipeline; if you want to add an overlay graphic to your application, like in the video we showed, this is where you add your graphic. onUpdate is called, usually on every single frame, when you get an updated location for the detected barcode; in the overlay example, this is where you update its location. onMissing is called when a tracked barcode has been missing for a couple of frames; this can be due to occlusion, or the frame quality being blurred, and after several frames the barcode is just nowhere to be seen. onDone is called when the pipeline is releasing all the tracking resources related to this barcode; this is also where you want to clean up your overlay graphics. Next we instantiate a processor; this is where we hook up the detection results to a tracker. The mobile vision API provides two flavors of processor: the focusing processor and the multiprocessor. We will talk about the multiprocessor later in the presentation; in this use case we are talking about a focusing processor. A focusing processor allows you to select one barcode and then focus on that barcode, continuing to deliver notifications until that barcode is no longer seen by the pipeline. This is what the code looks like: you override the selectFocus method. Again, selectFocus is only called when there is no item currently being focused on. So a batch of barcode detections comes in, you select which barcode you want to focus on, and that barcode will continue to send notifications to the tracker until onDone is called on that barcode; after that, selectFocus gets called again and you get to select the next barcode you want to continue tracking. That's how you construct a pipeline: very easy, four steps; you instantiate a detector, you instantiate a tracker and a processor, and then you hook it all together using the camera source.
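Sketched in code, with the overlay details left out, the Android barcode pipeline just described might look like this; BarcodeGraphicTracker and FirstBarcodeFocusingProcessor are hypothetical names, and a CAMERA permission plus a SurfaceView are assumed.

```java
import com.google.android.gms.vision.CameraSource;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.FocusingProcessor;
import com.google.android.gms.vision.Tracker;
import com.google.android.gms.vision.barcode.Barcode;
import com.google.android.gms.vision.barcode.BarcodeDetector;

// The tracker: where your business logic (e.g. overlay graphics) lives.
class BarcodeGraphicTracker extends Tracker<Barcode> {
    @Override public void onNewItem(int id, Barcode barcode) {
        // a barcode is seen for the first time: create the overlay graphic here
    }
    @Override public void onUpdate(Detector.Detections<Barcode> detections, Barcode barcode) {
        // called roughly every frame with an updated location: move the overlay
    }
    @Override public void onMissing(Detector.Detections<Barcode> detections) {
        // the tracked barcode hasn't been seen for a couple of frames (occlusion, blur, ...)
    }
    @Override public void onDone() {
        // tracking resources are being released: clean up the overlay
    }
}

// The focusing processor: pick one barcode and keep routing it to the tracker.
class FirstBarcodeFocusingProcessor extends FocusingProcessor<Barcode> {
    FirstBarcodeFocusingProcessor(Detector<Barcode> detector, Tracker<Barcode> tracker) {
        super(detector, tracker);
    }
    @Override public int selectFocus(Detector.Detections<Barcode> detections) {
        // only called when nothing is currently focused; here we simply pick the first item
        return detections.getDetectedItems().keyAt(0);
    }
}

// Hooking the four pieces together, e.g. in an activity's onCreate:
BarcodeDetector detector = new BarcodeDetector.Builder(context)
        .setBarcodeFormats(Barcode.QR_CODE)
        .build();
detector.setProcessor(new FirstBarcodeFocusingProcessor(detector, new BarcodeGraphicTracker()));
CameraSource cameraSource = new CameraSource.Builder(context, detector)
        .setRequestedPreviewSize(1280, 720)
        .setAutoFocusEnabled(true)
        .build();
cameraSource.start(surfaceView.getHolder());   // may throw IOException
```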
So the barcode API also works on iOS. The barcode API for Android and iOS shares the same underlying algorithms, so when you use them you get consistent results on both platforms. Our iOS API is provided through CocoaPods: if you only want to use the detector, you can specify the barcode detector pod; if you also want to use the pipeline, you add the MVDataOutput pod as well. Again, in this slide we take a look at how to use the iOS barcode detector for static image detection. First you initialize the barcode detector with a factory pattern; just like before, you can specify what kind of barcode formats you are interested in. Next you provide it with an image, and you get detected barcodes back. Once you get a barcode, you can access its properties: all the properties we talked about before are available here, and you can also find the parsed barcode value here. So this is what the iOS pipeline looks like. It should seem familiar: it has four steps, just like before. However, to respect iOS conventions, we made some changes. Instead of the camera source, we are using the AVCaptureSession to interact with the camera; instead of the processor, we are using a data output, which works with the AVCaptureSession to hook up the detection results to the tracker. The rest should be fairly familiar: you instantiate a detector, you instantiate a tracker where you put your business logic, you use a data output to hook up your detection results to your tracker, and then you use the AVCaptureSession to stream the camera into the detector. Next: hello, now it's the turn of the Face API. The Face API makes detecting faces in a static image, a video, or a camera stream really easy. Using the Face API you can build all kinds of fun apps that involve detecting faces. In the previous demo we showed that you can use the Face API to build a sunglasses try-on app: the user takes a picture of themselves, or points the camera at themselves, and they can see how they would look if they were wearing sunglasses. You can of course do more using the Face API; for example, you can build an avatar app: the user inputs their photo and you use the Face API to generate an avatar for the user. One thing to note about the Face API is that this is a face detection API, not face recognition: it can detect all the faces inside an image or video, but it has no idea who those faces are. The Face API works very well on human faces; no matter whether the face has an extreme expression or the face is obstructed, the Face API can accurately locate the face in the image. And when detecting a face, the Face API also reports a list of positions on the face which we call facial landmarks, including eyes, nose, mouth and so on; these facial landmarks help you better understand the position of the face and the angle of the face. The Face API doesn't require the person to face the camera directly to make it work: we support multiple angles, and the Face API will report which angle the face is at. The Face API also supports some facial activity classification: it can detect whether the person's eyes are open or not, and, as you can see from the video, it can also detect whether the person is smiling or not. In order to use the Face API, the same Gradle dependency as for barcode is used, and for the runtime dependency, in your AndroidManifest.xml you want to specify that you are using the Face API so that our API can download the necessary files. Once you have the dependencies set up, the usage is very easy. The first step is to instantiate a face detector using the builder pattern. You can give the builder a list of parameters; for example, you can say I only want to detect faces that are larger than 10% of the image size, or you can tell the Face API that I am only interested in the largest face in the image. And if you want to do the classification, meaning is the eye open or not, is the person smiling, you need to enable classification when building with the builder; the same goes for landmarks, if you want to detect landmarks you need to enable them when you are building with the builder. And you can tell the face detector to run in accurate mode or fast mode; this is the trade-off between the accuracy and the speed. Once you have the instance of the face detector, you can apply it on a static image and you can get all this information about the face, including the position of the face; if you enable classification, you can get the probability of whether the eye is open or not and whether the person is smiling, and of course, if you enable landmark detection, you can get a list of landmarks of the face.
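A minimal sketch of the static-image face detection just described, again assuming a context and bitmap; the thresholds shown (for example the 10% minimum face size) simply mirror the example from the talk.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

public class FaceScanner {
    public static void scan(Context context, Bitmap bitmap) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setMinFaceSize(0.1f)                                     // faces > 10% of image
                .setProminentFaceOnly(true)                               // largest face only
                .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)  // eyes open / smiling
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)              // eyes, nose, mouth, ...
                .setMode(FaceDetector.ACCURATE_MODE)                      // accuracy vs. speed
                .build();

        SparseArray<Face> faces = detector.detect(new Frame.Builder().setBitmap(bitmap).build());
        for (int i = 0; i < faces.size(); i++) {
            Face face = faces.valueAt(i);
            float smiling = face.getIsSmilingProbability();
            float leftEyeOpen = face.getIsLeftEyeOpenProbability();
            for (Landmark landmark : face.getLandmarks()) {
                // landmark.getType() and landmark.getPosition() describe each facial landmark
            }
        }
        detector.release();
    }
}
```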
The same as the barcode API, the Face API also supports video analysis. A quick refresher of the pipeline: we have four components. We have a camera source, which is responsible for getting frames from the camera; the camera source passes these frames to the face detector, where faces are detected; and the focusing processor is responsible for selecting the right face and routing that face to a tracker. The tracker is where you put your business logic; we will show in the next slide how to implement a tracker class. Configuring the whole pipeline together is very straightforward: you instantiate a tracker, you instantiate a detector the same way we did for the static image, and you tell the detector, in this example, that you are using a largest-face focusing processor, which only routes the largest face to the tracker and ignores the rest of the faces. Then you build the camera source class and you start it, and the whole pipeline starts to work. This is what an implementation of the tracker class looks like. Most importantly, two methods are needed. The first method is onNewItem, which is called every time a new face is detected by the face detector; if you wanted to build the same app as we did in the video, which scans the face and puts sunglasses on it, this is where you create the sunglasses overlay on the face. The next method is the update method; this method is called every time the existing face has some update, for example the position of the face changed, and this is where you want to move the sunglasses overlay to the proper location. So far we have only discussed tracking one single object, no matter whether it's a barcode or a face, but our API of course supports multiple object tracking. The pipeline looks very similar, with the same four components; the only difference, highlighted there, is that instead of using a focusing processor you use a multiprocessor. Unlike the focusing processor, which only routes one element to the tracker, the multiprocessor will create a tracker instance for every element detected by the face detector. The code looks very similar; the only difference, highlighted, is that you create a multiprocessor with a factory class, which creates a tracker instance for each face detected by the face detector, and the rest of the pipeline is the same as single-face tracking.
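Here is a hedged sketch of the two Android variants just described: a largest-face focusing processor routing one face to a single tracker, and a multiprocessor creating one tracker per face. SunglassesTracker is a hypothetical tracker holding the overlay logic.

```java
import com.google.android.gms.vision.CameraSource;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.MultiProcessor;
import com.google.android.gms.vision.Tracker;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.LargestFaceFocusingProcessor;

class SunglassesTracker extends Tracker<Face> {
    @Override public void onNewItem(int id, Face face) {
        // a new face is detected: create the sunglasses overlay
    }
    @Override public void onUpdate(Detector.Detections<Face> detections, Face face) {
        // the face moved: reposition the overlay
    }
}

// Single-face pipeline: only the largest face is routed to one tracker. (e.g. in onCreate)
FaceDetector detector = new FaceDetector.Builder(context)
        .setProminentFaceOnly(true)
        .build();
detector.setProcessor(new LargestFaceFocusingProcessor(detector, new SunglassesTracker()));

// Multi-face pipeline: the factory creates one tracker instance per detected face.
FaceDetector multiDetector = new FaceDetector.Builder(context).build();
multiDetector.setProcessor(
        new MultiProcessor.Builder<>(new MultiProcessor.Factory<Face>() {
            @Override public Tracker<Face> create(Face face) {
                return new SunglassesTracker();
            }
        }).build());

// Either detector can then be handed to a CameraSource to start the pipeline.
CameraSource cameraSource = new CameraSource.Builder(context, multiDetector)
        .setFacing(CameraSource.CAMERA_FACING_FRONT)
        .build();
```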
For the Face API we also have iOS support; it's available through CocoaPods. If you only need to run the Face API on static images, the face detector pod is the right CocoaPod to use; if you also want the video pipeline, you need to include the MVDataOutput CocoaPod as well. Using the API on iOS is very similar to Android: the first step is to instantiate a face detector with all the desired configuration, then you run the detector on a static image and you get the properties of the face in a very simple way. Again, for the iOS API we support the same video analysis as the Android API. The pipeline is the same, except the names of the classes change; for example, it's the camera source that is responsible for providing frames on Android, but on iOS it's the AVCaptureSession; the face detector is responsible for detecting faces, while the focusing data output is basically the focusing processor, the same place where you put your business logic. In order to configure the pipeline, the first step is to instantiate a face detector and a tracker, and you instantiate a largest-face focusing data output, which is basically the same as the largest-face focusing processor on Android; then you use the AVCaptureSession to get frames from the camera, and the whole pipeline starts to work. The implementation of the tracker is very similar, except the language is different, of course: two methods, the first method is called when a new face is detected, the second method is called when an existing face has some update. We again have the same multi-face tracking on iOS as on Android; the only difference is that you use a multi data output instead of a focusing data output, so that it is responsible for creating a tracker instance for each of the detected faces, and the code change is very straightforward, it's only one class change, so I'll skip that slide. The next API I'm going to talk about is the text API. I personally feel the text API is very interesting; you can build a lot of useful apps using the text API. If you are building a payment app, you can do what we did in the video demo: the user can point the camera at their credit card and you can use the text API to extract the necessary information for the user, like the credit card number, the cardholder name, or the expiration date, so the user doesn't have to type all this information on their tiny keyboard. Or you can build a business card scanning app: take a picture of the business card, extract the email address, the phone number, the name of the person, and save this information to contacts. This is a short video demo of the text API. As you can see, we support multiple color schemes, it's very robust, and we support multiple languages; currently we support all Latin-character languages, which is more than 20. And when detecting and recognizing text, the text API doesn't only return you the content of the text, it also keeps the structure of the original text. That means in the return value of the text API there are three levels of objects: the top level is the text block, which is essentially a paragraph in the original image; in each text block there are multiple lines, each corresponding to one single line in the original image; and in each line we report the multiple words inside the line. It's the same Gradle dependency if you want to use the text API on Android, and for the runtime dependency you want to specify that you are using the text API so we can download the necessary files. Although developing the text API was kind of difficult, the usage is actually very easy. The first step is to instantiate a builder and build the text recognizer; then you can run it on a static image and you can get a lot of useful information about the text. For example, you can get the language of the text: as we mentioned before, the text API supports more than 20 languages, and we don't require you to tell the text API which language you are detecting; the text API can automatically determine the language and tell you which language the text is in. And you can get the position of the text and, most importantly of course, the value of the text. You can call the getComponents method on a text block, which is a paragraph, to get the lines in the paragraph, and you can call the same method on a line to get all the words inside the line. The video pipeline is the same as for the other two APIs, so I will just skip that slide to save your time.
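As a small illustrative sketch (again assuming a context and bitmap), reading text and walking its block/line/word structure with the Android text recognizer looks roughly like this.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Rect;
import android.util.Log;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.text.Text;
import com.google.android.gms.vision.text.TextBlock;
import com.google.android.gms.vision.text.TextRecognizer;

public class TextScanner {
    public static void scan(Context context, Bitmap bitmap) {
        TextRecognizer recognizer = new TextRecognizer.Builder(context).build();
        if (!recognizer.isOperational()) {
            return;   // language files are still downloading on first use
        }
        SparseArray<TextBlock> blocks =
                recognizer.detect(new Frame.Builder().setBitmap(bitmap).build());
        for (int i = 0; i < blocks.size(); i++) {
            TextBlock block = blocks.valueAt(i);          // a paragraph in the original image
            String language = block.getLanguage();        // detected automatically
            Rect position = block.getBoundingBox();       // where the paragraph is
            for (Text line : block.getComponents()) {     // lines within the paragraph
                for (Text word : line.getComponents()) {  // words within the line
                    Log.d("TextScanner", word.getValue());
                }
            }
        }
        recognizer.release();
    }
}
```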
A quick summary: we currently have three APIs. We have the Face API, which detects faces as well as the associated facial landmarks and facial activities; we have the barcode API, which decodes both 1D and 2D barcodes; and we have the text API, which recognizes text, supports Latin-character-based languages, and keeps the structure of the text for you. All three APIs follow a very similar pattern, so if you know how to use one of them, it's very easy to use the rest. And all the computation of these APIs happens on your device, so you don't need to worry about network bandwidth. Now that you know how to use the APIs, here are some tips about how to make the best use of them. The first tip we have is: always run the detector on a background thread. This is because the latency of these three APIs is higher than 16 milliseconds, so if you put it on the UI thread, your UI will be laggy and you will have a very bad user experience. The second one is: if possible, do image preprocessing. For example, if you are running the Face API on a video and you know that part of the video is blurred, maybe motion blur, or the lighting is really dark so it's difficult to recognize a face, you might want to skip those frames, because by doing that you can save a lot of battery. And in order to make the best use of the mobile vision API, you can also use it together with the Cloud Vision API. For those who don't know much about the Cloud Vision API: that is another vision-related API developed by Google; it runs on Google Cloud and provides more information about detected items. For example, the Cloud Vision face API provides emotion detection, which the mobile vision API doesn't; it can tell you whether the person is joyful or not, but it has higher latency because it requires you to do a network round trip to the cloud. So if you want to detect the emotion of a person inside a video, you can combine the mobile vision API and the Cloud Vision API together: use the mobile vision API as a coarse detector, or preprocessor, that runs on every single frame to detect whether there is a face large enough for the Cloud Vision API to do emotion detection on. Only if you find a large enough face inside the frame do you pass that frame to the Cloud Vision API to do the emotion detection. By doing this you can reduce the latency of your app, and you can get the same result as using the Cloud Vision API alone. I'm going to play a video about what we can do by combining this local detection and cloud detection together. Thanks. Inside this demo, the on-device detection is responsible for detecting whether the object of interest is inside the frame or not; if the object of interest is inside the frame, we pass that frame to the cloud to do a more detailed detection with the Cloud Vision API. And here are some useful links for people who might want to know more about the mobile vision API. The first link is our official website, where we put our documentation, and then the three codelabs we developed for the Android API; these codelabs have step-by-step instructions starting from how to create an Android project for the app, so if you are new to Android development and you want to try our API, these three links are definitely very helpful. And we maintain two GitHub repositories, one for Android and one for iOS, where we put some sample code; so if you want to know how to use the API, you can check these two GitHub repositories, and if you find any bug inside the API, feel free to report it through the GitHub repositories. In case you have some general questions, like how to use the API, or "why did I get this exception, what does it mean," feel free to ask a question under the android-vision or google-ios-vision tags; we have developers monitoring these tags periodically and answering questions. So the mobile vision API has powered more than 15,000 apps, and with the core features of
detecting face recognize barcode and text now it's waiting for you to think of how to use this feature inside your app on behalf of mobile vision team Xiu and I and Xiu stock would like to thank you all for coming to this presentation and thank you thank you we still have six minutes to take some questions so if you have any questions use the three microphones the microphone please hi there ios developer interested in getting started with the firebase platform in your app well you've come to the right place there are two main parts to getting the firebase platform up and running adding your app to the firebase console let's go over these one at a time for starters let's go to the firebase console at this URL here depending on when you're watching this video the UI might look slightly different but the general concept should remain the same now depending on your situation you might see a blank create a new project screen or you might see a list of existing projects oh before we go further let me take a moment to explain the difference between projects and apps a project consists of one or more apps and the same project use the same firebase database backend and if you want you can use features like firebase cloud messaging to talk to all of them at once you don't have to but you can which is sometimes convenient so if you're a developer that has a cross platform app you generally want to put the ios and android versions of your app in the same project now that will give you some nice cross platform benefits your user can access the same data if they switch back and forth between the ios and android versions of your app things like dynamic links will work across both platforms you can send one notification to all versions of your app and so on on the other hand completely different apps you should put those in completely different projects there's nothing gained by cramming them into the same project except tiers and heartache I guess so if you're working on a cross platform app and your android or web team has already created a firebase project you should probably select that project and connect your ios app in there otherwise if you're the first one to be adding firebase to your app you can be the one to create the new project in my case I'm the first app associated with the project so I'm going to create a new project here I'll give it a name and there we go once you've selected or created a project you're going to want to connect your client app I'm going to select the ios button here and I'll give it my apps bundle id you are eventually going to need to add your app store id here if you want features like firebase invites or dynamic links to work but you can leave this blank for now and change it later now when you click continue your browser should automatically download this google services info.plist file for you note that it needs to be named this exactly so if you get that little one in parentheses after the name like I just did you're going to need to do a little bit of renaming and finder okay next up drag the file into your Xcode project like so and let me go back to the console and hit continue here and it's telling us that this would be a good time to install the firebase cocoa pods now I'm assuming you know something about firebase cocoa pods but if you don't here's a little video for you to check out it's fun so I'm going to jump into my project directory here and do a little pod init we'll open up the file and I'm going to uncomment this line because I am using swift and 
this line because my app happens to have a base SDK of 8.0 although at the time this recording firebase is supported as far back as 7.0 next up let's add some pods now it's important to remember that to keep your app nice and spelt you should only install the cocoa pods for features that you need in fact there is no all encompassing uber firebase cocoa pod that installs everything for you you're going to need to pod install each individual feature and you can find a full list of the cocoa pods and the features they correspond to over here now for starters I'm just going to add firebase slash core which includes everything needed to get the basics up and running and also enables firebase analytics so now I'll make sure my project is closed and I'll run pod install and then let's open up the generated workspace alright we'll build it and make sure everything compiles okay and it does so we can move on to the next step okay looks like the last step here is to add some initialization code I recommend putting this in your app delegate did finish launching method first things first let's import firebase in my app delegate note that this is usually the only thing I'll ever need to import no matter what we've installed we're doing some pretty nifty work behind the scenes to make sure this works properly I know this sounds like the exact opposite of my only pod install what you need advice from like two minutes ago but trust me here this makes development a whole lot easier alright so next up we'll add the line for app.configure to make sure firebase gets set up properly and that's actually all you need to do this configure method will take a look at what libraries you have installed initialize them grabbing all the appropriate constants from that Google services file that you dragged in earlier into your Xcode project so we'll give it a quick run and if everything is set up and working correctly you should see a few lines in your console about how firebase analytics is up and running alright so congratulations you are now up and running with firebase so there are a lot of places you can go from here you can add sign in using firebase auth or get your app talking to the real-time database or start tracking more of your users usage with firebase analytics you can check out these links to get started and have a little fun what's up everybody David here and today I have a quick and easy firecast for you we're going to get up and running with firebase and the web and this is actually going to be the first of many screencasts in a series so make sure you subscribe to get notified of tutorials on authentication storage hosting and web push notifications with firebase cloud messaging also if you're a fan of javascript frameworks I'm going to be dropping videos for angular one and two polymer react and ember so you better subscribe because you want to miss those but today we're going to start with the very basics I'm going to show you some mad copy and pasting skills by getting the project initialization code from the firebase console and then we're going to set up a small web app so let's go and dive in so I'm in the firebase console at console dot firebase google dot com you can see I'm logged in as myself up here just smiling at you but to get started I'm going to create a new project so I'm going to click create new project I'm going to call it web quick start and then we'll create it my project is now created so I'm going to click add firebase to your web app and this brings up a little model with all 
the initialization code I need to get started it has things like my api key off domain database url and storage bucket I can go to the bottom right and then I can click copy and that's all the code I need to get started but just as a little fyi you can access all of this information by clicking off and then going up to the top right where there's web setup but now to the editor so here my editor I'm going to get crazy I'm going to create this web page from scratch so I'll start with my basic html boiler plates give it a title and now I can just paste in all the code from the console and this is all you need to get started and just to prove that it works I'm going to use the database as a little tiny demo so I'm going to create an h1 and give it an id and every single time the value changes in the real time database I'm going to sync it to this h1 so the first thing I need to do is get that h1 by its id and then now I'm going to create a database reference using firebase.database.ref and then create a child location to the text location and now I can synchronize any changes using the on function and then using es2015 arrow functions I can just do it all in one line so to the left right here I have my project and the firebase console and to the right is just my blank page to use the database I'm going to remove all security so I'm going to click rules and then I'm going to say read is true and write is true and click publish and you should totally know that you only do this while you're developing because that means anyone can read or write to your database so now I'm going to give my browser a refresh and then I'm going to add a text location and it synchronizes to the browser and so I can change it and then it changes as well so keep in mind that the realtime database is just one of the many features firebase offers for the web you can also use notification, storage, hosting and even firebase cloud messaging so that's all it takes to get started with firebase in the web and if you want to go and learn more then check out the link in the description for our official documentation and if you're super excited to learn more about firebase on the web then please subscribe to our channel because we're going to have tons of more content so that's all for this time and I will see you all later we started is on a living room couch and we really started because the problem that we had which was asking the same questions to our closest friends where are you, what are you doing we were baffled by the fact that there wasn't a solution that solved this problem and we felt like we could build one that was better the value that is drives for all users is knowing which of your friends are nearby so if you look around where we are right now in arena, how many times have people gone to a basketball game, hockey game or a concert and found out the next day that they had friends we're at the same event and think about all those moments that are missed because they didn't know they had friends there so what we're solving is letting people know who's nearby and making those moments matter my name is diesel pelts and I'm the founder and CEO of is I'm Mark French co-founder of is we felt there was no reason users should manually go fetch data when I get a text message there's no reason for me to go fetch it and we felt why should it be different from anything else and Firebase let us solve that Firebase really allowed us to enhance user experience by making it real time simplify the UI by not having a fresh button 
and cut down and develop in time like any startup the most valuable asset that you have is your team and your time and what Firebase has allowed us to do is save 50% in terms of time by moving that much quicker with a product it's a game changer we're using eight features from Firebase right now they're analytics remote config dynamic links the real time database traditionally that would have been eight different places and now we go to one place for the Firebase console we're eager to launch this product in a big way we're seeing how people are using the product and how they're inviting more and more friends that we're concerned we're growing very very quickly so we sleep a lot easier at night knowing that we got Firebase it's really there to build that infrastructure if you're a developer use it we love it and it's enabled us to focus on developing user experience and not have to worry about the things in the background that should be there and with NPR 1 we are reimagining what a listening experience could be outside of the radio it's the radio but better it has all of the great stuff that we've spent 40 years perfecting with NPR 1 we see the opportunity of reaching an audience that have a device in their pocket all the time my name is Mike Safe-Lahi I'm the lead mobile developer for NPR Digital Media my name is Nick Dupre and I'm the innovation accountant at NPR my name is Tejas Mistry and I'm the senior product manager of NPR 1 some of the biggest challenges in any mobile app are that first impression when the user first installs the app you've got a very limited amount of time to convince them to keep the app for their user experience trying to figure out how we can get users into the content as quickly as possible was the real focus of integrating Firebase and dynamic links using dynamic links we were able to shorten the number of interactions it takes for a user installing the app to get from the promoted content to the content from 20 to 3 so that user is able to get right into the content we are driving and we are listening for user every week it's really astounding creating playlists of content that are configured by the podcaster or by a member station or by us internally and with Firebase we have that at our hands having the analytics product interact with things like dynamic links remote configuration cloud messaging it adds a real multiplier effect and the integration with the broader Firebase suite I don't have to go outside the platform to figure out what's working so it's not just about shipping the product faster it's about analyzing the results faster and with the integration with all the other Firebase products we are really excited about all the things we can find Raze Labs is a company that is focused on building excellence in software technology and design we do that through our work on mobile applications and websites and technologies in general my name is Gregory Raze I'm the CEO and founder of Raze Labs we really want to understand the human problem and often times the hard problems in software aren't just the technology problems the API, how do you connect these things but really getting at the heart of what people are trying to accomplish and do in their day to day my name is Ben Johnson I'm the managing director at Raze Labs in Boston we decided to put our hat in the ring for the Google certified agency program Raze Lab is just having access to a lot of what Google is doing today so there's access to design reviews invitations to events and that's sort of the 
base level and I think that's hugely rewarding even in and of itself having Google review your app from a design perspective is amazingly helpful so that's sort of the first tier the second tier comes with certified status there's a long application process for that and once you have it it's something that you can really say to your clients that gives them comfort that we're a reputable firm that we're building great software in a way that Google believes in the certification is a higher bar for us to really differentiate ourselves from many of the other companies out there it required us to really dig into what that means to be truly world class and we wanted to set that bar for ourselves as well my name is John Green I'm a VP creative at Raze Labs the Google developer agency program allowed us to have access to engineers with the map team the design team to figure out how can we actually do some of these things and we could reach out to them when we needed and also allowed us to set up and say we can make this a success they might look closer at this app because we're part of this program which has actually been super helpful some of the challenges in building the Six Flags app and which touched on some of these are certainly mapping technology and payment technology material design or the APIs having access to the Google team to really ascertain how we're approaching certain software and ensuring that we're building technologies the right way makes for a smooth development process we set off to build the Six Flags app with a pretty lofty ambition and it was to bring in-park navigation and commerce to the app the comfort of knowing that Google is there to help us understand where they are heading as an organization and that we are along for that ride is a really helpful thing to know and as a business we know that going forward we're going to be at the cutting edge of whatever Google is doing through access to programs through the collaboration with their teams it's really helpful for us to know that six months, nine months down the road will still be a part of that process and we'll still be working with them to figure out what's next so here we are in the sandbox I'm so glad that you'll be here with us for the next three days as we explore everything on the ground at Google IO first off is the mocktail mixer it's not getting started early because there's no alcohol Chris, would you tell me about this? Excellent, thanks Timothy so this was a do-it-yourself mixer that has the Google Assistant built in it was part of a collaboration between the assistant team and Deep Local, these guys behind you it's a creative agency out of Pittsburgh and what they've done is they've used the Google Assistant SDK which we launched three weeks ago and actions on Google to customize the drink, the drink ingredients and the code as well as services like API.ai to create a conversational interface so you can have a natural interaction with the mixer so we're super cool, it's a super cool demo it shows you how to go from zero to prototyping in a matter of hours now I want to talk a little bit more about this demo and the SDK and all the things that we can do with it but first let's make a drink let's talk to Mocktail's mixer let's talk to Mocktail's mixer what's on the menu? let's get a pairing mode is that the robot sounds it's making? 
yeah so it's actually like it's going from my voice going through this mic through the assistant SDK running on a Raspberry Pi device which I think Oscar will talk a little bit more about going back to the assistant server running in the cloud and figure out what I'm saying doing natural language understanding and speech recognition and then basically coming back and controlling the devices and now you see we've started making the pairing mode drinks for everybody around here that's awesome there's a bunch of drinks so we have some other friends joining us let's start with Oscar Oscar you're one of the guys that actually built this yeah so I work for Deep Local we work like Chris said with the SDK team on the project and basically the way this works is there's a Raspberry Pi inside the device that runs the SDK and when you speak with it it runs up to API AI where you can program your conversational interface from there there's a webhook that's called that when you call a drink it pushes a message over google pub sub down to the devices and actually sends a serial command to the arduinos inside that is actually what controls the motors and dispenses the liquid that was like a design doc in five sentences thank you and everything's open sourcing online so you can find it on github if you search for mocktails mixers or if you go to deeplocal.com slash mocktails mixers there's a write up and a video and DIY instructions so the home builder can make it themselves that's Wayne Wayne you're the home builder so Wayne you are one of the developer advocates working on assistant and the SDK and all these APIs is this what you do all the time? well I made a dog feeder one time but this is new I gotta get into this now I can imagine something where you can like mix up custom food or something like that would be kind of cool you made a dog feeder they made a human feeder we're gonna merge them together now that's the cool thing is because this SDK is available to the public anyone can build devices like this now I'm quite excited about it I'm gonna work out my next plan for some kind of dog feeder well that's the really cool thing about this recent release like the SDK just came out a few weeks ago right? 
three weeks ago and it's really giving the ability for people to bring the assistant into their own hardware exactly I mean you can take any crazy idea that you've got and you can embed the assistant SDK into it runs on most Linux operating systems and so forth and it's just really easy to get started we've got a whole bunch of samples people can try it out all the demos here as well they're all open source so people can try them out and again just like sort of taking a look at the value of that it's really this is something that people have been doing before but they had to build the whole stack themselves which includes a lot of technologies that they don't want to really be experts in but Google can be that expert for them and you can just use the APIs instead you can focus on what you're good at which is making these kinds of devices and leave all the speech recognition and the natural language understanding to us awesome are the drinks ready oh I see they're getting poured out okay well we're waiting for the drinks Vera tell me about some of the ways that people are using the assistant today awesome so today we at IO we announced that the assistant is fundamentally conversational so everything that we're saying here that Wayne mentioned that it's natural language processing we were able to actually as a user talk to the assistant and the assistant can do things for you and so the assistant is live across devices like this through our open SDK but it's also live across Android and we announced an iOS app so you can get it on your iPhone and it's also live on wearables and soon TVs, cars etc and the assistant can so as a part of our developer platform actions on Google the assistant can order caps for you or make table reservations or even set up your smart home so it can clean your apartment and so we're really excited about what developers will build on top of the platform to help users that's awesome I think the drinks are ready I sure do this is the which drink is this pairing mode pairing mode it's pretty fantastic I'm impressed it tastes better because it was made with technology that's absolutely right putting artificial intelligence into your drinks done so I'm going to drink a little bit more of this but maybe while I do that can you talk about where you see this going what are you most excited about developers doing with this technology? 
yes that's a great question so what we envision is a ubiquitous assistant experience so that when you need help or need something to be done in your life that you can just ask the question and it will happen so for that experience to have the assistant in multiple places in your life and we don't expect a Google Home to be in every corner of your house and we also don't expect Google to build all of the appliances in your house it just doesn't make any sense so what we need to do is we need to empower a diverse ecosystem of device manufacturers embedding the assistant in their devices then you can have the assistant available to you when you need it wherever you need it however you need it so that's really where we see this going longer term in the assistant SDK and actions on Google and API.ai are just the beginnings of where we're going awesome well thank you so much thank you everybody and it's y'all out there cheers my name is Sam Birch I'm a product manager working on Chrome for Android I've also got Alex today I got a text from Alex he said the web is still too slow just let me do a couple more traces he's very serious about web performance so I guess he'll join later on if you didn't catch Rahul's talk in the last sessions you might want to look it up on YouTube later I thought it was a really crisp overview of the web it's the biggest platform in the world bigger than any other OS or any other form factor and there hasn't been another platform like it in the history of computing so what we're going to focus on in this session specifically is the web's role as a platform for apps for deeply engaging experiences and specifically progressive web apps but let me start at the beginning why build apps on the web if you have a significant presence today it's likely that it already has a lot of visitors and you're not alone here's a recent breakdown comparing monthly unique visitors to the top thousand apps and sites and what you see is that the web reaches three times as many people and you might have heard in the developer keynote earlier today that that reach is growing faster too historically though most users have visited the web in passing particularly on mobile so you might have to type a URL than it is to tap on a home screen icon you might watch a video clip or read an article but you're not that likely to come back later because native apps have had access to features like push notifications and icons on the home screen users who engage deeply with services on their phone have tended to do so through apps so looking at the amount of time users spend in those same apps and sites the average for apps is much higher than it is for the web so I think the opportunity here is pretty clear what if you could give users a deeply engaging experience as soon as they landed on your website instead of asking them to take the high friction step of installing a native app what if they could get a native-like experience from the web and the good news is the past few years have seen a dramatic increase in the quality and the capabilities of web apps the web platform has their broadly support features like push notifications on the home screen and technologies like ServiceWorker that make your site more reliable so if you're not familiar with the term web apps taking advantage of some of these technologies are called progressive web apps they're reliable, they're fast and they're more engaging as a result of this change folks who are investing in the mobile web are seeing great results a lot of 
decisions have been made in the past assuming that it's too hard to get users to come back to your website but the best should change too let me dive into a few more examples here way back in 2011 the Financial Times abandoned their native apps they built a web app using the best technology available at the time and app.ft.com has been going strong ever since recently that experience has gotten even better as browsers caught up to support developers building deeply engaging experiences on the web and now their site is a fully-fledged progressive web app that means I can save it to my home screen and even read an article while I'm offline the Financial Times will sync an offline section of the paper so that I can read it even while I'm on the train and this uses the ServiceWorker and they've even got an offline enabled podcasting app which is available at listentoft.com so you can build these pretty sophisticated experiences on the web today and it's fascinating to see how prescient the Financial Times was back in 2011 fellow publishing giant Forbes just took a similar step launching their progressive web app for mobile and they've even made a slick video to match their new experience with the itch and access of the web there's no place I can't reach the impact of the web on the newsroom was monumental it's now more the reader telling the newsroom this is important you really have to start to build from scratch what is a story on the phone with a progressive web app there's a link to install with no friction the PWA is on their phone and once that is installed we are able to alert you to hey we got some automation if you're interested in whatever areas that you are you can install that subject to topic and we're going to serve you the content that you want and that's going to change our business the technology has enabled us to make our new PWA faster than your current motorcycle we're now able to deliver visuals faster and if you can start to deliver visuals faster then you can start to change the formats you do people are willing to stay longer if they stay longer they see more advertising the PWA is going to result in a more personalization personalization will be more engaging the web has made me realize there's an audience out there there's an audience that's knowledgeable and there's an audience that needs to be understood that's pretty compelling stuff and I want to underline a couple of statistics that they called out in that video after rolling out the progressive web app Forbes saw a 43% increase in the number of sessions per user and those sessions were double the length on average and I think that a better experience for Forbes's users was also great business move sorry for the delay this isn't just about publishing Lyft has also launched their new mobile website as a progressive web app with the needs of users and of emerging markets in mind and you can try it yourself at ride.lyft.com in emerging markets where you can't take bandwidth or even connectivity for granted it's harder for your audience to get into an app and so instead of making a site that uses a landing page asking users to take this high friction step to install Lyft's PWA is a feature complete version of Lyft just without the install step after all the goal isn't to get users to install an app it's to get them using your service so to recap users are already visiting mobile websites making the experience of your mobile website radically better by building a PWA helps you engage and retain users from just passing 
by. The pivotal moment for any app is when it earns a place on the user's home screen, and with progressive web apps that happens when users choose to add a site to their home screen. Improvements to this key step have been a focus for us lately and they're rolling out now, and there's actually a flag you can enable here if you want to test it with your own site. There are two key changes here for engagement. First, something we've heard consistently from developers and users is that it's confusing that there are a lot of apps that are on the home screen but not in other parts of the Android system UI, like the app drawer. As part of the improved add to home screen experience, users can now find them there. They'll also show up in system settings, allowing users to manage progressive web apps more like other apps. But I want to emphasize that this is actually pretty general; for example, they'll show up as suggestions and open from the Google search widget, and in a lot of places that were formerly reserved just for apps. Second, something else we've heard is that it's confusing that sometimes progressive web apps would load in a tab in Chrome and other times as a full screen activity after the user had added it to their home screen. So I'm happy to say that now we'll be able to handle intents from links opened in other apps or in Chrome. That means that users who have added your PWA to their home screen will get the immersive version of your site. Okay, now let's take a closer look at the manifest, which provides the metadata to enable progressive web apps to be added in the first place. This is the same manifest that you would use today to allow users to add shortcuts, so if you support that today and you have a progressive web app, all of this happens and you'll switch over without requiring any extra work. So it starts with a name and a short name. If you've seen a talk about progressive web apps or the app manifest before, this will look pretty familiar. The short name is actually what shows up on the user's home screen and in system settings, and the full name will appear basically where there's space for it, so in the Chrome prompt and in the splash screen which shows before your site loads. Then you've got an icon. You can specify multiple sizes here, but we recommend at least one icon which is a PNG with at least 144 pixels to the side; 192 will scale up even better. That icon gets used in a few different places: in the app drawer as I just showed you, in the rest of the system UI on Android, inside of Chrome when users are prompted to add your site to the home screen, and it's used to generate that splash screen. The start URL is, so to speak, the main screen of your app. It's what users will get when they tap on the icon on your home screen, and if it's open in the background already, they'll just pop back to whatever page they had left. The display mode controls how your app shows up on screen. Usually you want standalone, which is sort of like full screen: you won't see a browser toolbar or other browser UI, but you will still see the Android status bar and the Android navigation bar, so this is by default sort of what native apps look like. But there are also a couple of other modes coming to Chrome. First, fullscreen, which in contrast to standalone does cover up the status bar and the navigation bar; that's what you see here with paperplanes.world. And minimal UI, which is a new mode which has a simplified toolbar with your URL and allows users to see it and copy it more easily.
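For reference, a minimal manifest.json covering the fields just described might look something like the JSON below; the names, paths and sizes here are placeholders rather than anything from the talk, and the file is linked from your pages with a link rel="manifest" tag.

{
  "name": "My Progressive Web App",
  "short_name": "MyPWA",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ],
  "start_url": "/",
  "display": "standalone"
}
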
Then there's scope. Even if you have an app manifest today, you probably don't use scope, because today it actually hasn't done anything to affect the behavior of your app. However, with support for links coming into Chrome, this field does now have some meaning. A core part of the user experience of progressive web apps is that they open reliably and users never see a network error from the browser; in Chrome that's the dinosaur page. That's ensured by having a service worker which handles a set of URLs inside your progressive web app, but if you do provide a scope explicitly in the manifest, it will restrict the set of URLs that should open in your progressive web app in its immersive form. The scope of your progressive web app will also define the boundary of your app. So right now, navigations to other sites from your progressive web app, that is, navigations outside of the scope, will open with this little URL bar on the top that you can see here. That little bar isn't all that useful, and it's lacking especially compared to what you get when opening content from native apps. So to close this gap we're planning to move this to a more featureful and familiar UI. It'll allow you to copy the URL, to open it in Chrome, or to jump back to the progressive web app that opened that content with the X, and we're looking forward to getting your feedback on this as it rolls out later this year. Okay, so you have a progressive web app with a manifest; how do you figure out what precisely qualifies as a progressive web app? The best way to check the requirements is to use Lighthouse, where you can see there's a section called user can add site to home screen. It's pretty self-explanatory: if those are green, your site should be addable on any Android device. As we announced today, Lighthouse will soon be integrated into Chrome's developer tools, making this even easier to get at. We also know that being able to add your site predictably is important, so we're going to approach any changes to these criteria carefully and with advance notice on the Chromium developer blog. So with all those requirements in place, users can be prompted to add your site after the browser fires the onbeforeinstallprompt event. Today that prompt looks like this in Chrome. It's fired when we determine your site is an eligible progressive web app and that the user has been sufficiently engaged with your site. The intent of that engagement check is to avoid spamming users with requests. We said at Chrome Dev Summit last fall that we were experimenting with a threshold for user engagement; at the very beginning this was a few minutes and a couple of visits to your site, and we heard from a lot of developers that this wasn't very predictable. In the most successful variant of that experiment we saw banner triggering increase by half, and remarkably the user acceptance rate didn't change much even though we were triggering earlier, so we saw a 48% increase in the number of installs as well. That suggests to us that we were being too conservative before, and so I'm pleased to say that we've decreased the engagement threshold in Chrome stable. Targeting your prompts is even better than leaving it to Chrome, so you can hold on to the onbeforeinstallprompt event and show the prompt later, at a time that makes the most sense for your app. Flipkart, for example, in this screenshot delayed prompting users until after they completed a purchase, a moment when users are engaged and getting a lot of value from their service, and this led to three times more users accepting the prompt than they had before.
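As a rough sketch of that hold-and-reprompt pattern (the install button and the logging are illustrative, not from the talk):

let deferredPrompt;

window.addEventListener('beforeinstallprompt', (event) => {
  event.preventDefault();   // keep the default banner from showing right away
  deferredPrompt = event;   // stash the event so we can trigger it later
});

// later, at a moment that makes sense for your app, e.g. after a purchase completes:
installButton.addEventListener('click', () => {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();  // show the add to home screen prompt
  deferredPrompt.userChoice.then((choice) => {
    console.log('Add to home screen prompt was ' + choice.outcome);  // 'accepted' or 'dismissed'
    deferredPrompt = null;
  });
});
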
Pretty soon we're going to be taking this even further by firing onbeforeinstallprompt for sites as soon as Chrome understands the site is a progressive web app. If you add a listener for this event early on in your page, Chrome will suppress the default prompt and leave triggering the install flow up to you, which gives you a lot more control. When the prompt is triggered from a user action, the display will change from the toast at the bottom to this style of modal prompt in the GIF here. This better aligns with the way that users and sites are using add to home screen today; in fact, we see that most additions come from Chrome's overflow menu, with sites often actually pointing to it from their own UI. So by giving developers more control and the ability to reprompt when the user taps something, it will be easier to show more timely and relevant prompts. So to summarize again: you can make your mobile web experience radically better by building a PWA, and that can help you engage and retain users who are already coming to you. We've been working in particular to make it easier than ever for users to add your site to their home screen, and to improve the experience and user engagement once it's there. And with that, I'll hand it over to Alex to talk about some of the other platforms building support for PWAs and some of the new capabilities. Thanks Sam, can't wait to see all that stuff roll out later this year. Hi everybody, I'm Alex Russell. I'm a software engineer on the Chrome team, like Sam mentioned, and I wanted to spend the next 15 minutes or so making you as sad about web performance as I am. No? I'm kidding, of course. I did promise Sam that I wouldn't use my last minute slide editing power to make this secretly a performance talk. It is actually taking quite a lot of self-restraint for me to not go into why it takes 40 seconds for this site to load, but I'll hold off. So where were we? Was it progressive web apps again? Right, so app.ft.com is awesome, but you might not know that progressive web apps are also on Chrome OS. So if I go to app.ft.com in a tab on Chrome OS and I spend a little bit of time on it, I'll eventually get an add to shelf banner, very much the same way that I would on Chrome for Android with that add to home screen prompt at the bottom. If I click add, I'll get a full item in my shelf at the bottom, and if I launch it, it shows up as a full screen window, a standalone thing. It even shows up as a differentiated item in the task switcher. It's a real boy, it's a full first class application. This is pretty great. So Chrome has been taking some steps to make progressive web apps work on the desktop, but it turns out we're not alone. For the past few years we've been extraordinarily lucky to have partners behind progressive web apps: since version 4, Samsung Internet has had support for progressive web apps, and you can even try it out today. So in addition to the usual add to home screen banner with user engagement, Samsung Internet has been forging ahead with new ideas about how to communicate to users that they're visiting something that isn't just a regular website. If I land on Twitter Lite, I see the usual star button at the upper left when I first land there. If you've used Samsung Internet, you'll know that this is a little button that lets you bookmark any page. It's a quick action for that. But once Samsung Internet detects that the site is a progressive web app, that star changes to a plus.
That's a visual cue to you that this isn't just your regular website. There's more to do here. Tapping on the plus brings you up to this menu, which lets you both bookmark and add it to the home screen. So it's a persistent cue that lets you understand that this thing is a PWA. So here's how the experience feels in context. This is the site that I'm particularly partial to, and not just because it loads fast. As you can see, Samsung Internet's progressive web app integration feels fast and fluid. Just like Chrome, Samsung Internet progressive web apps get their own top-level activities in the task switcher when launched from the home screen. And they launch display standalone too. So if you follow Android hardware, you might have heard of the new DEX. DEX is an add-on for the recently launched flagship S8 phone, which was released last month. DEX is a bit like a dock for your phone. It gives you a desktop mode, letting you connect an external mouse, keyboard, and monitor, all driven from the phone that you put inside the dock. As you saw on Chrome OS, there's kind of no reason to believe that progressive web apps can't excel on the desktop too. So here's the same flow. Samsung Internet team sent us this video showing adding a PWA to the desktop, launching it, and once launched, it again launches in its own top-level standalone window. Because making responsive multi-form factor experiences is so easy and fluid on the web, the Polymer Shop demo works great in both form factors. I think this is an outstanding testament to what's possible with the web today. Because web apps are standards-based, advanced features like push notifications can follow you to whichever browser you happen to favor on your Android. But that's not all. Thanks to the open nature of standards, our friends on the Microsoft Edge team have also been working with us on compatibility for features like service workers and web push. But what I'm really excited about is some of the stuff they unveiled at their build conference last week in Seattle. Later this year, Windows 10 will gain some very deep integration with progressive web apps. In a first for progressive web apps, the Windows Store will crawl the web to discover which sites are. Bing users will see them as installable apps in search results, and developers will be able to easily claim store listings. Listings in the Windows Store let users discover progressive web apps wherever they might be looking for them, either in a tab or in the store. So progressive web apps like jig.space will always be available in the browser, of course, but if you're searching for the same experience in the store, they'll be listed there too. Installing these apps is just like downloading any other app from the Windows Store, except they're tiny. But as a developer, all you had to do was build a great progressive web app experience. You didn't have to target a brand new runtime. Apps that are downloaded and pinned this way get all the same UI affordances as native apps, including the ability to be configured, installed, and uninstalled naturally, just like you would any other Windows 10 app. Billions of users across desktop and mobile are getting first class support for progressive web app experiences this year. I think it's a great time to be a web developer. This is hugely exciting. So PWAs are everywhere. In tabs for sites that you visit, on the home screen if you want them to be, on your desktop, and soon inside stores. 
For apps that need what the web already is great at, that opens up huge opportunities. If you saw Rahul's mobile web state of the union, you know how vitally important that is to businesses and how PWAs are positively impacting sites worldwide. But what keeps me up at night on the engineering side is the set of things that the web hasn't been great at. A few years ago, people sort of thought it was nuts that we would do push notifications and deep system integration without coming up with a brand new format and a brand new way of packaging and distributing applications. And that's what everybody else was using stores, right? But we've got some great examples now, things like Twitter Lite, OlaCabs, Forbes, and Lyft showing what's possible if we just add a couple of things that are currently missing from the web and how much better that can be. So what is that next set of apps? On the Chrome team, we've been kind of preoccupied with the question why aren't all of my social and media apps progressive web apps? So back in 2013, we were struggling to answer the question why don't web apps ever feel great on a phone? What's the set of capabilities that reinforce each other? To make reliable apps that loaded quickly every time, we had to solve the offline problem, hence service workers. To participate in a tap and swipe ecosystem on your phone, we had to make it possible for you to access them from the home screen and inside the notification tray. So we worked on add to home screen and notifications and manifests. So what next? Well, we think it's a similar constellation of new capabilities targeted specifically at social sharing and media. We've been working on a lot of stuff over the past year to expand the power and reach of progressive web apps, but I want to highlight a few that are close to shipping, namely, web share, improved media selection, image capture and shape detection. This is not by any means the full tour. There will be lots of other talks, which will point you at a few of them. But rest assured, we're continuing to expand the set of capabilities for the web. So if you've implemented a modern website, you'll no doubt be familiar with the endless opportunities for performance degradation that social media widgets add. As bad as the performance problem is on desktop, it's much, much, much worse on mobile. I promised I wouldn't turn this into a performance talk, so here's the shiny video. This is Twitter Lite using a new experimental API called Web Share. Web Share lets sites trigger a share intent on Android in exactly the same way native apps can. On other OSes, Web Share triggers whatever the built-in sharing system is, too. Instead of pulling in half a dozen social network scripts to enable sharing, the browser can now do the heavy lifting for you. That's a huge win for performance and for consistency with EOS. Triggering sharing is pretty simple. You just call the navigator.share method with the data that you want to share, and you'll have to provide either text or a URL in order for it to work out. The API is asynchronous, so it returns a promise which integrates nicely with a new async function syntax coming to JavaScript. As you probably guessed, we're also working on the ability for progressive web apps that have been installed to handle intents, too. That's happening via the Share Target API, and you should look for both of them in a Chrome near you sometime in the next year. 
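To make that concrete, here is a minimal sketch of triggering Web Share; the title, text and URL are placeholders, and since the API returns a promise it fits the new async function syntax mentioned above.

async function share() {
  if (!navigator.share) return;  // feature-detect; Web Share is still rolling out
  try {
    // either text or url (or both) must be provided for the call to succeed
    await navigator.share({
      title: 'An example article',
      text: 'Worth a read:',
      url: 'https://example.com/article'
    });
  } catch (err) {
    // the user cancelled the share sheet, or sharing failed
  }
}
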
Another major issue with sharing from the web today is that when you go to select a photo or a video to attach to a post, you'll often wind up seeing an intent chooser that looks kind of like this. Now, I don't know about you, but the difference between camera and camera wasn't immediately obvious to me. It turns out that the one on the left is for taking a still image, and the one on the right is for video. Who knew? The last one, files, brings you into a system-provided file picker like this. This isn't the nicest UI, and while it's functional, it shares a problematic commonality with the first two. Because this is using the Android intent system to launch another application into the foreground, that means that Chrome can go into the background where it might be killed, specifically on a low-memory device. So you sit there and you go through and you find the perfect thing to upload along with this post. You hit OK, and suddenly the page is reloading. Where did it go? This is a pretty bad experience. So to get rid of the confusing dialogs and to deal with the memory pressure issues, we've been working hard on a new UI that mirrors what most native apps have taken a shine to for similar reasons. Here it is. This new picker is faster and more intuitive, and I'm happy to say that it won't require any extra work from you to take advantage of it. Like the social apps you're used to, it allows you to select items in context. There's also work happening to make background uploads of large media files easier, and one-shot background sync already shipped last year in Chrome. That means it's now the most reliable way to do an action like posting something to the web. It even retries when it detects that you're back online from a disconnected state. All of these APIs work together to make sharing and media easier. And that's a big improvement for selecting photos that you've already taken. But what about camera-based apps? I always tell the team that the web is for cat GIFs, and making cat GIFs is too darn hard on the web. What's up with that? Luckily they agreed, and that's why the Image Capture API is shipping in Chrome 59. This API builds on Chrome's really great getUserMedia infrastructure to give you access to detailed information from the camera. It also lets you build your own controls to manage important features like zoom level, focus mode, color temperature, red-eye reduction, flash, contrast, saturation, exposure. I could keep going. It's a long list. And we finally are going to get control over those features on the web in the next release of Chrome. There are so many options for image capture that I'm not going to go through code examples for them, because it would be basically an exercise in regurgitating WebIDL. And I don't know about you, but reading WebIDL is not my idea of a good time. So luckily the Polymer team has been paying attention to this problem space too, and they put together some high quality components that let you easily integrate image capture into your app. Check them out at webcomponents.org by searching for the app-media component set. It's pretty great stuff.
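For readers who do want a minimal taste anyway, here is a hedged sketch of the basic capture flow; the constraints and variable names are illustrative, not from the talk.

async function captureFromCamera() {
  // grab a camera stream, then hand its video track to ImageCapture
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const track = stream.getVideoTracks()[0];
  const imageCapture = new ImageCapture(track);

  const capabilities = await imageCapture.getPhotoCapabilities();  // e.g. supported red-eye and fill-light settings
  const frame = await imageCapture.grabFrame();   // an ImageBitmap of the current frame
  const photo = await imageCapture.takePhoto();   // a Blob containing a full-resolution photo
  return { capabilities, frame, photo };
}
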
What's increasingly important for AR and media applications is the ability to quickly understand what's in a scene, though. This is helpful both for focusing when you're taking a photo, but also for matching effects to faces and things that you might have in frame. Interestingly, it turns out that face detection, shape detection, and text recognition are built into nearly all modern OSes, and in the case of things like QR codes, sometimes even into the firmware for the camera itself. That sort of thing is expensive and slow to do in JavaScript and can potentially jank your main thread at a moment where you really want it to be smooth. You're taking a photo after all; you want to put some pictures on screen. Like Web Share, having the platform do the heavy lifting for you can really transform the quality of the application that you deliver to the end user. So we're bringing shape detection to the web. Using the new ImageCapture constructor, you can easily grab a frame from input that's coming from the camera or any other media stream. The face detector then lets you scan captured frames for faces, returning a list of detected faces with x, y, width, and height attached to each of them. You can capture multiple faces in a single frame, and this API takes care of all the fiddly bits for you. Same for barcode detection. It works exactly the same way: you grab frames from the stream and pass them into image capture, and then you decode them using the new barcode detector object. It does exactly the same thing, except that barcodes not only have x, y, width, and height, but they also have a raw value giving you the decoded value of the QR code or the barcode that you're currently looking at. And while this may not seem like a big deal to those of us who don't live in countries where QR codes are used every day, in many parts of the world physical commerce is increasingly mediated by the ability to quickly scan QR codes. This capability unlocks the ability for progressive web apps and the web to compete on equal footing, letting commerce-focused progressive web apps easily integrate a heavily used feature at much lower cost. You no longer have to send your own QR code detection library down the wire. This is great for users.
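As a hedged sketch of how those detectors fit together with the capture flow shown earlier (these shape detection APIs were still experimental at the time, so treat the names as following the draft spec rather than the talk):

async function detectInFrame(imageCapture) {
  const frame = await imageCapture.grabFrame();          // ImageBitmap from the camera stream

  const faceDetector = new FaceDetector();
  const faces = await faceDetector.detect(frame);        // each result has a boundingBox with x, y, width and height

  const barcodeDetector = new BarcodeDetector();
  const barcodes = await barcodeDetector.detect(frame);  // each result also carries a rawValue with the decoded text
  return { faces, barcodes };
}
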
And there's so much more to say about media. In fact, John and Francois are giving a whole talk focusing on playback on Friday at 10:30, which you should absolutely check out. The set of things that you can do today with service workers and the web's built-in media stack blew my socks off when they showed me the demos. I think you'll be similarly impressed. Also, be sure to check out Owen Campbell Moore's talk on building great user experiences tomorrow at 11:30 over on stage two. We've learned some really important lessons and common patterns in working with partners over the past few years, and his talk is going to be essential if you're just starting on that journey. It'll help you learn a lot from the things that we've learned. So that's all we've got for you today, but we have a lot of time for questions. And if you do have questions, I'm hopeful that you'll be able to use the mics that are standing in the middle of each of the aisles and line up. Thanks so much for having us, and I can't wait to see what you build. Adding support for a new platform feature in your app can be a very repetitive process. Many times you will switch back and forth between your code and a step-by-step tutorial or documentation, especially when you're not familiar with the API. That was the case when I was implementing support for App Links in a sample app back when Android Marshmallow was released. App Links is a neat feature that lets you verify a domain that is listed in your activity's intent filters. From that moment forward, whenever a user clicks on a URL containing this domain, your app will open automatically without showing the disambiguation dialog. I remember that in order to make this API work, I had to make many small changes in different parts of my app, which is why I was very curious when I heard about the new App Links Assistant that is part of the Android Studio 2.3 release. Its promise is to add App Links support to your project with just a few clicks, without leaving the IDE. Let's see how that works out. I found a perfect sample app for my experiment under Google slash search samples on GitHub. It's a very barebones recipe app. The main page doesn't really do anything, but there's also a content provider with several example recipes set up and ready to be displayed in the detail page implemented in RecipeActivity. All I need to do is open the activity and call showRecipe, passing a content URI which matches a recipe in the content provider. Now imagine that I'm also running a website for my app and I'd like my users to be able to share and open links to recipes in the same way, regardless of which device they're using. Let's see how long it takes to implement this using the App Links Assistant. You'll find the Assistant under the menu, and it shows up as a panel on the right side here. First step, I want to define URL mappings in my app. I will add just one to connect recipe URLs to the detail activity. I just have to fill in my website's host name and what path to match. I'll use the path prefix recipe to also capture any recipe ID after the last path separator, and I want to launch the recipe activity. The tool generates the correct intent filter and adds it to my manifest. I can even see a preview of the changes and check that the URL I need will get matched to the correct activity. Step 2, add logic to the activity to handle the URL. I'll select the activity that I specified in the previous step, and some code gets added to my onCreate method. This is just to help you get the necessary data from the intent, but remember that it's up to you to actually handle it in your app. In my case I want to load and show the correct recipe. For that, I need the recipe ID, which is the last path part in the URL. Now I'll just convert that to a URI pointing to my content provider and pass it on to showRecipe. In some cases, such as this one where the activity's launch mode is set to singleTop, you will also have to handle a new intent delivered to an already running instance of the activity. I will refactor the code I just wrote into a new method and call it from onNewIntent as well. Before I move on to configuring App Links on my server, I just want to check that everything I configured so far works on the device. I'll just use the App Links tester in step 4 to launch a URL pointing to the grilled potato salad recipe. It correctly launches my app and shows the recipe. Great! You might have noticed the disambiguation dialog that popped up when I launched the URL. Getting rid of it and launching directly into the app is the last part. Let's proceed with step 3, generating the digital asset links file. You'll need a few details about your app, such as the domain you're using in your links, the application ID, and a signing config. Most of these will normally be pre-filled for you, so you can just click the generate button. Now you will need to place this file under this path on a domain that you control. Please note that this is where it's easiest to make a mistake. The path must match exactly.
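For reference, the generated file is a small JSON document along these lines, served from the .well-known path on your domain as described next; the package name and certificate fingerprint below are placeholders.

[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example.recipes",
    "sha256_cert_fingerprints": ["AA:BB:CC:..."]
  }
}]
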
The well-known folder must be under the root of the domain, and the server must be using a valid HTTPS certificate to serve the file, even if your app links are only using HTTP. For testing my app, I'm just using GitHub Pages, which gives me an easy way of hosting my asset links file with the correct SSL certificate. Let's test a sample link again at step 4. Let's have a little chat about a brand new API that's just coming out on the platform called the Media Session API. I heard about it when I started doing this app. It was mentioned to me by a couple of the Chrome engineers. But Francois, who's on my team, has written a brilliant Google Web Developers update where he explains in complete detail how to set up the Media Session API. So check the notes below. We're going to pop in a link for that, but I will just show you briefly what it actually looks like in the context of the app on the phone here. Cross to the direct screen cam. There you go. That's what it looks like on the phone. So as I start playing a video like this, you see that if I swipe down from the top, we actually get a notification which has an icon here and it has play and rewind and fast forward buttons. And you get to configure those yourself. In fact, let me go into another one of these videos where I think I've actually set it up to load some custom album art as well. This is me and Jake. And you can see here now we've got custom album art, which is the picture of Jake, and the previous and next, the fast forward and rewind buttons, are actually set to be skip forward 30 and go back 30. So you'll be able to tap that and go forward 30, which you can see there. Oh, it's just skipped it right to the end. Whoops a daisy. But I can replay the video. Don't worry. The other thing that it actually does, which I really like, is that if I turn off the screen, you still get these controls on the lock screen. That is very exciting, isn't it? So let me show you a little bit as well since we're here. Let me show you a little bit of the code. It's very straightforward. We have a quick check whether we support the Media Session API, which is basically looking for, let me show you actually, it is just simply looking for mediaSession in navigator, and if we have mediaSession in navigator then we consider ourselves as having the Media Session API. And what we do is we say navigator.mediaSession.metadata, and then we create one of these new MediaMetadata thingamajigs. It's very exciting. ESLint doesn't like it. It doesn't think it's a real thing. It is a real thing. You can totally use it, and you give it things like the title, the album, the artwork. I've only set the 512 and the 256, but very much like your manifest files for progressive web apps, you can set as many of these as you need and the user agent will choose whichever one is right for the device that it's on, and it will upscale and downscale as necessary; but I currently am just setting a couple of them. I may need more as time goes by. And then afterwards, after we've set up the metadata, we set some action handlers for things like play, pause, seek backwards and seek forwards. The thing to bear in mind here is that any that you set will have the buttons appear in the notification.
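Roughly, the code being described looks something like this; the titles, artwork paths and the video element are placeholders rather than the code from the demo.

const video = document.querySelector('video');

if ('mediaSession' in navigator) {
  navigator.mediaSession.metadata = new MediaMetadata({
    title: 'Episode title',
    artist: 'Show name',
    album: 'My media app',
    artwork: [
      { src: '/art/cover-256.png', sizes: '256x256', type: 'image/png' },
      { src: '/art/cover-512.png', sizes: '512x512', type: 'image/png' }
    ]
  });

  // only the handlers you set will show up as buttons in the notification and on the lock screen
  navigator.mediaSession.setActionHandler('play', () => video.play());
  navigator.mediaSession.setActionHandler('pause', () => video.pause());
  navigator.mediaSession.setActionHandler('seekbackward', () => { video.currentTime -= 30; });
  navigator.mediaSession.setActionHandler('seekforward', () => { video.currentTime += 30; });
}
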
If you don't set one because there are other ones that you can set as well and I forget which ones they are but check out François's post he explains the whole Kikaboodle. Any that you don't set won't appear, any that you do set will appear in the notification and then somebody can control your stuff from the lock screen or by just dragging down from the top. All very good isn't it? A very straightforward code to be writing so a brilliant little progressive enhancement theme that you can chuck on and that I have chucked on my media app. Cool. Do it Lou. GDG Mumbai. Such a shitty a call to all. I am from GDG Jalandhar. As a community we are a family and we learn a lot of things together and we share it with the community. It's a good opportunity for networking. We get the exposure we'd never get from any other organization. Hello everyone. How many of you guys are android developers here? I thought as much of a majority. Looks like you guys are already big fans of Firebase. I'm gonna explain you how you can get stuck in with APA.ai, kids that you know go back and hopefully build one agent. To all the women out there let's get to work ladies. It's high time that we start coming forward because we are good at it and there shouldn't be anything that's holding us back. Being with GDG is really cool. Googling their rocks. Google Launchpad is Google's program to achieve startup success through in-person mentorship. If you look at India today we have 400 million folks online and by 2020 it's gonna be 650 million. That's a huge number and therefore we feel that working closely with developers, entrepreneurs startups is the way to go forward. At the core of Leader's Lab it's really around this question how do you help smart and very motivated people grow and our belief at Google is that the best way to help them grow is simply by giving them feedback. This is built upon Google's best practices in people operations and supported by Harvard and Stanford Research so we really use the best of Google and the best of the world. The Leader's Lab was beyond my expectations. What were my strengths? What were my weaknesses? What are some of the gaps that I should be focusing on to become even a more effective leader? How do you have those tough conversations with people? How do you structure it in the right way, the value of honesty? The way the feedback was collected I think I really liked that from our colleagues, from our partners, from our clients. That was a great effort by team Google to actually do that for us because we wouldn't do that for ourselves. Launchpad Accelerator is a global program. We're looking to empower the startup leaders across the world especially in next billion markets and we want to help them of course to achieve success but lift up the potential of what's possible. I think that one of the things that I have applied in my life as I have been evolving is like feel the fear and do it anyway. Well I want to tell the whole story. Basically I am a developer advocate at Google and somebody you know the head of WebDare Rail told me that would you like to go to Latin America and to do the roadshow and I said yes. Anything that I can do to help the region and the developers here to advance and to enable them to create awesome stuff it actually makes me very very happy. 
We've been in Sao Paulo and Rio, and now we're in Mexico City. We're doing one day events and meeting fantastic developers. The web is a fantastic way to deliver content, you know; you can get content to users with very small downloads using progressive web apps. My life philosophy, my motto, I actually have two mottoes on the web. The first is keep it simple, and the second is focus on the user. The roadshow is a great way to educate people how to build fantastic user journeys from top to bottom in 2017 and beyond. Most of the barriers are self-inflicted. You really can do most of the things that people tell you not to do on your own, so keep pushing for what you want to achieve in life. For us it's really interesting to actually get out there and understand the local culture as well; just, you know, email or Twitter or something like that is not enough. You need to go out there and see there's actually human beings all over the place, and understand both the opportunities they have but also the constraints. So many things in life can be boring, but when you actually get out there and you connect to people and you see all the similarities and the things you can do together, there's nothing better than that. Keep on exploring all the time. There's so many things that are terrifying and you don't want to take the risk, but it's almost always worth it. It's great to be back in Mexico City. I've met some really amazing developers here who are just building some really cool things, so it's been an awesome opportunity to connect with folks. Every day when I go into work or start any new project, I always think to myself: no obstacles, only challenges. My life motto is probably passion for your craft, so really believing in what you do and wanting to be the best at it. I can't wait to see what comes out of this, what gets built. This is exactly why we love doing what we do so much, to see these sorts of events and these sorts of passionate developers, and visit these sorts of amazing cities too. We hope to come back and visit more countries, and we really want to hear what you're doing, the PWAs that you're making, so get in touch if you have something that you think we should look at. So Matt, what's in your bundle? I don't have any bundles. You don't have any JavaScript bundles? I don't write JavaScript anymore. Has JavaScript fatigue finally gotten you? Yeah, that's it, I'm dead now. What do you do these days? I come and meet you in a coffee shop and get free coffee because you're a delightful human being. Like any normal developer, I see. So what else is out there for working with JavaScript? There's a really cool tool I recently discovered called Source Map Explorer. You can get it on NPM. This is sort of useful for understanding what is in your JavaScript bundle, what you're shipping down to your users. How is this different from just normal source maps? What does it give you? It sort of understands your dependency graph and dependency tree, and visualizes it for you in a nice, pleasant way. So I'm going to run it against a source map that was generated by my vendor bundles and just show you what this looks like. So here we've got Chrome open. I've got the Source Map Explorer dependency graph sort of visualized here. And this is kind of nice because I can take a look at any one part of my graph. So I can see here these are all the different bits and pieces of React that I'm using. I've got React Router over here in the corner and I can zoom in and out. It's so nice and pretty. But I can also see what percentage of my bundle is being taken up by my database. Here I can see that in this case Firebase is taking up quite a lot of my bundle. But I wanted to show you a pro tip that I was walking through with David East the other day. Source Map Explorer visualizes this quite nicely. So I can actually just require in the pieces of Firebase that I'm actually using. Rather than a whole massive bundle. Right, rather than a whole thing.
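As a rough illustration of that change (the exact module paths depend on the Firebase JS SDK version, so treat this as a sketch rather than the code from the video):

// before: pulls the entire Firebase SDK into the vendor bundle
// const firebase = require('firebase');

// after: only the pieces this app actually uses
const firebase = require('firebase/app');
require('firebase/database');

const app = firebase.initializeApp({ /* project config goes here */ });
const db = firebase.database();
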
So a few minor changes. I can go and do a new build. And if that all works out, I can show you the before and after thanks to some movie magic. Now we're going to go and run it against the new vendor bundle. And what we can see now is that A, we've got a much smaller bundle on the whole. But B, we've also changed the look of this graph. So earlier we had Firebase at about 304 kilobytes, and that's unminified, not gzipped or anything like that. But it contained the database, communication, storage and app pieces. I really only need app and database, so I just switched my code over to using those atomic modules, and here you can see that we've actually managed to make our app a tiny bit smaller. It's 138 kilobytes before we've minified it and gzipped it. All you need to use this is just the source map. You don't need the original source or anything like that. It just works on source maps. So normally, for a dev at least, I usually have my source maps on my site. Yeah, it's kind of nice because, I guess, if you strip stuff out of the actual main source with whatever tooling, that then gets accounted for in the source map, which is nice. There's a bunch of different bundle analysis tools that are available. Webpack Bundle Analyzer is another one that's got colorization in place and it's super sweet. Specific to Webpack, I think, at the moment, but it's also worth checking out. But yeah, tools like this are just great for understanding exactly what you're shipping down to your users. So it's good to ask yourself what's in your bundle. I see what you did there. Nice. Hello, I'm Timothy Jordan and this is your update about the coolest developer news from Google in the last week. The advanced Android app development online course has been updated, improved and extended. With it you can build a portfolio of apps as you improve your Android dev skills. The course is linked from the post in the description below. We recently launched AIY Projects, do-it-yourself artificial intelligence for Makers. With it, Makers can use artificial intelligence to make human-to-machine interaction more like human-to-human interactions. We'll be releasing a series of reference kits, starting with voice recognition. More details and links are on the post. Chrome 59 beta is now available with headless Chromium, native notifications on macOS, service worker navigation preload and more. All the details are on the post. Google Cloud Launcher has more Google-maintained containers, including Cassandra, Elasticsearch, Jenkins, MySQL and more. Google container solutions are managed by Google engineers, and since we're maintaining the images, the containers available on Google Cloud Launcher will be current with the latest application and security updates. There are two announcements from Google Cloud Next London that I wanted to tell you about. First, Google Cloud Natural Language API is adding support for new languages and entity analysis. And second, Cloud Spanner is now generally available. Check out the details of both announcements on the post. Google I.O.
is just around the corner and if you're like me, you like to go in prepared, which is why we have an Android, iOS and web app to help you customize your I.O. schedule and get around the developer festival. Check out the screenshots and find the download links on the post in the description below. Please subscribe and share. I'm Timothy Jordan and I'll see you next week. Thank you. And now it's time to talk about Android Auto. Doesn't it feel like old times? I know it feels like we're an introduction to Android Auto Reunion. Dylan, tell us about what we're sitting in. Actually before we get to that, tell me about what the latest is with Android Auto. Well, I think the latest we're showing here today is that Android is now embeddable in the car. So unlike, I think we've seen in the past, you guys were showing on the phone or even connected over USB to the car. That's Android Auto. Now we're showing Android as the actual embedded operating system in the car. So our goal is always to have safe, seamless, integrated connected services. And now I think we're seeing the seamless part of that. There's no phone involved here. And it's beautiful. Yeah, so this is an Audi Q8 concept car. It's absolutely gorgeous. It's got a really nice set of screens we can look at here. And Audi actually just announced plans to ship in the future with Android in the car. So that's what we're showing here today. Very cool. Well let's get to, I don't know play with some features and then we'll get back to Android Auto in general. Cool. All right. Let me show you a few different things here. So first of all, we've obviously got a home screen. It doesn't really look like a phone. It doesn't look like a wearable or a TV. So this is nice automotive integrated experience. We've got the kind of information a driver is used to seeing here. But the key aspect of this is that Audi did the integration and it's Audi's UI concept. It's not just another phone or even just another car. So for example, they like to have music front and center. They've got a great sound system. So there's interactive tiles here. We can turn these on and off. They have the ability to look at vehicle information. Unfortunately, we're not driving right now. So it's a little static, but they have this and it's an Android app. That's the important part. It's just an Android app. We can also go to their home screen and see on the launcher, they've got their important stuff that they feel is important and center here. So for example, we can switch between apps. There's an Audi navigation app and we can start running it. It does what you might expect in a car, right? We're driving somewhere so that's good. Let's go. Actually, yeah, we should do. Not. They won't be happy. So this is happening, but what I want to draw your attention to actually is kind of the integrated aspect of what's important over in the cluster display now, which actually isn't running Android. This is a real-time operating system for the driver information, but we do have information coming from the app, from the APK through the vehicle network and it's being integrated into their cluster over here. The same with actually the Android notifications as well. And if I switch to what they call their big stage, their different view, we can also now push through a cluster API. We can push real effects from an app as well. So this is pretty automotive specific. 
This is not something you would necessarily want to do on a watch or what have you, but this is the kind of integration we're looking to do with the partners. That's awesome. I mean, it's all so well integrated, and you still get the real-time OS for the instrumentation — I imagine that's critical in the automotive industry. Yeah, absolutely. So we're really focusing on what they call infotainment: the music, the maps, the media. What we're not touching is the critical information — we're not controlling the brakes, for example. Yet. I shouldn't say yet. We're not controlling the brakes. Or things like the speed limit, which really has to be accurate and present all the time. But I think the key, as we're saying, is we still do want to show integration as appropriate, so the driver has all the information they need to see right in front of their eyes. And the passenger, for example, can still work with the system as well. Maps, for example, like this. And the passenger can work with the app like so. Or they can even use the Google services, like the Assistant. Navigate to San Francisco. San Francisco, all right. That's more like it. We're on the way. We can also show that here we've got third-party apps as well. So we were just talking about how, on the phone side of things, app developers have enabled their apps to work seamlessly with the car. And the same applies here. So it just extends right across the ecosystem. It's kind of just another screen to an app developer implementing a couple of APIs. But of course we can switch to different apps. And I think the really cool thing here is that it still looks and feels like an Audi. It feels like a premium experience. But it also kind of feels like Pocket Casts. And it also kind of feels like Spotify. So I think those are the key things we're trying to call out with this concept demo with Audi: number one, Android is a good operating system for the car, and number two, the developer ecosystem can come with it and tie in really nicely with good vehicle integration as well. So there's a lot there. Like, it uses the standard Android Auto media and messaging APIs, so it just integrates right into here. So yeah, that's cool. Yeah, absolutely. Like media browse and media session — as long as apps work there, then Audi can be assured they're going to work in their car, as can any other OEM. That's the key, right? The standard APIs. Awesome. Well, before we get going, I'd like to ask: what else is going on in the world of Android Auto? One of the things that I noticed recently, and people are still asking about, is that you can get Android Auto on your phone even if your car doesn't have the integration built in. Yeah, yeah. So one of the things we talk about is a safe and seamless connected experience in any car. So obviously in this car, we have it here — it's seamless and connected. But if I just have a phone and I'm driving my crummy old — I shouldn't say crummy — my Myster 5. I love it. I love it. It moves my kids around. I can stick this on the dashboard, I have a Bluetooth connection, and I have everything that I want. So if I'm driving my old car that doesn't have Android Auto built into it, I can still use my phone as is, and look at the media apps that I already use today, my contacts and my maps, in a safe, seamless way. So yeah, I think that's the exciting part: on your phone, connected from the phone to the car, or just the car. Very cool. Well, I think that's all the time that we have. So I guess we'll get going.
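For app developers, the "couple of APIs" mentioned here are the standard media browse and media session APIs. As a rough sketch only — the class name and empty browse tree are made-up placeholders, assuming the Android support library media classes — a media app exposes itself to Android Auto (or an embedded car system) with a service along these lines:

```java
import java.util.ArrayList;
import java.util.List;

import android.os.Bundle;
import android.support.v4.media.MediaBrowserCompat;
import android.support.v4.media.MediaBrowserServiceCompat;
import android.support.v4.media.session.MediaSessionCompat;

// Hypothetical media service: the car UI browses onLoadChildren() and controls
// playback through the MediaSession callbacks.
public class SketchMusicService extends MediaBrowserServiceCompat {
    private MediaSessionCompat session;

    @Override
    public void onCreate() {
        super.onCreate();
        session = new MediaSessionCompat(this, "SketchMusicService");
        session.setCallback(new MediaSessionCompat.Callback() {
            @Override public void onPlay() { /* start playback */ }
            @Override public void onPause() { /* pause playback */ }
            @Override public void onPlayFromMediaId(String mediaId, Bundle extras) { /* play the chosen item */ }
        });
        setSessionToken(session.getSessionToken());
    }

    @Override
    public BrowserRoot onGetRoot(String clientPackageName, int clientUid, Bundle rootHints) {
        // A non-null root lets clients (the phone UI, Android Auto, an embedded car) browse this app.
        return new BrowserRoot("root", null);
    }

    @Override
    public void onLoadChildren(String parentId, Result<List<MediaBrowserCompat.MediaItem>> result) {
        // Build the browse tree (playlists, albums, podcasts) here; the car renders it in its own UI.
        result.sendResult(new ArrayList<MediaBrowserCompat.MediaItem>());
    }
}
```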
Anything else you want to say? Well, do you want to get going? Should we just drive? Yeah, let's do that. Let's do it. You want to go take a seat? Hello everybody. Hello, hello. Thanks for coming. My name is Corey Conner and I'm a product manager on Android TV for major platform features, including the system UX, setup and more. Hi, I'm Isaac Atenelson and I'm the technical lead for the home screen and launcher experience. So I know happy hour is coming up, and we'll try to keep this really interesting and to the point and get you to drinks shortly after. I'm going to start and give you all an ecosystem update and walk through some of our new announcements, and then I'm going to hand it over to Isaac, and he's going to give you the technical details of how you as Android developers can bring the new experiences to your apps. So there are big changes coming to Android TV, but before we dive into them we wanted to take a little bit of time and look back at this year. In short, it's been a great year for Android TV. We continue to see amazing growth in the number of devices. Our strong partnerships with pay TV operators and hardware manufacturers have allowed us to more than double the number of devices activated in 2016, and we actually expect that to continue, if not increase, in 2017 and going forward. We're seeing this growth both in the set-top box form factor, which includes streaming devices and cable and satellite boxes, and in the smart TV form factor. Since last year at I.O. we've launched a number of new devices with partners, including the AirTV with Dish, the Mi Box with Xiaomi, and with Airtel we actually launched the first satellite set-top box based on Android TV in India this year, so that's pretty exciting. The list of partners up here is actually just a few that we wanted to call out. We've also expanded our international footprint to 70 countries, and there are now more than 3,000 Android TV apps on the Play Store, so that's pretty exciting. In addition, though mobile viewing has grown, the TV still dominates for lean-back content. There's a recent Netflix report that came out which shows that while acquisition of a user can happen across a number of form factors — laptops, computers, phones, tablets and the TV itself — 67% of long-term viewing happens on the TV and in the living room. Now this just validates what we've long held, which is that the TV app experience is critical for content providers. It's what you have to nail in order to deliver long-term, highly satisfied users. As Dave said in the keynote this morning, Android TV has seen tremendous growth — I think his stat was a million new activations every two months — but as they say, the best is yet to come. So we launched Android TV three years ago, and since then we've learned a lot, but this year we took the opportunity to take a step back and take stock of where we were on the platform, and also of how people's engagement with content in the living room was changing. Now across the industry we found these three common issues, these three core needs, that were really going unaddressed. So the first thing we saw was that it's really hard to come to grips with all the different content available to us. Families now have multiple sources for what they want to watch, depending on the mood, the time of day, who's in the room and a bunch of other factors, and it's interesting because content is mostly visible only within the app or service.
So what you do is you turn on the TV and you have to actually deep dive into each app to see if it has something you're actually interested in watching. So it's become really hard to know what's available before you figure out who you want to watch it from, which we kind of realized is a bit of a weird situation. Also, the problem gets a bit harder because there are so many different types of content available to watch. It could be video on demand, and that could be through subscription, paid or rented. It could be digital, web-driven, sort of social content. Or it could be live television through a pay TV subscription or an antenna or over-the-air source. So as of yet, it's been really hard to fold all of these different types of content into an empowering experience for users. The second thing we saw is that the emotional hook we get from a piece of content is diminishing. You can imagine that we may have found our way into a list of options — that could be recommendations or something like that — and we're seeing the movie art or poster art and we're not feeling connected to any of it. Now, with this huge number of options for content available to us — which we love, we like a big menu — we're losing our ability to make a decision. We saw users with these huge lists of options available to them. Maybe there's a watch list that they've curated, or maybe there are recommendations that they've tuned as much as possible to reflect what they want to watch, but when the time comes to actually pick something off of that list, they really can't find the spark that they originally had, the reason they put it there in the first place. So the watch list has become more aspirational than actionable. The third thing we saw was that people were really looking for ways to tweak the content on a device, to customize it for how they want to watch. So when you bring a TV into your home and you sign up for a service or an app, the shows and the actors become part of your family; they're part of your routine, you watch them every night, it's like a ritual. We think that part of this should be being able to nest with the device, to customize the experience so that it fits with your life and your needs. If you watch people get a new TV or a new phone, one of the first things they do is move things around: you move the apps around, you set the background, you set your ringtone, you customize it to how you want to use the device. But with TV, from a content perspective, people really can't do that. Now, these concerns have been around for some time. We're not the first people to have ever discovered these things; they come up now and again in user research and in market and analyst reports. They're known issues. But what we've done is we've taken these issues to heart in Android O, and we've set out to directly address them with a major refresh of Android TV. Now, the first major change you've probably already heard about: earlier this year we announced that the Google Assistant will be coming to Android TV. The Google Assistant is your own personal Google. You can ask it questions, you can tell it to do things, it's always ready to help, and it's available across all of your devices so you can integrate it into your life. Now, what does that really mean for TV?
Well, one of the core design principles in bringing the Assistant to the TV was that people are trained from a young age to talk about media with others, to use our voices in discussion about what we want to watch or how we feel. I do this frequently with friends and family: we go back and forth about what we're interested in, figure out whether we want a show or a movie, how much time we have left, what's available to us, and we arrive at a decision. This sort of interaction feels very natural; it's what we're used to when we're dealing with people in everyday life. Now, we think that voice is an amazing augmentation for media consumption in the living room. So what we've done is we've optimized the Assistant on TV from the ground up to fill this role. That means with an Android TV you can navigate the UI, you can use the voice button on the remote and talk to the Assistant, or you can use the Assistant in a hands-free mode to discover and play content. Even better, you can transition between these modes seamlessly, so whatever you want to do, it fits the situation you're in. If you want to talk, it's there; if not, no big deal. Now, there was a bunch of work that went into getting here. We had to build multiple speech models, and we had to build a UI that's responsive to the situation and context of a user, and a number of other things. But what it's resulted in is a new, more natural and transparent way of interacting with your TV. In addition, because it's the same Assistant that connects your Google and third-party services and devices, you can use it in a bunch of interesting scenarios, so let's take a look at one of these. Let's imagine it's Friday night, and the Assistant knows it's me, and it knows the context — where I'm at and what sorts of things I'm interested in, like actors, for example. So it's going to bring up a list of things that might interest me from multiple sources. There might be a way to further refine the search or broaden it, but let's assume for now that I see what I'm looking for. One thing to note is that the Assistant can work in a conversational manner, meaning it remembers the context — in this case, that there's a list of things I'm interested in. Here, I want to watch Deadpool, which is a hilarious movie by the way, but if, for example, it's a complicated title or if I'm feeling a little bit lazy, I might just refer to it by its position in the list, as if I were talking to someone else in the room: I might say, play the third one. Now, what's happened is that the Assistant understands where we're at in the conversation and what I'm looking for. Maybe the movie is just starting and the family is sitting down ready to watch, and we realize we want to make the experience a little bit more comfortable. Because it's the Assistant, it's connected to the other devices in my house, and I can control the lighting in my living room just by asking. So I might say: dim the lights. So, in order to truly change the way people use Android TV, we had to make a change to our home screen. For Android O, we've redesigned the home screen on Android TV to be a channel-based, content-first experience. This new home enables users to engage with the content and apps that they love, all in the same place. Now, instead of showing you more screenshots up here, we decided, at a bit of risk, to give you a live demo using a developer device that we have, and we'll walk through the features on that.
So, could we switch to the Nexus Player, please? Isaac's going to help me out with this. I wanted to call out that we're still putting the finishing touches on this experience, so you should expect to see further visual changes and performance improvements as we move closer to launch. So, let's walk through the layout a bit. At the top, you're going to see the same quick access to search that you're used to, and below that we're introducing a favorite apps row, to easily launch into your favorite apps right from the top of the screen. You'll see Watch Next below that, and we'll come back to favorite apps and Watch Next a little bit later. In the middle and continuing down, we have a number of rows full of content that we call, perhaps unsurprisingly, channels. So each row is a channel. These channels are the core of the new home screen experience. Each of these channels is created by a media app to display content relevant to you. You might think of each channel as a window into the content available inside the app. Now, apps surface content in channels by displaying programs, which are the boxes you see here with TV shows or movies or videos on them. So we've got a channel with a bunch of boxes inside; those are called programs. These are recommendations for content that's available right now for you to watch, and they could be on-demand content like a movie or a TV series, could be a video clip or a live TV show, could even be a traditional live TV channel. And actually, there are a couple more; Isaac will talk about that later. Now, each app gets to decide what the channel's named, what it looks like, what content appears in it, what metadata goes with that content, and in what order the content is displayed in that channel. When you select a program that seems interesting to you, it's going to launch the app and take you directly into the content, starting playback immediately. If you decide to press the back button from there, the app has the ability to sort of re-engage you and take you to a higher-level screen inside the app. Now, by structuring the experience this way, what we're allowing apps to do is surface different types of content in the same experience, giving you the choice of what you're in the mood for. You could see, in the same experience, what's on your DVR, what movies are recommended for you to rent, and what on-demand shows maybe you haven't been watching yet. In addition, apps have a lot of control over how they want to present what they think is most relevant to you, so there's a good balance there. All right, actually, let's go to the Play Movies channel and let's select Fantastic Beasts. Now, what you're seeing right now is a feature we're really excited about called video previews. From our research, we found that seeing short previews of the content is a significant boost to engagement and decision making, and that kind of makes sense, so let's use Fantastic Beasts as an example. I know it's set in the Harry Potter universe, and I love the Harry Potter universe, and I've seen it come across my recommendations often, and I haven't yet pulled the trigger. But when I focus on it in this discovery experience, and it plays the preview, I see the world and I see the characters and I hear the music, and it's no longer a poster to me. I'm drawn in. Now, apps have the ability to provide video previews for each program they put in the channel, regardless of what type of content that might be. That might take the form of a live preview for live content.
It might be a trailer for a movie; for a TV show, it could be the series trailer or a season trailer; and for video or digital social content, it might be the first 15 seconds of what that content is. We think this is going to drive a lot of engagement for our app partners, and we're seeing many developers integrating already, and you can see it's quite a compelling experience when we're in a discovery mode here, compared to a poster. So maybe I'm not in the mood for Fantastic Beasts right now. Let's actually go to the next one and take a look at Hidden Figures. So you can see how powerful these video clips are, where we're previewing the content that we may want to watch right from the discovery experience. It's almost like I'm in the movie theater and I'm watching the previews and getting excited about what I'm about to watch. Actually, this looks great, so let's open it up and launch it. What do you ladies do for NASA? Now, what happened is we clicked on it and it launched directly into the Play Movies app, and it's playing the content. This experience works really well, you can tell, with the existing app model. So we've hopped into the middle of the movie. Let's pretend for a moment that I've watched it but had to stop, and let's go back out to the home screen. And let's go back to the top, please. So what you're seeing is that the content has been added to my Watch Next row, because I wasn't done with it yet and the system is helpfully saying, look, you still have this available to watch. The Watch Next row is a single system-delivered channel that's always at the top, where content you previously engaged with will be presented, and this will be from all apps. Watch Next is designed to give you an easy way to get back to content that you know and love, especially if it's something you're binge watching or actively recording or haven't finished. In addition, if you haven't watched something yet but you want to in the future, you can add it to the Watch Next row directly. So I did get excited about Fantastic Beasts — I think it was the preview video — so let's go back to the Play Movies row, let's add it to Watch Next, and let's go back up to the top. You can see now how the Watch Next row over time is going to reflect the things that I like to watch. Now, an app is not restricted to offering only a single channel. In fact, it's just the first channel from every app that appears automatically. Apps can actually offer multiple channels for a user to add to the home screen, which basically gives the app more real estate to draw the user in with their content. You know, I wonder if YouTube actually has an extra channel — YouTube has lots of content available — so let's go take a look. We're going to go to Customize Channels and we're going to go to YouTube. We'll do questions in a little bit. So we're seeing multiple channels here. You know, today at the keynote, I think Sarah mentioned that YouTube is launching 360 videos on the TV and Android TV was going to be one of the first devices to get that. Well, for a demo — surprise, surprise, what a complete coincidence — we have a channel with 360 videos, so let's enable that. As you can see, the channel was added to the home screen. So actually, let's check out one of these videos. I think we're on a Coachella video, so let's open that and launch the YouTube app. Now, Isaac's going to use the remote to look around, and I think this video is pretty cool.
It's actually got multiple 360 cameras all throughout the audience and on stage, and I think it does transitions between these different cameras, so you can kind of look around and see what it would be like to be on stage, in the audience, backstage, a whole bunch of things. So this video is pretty great. Let's hop back out, but make sure to go and check this out on your Android TV; I think it's launching in the next few weeks. So the ability to have multiple channels from the same app, and channels from different apps, allows you to have content for everyone in your family available from the same UI. So we've seen how, as a user, you can go and proactively look for channels available to you through apps that are already installed on the device. Also, though, when you're in an app, that app can suggest channels to add to the home screen. A dialog would appear and you would say yes or no, and if you said yes, that new channel would appear. So in this way, as a developer, if a user is interacting with some specific part of your app — some themed part, or something where it's relevant for you to create a channel — you can proactively suggest: hey, there's something you can add to your home screen from my app, would you like to do that? The last thing we wanted to mention is that there's still a way to access all the apps on the device, whether that's because you're launching an app that doesn't fit really well into the media-first model, or because you're just more comfortable clicking on icons and brands and that sort of thing resonates with you. From here, you can also set which apps appear in the favorites row we talked about earlier. So let's go ahead and add YouTube Kids to the favorites row, because we have friends come and stay with us and it would really help to have content available for them. So it was added up there. So could we change back to the slides, please? In addition to the home screen, we've made changes all around the Android TV platform, and we really don't have the time to discuss them all in this session, but I just wanted to pick out two examples. As part of Android O, we've built a completely new setup experience for the TV that's going to help you transition media apps from your other Android devices to your new TV. During setup, if you, for example, have an Android phone with TV-relevant apps on it and you're signed in to a Google account when you're doing setup on your TV, we're going to suggest installation of these apps. With a simple click of a button, we're going to install the app in the background, and that app can then proactively put channels full of content on your home screen, so that by the time you're done with setup, voila, you're going to land on a home screen that's full of channels, full of content available to you. And even better, if that app integrates with Smart Lock, we're going to remember the login information, so when you click on the app or click on a piece of content, it's going to automatically sign you in. So you get out of the setup flow, your TV has content, you click on a piece of content, and you go right into watching it. Also, we've made updates to components in the Leanback library, and I wanted to call this one out. This is a great update to the playback element that adds detailed seek thumbnails. It looks amazing, and it's actually extremely useful when you're trying to find something specific in the seek bar.
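For reference, this seek-thumbnail support is something a developer wires up through the Leanback playback glue. A rough sketch only, assuming the PlaybackSeekDataProvider class in the Leanback support library; the class name and thumbnail loading are placeholders:

```java
import android.graphics.Bitmap;
import android.support.v17.leanback.widget.PlaybackSeekDataProvider;

// Hypothetical provider that hands the seek bar one thumbnail per fixed interval.
public class SketchSeekDataProvider extends PlaybackSeekDataProvider {
    private final long[] positions;

    public SketchSeekDataProvider(long durationMs, long intervalMs) {
        int count = (int) (durationMs / intervalMs) + 1;
        positions = new long[count];
        for (int i = 0; i < count; i++) {
            positions[i] = i * intervalMs; // positions for which thumbnails exist
        }
    }

    @Override
    public long[] getSeekPositions() {
        return positions;
    }

    @Override
    public void getThumbnail(int index, ResultCallback callback) {
        // App-specific loading, e.g. decoding a pre-generated frame for this position.
        Bitmap thumbnail = loadThumbnailFor(positions[index]);
        callback.onThumbnailLoaded(thumbnail, index);
    }

    private Bitmap loadThumbnailFor(long positionMs) {
        return null; // placeholder
    }
}
```

A playback fragment would then attach it to its PlaybackTransportControlGlue with setSeekProvider() and enable seeking, assuming those glue methods.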
So we talked a little bit about the three challenges we saw in the living room when it comes to entertainment, and we also talked about the changes coming to Android TV. With the Assistant, we're augmenting the remote with natural voice interactions with your TV, and we're really excited about the new content-first home experience on Android TV. We think you're going to find that it's easier to figure out what to watch, it's more fun to look at your available list of options, and it's more engaging when you actually customize this experience for what you're looking for. Now, the Assistant will be available on Android TV devices back to Marshmallow later this year, and the new home experience will be launching on all Android TV devices when they upgrade to O, so expect to see the first devices with this new experience late this summer. Now I'm going to hand it over to Isaac, like I talked about before, and he's going to talk about the technical details of how you as Android developers can help us create this experience with your apps. Thank you, Corey. So let's talk specifics. What can you, the app developer, do to showcase your content, your app's content, on the home screen? We start with the basics. We're using two well-known Android concepts. The first one is the content provider: we use content providers, with new APIs, for storing channel and program data. The other concept is intents: we use intents to tell your app when to insert your first channel and when your user is interacting with your content. You will use a broadcast receiver to listen for these intents and act upon them. And of course, to make it easier, we provide a support library for easier implementation. But what exactly do you need to do? What should the app do, and exactly when? To better understand it, I'm introducing Chez Isaac, a brand new fictional Android TV app. The app allows users to watch cooking videos in different categories, favorite them, rate them and create personal content channels. And I will show you what I did in my app to showcase my content on the home screen. So let's begin. The first thing I added to my app is a broadcast receiver to listen for the initialize programs intent. The intent is sent to your app — to my app — once the app is installed. So, very basically, a broadcast receiver listening for the initialize programs intent. Once the receiver is called, I start a job: I use the JobScheduler to insert my channels. So I decided to insert three channels; two or three channels is about the right number, and I can fine-tune it later when I know my users' tastes better. As you can see, I schedule a task using the JobScheduler. If you notice, the minimum latency is set to zero; when you set it to zero, the system will schedule this job immediately, which is a good thing, because we don't want to wait. The job runs a simple task that pulls the data for my channels from my back end and inserts it into the provider. But how do you do that? How do you insert the channels? In the support library we have several builders to help you build channels and programs and insert them into the provider. So I use the support library channel builder to create a channel; you can see I set the display name to the channel name, and I set a link back to my app, so when the user clicks on the channel icon it actually opens my app. And I use a ContentResolver to call insert and just insert the data into the provider.
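A rough sketch of those steps using the TV provider support library (TvContractCompat and Channel); the receiver, job service, channel name and URIs are made up for illustration:

```java
import android.app.job.JobInfo;
import android.app.job.JobScheduler;
import android.content.BroadcastReceiver;
import android.content.ComponentName;
import android.content.ContentUris;
import android.content.Context;
import android.content.Intent;
import android.net.Uri;
import android.support.media.tv.Channel;
import android.support.media.tv.TvContractCompat;

// Listens for the "initialize programs" intent sent once after install,
// and schedules a job to create the default channel.
public class InitializeProgramsReceiver extends BroadcastReceiver {
    private static final int SYNC_CHANNEL_JOB_ID = 1;

    @Override
    public void onReceive(Context context, Intent intent) {
        if (TvContractCompat.ACTION_INITIALIZE_PROGRAMS.equals(intent.getAction())) {
            JobScheduler scheduler =
                    (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
            scheduler.schedule(new JobInfo.Builder(SYNC_CHANNEL_JOB_ID,
                    new ComponentName(context, SyncChannelJobService.class)) // hypothetical JobService
                    .setMinimumLatency(0) // zero latency: run the sync as soon as possible
                    .build());
        }
    }

    // Inside the job: build the channel and insert it into the TV provider.
    static long insertDefaultChannel(Context context) {
        Channel channel = new Channel.Builder()
                .setType(TvContractCompat.Channels.TYPE_PREVIEW)
                .setDisplayName("New Recipes") // hypothetical channel name
                .setAppLinkIntentUri(Uri.parse("chezisaac://channel/new")) // opens the app from the channel icon
                .build();
        Uri channelUri = context.getContentResolver()
                .insert(TvContractCompat.Channels.CONTENT_URI, channel.toContentValues());
        return ContentUris.parseId(channelUri);
    }
}
```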
And the last thing to do is actually add a logo for my channel. There's a call for that, and you can either provide a bitmap or, if that's too hard, you can provide a URL, and the support library will download the logo and insert it for you. Same thing for programs: for each channel, I use a builder to insert the programs. I set the channel ID — the same one that I created before — I set the type, which for me is clip, I set the title and the description — there's way more metadata to insert, and I'll talk about that a bit later — and then I set the intent URI that's launched when the user clicks on the program. And again, I use a ContentResolver to insert it into the provider. And the last thing I need to do is ask the system to make the channel visible. To do that, I make one simple call requesting that it be made visible — browsable — and presto, that's it, I've created my first channel. I want to take a moment to talk about the first channel. When the app is installed, the app can insert its first channel without the user approving it; rather than requiring user interaction, it will be visible by default. The app gets one golden ticket. Once you insert the first channel, the ticket is gone and cannot be used anymore, so if you want to add additional channels, they need to be approved by the user. So the first thing you need to remember: do not remove your first channel, your default channel. Use it, update it, but don't remove it. Since my app shows video clips, I use the program type called clip, but there are many other program types to use. Which type to use depends on the content that you're showing, and each type has different metadata associated with it. The API documentation online shows very well exactly what metadata you need for each program type. Just to show you, we have seven types of video programs, from movies to TV seasons to live channels, and you can understand that if I use a TV episode I may have the episode number and the season number, which is not relevant to a movie, so we use different metadata for different kinds of programs. The same thing for audio: we have five types of audio programs. Talking about metadata: each program has a title, it has a description, and it has more metadata like the release date, the length of the program, the user rating; there's author, there's price, there are many more fields available. And of course, for the visual part — which is the most important part — you can insert two different images: one for when your program is focused on the home UI, and one for when it's not focused. And of course you can also use the preview video, which is the most visual of all. The more metadata you add, the better.
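Continuing the sketch with the same support library — the titles, URIs and the drawable resource are placeholders — adding a logo, one program, and then asking the system to make the channel browsable:

```java
import android.content.Context;
import android.graphics.BitmapFactory;
import android.net.Uri;
import android.support.media.tv.ChannelLogoUtils;
import android.support.media.tv.PreviewProgram;
import android.support.media.tv.TvContractCompat;

final class ChannelPublisher {
    // Adds a logo and one program to an existing channel, then asks the system
    // to make the channel visible (browsable) on the home screen.
    static void publish(Context context, long channelId) {
        // Store a logo for the channel (a URL can be used instead of a Bitmap).
        ChannelLogoUtils.storeChannelLogo(context, channelId,
                BitmapFactory.decodeResource(context.getResources(), R.drawable.channel_logo));

        PreviewProgram program = new PreviewProgram.Builder()
                .setChannelId(channelId)
                .setType(TvContractCompat.PreviewPrograms.TYPE_CLIP)
                .setTitle("Perfect Pasta in 10 Minutes")
                .setDescription("A quick weeknight recipe")
                .setPosterArtUri(Uri.parse("https://example.com/pasta_poster.jpg"))
                .setPreviewVideoUri(Uri.parse("https://example.com/pasta_preview.mp4"))
                .setIntentUri(Uri.parse("chezisaac://video/42")) // launched when the user clicks the program
                .build();
        context.getContentResolver()
                .insert(TvContractCompat.PreviewPrograms.CONTENT_URI, program.toContentValues());

        // The first channel is granted automatically; later ones show a confirmation to the user.
        TvContractCompat.requestChannelBrowsable(context, channelId);
    }
}
```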
So my app is installed on the device, the user has used it, watched a few videos, even added a second channel through Customize Channels, and it's great, but I want the user to be more engaged — I want them to come back to my app and watch more content. So how do I do that? To do that, I need to keep my content fresh, to update my channels all the time. So what I did is I used cloud messaging to ping my app when new content is available. When my app gets this ping, this message, I run a job again, in the background, to pull the new content from my back-end server and also to query the data that's already on the device in the provider, so I know what's new and what's already there, and I consolidate the two data sets. When I do that, I try to do what I call a smart sync: I compare the two data sets, I remove the programs that are not available anymore, I add the new ones, and the ones that are still there I can update — maybe the release date changed, maybe the number of views changed — and I can also change the order, because some of them are more popular than others, and that's okay. We try to avoid the remove-all, add-all paradigm, because if you do the smart sync instead, the visuals will be better: there will be fewer visual glitches and movements on your channel, and that's what we're aiming for. So I have more channels, and they have fresh new data. What's next? I added a feature to my app to allow the user to add channels from the app itself. When the user is interested in a topic or a category, I pop up a view with a button that suggests adding the channel to the home screen. When the user clicks on it, I need to do a few simple things. First of all, again, I need to build a channel to insert, so I use the builders to insert the channel and its programs into the provider, and then I do one more thing: I ask the system to make it visible. The way it works is that I create an intent, the request channel browsable intent, and then I do startActivityForResult to tell the system to show a confirmation screen — and remember, the golden ticket is gone, so you need the user's approval. The user can approve or not, and I can see what the user replied by checking onActivityResult, and do with that what I want. Three channels now on the home screen — good job. So let's move on. Another opportunity to engage the user is the Watch Next channel. Remember, that was the channel at the top of the screen, and it's a system channel, so you don't have to create it, but you do need to follow some guidelines in order to use it. The first thing I did is I added support in my app to monitor what's added to the Watch Next channel: every time one of my programs is added to Watch Next, I want to know about it and track it. It's a very simple API — you can add, remove, and check whether a program is in the Watch Next channel — and you can implement the tracking with a database or SharedPreferences, a pretty easy thing to do. So let's look at some use cases. Let's talk about continue watching. The user watched one of my videos but didn't finish it. What do I do? I add a program, same concept as before: I set the type, clip, and I add another field that says what kind of entry it is on the Watch Next channel — this time it's continue watching, what we call continue here. Then I add three more pieces of data: one is when this happened, and the other two, which help with the UI, are the length of the program and the current play position. And again, I simply insert it into the provider and I'm done, and then I can just update the current play position to update the UI. Great, I have another card on the Watch Next channel.
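A rough sketch of that continue-watching card, again with the TV provider support library; the title, URI, duration and position are placeholders:

```java
import android.content.Context;
import android.net.Uri;
import android.support.media.tv.TvContractCompat;
import android.support.media.tv.WatchNextProgram;

final class WatchNextHelper {
    // Adds a "continue watching" card for a partially watched clip.
    static void addContinueWatching(Context context) {
        WatchNextProgram card = new WatchNextProgram.Builder()
                .setType(TvContractCompat.WatchNextPrograms.TYPE_CLIP)
                .setWatchNextType(TvContractCompat.WatchNextPrograms.WATCH_NEXT_TYPE_CONTINUE)
                .setLastEngagementTimeUtcMillis(System.currentTimeMillis()) // when the user stopped watching
                .setTitle("Perfect Pasta in 10 Minutes")
                .setDurationMillis(10 * 60 * 1000)            // total length of the clip
                .setLastPlaybackPositionMillis(4 * 60 * 1000) // where the user left off
                .setIntentUri(Uri.parse("chezisaac://video/42"))
                .build();
        context.getContentResolver()
                .insert(TvContractCompat.WatchNextPrograms.CONTENT_URI, card.toContentValues());
    }
}
```

Updating the play position later is just a ContentResolver update on the same program row with a new last playback position.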
One more thing I can do: if I have clips that are part of a season or a series, when the user is watching, let's say, episode 2, I can insert episode 3 and say that's the next one to watch, or at the end of a season I can add a card — a program — for the next season. So what I did is I added a program again, same type, clip, but the watch next type is next, as in the next item, plus the time I inserted it, and it will be in the system, and again I have another card on the Watch Next channel. The last thing my app does is actually listen to Watch Next. As you remember, in the demo we long pressed on a program and clicked add to Watch Next. What happens is that the system takes the program from the app's channel and copies it over to the Watch Next channel. We do that so the visual effect is immediate, but we also send an intent to my app to tell it this program was copied and added to Watch Next. What I did in my app is, again, put in a receiver to listen for these intents, then start a job and save this information to my own database, so when the program is updated — maybe the number of views changed, or I get a refreshed image — I can also update it in the Watch Next channel so the content stays fresh. Okay, so let's recap. What should an Android TV app do? First of all, it needs to add a channel right after install, at least one. Keep the channels up to date and refreshed; engage the user — the app can suggest more channels from inside the app; use the Watch Next channel, which will help users continue watching your content; listen and react to user interactions; and of course, do not remove your first channel. Thank you for coming and listening. If you would like to learn more, we have the sandbox area where you can see the demo again, we have codelabs where you can actually go and build your own TV app, and come to office hours — I'll be there and you can ask questions. Thank you so much for coming. And one more thing I wanted to do before we go is give a shout out to the Google Play Awards. On May 18th at 6:30pm on stage 3 — there's an Android TV award as well, so go there and cheer whoever wins. Thank you guys. If you have any questions, feel free to come over to the microphone and ask. That's great. So I'm here with David Singleton; he's going to catch us up on all the latest with Android Wear. Thanks Timothy, so it's great to be back here again at Google I.O. in the wonderful sunshine — every year it's sunny. So what's new with Android Wear? Earlier this year we launched Android Wear 2.0, which is our biggest update to the platform since we launched all the way back at Google I.O. in 2014, and with that update we were really focused on making some of the things we see people do with their watches better and faster. So we start with watch faces. Obviously we all love to have watch faces that express our style, but actually what we're finding is that people love to have information that really matters to them right there at a glance throughout the day. So we made it possible for developers to put data from their apps onto any watch face that the user might choose, and for users that's really powerful, because it means you can have a watch face that matches your style but has that information from the apps that you love. We made major updates to the system UI to make things like messaging much more fluid and fast, and then finally we completely revamped the fitness experience with Android Wear 2.0. So with 2.0 having come out earlier this year, what we're talking about at I.O. this year is a lot of new stuff that we're building for developers, and during the keynote we shared some of the momentum that we're seeing for Android Wear in the category, which we're really excited about. We shared during the keynote that there are now 24 brands that have Android Wear watches, and
we didn't say this in the keynote, but that means there are actually 46 different Android Wear watches you can choose from right now. Right from the beginning we felt it's really important for a product that you wear right on your body to be something with which you can express your personal style and passion, so having that choice of devices we think is a tremendous testament to that. Some of the ones I'm most excited about: TAG Heuer just launched their second generation product, which is called the TAG Heuer Connected Modular 45, so you should take yourself into our sandbox to see it. The best thing with this product is there are all kinds of variations — you can swap out everything from the bracelet to the little horns to the bezel on the watch and really create a look that's very personal to you, and it also has watch faces that are personally customizable to match all of that. We're also working with some new partners for the first time this year. Movado earlier in the year announced their product, and it's really exciting to see the kind of minimalist design they bring to their watch. And then some of our other fashion partners, like Michael Kors for instance, have updated their lineup with a new product called the Michael Kors Access Grayson, and one called Sofie, which is a really nice small watch that I'm really excited will take the product forward for women in particular. So we're seeing this really tremendous momentum, and actually that began before Android Wear 2.0: when we look at our new device activations for the holiday season last year, we actually saw 72% growth on the year before, and with Wear 2.0 coming out and all those new devices, we're really excited that that momentum will continue through this year and beyond. Let's talk a little bit about fitness. It's one of the areas that I'm most excited about with smartwatches in general. What are some of the new devices and some of the features there that you see users really engaging with? Thanks for the question. One of the things that we did with Wear 2.0 was completely revamp the fitness experience, and one of the things we really see is that what people want to do with their watch falls into two buckets, two distinct kinds of use case. One is: I'm doing an activity right now and I want to track it right on my wrist — we call that Fit active mode. You can start a Fit activity and use all the sensors on the product — maybe you're running or cycling — to see exactly how hard you're working, use your heart rate, and compute things like your distance and calories burned. The other kind of experience is just using the product to set some goals that matter to you. Maybe I just want to be more active, and I want to be active perhaps for one hour every day: set that goal and then just go and live your life. The watch automatically keeps track of your movement, so we can see how many minutes of the day you really were active, and by seeing that at a glance on any of those watch faces, it really helps me be more active through the day — you can see that right there on the watch I'm wearing now. So we find that tremendously powerful. But also, smartwatches with beautiful and vibrant screens, like Android Wear devices, are a great place to coach the user, and so with the update that we recently launched we introduced challenges, where you can do things like sit ups and push ups and the watch will actually show you exactly how to do them, and then it will use the sensors in the watch to see how many you did and whether you're doing them with the correct form.
And that's really cool, because it means there's kind of no cheating — you can't say, yeah, I did my push ups; it's actually going to count them for you, and you're going to do more every day, and it really helps keep you motivated. I'd love to tell you a bit about what we have new for developers that we're talking about at I.O. this year. We're really excited to build on the things that we're seeing developers do with the Wear 2.0 platform, and in particular let's talk about watch faces. I already mentioned that we have the Complications API, which lets your app put data on any watch face, but if we turn it around to the watch face developers, they can take that data and render it in a format that fits the aesthetic style of the watch face. And that is cool, because it means you can have a watch face that really represents a particular visual or graphic design, or whatever it is that you really care about. But some of our watch face developers have told us that they find it quite challenging to take all of the different kinds of data that apps could provide and render them in an enchanting fashion for their users. So we're making this easier by launching several things here at Google I.O. that mean that if you're a watch face developer dealing with this data coming from apps, you can render it really easily in an enchanting form. One is a text rendering system that allows you to fit text into any size region on the screen, and it will automatically resize. That's one of those things that's not as trivial as you would think it would be. That's right, it actually took a lot of work to make this work really well. And then beyond that, we have something called ComplicationDrawable, which means the system will actually take care of rendering the complication data right where you tell it to, and we provide some APIs that allow you to style it so it can still fit with the visual flow of your watch face. And then beyond watch faces, we're also taking a lot of the work we've done to build UI components and going through the process of open sourcing it, so it will be able to evolve faster and you'll also be able to understand exactly how it works as a developer. So what are some good next steps? What are some things developers can play with today? So today you can go to the Android Wear website and download the SDK, and you can try out all of the new APIs — they are live on GitHub right now, or as of four minutes from now — so take yourself over there and have fun building watch faces that use the Complications API. Awesome. David, thank you so much. Thank you.
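As a closing illustration of the ComplicationDrawable flow described above — a rough sketch only, assuming the ComplicationDrawable and CanvasWatchFaceService classes in the wearable support library; the complication id, bounds and class name are placeholders:

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Rect;
import android.support.wearable.complications.ComplicationData;
import android.support.wearable.complications.rendering.ComplicationDrawable;
import android.support.wearable.watchface.CanvasWatchFaceService;
import android.view.SurfaceHolder;

public class SketchWatchFaceService extends CanvasWatchFaceService {
    private static final int LEFT_COMPLICATION_ID = 0;

    @Override
    public Engine onCreateEngine() {
        return new Engine();
    }

    private class Engine extends CanvasWatchFaceService.Engine {
        private ComplicationDrawable leftComplication;

        @Override
        public void onCreate(SurfaceHolder holder) {
            super.onCreate(holder);
            leftComplication = new ComplicationDrawable(SketchWatchFaceService.this);
            setActiveComplications(LEFT_COMPLICATION_ID); // declare the slot this face supports
        }

        @Override
        public void onComplicationDataUpdate(int id, ComplicationData data) {
            if (id == LEFT_COMPLICATION_ID) {
                leftComplication.setComplicationData(data); // the drawable lays out icon, text, ranged value
                invalidate();
            }
        }

        @Override
        public void onDraw(Canvas canvas, Rect bounds) {
            canvas.drawColor(Color.BLACK);
            leftComplication.setBounds(new Rect(20, 80, 120, 180)); // wherever the face wants the complication
            leftComplication.draw(canvas, System.currentTimeMillis()); // styling APIs let it match the face
        }
    }
}
```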