Okay, now we're at the Museum of Developer Arts, and I'm standing here with Alex, who actually curated the pieces we're about to take a look at. Alex, can you tell me a little bit about why you chose these pieces for a developer festival? So the two artists that are here, Ezra Miller and Vincent Jose, are both developers and artists themselves, and their work is really about using code to create beautiful natural simulations. Vincent creates fluid dynamics pieces that are interactive, which is a very hard thing to do, and Ezra creates data visualizations that essentially transform into flocking starlings. And why these pieces at this festival? I wanted to find artists who understand the developer mentality and can communicate to developers, and who show that by using developer tools you can still create really beautiful work. That's really the main reason. Awesome. Well, we were looking at them earlier, and that really comes through. Thank you so much. Thank you.

I'm standing here with Vincent in front of his art, and I have a few questions. It's amazing, and I really need to know more. But first, can you tell me a little bit about this piece: your process making it, what you used, and what it represents to you? So this piece is called Fluid Structure. It's an exploration of how a liquid-like shape deforms and reacts under various forces and stimuli. I've been fascinated by fluid dynamics for a long time, because you see it everywhere in nature: streams, rivers, smoke, volcanoes. There's this incredible variety of motion, very intricate. Historically it's been very computationally expensive, but recent progress makes it possible to do the simulation in real time.
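Vincent's real-time solver is GPU code well beyond this, but one core ingredient of such per-frame fluid updates can be sketched in a few lines. This is not his code; the grid size, diffusion rate, and iteration count below are invented for illustration.

```python
# Minimal sketch of one ingredient of a real-time fluid solver: a diffusion
# step on a small 2D grid (Jacobi-style relaxation). Each frame, density
# spreads toward neighboring cells, which is part of what makes simulated
# liquid look like liquid.

def diffuse(grid, rate, iterations=20):
    """Relax each interior cell toward the average of its neighbors."""
    h, w = len(grid), len(grid[0])
    result = [row[:] for row in grid]
    for _ in range(iterations):
        nxt = [row[:] for row in result]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                neighbors = (result[y - 1][x] + result[y + 1][x] +
                             result[y][x - 1] + result[y][x + 1])
                nxt[y][x] = (grid[y][x] + rate * neighbors) / (1 + 4 * rate)
        result = nxt
    return result

# A single "drop" of density in the middle of a 5x5 grid spreads outward.
field = [[0.0] * 5 for _ in range(5)]
field[2][2] = 1.0
smoothed = diffuse(field, rate=0.25)
```

A real installation runs a step like this (plus advection and pressure projection) on the GPU every frame, which is why the recent hardware progress Vincent mentions matters.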
And that makes it very exciting for me to build an interactive piece using this technology, and also to explore all the different ways it can react: playing with different forces, seeing what happens if we slow time down or make it really fast. I think it seems both familiar and mysterious, because we're used to how things react to physics and flows. One of the challenges for this piece was to scale it and make it interactive for a large number of people. It's using five different Kinects, which are depth sensors that can tell me where people are in space, and it computes forces based on where people stand to push the liquid around, so that most people, who are not always familiar with sensors and interactive installations, would get it. There are little hints: you're going to see your silhouette, and there are some modes where the water falls on you, which makes it easier to get your interaction and how you influence the result. And watching the people tonight was a premiere for me as well. I had tested with a few friends, but the question is always what people are going to do and how they're going to react, and a lot of people are really getting into it, and that's pretty amazing to see. Okay, so one more thing: the size of this is massive. I don't know what your studio space looks like, but I presume you did some prototyping that wasn't this big. How did you conceptualize a piece this large? I was lucky that I could prototype a smaller slice, and then I used VR, an Oculus, to get a feel for the scale of the piece, which I had done in the past for other art installations that were also at a pretty big scale. I'm based in New York, so a large space is kind of a rare thing to get.
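The sensing-to-simulation loop Vincent describes, depth sensors reporting where people stand and those positions becoming forces that push the liquid, can be sketched abstractly. The function name, falloff law, and coordinates below are all invented for illustration, not taken from his installation.

```python
# Hypothetical sketch of "people positions become forces": each tracked
# person contributes a radial push to nearby points of the simulation,
# fading with distance. A Kinect would supply the `people` positions.

import math

def force_at(point, people, strength=1.0):
    """Sum of radial push forces from each tracked person at `point`."""
    fx, fy = 0.0, 0.0
    for px, py in people:
        dx, dy = point[0] - px, point[1] - py
        dist = math.hypot(dx, dy) or 1e-6      # avoid division by zero
        push = strength / (dist * dist)        # simple inverse-square falloff
        fx += push * dx / dist                 # unit vector away from person
        fy += push * dy / dist
    return fx, fy

# A person standing to the left of a sample point pushes fluid to the right.
fx, fy = force_at((2.0, 0.0), people=[(0.0, 0.0)])
```

Evaluating a field like this at every grid cell, every frame, is what lets a crowd "push the liquid around" collectively.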
So VR for me is really useful for getting a sense of scale. The other thing is that since everything is running in real time, I have a number of parameters I can adjust quickly once I see the piece for the first time on site at scale, which I did tonight: I slowed some things down and tweaked some timings to make it better. So those are the two ways I worked on the scale. Awesome. Well, thank you so much for sharing your art with us. You're welcome, thanks. It's great to be here and have this opportunity, and thanks to Alex for curating this medium of developer art. Thank you.

Okay, now we're looking at the second piece, and this one is by Ezra. Ezra, could you tell us a little bit about this? Yeah, so what we have here is my piece Data Murmuration, which I built using WebGL, so it's just running in the browser, and it's a flocking simulation as well as a data visualization. So one of the things that I noticed about the piece is that you definitely have something to say: there's this really clear data that shows up every once in a while on this graph, but then it breaks all these data points out into this flocking behavior. What brought you to that artistic choice? Right, so it oscillates between this sort of formless cloud of birds swarming and then this really well-defined graph. To me it's trying to get at questions about whether humanity can organize itself as we face a lot of problems as a society, and I think the difference between this formless swirling cloud and that well-defined graph gets at that idea. I don't know. All right, man. Hey, let's talk a little bit more about the tech, since we're at a developer conference. You said this is in the browser?
Yeah, so I make websites using WebGL. This was built with Three.js, which is pretty much the go-to WebGL library; I use it in nearly all of my work. And it's built using shaders, which are programs that run on a computer's GPU and let you do computations a lot faster than on the CPU. It's basically creating everything procedurally, which means there are no real image textures or other real assets being used. I think the main asset is the data, and then everything sort of comes from that. And I also noticed that in addition to the flocking there's this almost waterfall effect. What is that? Right, I actually added that in recently, right before we set up. I think it just adds a little more dynamism to the background, kind of cuts through the clouds. It's a feedback loop process, which is something I use in pretty much all of my work. Yeah, I just thought it looked good. I like that. All right, Ezra, thank you so much for sharing your work with us today. Thanks, man, I really appreciate it.

And now I'm standing with Sarah, who actually helped build the experiences that attendees are going through right now. Can you tell me a little bit about what they're experiencing? Yeah, so we're Google Spotlight Stories, and we create content where directors tell their own stories in this medium. We have Pearl by Patrick Osborne, which was nominated for an Academy Award this year for Best Animated Short Film, and we have the Gorillaz' newest music video, which is a first music video for the group. Now, you worked with the production teams and the artists in building these experiences, right? Yes, yes. What's that like? Do they have all sorts of questions? What's different from a normal production experience?
Yeah, it's definitely a new medium for most people, but both took to it very well. You have to think about creating a film in 360 space a little bit differently than you would a normal 2D narrative film; you have to make it interesting in all directions, if you will. Very cool. Sarah, thanks so much for joining us. Thank you.

This is the hugest ball table I have ever seen, coming to you live from the world championships of Pear Ping Pong here at Google I/O. I'm just kidding, we're having fun, so I'm going to go have some fun. So anything that you do that looks cool, you get to share it right away. Does that sound good? Yeah. All right, so the main rule is you don't leave the platform till I catch your arm, so you don't get hurt. Other than that, you can do anything that you want: you can jump, you can move, and moving quickly is cool, standing perfectly still is cool, either way. Just don't move slowly. Yeah, because it'll be kind of a jump or something. That'll be good. Yeah, you're good to go. All right, I'm just testing it out. I don't really know what's going on, but I'm going to go with this. Can I start?

Okay, thank you. Thank you for attending our session, Android Meets TensorFlow. I'm Kaz Sato, a developer advocate for the Google Cloud Platform team. And hello, good morning, thanks for coming in very early in the morning. I'm not a morning person, so I'm a bit sleepy now. Anyway, my name is Hak Matsuda, and I'm a developer advocate for Android gaming and native technology.
Okay, so today in this session we'd like to discuss these topics. In the first part of the session I will discuss what AI, machine learning, neural networks, and deep learning are, how Google has been using these kinds of technologies to implement our services, and what TensorFlow is: an open source library for building your own neural networks. Then I will pass the stage to Hak, who will discuss how you can build Android applications powered by TensorFlow and how you can optimize them. Lastly, I'd like to do a sneak preview of new technologies such as TensorFlow Lite and the Android Neural Network API.

So, what are machine learning and neural networks? How many people have actually tried a neural network yourselves? Oh, so many people, maybe 10 to 20 percent. And how many people have actually used a neural network on mobile devices? Oh, thank you so much, I count over 10 people. Okay, today I'd like you to learn how you can use TensorFlow and have a machine learning model running inside mobile applications.

There have been so many buzzwords, such as AI, machine learning, neural networks, and deep learning; we have been hearing these buzzwords for the last few years. First, the differences. AI, or artificial intelligence, you can say, is the science of making smart things, like building an autonomous driving car, or having a computer draw a beautiful picture or compose music. One way to realize that vision of AI is machine learning: technology where you let the computer train itself, rather than having human programmers instruct every step of how to process the data. And one of the many different algorithms in machine learning is neural networks. Since around 2012 we have been seeing a big breakthrough in the world of neural networks, especially for image recognition, voice recognition, natural language processing, and many other applications, and at Google we have been focusing on developing neural network technologies for several years.

So what is a neural network? You can think of it as just like a function in mathematics, or a function in a programming language: you put any kind of data in as input, do some matrix operations or calculations inside the neural network, and eventually you get an output vector which holds the labels or predicted values. For example, if you have a bunch of images, you can train a neural network to classify which one is an image of a cat or an image of a dog. And that's just one example of the use cases of neural networks; you can apply the technology to solve any kind of business problem you have. For example, if you have a bunch of gaming servers, you can try converting all the user or player activities into a bunch of numbers, such as vectors, and train a neural network to classify the types of players on your gaming server: to find cheating players who are using automated scripts to cheat on your server, say, or to find the premium players who could be buying more and more items from your gaming server. That is just one example out of thousands of possible applications.

At Google, we have been using deep learning technologies to implement smart functions in over 100 production projects, including Google Search, Android, Play, and many other applications. For example, if you are using Google Search every day, that means you are accessing deep learning technologies provided by Google every day, because in 2015 we introduced RankBrain, which is a deep-learning-based ranking algorithm that generates signals for defining the ranking of search results.
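Kaz's description of a neural network as "just a function", matrix operations turning an input vector into output scores over labels, can be sketched concretely. The weights below are hand-picked for illustration, not a trained model.

```python
# A neural network as a plain function: input vector -> matrix operations
# and nonlinearities -> scores over labels. Weights here are made up.

import math

def dense(inputs, weights, biases):
    """One fully connected layer: matrix-vector product plus bias."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def softmax(scores):
    """Turn raw scores into probabilities over the labels."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Tiny two-label "cat vs. dog" classifier: a hidden layer with ReLU,
# then an output layer squashed to probabilities.
hidden = [max(0.0, h) for h in dense([0.5, -1.2, 0.3],
                                     [[0.2, -0.5, 0.1], [0.7, 0.3, -0.2]],
                                     [0.1, 0.0])]
probs = softmax(dense(hidden, [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]))
# probs sums to 1.0; the larger entry is the predicted label
```

Training is just the process of finding weights that make this function produce the right labels; the function itself stays this simple.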
If you take a look at the mobile applications from Google, Google Photos is one of the most successful, and it has been using deep learning to analyze and understand the content of each image taken by your smartphone. You don't have to add any tags or keywords yourself; instead you can just type "dog", or your friend's name, or "wedding party" to find the images based on their content. Smart Reply is a feature that shows options for replying to each email thread. It uses natural language processing powered by a neural network model to understand the context of the email exchange, and now over 12 percent of responses are generated by the Smart Reply feature, so you could say that a good share of email is now written by computers, not by humans. The Google Translate app recently introduced a new neural translation model that has significantly improved the quality, and especially the fluency, of the translated text.

So there are many possible use cases combining machine learning and mobile applications, starting from image recognition, OCR, speech-to-text, text-to-speech, translation, and NLP; and you can especially apply machine learning to mobile-specific applications such as motion detection or GPS location tracking. Why would you want to run your machine learning model, or neural network model, inside your mobile application? Because by using machine learning on the device you can reduce a significant amount of traffic and get a much faster response from cloud services, since you can extract the meaning from the raw data. What do I mean by that? For example, if you are using machine learning for image recognition, the easiest way to implement it is to send all the raw image data taken by the camera to a server. But instead, you can have the machine learning model running inside your mobile application, so that your app can recognize what kind of object is in each image and just send the label, such as "flower" or "human face", to the server. That can reduce the traffic to one tenth or one hundredth.
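The traffic argument is easy to make concrete with back-of-envelope arithmetic. The sizes below are illustrative choices, not figures from the talk.

```python
# Back-of-envelope version of the bandwidth argument: send a label or a
# small feature vector instead of raw camera frames. Sizes are illustrative.

raw_image_bytes = 640 * 480 * 3          # one uncompressed RGB frame
label_bytes = len(b"flower")             # just the recognized label
feature_vector_bytes = 1000 * 4          # 1,000 float32 feature values

# Even the richer feature vector is a couple of hundred times smaller
# than a single raw frame; a bare label is smaller still.
print(raw_image_bytes // feature_vector_bytes)
```

The exact ratio depends on resolution and compression, but the orders of magnitude are why "one tenth or one hundredth" is a conservative claim.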
It's a significant savings in traffic. Another example could be motion detection. Here, rather than gathering all the motion sensor data and sending the raw data directly to the server, you can use machine learning algorithms to extract so-called feature vectors. A feature vector is just a bunch of numbers, like a hundred or a thousand numbers, that represents the characteristics, or signature, of the motions from the motion sensors, so you can send just those hundred or thousand numbers to the server.

So what is the starting point for building your own mobile application powered by machine learning? The starting point could be TensorFlow, which is the open source library for machine intelligence from Google. TensorFlow is the latest framework for building machine learning and AI based services developed at Google. We open-sourced it in November 2015, and now TensorFlow is the most popular framework in the world for building neural networks and deep learning. One benefit you get with TensorFlow is ease of development: it's really easy to get started. You can write just a few lines, or tens of lines, of Python code to define your neural network yourself. Actually, this technology is really valuable for people like me, because I don't have a sophisticated mathematical background. When I started reading textbooks on neural networks, I found many mathematical equations, things like differentiation, backpropagation, and gradient descent, and I really didn't want to implement everything by myself. Instead, now you can just download TensorFlow and write a single line of Python code, like GradientDescentOptimizer, and that single line encapsulates all the sophisticated algorithms, such as gradient descent, backpropagation, and other recent algorithms, implemented by Google engineers.
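What a line like GradientDescentOptimizer encapsulates can be shown by hand in plain Python (this is the idea only, not TensorFlow code): repeatedly nudge a parameter against the gradient of a loss until the loss stops shrinking.

```python
# Gradient descent by hand. The loss is (w - 3)^2, whose gradient is
# 2 * (w - 3), so the updates should drive w toward the minimum at 3.

def gradient_descent(gradient, w, learning_rate=0.1, steps=100):
    for _ in range(steps):
        w -= learning_rate * gradient(w)   # step against the gradient
    return w

w = gradient_descent(lambda w: 2 * (w - 3), w=0.0)
# w is now very close to 3, the minimum of the loss
```

A framework does exactly this, except the "parameter" is millions of weights and the gradient is computed automatically by backpropagation, which is why not having to write it yourself matters.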
So you yourself don't have to have the skill set to implement neural network technology from scratch. Other benefits of TensorFlow are portability and scalability. For example, if you're just getting started with the technology, you can download TensorFlow on your Mac or Windows machine and play with hello-world kinds of very simple samples. But if you're getting serious, for example if you want to train a model from scratch to recognize images of cats, then you may want to use a GPU server, because a GPU is something like 10 to 50 times faster than the CPU of a Mac or Windows machine for training your model. And many large companies, Google and other enterprises, use tens or sometimes hundreds of GPUs running in the cloud, because computing power is the largest challenge in deep learning. Still, you don't have to make any major changes to your TensorFlow neural network, because TensorFlow is designed to be scalable: once you have defined your neural network, you can train it and use it on a single CPU, multiple GPUs, hundreds of GPUs, or a TPU, the Tensor Processing Unit, which is an ASIC, a customized LSI, designed by Google. And once you have finished training your model, you can copy the model, which for image recognition can consist of around 100 megabytes of parameter data, to mobile devices such as Android, iOS, or Raspberry Pi. If you go to the tensorflow.org website, you can find sample code for those embedded systems and mobile phones. Another benefit you get from TensorFlow is the community and ecosystem. If you want practical, production-quality solutions, TensorFlow can provide the best answer, because there are so many large enterprises and developers using TensorFlow for serious development: Airbus, ARM, eBay, Intel, Dropbox, Twitter, they are all using TensorFlow.

Now I'd like to pass the stage to Hak, who will be talking about how you can implement Android applications powered by TensorFlow. Okay, thank you, Kaz. So let's move on to the Android part. As Kaz mentioned, we have found a lot of great use cases for running TensorFlow inference on mobile devices, so let's take a look at how we can integrate TensorFlow inference on mobile devices and how we can optimize it. TensorFlow supports multiple mobile platforms, including Android, iOS, and Raspberry Pi; in this talk we'd like to focus on mobile devices, Android and iOS. Building a TensorFlow shared object from scratch used to be a bit tricky and required multiple steps: a git clone from GitHub, installing Bazel, installing Android Studio and the Android SDK and NDK, and finally editing a settings file, and so on. But we have good news, announced at this I/O: we just added jcenter integration, which makes the steps a lot easier. Thank you, thank you. You just add one line to your build.gradle, and Gradle takes care of the rest of the steps: the Android library archive holding the TensorFlow shared object is downloaded from jcenter and linked against your application automatically. Also, you can fetch prebuilt model files, such as Inception and the stylize model, from the cloud as well. And it's easier for iOS too: we just added CocoaPods integration; it's quite simple.

Now let's take a look at how you can use the TensorFlow API. We released the Android inference library to integrate TensorFlow into Java applications. The library is a thin shim from Java to the native implementation, and the performance impact is minimal. First, create a TensorFlowInferenceInterface, opening the model file from the assets in the APK. Then set up the input feed using the feed API; on mobile, the input feed tends to come from various sensors, like the camera or accelerometer. Then run inference, and finally you can fetch the result using the fetch method.
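The feed, run, fetch sequence Hak describes is in Java on Android, but the shape of the pattern is language-agnostic, including the advice to keep it off the UI thread since these calls block for up to seconds. Here is a stand-in sketch in Python; `run_model` and the names around it are invented, not the real API.

```python
# The blocking feed -> run -> fetch pattern, kept off the main thread.
# `run_model` stands in for a real inference call that may take seconds;
# a queue hands the fetched result back to the caller.

import threading, queue, time

results = queue.Queue()

def run_model(pixels):
    time.sleep(0.01)                 # pretend this is slow inference
    return "flower"                  # stand-in for the fetched label

def worker(pixels):
    results.put(run_model(pixels))   # fetch result when inference is done

t = threading.Thread(target=worker, args=([0.1, 0.2, 0.3],))
t.start()                            # the UI thread stays responsive
t.join()
label = results.get()
```

On Android the same shape applies with a HandlerThread or executor instead of `threading.Thread`, and TensorFlowInferenceInterface calls instead of `run_model`.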
You would notice that those calls are all blocking calls, so you would want to run them on a worker thread rather than the main thread, because the API can take a long time, even several seconds. That is the Java API, and of course you can use the regular C++ API as well, if you love C++ as I do. Okay, let's move on to the demo. This is the TensorFlow sample running on Android. The sample has three modes: the first runs Inception v3, which classifies the camera image, and we also have a classify-face sample and a stylize-photo sample. This one is the stylized photo, applying an artistic filter to the camera preview. Thank you. One special thing about this demo is that I tweaked it a little to use the GPU, via a Vulkan compute shader; the regular sample just supports the CPU with NEON optimization, but I tweaked it to use the GPU. This was just an experiment, just for fun, but it was pretty fun and I learned a lot about optimizing TensorFlow for GPUs. Basically, on Android devices the performance-limiting factor mostly comes from memory bandwidth rather than the computation itself, so reducing memory bandwidth helps a lot. For instance, one convolution 2D kernel was fetching 32 by 32 by 32 by 4 samples just to generate a single output value, which is a huge amount of data from the viewpoint of the compute shader. So memory bandwidth is the crucial factor for Android and mobile device optimization. Anyway, everybody can tweak the TensorFlow code, because it's open source; it's a beautiful open source framework. So now we can integrate TensorFlow inference on mobile quite easily, as I explained. However, there are still challenges. One is performance: even though mobile device performance has increased significantly, mobile has less computing power than its cloud or desktop counterparts. It also has limited RAM, which is a precious resource on mobile: if your application takes one gigabyte of RAM, it is highly likely to be killed by the system when it goes to the background, and that's not a happy situation, right?
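Hak's convolution example can be turned into arithmetic to show why bandwidth, not arithmetic throughput, is the bottleneck he describes. The byte size assumes float32 values, which is an assumption on my part.

```python
# If one conv2d kernel fetches 32 * 32 * 32 * 4 samples to produce a single
# output value, the memory traffic per output dwarfs the compute: assuming
# 4-byte floats, that is half a megabyte moved for one number out.

bytes_per_float = 4
fetched_per_output = 32 * 32 * 32 * 4            # samples read per output
traffic = fetched_per_output * bytes_per_float   # bytes moved per output

print(traffic)  # 524288 bytes, i.e. 512 KB for a single output value
```

Multiply that by the thousands of output values per layer and the tens of layers per frame, and it is clear why cutting redundant fetches pays off more than faster multipliers.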
So let's take a look at how we can optimize TensorFlow graphs: reducing the memory footprint, increasing runtime performance, and improving load time as well. This is the Inception v3 model; it takes around 91 megabytes in storage, with 25 million parameters, and the binary takes 12 megabytes, which is huge. We have multiple techniques to optimize the graph, such as freezing the graph, the graph transform tool, quantization, memory mapping, and so on. Let's go through them. Freezing the graph is a load-time optimization which converts variable nodes into constant nodes. What's a variable node? In TensorFlow, variable nodes are stored in a separate file, but constant nodes are included in the graph itself, so moving variables into constant nodes lets you concatenate multiple files into one file, like apple pen. That gives a slight performance win on mobile, and one file is easier to handle. To do that, we provide the freeze_graph.py script. And the graph transform tool is your friend: it supports various optimization tasks, such as stripping nodes that are unused for inference and needed only in the learning phase. One thing to note is that it currently requires some manual steps: you have to determine which node is the input node and which is the output node, and the tool requires both points to be specified manually. Now let's talk about quantization. Neural network operations require a bunch of matrix calculations, which means tons of multiply and add operations. Current mobile devices can do some of them with specialized hardware, for instance SIMD instructions on the CPU, general-purpose computing on the GPU, the DSP, and so on. Roughly, a mobile CPU can perform around 10 to 20 gigaflops in total, and using the GPU it can achieve 300 to 500 gigaflops or more. That sounds like a great number, but it's still less than a desktop or server environment.
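The freezing and transform steps Hak describes are driven from the command line. A typical invocation might look like the following; file names and node names are examples, and the exact flags should be checked against the current TensorFlow documentation.

```shell
# Freeze variables into constants (paths and node names are placeholders):
python freeze_graph.py \
  --input_graph=model/graph.pb \
  --input_checkpoint=model/model.ckpt \
  --output_node_names=softmax \
  --output_graph=model/frozen_graph.pb

# Then apply graph transforms; as noted, the input and output nodes must
# currently be specified by hand:
transform_graph \
  --in_graph=model/frozen_graph.pb \
  --out_graph=model/optimized_graph.pb \
  --inputs=input --outputs=softmax \
  --transforms='strip_unused_nodes fold_constants quantize_weights'
```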
So we want to perform some optimization, and quantization is one technique to reduce both the memory footprint and the compute load. Usually TensorFlow takes single-precision floating-point values for inputs, for the math, and for outputs as well. As you know, a single-precision float takes 32 bits each, but we found that we can reduce the precision to 16 bits, 8 bits, or even less while keeping good results. That's because the learning process involves some noise by nature, so adding some extra noise doesn't matter much. Quantized weights is the optimization for storage size: it reduces the precision of the constant nodes in the graph file. But please note that with the quantized-weights optimization, the values are expanded back in memory when the graph is loaded. So we have another optimization, call it quantized calculations: we reduce the computing precision by using the quantized values directly. This is good for memory bandwidth, which is the limiting factor on mobile devices, and hardware can also handle lower-precision values faster than single-precision floats. But we still have an open issue: to do quantized calculations we need a maximum value and a minimum value that specify the range of the quantized values, and we don't have a great solution for that yet. It's still manual, but active research is going on, so hopefully this issue will be resolved pretty soon. Here's an example of how the quantized-calculation optimization works in TensorFlow. TensorFlow has some operations that support quantization, for instance convolution 2D, matrix multiply, ReLU, and so on. We think that's good enough to cover most inference scenarios. However, not all operations are quantized yet, so we need to quantize and dequantize values right before and after each such node. The graph transform tool analyzes the paths between graph nodes, and sometimes we can remove unnecessary quantize and dequantize pairs.
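The quantized-weights idea, and the role of the min/max range Hak mentions, can be shown in miniature. This is the concept only, not TensorFlow's actual quantization code.

```python
# Map float weights into 8-bit codes over a [lo, hi] range, and expand them
# back on load. The round trip loses a little precision, which trained
# networks tolerate well; note the scheme is meaningless without the range.

def quantize(values, lo, hi):
    scale = (hi - lo) / 255.0
    return [round((v - lo) / scale) for v in values]

def dequantize(codes, lo, hi):
    scale = (hi - lo) / 255.0
    return [lo + c * scale for c in codes]

weights = [-0.8, 0.0, 0.31, 0.79]
codes = quantize(weights, lo=-1.0, hi=1.0)       # each fits in one byte
restored = dequantize(codes, lo=-1.0, hi=1.0)    # close to the originals
```

Each 32-bit float becomes one byte, a 4x storage saving, and the worst-case error is half a quantization step, which is why choosing good `lo`/`hi` bounds per tensor is the open problem mentioned above.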
Memory mapping is yet another optimization, this one for loading time: the model file is converted so it can be mapped directly using the mmap API, which can be a slight performance win on Linux-kernel-based operating systems like Android. Another important topic on mobile is reducing executable size, because on mobile devices the package size is limited; on Android, for instance, to 100 megabytes, including binaries, graphs, and any other resources. By default the mobile build supports a selected set of operations that is mostly good enough to cover inference, but the ops used in the learning process are missing; so if you want to do learning on a mobile device, you need to register extra operations. Also, if your graph doesn't require some of the pre-registered operations, you can remove them. To do that, you can use selective registration. For instance, for Inception v3, selective registration reduces the original 12-megabyte binary to 1.5 megabytes. Note that this optimization requires building the shared object locally, so you would need to set up the build environment as well. With these optimizations, the Inception v3 graph becomes 23 megabytes, with a 1.5-megabyte binary, which is about 75 percent smaller.

Now let's get back to Kaz. Thank you, Hak. As Hak mentioned, there are so many tips and tweaks to optimize your TensorFlow model to squeeze it into an Android or mobile application, and that is what you can do right now; these techniques are available today. But now I'd like to discuss a little bit about the new technologies coming in the near future, such as TensorFlow Lite and the Android Neural Network API. What is the NN API? It's a new API for neural network processing inside Android that will be added to the Android framework.
The purpose of adding the new API is to encapsulate, and provide an abstraction layer over, hardware accelerators such as the GPU, DSP, and ISP. Modern smartphones have powerful computing resources other than the CPU, and the DSP especially is designed to do massive amounts of matrix and vector calculations, so it's much faster to run neural network inference on a DSP or GPU than on a CPU. But right now, if you want to do that, you have to go directly to the libraries provided by the hardware vendors and build binaries yourself, which is a tedious task and not portable. So instead we will be providing a standard API, so that developers don't have to be aware of the hardware accelerators from individual vendors. And on top of the Neural Network API we will be providing TensorFlow Lite, a new TensorFlow runtime optimized for mobile and embedded applications. TensorFlow Lite is designed for smart and compact mobile and embedded applications, and it is designed to work with the Android NN API, so all you have to do is build your model with TensorFlow Lite, and that's it: you'll get all the benefits of the Android NN API, such as hardware acceleration. That will be coming as open source in the near future, so stay tuned. And if you're interested in these upcoming technologies, please take a photograph of this QR code so you can join our survey on ML on Android, where you can give your feedback and requests for the new products. Okay, thank you.

So lastly I'd like to show some very interesting and fun real-world applications built with TensorFlow on mobile and embedded systems. The first application runs on a Raspberry Pi and was built by a Japanese cucumber farmer. I actually took these photographs myself; I went to the cucumber farm. You can see, there's no pointer here, but the person in the middle is Makoto-san. He started helping with the cucumber farming two years ago, and he found that sorting cucumbers into the correct classes is the most tedious task: his mother spent eight hours a day classifying each cucumber, based on its shape, length, and color, into nine different classes. He really wanted to help her, so he downloaded TensorFlow and built his own cucumber sorter. What he did was take 9,000 photographs of different cucumbers, with labels assigned by his mother, train a TensorFlow model by himself, and build the sorting robotics himself, spending 1,500 dollars. The TensorFlow model runs on the Raspberry Pi to detect cucumbers put on the plate, and it can classify cucumbers into nine different classes based on shape and color. This is a system diagram; it has three parts. An Arduino Micro is used for controlling the servos and motors, and the Raspberry Pi has a camera to take a picture of the cucumbers on the plate and runs a very small TensorFlow model. This is actually a really great example of how you can split the machine learning workload between an edge device and the cloud, because he found that running the full TensorFlow model on the Raspberry Pi was too heavyweight, so he decided to split it into two tasks: the TensorFlow model running on the Raspberry Pi only detects whether there is a cucumber on the plate or not, and only when it detects a cucumber does it send the picture to the server, where a more powerful TensorFlow model classifies the cucumber into the nine different classes.
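The control flow of Makoto's edge/cloud split can be sketched in a few lines. Both "models" below are invented stand-ins; the point is the split, not the classifiers.

```python
# The workload split: a tiny on-device model only answers "is there a
# cucumber on the plate?", and the heavyweight nine-class model runs
# server-side. Both models here are toy stand-ins for illustration.

def on_device_detector(image):
    """Cheap presence check, runs on the Raspberry Pi."""
    return sum(image) > 0.5                 # stand-in for a small model

def server_classifier(image):
    """Expensive nine-class model, runs in the cloud."""
    return "class_%d" % (int(sum(image) * 10) % 9)

def handle_frame(image):
    if not on_device_detector(image):
        return None                         # nothing uploaded, no traffic
    return server_classifier(image)         # upload only interesting frames

result = handle_frame([0.0, 0.1])           # empty plate: never hits network
```

This is the same traffic argument from earlier in the session applied end to end: the edge model acts as a filter, so the expensive model and the network are only used when there is actually something to classify.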
have a national radio broadcasting network that plays the same exercise music at the same time every morning, and tens of millions of Japanese people do the same exercise every morning. Did you know that? This application is designed to record a score for how well you have done the exercise along with the music. To capture the motions with the motion sensors, they used TensorFlow, and they were able to train a TensorFlow model to capture and extract patterns and features from the motion sensor data, so that it is able to evaluate the motions made by a human hand. They also built their own TensorFlow compiler, so that they were able to apply techniques such as quantization and approximation, and they reduced the TensorFlow model from tens of megabytes to a few megabytes. That was the key technology for building production-quality Android and iOS applications powered by TensorFlow. So let's take a look at a live demonstration of the exercise scorer. Can I switch to here? This is the application, where you can choose various kinds of exercises, and I'll be playing the most standard one. This is the music... just like that. Okay, let's stop; that's enough. Now the TensorFlow model is trying to evaluate how well you have done the exercise, and you can see the bar chart here; that is the evaluation by the TensorFlow model inside the application. It's a real thing. Okay, let's go back to the slides. That was what we wanted to show. So in this session we have learned many things, including yet another weird thing from Japan, and some optimization techniques for building production-quality Android and iOS applications with TensorFlow. If you are interested, please go to TensorFlow.org, where you can find lots of getting-started materials, and there are also some good codelabs on the codelabs website. Okay, thank you so much.
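To make the quantization idea mentioned here concrete, below is a minimal, hypothetical sketch of linear 8-bit quantization in Java: each 32-bit float weight is mapped onto a single byte over a fixed range, which is roughly how a model shrinks by about 4x. The class and method names are invented for illustration; this is not the actual TensorFlow tooling.

```java
// Hypothetical sketch of linear 8-bit quantization (not TensorFlow code).
// A float weight in [min, max] is stored as one signed byte instead of
// four bytes, trading a little precision for a ~4x size reduction.
public class QuantizationSketch {

    // Map a weight in [min, max] onto the integer range [-128, 127].
    public static byte quantize(float w, float min, float max) {
        float scale = (max - min) / 255f;
        int q = Math.round((w - min) / scale) - 128;
        return (byte) Math.max(-128, Math.min(127, q));
    }

    // Recover an approximation of the original weight from its byte code.
    public static float dequantize(byte q, float min, float max) {
        float scale = (max - min) / 255f;
        return (q + 128) * scale + min;
    }
}
```

Round-tripping a weight through quantize and dequantize introduces an error of at most half a quantization step (about 0.004 for the range [-1, 1]), which is the kind of approximation the speaker refers to.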
Thank you. It's been my life, my passion. I learned how to program computers, and then in 2001 we started the first mobile-phone video games company in Spain, called MicroJocs. In 2013 my studio was acquired by a big company, and some of the guys and myself decided that we should do something fresh, something new, and we founded Omnidrone. Titan Brawl is a real-time strategy game; it's considered a MOBA, a multiplayer online battle arena, but specially designed for mobile devices. The game is what it is today thanks to the early access program; we changed many things based on learnings from the community. Since we launched the game on early access we have had more than 2 million installs on Android devices. We joined the early access program at the very beginning of it. The difference between the early access program and a traditional soft launch is that users are actively giving the team feedback, so you don't only check the metrics you have; they also provide possible solutions, and you end up making the game players want to play. Not having ratings, but having constructive feedback instead, was very good. Early access was a great opportunity for an indie developer, someone starting out, and very key for us at Omnidrone. When we started with the early access program we approached it in different stages: the idea was to focus at the beginning on the engagement of the game; once we sorted that out, we focused on the retention of the game; and finally we focused on monetization, to deliver a viable product for the market. With early access we managed to improve our retention by 41 percent, engagement by 50 percent, and monetization by 20 percent from the very beginning of the program until the worldwide launch of the game. I feel very happy working in the video games industry because it has been my passion since I was a child, and it's really inspiring that through Omnidrone we have a real chance to shape the new era of video games. In response to popular
demand, the Android framework team has written an opinionated guide to architecting Android apps, and they've developed a companion set of architecture components. Hi, my name is Lyla, a developer advocate for Android, and I'm here to introduce you to these shiny new architecture components. Components persist data, manage lifecycles, make your app modular, help you avoid memory leaks, and prevent you from having to write boring boilerplate code. Your basic Android app needs a database connected to a robust UI, and the new components, Room, ViewModel, LiveData, and Lifecycle, make that easy. They're also designed to fit together like building blocks, so let's see how. I'll step through setting up the tables with Room, the new SQLite object mapping library. Using Room, we can define a plain old Java object, or POJO. We then mark this POJO with the @Entity annotation and create an ID marked with the @PrimaryKey annotation. Now, for each POJO you need to define a DAO, or data access object; the annotated methods represent the SQLite commands you need to interact with your POJO's data. Take a look at this insert method and this query method: Room has automatically converted your POJO objects into the corresponding database tables and back again. Room also verifies your SQLite at compile time, so if you spell something a little bit wrong, or if you reference a column that's not actually in the database, it will throw a helpful error. Now that you have a Room database, you can use another new architecture component called LiveData to monitor changes in the database. LiveData is an observable data holder: that means it holds data and notifies you when the data changes, so that you can update the UI. LiveData is an abstract class that you can extend, or for simple cases you can use the MutableLiveData class. If you update the value of the MutableLiveData with a call to setValue, it can then trigger an update in your UI. What's even more powerful, though, is that Room is built to support LiveData.
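The observable-holder idea can be sketched in a few lines of plain Java. This is a conceptual stand-in, not the real androidx.lifecycle.LiveData API: a simple boolean flag takes the place of real lifecycle tracking, and re-delivery of missed values on reactivation is omitted.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Conceptual sketch of an observable data holder in the style of LiveData.
// Not the real androidx class: a plain "active" flag stands in for the
// lifecycle state of the observing UI.
public class ObservableHolder<T> {
    private T value;
    private boolean active = true; // stand-in for "the UI is on screen"
    private final List<Consumer<T>> observers = new ArrayList<>();

    // Register an observer; deliver the current value right away if active.
    public void observe(Consumer<T> observer) {
        observers.add(observer);
        if (value != null && active) {
            observer.accept(value);
        }
    }

    // Like MutableLiveData.setValue: store the value, notify active observers.
    public void setValue(T newValue) {
        value = newValue;
        if (active) {
            for (Consumer<T> o : observers) {
                o.accept(newValue);
            }
        }
    }

    public T getValue() {
        return value;
    }

    // In the real component this transition is driven by lifecycle events.
    public void setActive(boolean active) {
        this.active = active;
    }
}
```

When active is false, updates still change the stored value but no observer is called, mirroring the idea of not sending database updates to a non-active UI.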
To use them together, you just modify your DAO to return objects that are wrapped with the LiveData class; Room will create a LiveData object observing the database. Then you can write code like this to update your UI. The end result is that if your Room database updates, it changes the data in your LiveData object, which automatically triggers UI updates. This brings me to another awesome feature of LiveData: LiveData is a lifecycle-aware component. Now you might be thinking, what exactly is a lifecycle-aware component? Well, I'm glad you asked. Through the magic of lifecycle observation, LiveData knows when your activity is on screen, off screen, or destroyed, so that it doesn't send database updates to a non-active UI. There are two interfaces for this: LifecycleOwner and LifecycleObserver. LifecycleOwners are objects with lifecycles, like activities and fragments; LifecycleObservers, on the other hand, observe LifecycleOwners and are notified of lifecycle changes. Here's a quick peek at the simplified code for LiveData, which is also a LifecycleObserver. The methods annotated with @OnLifecycleEvent take care of initialization and teardown when the associated LifecycleOwner starts and stops; this allows LiveData objects to take care of their own setup and teardown. So the UI components observe the LiveData, and the LiveData components observe the LifecycleOwners. As a side note to all you Android library designers out there: you can use this exact same lifecycle observation code to call setup and teardown functions automatically for your own libraries. Now, you still have one more problem to solve. As your app is used, it will go through various configuration changes that destroy and rebuild the activity. We don't want to tie the initialization of LiveData to the activity lifecycle, because that causes a lot of needlessly re-executed code; an example of this is your database query, which would be executed every time you rotate the phone. So what do you do? Well, you put your LiveData, and any other data associated with the UI, in a ViewModel instead. ViewModels are objects that provide data for UI components and survive configuration changes. To create a ViewModel object, you extend the ViewModel class; you then put all of the necessary data for your activity UI into the ViewModel. Since you've cached data for the UI inside the ViewModel, your app won't re-query the database if your activity is recreated due to a configuration change. Then, when you're creating your activity or fragment, you can get a reference to the ViewModel and use it, and that's it. The first time you get a ViewModel, it's generated for your activity; when you request a ViewModel again, your activity receives the original ViewModel with the UI data cached, so there are no more useless database calls. To summarize all of this new architecture shininess: we've talked about Room, which is an object mapping library for SQLite; LiveData, which notifies you when its data changes so that you can update the UI, and which, importantly, works well with Room, so that you can easily update the UI when the database values change; we've also talked about LifecycleObservers and LifecycleOwners, which allow non-UI objects to observe lifecycle events; and finally, we've talked about ViewModels, which provide you data objects that survive configuration changes. Altogether they make up a set of architecture components for writing modular, testable, and robust Android apps. You can sensibly use them together, or you can pick and choose what you need. But this is just the tip of the iceberg; in fact, a more fully fledged Android app might look like this. For an in-depth look at how everything works together, and the reasoning behind these components, check out the links in the description below. To jump straight into code and get started working with these objects, you can check out the codelabs and samples for Lifecycle and persistence. Happy building, and as always, don't forget to subscribe. I'm Wojtek Kaliciński, this is Android Tool Time.
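As an aside, the ViewModel retention behavior described above (create once, then return the cached instance on later requests, for example after a configuration change) can be sketched in plain Java. The names here are invented for illustration; this is not the real ViewModelProvider API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Conceptual sketch of ViewModel retention (not the real ViewModelProvider).
// The first request for a key runs the factory; later requests, e.g. after a
// configuration change has recreated the activity, return the same cached
// instance, so expensive work like a database query is not repeated.
public class ViewModelStoreSketch {
    private final Map<String, Object> store = new HashMap<>();

    @SuppressWarnings("unchecked")
    public <T> T get(String key, Supplier<T> factory) {
        return (T) store.computeIfAbsent(key, k -> factory.get());
    }
}
```

In the framework, the store itself is what survives the configuration change, while the activity around it is destroyed and rebuilt.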
And let's talk a bit about the Espresso Test Recorder and how it can help with adding UI tests to your app. But first, a short explanation for those unfamiliar with Espresso. Espresso is a testing framework designed to provide a fluent API for writing concise and reliable UI tests. However, it is often the case that developers are reluctant to add UI tests to their apps, or simply don't have time to learn the framework. This is where the Espresso Test Recorder comes in: it lets you create and add UI tests to an existing app in an interactive way. You might have previously seen the beta version of this feature, but in Android Studio 2.3 we're promoting it to stable, with a few enhancements. To get started with the Test Recorder, click on Record Espresso Test under the Run menu. The device selection dialog pops up, and after you make your choice, the Test Recorder runs your app in debug mode. Simply progress through your app's UI as a regular user would, by clicking buttons, swiping, and typing into input fields, and all those actions will appear in the Test Recorder window. You can also click here to add an assertion to your test at any time during recording, which will trigger the Test Recorder to dump the current view hierarchy. To select the view you want to assert on, click on the screenshot that appears in the recorder window, and choose the assertion type from: view exists, doesn't exist, or check that it contains the specified text. When you finish recording your test, the Test Recorder generates the equivalent test code to run your actions and assertions, and puts it in a new file in your project's instrumentation test folder. It also checks whether your build file contains the required Espresso dependencies, and adds those if needed. When you look at the source file that the Espresso Test Recorder created, you will see that it's perfectly normal, human-readable code, so if you need to further customize your tests, or alter them when your app changes, you can simply open the file again and make the alterations you need. As you can see, the Espresso Test Recorder is very simple to use, but it does come with some limitations. As of Android Studio 2.3, only a few of the most common assertions are available through the recorder UI, so if you need anything more complicated than that, you will need to edit the generated code by hand. Also, at this stage, the Test Recorder cannot handle all situations where additional synchronization is needed. To deal with delays and async operations in your apps, I highly recommend getting familiar with the Espresso idling resource API and using that in your tests to signal when a long-running operation happens. For advanced users who want to tweak some aspects of test code generation, there's a settings page for the Test Recorder in Android Studio preferences. Here you can change the maximum view hierarchy depth that will be used for view identification, and whether app data should be cleared every time you record a new test. The Espresso Test Recorder is a great way to start adding tests to your app, whether you want to learn Espresso by examining the generated code, or simply to quickly build a test suite which you can customize later. We look forward to your feedback on our social channels, and happy testing! Hi, Timothy Jordan, your friendly developer advocate, here at Google I/O 2017. I'm in the machine learning area, and we're going to check out a lot of really cool stuff. A reminder before I get to it: if you'd like to see any of this footage, or any of the other sandbox tours that I've been doing, go to g.co/io/guide. First off, I'd like to check out the Cloud TPU in person. So much heat sink! It looks like a skyscraper; it's like a miniature city down there. This is seriously one of the best Tron moments I've had in years; this is among the most amazing things I've ever seen in my life. Okay, Magnus, everybody! This is Magnus; you may know him from, well, being Magnus, and all sorts of things. Hi Magnus. Hi, how are you? I'm doing super well. So you've been
hanging out with all this really cool machine learning stuff all day long. Yeah, that's right, we've been here all day doing different machine learning stuff. I'd like to check out some of it and just play with it. Let's try AI Duet first. So what is this? You can actually play on these keys, and then the network generates something back. Okay, so it's kind of like the machine playing music with you? Yeah, it's kind of constructing music based on what you play; it's trying to create something similar, but still different. Okay, I want to try it out. A lot of fun! It feels a little like that scene in Close Encounters of the Third Kind. Yeah, absolutely. All right, so there's one other thing that I would like to check out while we're here, and that's the candy sorter. Yes, it's an amazing thing; it consists of so many different machine learning technologies in one single demo. Shall we check it out? Yeah, for sure. As promised, we're going to check out the candy sorter. What is the candy sorter, you may ask? Well, Dave's here to tell us what that is. Okay, great. So what we're going to show here is how you can infuse machine learning into your apps without actually being a data scientist, and we're going to do it through candy and through labels. What we've done is we've trained a model up in the cloud: we've taken an existing model and we've trained it for candy. So we've put labels next to candies, and we've got a little camera here that's taken pictures of the candy, with the labels that have been associated with the candy. We sent it up to the Google Cloud Machine Learning Engine and we've trained the model; we're using the Inception v3 model and transfer learning. And what we're going to show now, now that the model has been trained, is actually the serving of the model. So I'm going to take candy here and just throw it out in front of the camera, and this is just some random candy that we have trained this model for. I like gum, so I want to make sure there's plenty of gum out there, and you'll want to make sure there's a little bit of space in between. Again, this is a well-trained model that has been modified with these images and these labels, and you can even write your own labels, which is awesome. So now it's trained. Now the fun part of this is actually the serving, the prediction. So you're going to hit this little mic button here, speak into this mic, and ask for some kind of candy, and it's going to make a call to our APIs. It's going to understand the text using the Speech API, so speech-to-text, then it's going to understand the intent of what you're asking for using our natural language processing API, and then it's going to make a prediction based upon the model that's sitting out in the cloud. And the best part is, hopefully, cross your fingers, it's going to make the prediction, pick the candy, and then give it to you. Awesome, okay. All right, so click on that and speak into the mic. May I have some gum? So it understood what you said, "may I have some gum." Now it's going through natural language processing; it's identifying the noun there, gum. Now it will match based upon the model that's been trained. Come on, come on, come on... and it is picking chewy gum, and there, the camera identified extra long lasting watermelon gum, and there's your gum. That's great: machine learning in an app. So I get to keep this, right? Yeah, actually I've got like seven boxes back there, so please take a big one. Dave, thanks so much. All right, thank you. That's pretty great. All right, so that is the machine learning area. I hope that you've enjoyed these experiences as much as I have. It's really cool to see machine learning up close and personal, and see how it can be used in real life through these
demos, and I hope it inspires you to do some cool stuff with TensorFlow and with Cloud. See you later! Good morning everyone. Good morning! My name is Tal Oppenheimer, I'm a product manager on the Chrome team, and I'm here with Maria and Robert to talk about some of the things we've learned from building products that can work for billions of people. Just before we jump in, as a quick overview, there are three main things that we're going to be covering here. The first is talking about where we've found that our users actually are, and where we expect them to be in the coming years. Second, we'll be talking about some of the things we've learned from trying to make sure that all of our products work for everyone. And third, we'll be sharing some specific tips for how you can build products that work for everyone as well. So to jump right in: when we're talking about billions of people, we really need to think about where people are accessing the internet, and the reality is that internet growth is happening everywhere, so we really need to take a global look when we're thinking about this. This is a chart from 2014 of the overall number of internet users per country, where darker means more internet users, and what you'll see is that in 2014 China has a tremendous number of internet users, with about 675 million, and the United States also has quite a large number, at slightly under half that. But if we fast forward a little bit, we start to see a different story. By 2015 you see a few other areas light up, most notably India, with about 354 million internet users, and by 2016 we still see a similar picture, but a few other areas light up as well, with areas like Brazil having 139 million internet users and Nigeria at about 87 million. But what's really interesting isn't just looking at the absolute number of internet users, but looking at the growth that we've been seeing year over year in each of these countries. Looking at the same data a little bit differently, this is a chart of the number of new internet users that came online every year, and in 2014 you already see a bit of a different picture: you'll see that while we saw a lot of total internet users in the United States, the number of new internet users was quite minimal, because the United States is, honestly, pretty saturated; in comparison, both China and India had a lot of new internet users. But if we fast forward a little bit more, we see this start to shift as well: in 2015 India is clearly dominant in new internet users, and this gets even more pronounced in 2016, last year. And if we dig into this and look at India specifically, last year we saw over 100 million new internet users come online in India alone. What's really remarkable here is that this is the second year in a row that we've seen over 100 million internet users coming online just in India. And this isn't just the case over the past couple of years: we still see that 65 percent of India's population is not yet online. So, projecting forward, various estimates suggest that by 2020 there will be about one billion unique mobile subscribers in India alone. And this is just India; remember, we saw other areas on this map also having quite a few new internet users. We've seen this shift in a lot of our Google products as well. To give one example, on Chrome on Android, we now have hundreds of millions of users from all of these next-billion-user countries, and this is already the case. And it's not just limited to the browser; we're also seeing this in how people engage with certain products. For example, with Google Search, we see that Brazil, India, and Indonesia are all in our top 10 based on search query volume. So the reality is that it's not just about where our users are going to be; it's where our users are today. As a result, we've been working really hard to
improve our own products and to share some of the things we've learned. I'm going to invite Maria on stage. By 2020, right? It just blows my mind, all of these people coming online. Rahul, yesterday or the day before, in his mobile keynote, mentioned that seven people are coming online every minute in Brazil. So these people are going to be your future users, and a lot of them are already our users, and we've been doing a bunch of research, and also iterating on products, to make sure that we best suit their needs. I want to share with you a little of the stuff that we've learned, so that you can apply it when you are building new stuff on the web, or elsewhere, for your own products. Now, Brazil, India, and Indonesia might seem like they are very different: different continents, different types of population, languages, cultures, GDP. But actually what we found is that they share a few very common things, and I want to share three of those with you first, to put you in the right frame of mind when you're thinking about building a product on the web and what kinds of things you should be considering. The first thing that is shared across all of these countries is that there is a really wide range of devices. Now, all of us have a phone in our pocket, and you probably bought your phone, what, maybe a year ago, two years ago? But in India and Indonesia and Brazil, people use a lot of secondhand phones; there is a super wide range of screens, storage, and memory. So what might work on your phone, if you're building and testing on that kind of device, might not work for these users. In fact, what we know is that about 33 percent of users in India run out of storage on their phone every day. Can you imagine this? You look at your phone in the evening and you're like, oh, I can't take any more pictures, I have to delete something now, and that's what they do. Actually, 83 percent of people delete stuff every week to make space for new pictures or videos or other things. So what does this mean for you? If you're building an app, or you're building something else, and it's large, then on a small phone with limited storage you're going to have a hard time competing with everything else, and you're always going to be in a precarious position when someone is wondering, okay, I want to have some more space for videos, what should I delete: should it be this two-megabyte app, or should it be this 45-megabyte app? The other thing that we found, which is shared, is the very poor quality of connections. A lot of us here are on Wi-Fi; we probably don't even notice, we're completely online all the time, 4G networks, really fast connections. But half of the users in India currently are still on 2G connections, and two-thirds of the users in Nigeria are on 2G connections, so a lot of them actually are not even connected most of the time. And if you're building a product which relies on updates over the air, or you're relying on people having to be connected in order for your product to function, that is not going to work out for you; we learned this the hard way. So you need to make sure that your products are either fully functioning both online and offline, so that offline is a state of its own, not an exception, or you need to make sure that at least they degrade gracefully, so people can still use them in some kind of decent way. Finally, data is really, really expensive. A lot of people in those countries use prepaid data, so they will buy it in small packets of 10 megabytes or 100 megabytes, and they're extremely conscious of how they're using this data: they keep track of which app is using up how much, they will turn different things on and off, and they sometimes even go into airplane mode in order to preserve the data. So what we perceive as a free app that's only 40 megabytes to download is actually a big decision for them, because they have to spend a bunch of their very, very precious data in order to download it. So if you're requiring people to install stuff, and it's large in
size, think again. So we took these three trends, and through research we've extracted five principles that have made our products successful, and I want to share with you an example of the changes we've made for each of those five principles; I'm hoping that this will help you when you're building your products on the web. The first really big principle that we learned is that before people even start interacting with your product, you need to remove the barriers for them to do so. Remember the things that I mentioned before, the high cost of data, poor connectivity, and low-quality, low-end devices: those stand between you and these users even before they start using your product. So you need to ask: can I make a solution that will not require an install, can I have something which is very small and which works offline? Here's an example of how Twitter did this. I don't know how many of you went to the session that they did on Monday, but they very recently released a progressive web app, which is a lite app, and it solves for all of these challenges to remove these barriers. It doesn't require any install, it's less than one megabyte in size, and it allows users to read stuff offline, even when they're disconnected, as long as that stuff has already been preloaded. It also comes with a data saver feature, so that people can pick exactly which images and videos they want to download: they don't need to see their whole stream, they can just download the things that they like. They launched it only a few months ago, but it's already enjoying a lot of success. Here are just some of the metrics that they shared with us: it gets over one million launches per day, and since they launched it, they're seeing a 20 percent decrease in bounce rate. So by removing these barriers, they have delighted their users, and they're doing pretty great. And they actually built it specifically for these emerging-market users, where the connectivity is flaky, the cost of data is high, and the devices are kind of poor. The second thing that we've learned is that in order to succeed in these markets you need to optimize for speed: your load performance needs to be amazing. A lot of people are like, well, I don't need to do all this special work, but believe me, you're not special-casing for a specific market when you're making your stuff fast. I have yet to hear anybody, anywhere on the planet, complaining that something is loading too fast. So if you make your stuff work for users in Indonesia on a 2G connection, your users in the US will also be super delighted. Now, we've been working to optimize our core products for speed as well, and one thing that we noticed is that for users in India on 2G connections it could sometimes take up to eight and a half seconds for the search results page to load. So imagine that you're searching and you're counting, one, two, three, four, five, and still no results. So we did some work to optimize our page, and we didn't stop there: we also looked at what we can do after the click, because we noticed that in some cases, again on 2G connections, it could take up to 25 seconds for a page to load in India. What we launched is something called transcoded pages, and it leads to great results. This is an example from one of the biggest newspapers in Indonesia, Kompas, and as a result of this transcoding and page optimization, we are now consuming 90 percent less data, and these pages load five times faster. And how do you think users reacted when they saw this? Well, there's actually been a 50 percent increase in traffic to these transcoded pages, because users perceive that everything is loading much, much faster, and then they just browse more. So it's a win-win for both sides. The next big thing that we learned is that offline is a state in itself: we need to make sure that our stuff functions
even when people are going in and out of connectivity. Not everybody has Wi-Fi; in fact, a lot of people specifically go to stores or train stations in order to get access to Wi-Fi, they download a lot of stuff there, and then they spend most of the rest of the day offline. One thing that we are experimenting with, in order to give people a smoother experience when they're going online and offline throughout the day, is in Chrome. This is the example on your left: if you're trying to fetch a page in Chrome and we notice that you're offline, first we'll show you the adorable dinosaur, and second, we show you this download button. If you click on this button, then when we detect that you're back online later, we'll download this page for you, and then you can see it. So we are experimenting in order to smooth the experience for people. Similarly, in Search, if someone does a query and we notice that they're offline, we'll offer to run the search later and then send them a notification when their results are ready. The next trend, and a really important one, is that in a lot of these countries people use more than one language to accomplish their daily tasks: they might do their homework in English, they might talk to their grandma in one dialect, and they might use a completely different language in school. So, depending on that and on what product you're building, you need to be ready to handle these challenges. For example, in Search, what we've noticed is that sometimes people will search in English, but the words that they're typing are in Hindi. So we launched something where, if you're searching in Hindi and in English, we'll show you the results side by side, and you can click on the tab at the top and just switch. People really like this: we saw a 50 percent increase in Hindi searches on mobile after we launched it. Now, the last trend, and a very important one, is that a lot of these users are coming online in a completely different context, so you shouldn't expect that the product that you have right now, with its UI and flow and graphics, will fit their needs, even though it has a very specific function. A lot of them are coming online for the first time on a phone; they don't have any experience with desktop, a lot of them have never used email, and they have different cultural expectations or color preferences. So we did a lot of research on our products, in order to make sure that what we're launching for them is actually suitable for their needs, and you need to keep this in mind as well. Here is an example: this is the Chrome default page, and if you look at the one on the left, it's the standard one. Google is pretty much known for its super minimal interface: we have the search box, and that's it. But what we found when we did research with users in India is that this empty page reminded them of a big, vast, empty lobby, where you have to walk all the way over and then ask a receptionist something. So it felt very cold and not welcoming, and what they wanted was to be surrounded by their favorite things. On the right side you see a version of the Chrome homepage that we're experimenting with, where users can see the favorite sites that they visit most often, and also articles related to the searches they've been doing. This feels a lot more welcoming, and we continue to experiment with it. So, to sum up, here are five important things to remember if you're building stuff for these markets, which we learned the hard way and which you, by waking up early and coming to this session, have managed to learn the easy way. First, before you even start thinking about how people are interacting with your product, make sure to remove the barriers for them to adopt the product. Second, make sure to optimize performance as much as possible, because these
connections are super slow and data is very scarce. Third: make sure that your product works offline almost as well as it works online. Fourth: don't assume people speak only one language all the time, and make sure that whatever user flows you have in your product address their multilingual needs. And finally, make sure to guide users, and that the experience in your product matches what they expect in terms of culture and habits. You've seen examples of how we've done this in our own products on the web, and hopefully by now you're inspired and want to know how you can do this too. Well, you're very lucky, because we have Robert here, who's going to give you specific examples, for each of these five things, of how to build to succeed in Next Billion Users markets. Thank you, Maria. So first and foremost, thank you for showing up on the morning of day three. I love you; I'm so happy you're here. I'm Robert, and I look at how the world is evolving and at what we learn, and I'm going to talk about our best advice for catering to these challenges, our best practices, and how you can carry all of this into the things you're building. The first and really most important thing, and Maria was talking about this as well: this is not only advice for a certain market or a certain country or a certain person. All the advice we'll give you here is applicable everywhere; it's going to help all your users, wherever they are in the world. So let's start with removing barriers. As Tal was saying, by 2020 we expect about one billion unique mobile subscribers in India alone. That's a lot of people. So how do we cater to that, and how do we make sure to remove barriers for them? As you saw with Twitter Lite, when you're building something, you really want as little friction as possible: no install being necessary, saving as much data as possible, and working
offline. These are all crucial features, but you also need the best possible reach with what you're building, and the content needs to be kept up to date for your users as well. Meeting these needs of no install and using as little data as possible are really big, hard challenges for native apps. So what's the solution? As I'll show you from our experience, Progressive Web Apps respond to all of these challenges, and to these opportunities, and I'm going to walk you through this with a few examples and a number of different use cases. Historically with native apps, once they've actually been installed, they've had really great UX: home-screen icons, offline support, push notifications, background syncing, access to sensors, and much more. The idea with Progressive Web Apps is to bring the best of native together with the best of the web: what are our learnings, and how can we get it all in one good package? I'm sure you've already heard a lot about Progressive Web Apps, both before Google I/O and here at Google I/O as well, so I'm not going to delve too much into the technical details. I just want to quickly mention a few of the core basic features, like add to home screen, push notifications, and offline support. All of these features aren't something completely new; this is built on top of what the web already has: the web's existing reach, its way of staying up to date, and the capabilities it already has. One quick thing to mention, though, if you haven't already implemented it, is the web app manifest. You add this to your HTML code, and it's really the foundation for features like add to home screen and push notifications. The way to do it is really simple: just reference the manifest file from the HTML code, and then the manifest file describes the app name, the icons, the start_url, and much more.
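As a concrete sketch of the manifest just described, here is roughly what such a file might contain. This is not from the talk: the name, colors, paths, and icon are placeholders. It would be referenced from the page with `<link rel="manifest" href="/manifest.json">`.

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/?utm_source=homescreen",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#1a73e8",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

Note the query parameter on `start_url`: a hypothetical example of the tagging idea discussed next, so launches from the home-screen icon are distinguishable from browser-tab visits.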
An important thing to note here, if you look at the start_url, is to make sure to have a parameter in there, so you know whether your users are coming from the home-screen icon or visiting your Progressive Web App through a browser tab. You can also control other things, like having a splash screen, theming and background color, display type, orientation, and much more. We've had a number of talks here at I/O about Progressive Web Apps, so I really recommend looking at those videos; our developer website also has very detailed information on how to implement all of these things. But when we talk about Progressive Web Apps, we can talk about a lot of technology, and at the end of the day it's about user experiences. This is really the key thing: user experiences need to be reliable, fast, and engaging, whatever you build. For us, these are the key words for Progressive Web Apps, so whatever you do, this should be your new mantra: look over what you're building and ask whether it actually matches this. We also see this kind of mental approach as the key to removing barriers for your users. When we talk about being reliable, it means the app will always load instantly for users; you never show users that they're offline. You use service workers for that: a client-side proxy that puts you as a developer in control of things like caching and network requests. It's also about being fast: you need to load fast, and you need to stay fast as well. You don't want any janky scrolling or slow experiences; it always needs to be very smooth for the user. And it needs to be engaging, and features like add to home screen, immersive full-screen experiences, push notifications, and much more are really good ways to get there. But another part of removing barriers that is sometimes overlooked is storage constraints: app size matters more today than
ever before, and we see a lot of users never installing any new apps, or never installing any apps at all, or removing the apps that use up the most space on their devices. As Maria mentioned, 33 percent of smartphone users in India run out of memory space every day. That's a lot of people. If, for instance, you only have one gigabyte of app storage, you can only install between five and ten native apps. With this in mind, we were really excited to see Twitter's Progressive Web App, which speaks directly to this specific issue. Twitter Lite is under one megabyte; compared to the native apps, that's only one to three percent of their sizes. This is great, a really small file size, and although they launched very recently, they already see a 50 percent increase in page views and a 60 percent increase in pages per session. In India, a company called Ola Cabs had fairly big native apps: 60 megabytes on Android, 100 megabytes on iOS. These aren't super big sizes for native apps, but they're fairly big, right? So they decided to build a Progressive Web App to address this, and the result was really stunning: their Progressive Web App is now only half a megabyte. Compare that to the size of their native apps, and you can still give users the same exact experience. They also worked with Polymer to achieve this, both to get the small file size and the good experience, and to make sure they'd have support for UC Browser, which is one of the most popular browsers in their main market. Polymer is natively supported in Safari, Chrome, and Opera, and with polyfills it works on Firefox, Edge, UC Browser, and IE11 as well. Whichever framework you use (use what works best for you; this is just one example), make sure it also has a small file size. In the case of Polymer 2, which was just released, it's only 10 kilobytes on Chrome, 30 kilobytes on Safari, and
slightly larger on Firefox, Edge, and a couple of the others. In Brazil, we saw a company called Terra that recently built a Progressive Web App as well, and for them it's already a success: users are visiting twice as many pages as before, and spending twice the amount of time in the Progressive Web App compared to their previous mobile experience. The second challenge, and also opportunity, here is optimizing speed. You want to reach your users, but also make sure they don't go elsewhere. Coming back to Ola Cabs: their Progressive Web App has an enviable first load of only 50 kilobytes, and repeat loads can be as small as 10 kilobytes. That means it's 500 times lighter than their iOS app and 300 times lighter than their Android app; completely different worlds, right? This low data consumption also means the first load takes about three seconds, and repeat load times can be down to about 1.8 seconds, which is good on its own, a fairly good loading time. But in this case, these loading times are on 2G or very slow 3G networks, which of course is the defining factor in actually reaching your users, and for them, catering to many, many millions of people in India, this is the key to being able to reach them at all. Another really interesting number we see is that 53 percent of users abandon sites that take longer than three seconds to load. Fifty-three percent of users: that's over half, and three seconds is not a long time. You can work a lot with Progressive Web Apps and achieve really good performance, and generally we've seen that for the second and subsequent loads, Progressive Web Apps and service workers give a really good experience. But what about the first load? What if, no matter how hard you work on it, it's not fast enough for your users? One option we've been looking at here
is called AMP, Accelerated Mobile Pages, and one thing I want to stress here is that it's an open-source initiative, really trying to make the web better through fast, high-performing pages across different devices and distribution platforms. The idea with AMP is that you're guided to only do things that are fast; it's almost impossible to make slow AMP pages. So I'm going to talk briefly about what AMP is, but also how it ties into Progressive Web Apps. AMP consists of a few different things. The first is AMP HTML, which is basically regular HTML with some restrictions for reliable performance (I like these cute little balls on the slide; it's like a morning shot or something). Basically, it's HTML extended with some custom properties you'll need, and some HTML tags are replaced by AMP-specific tags, the so-called AMP HTML components. The second part is the AMP JavaScript library, and the idea is to ensure fast rendering of AMP HTML pages, to manage resource loading, and to give you support for the custom tags I mentioned. It also makes everything that comes from external resources load asynchronously, so nothing in the page can block anything else from rendering. Finally, you have the Google AMP Cache. The idea with the AMP Cache is that you can serve cached AMP pages; it's a proxy-based content delivery network for delivering all valid AMP documents, and it caches them and improves page performance automatically. These three together make up the foundation of AMP. If you want to get started and create your first AMP page, it's really quite easy. It looks like any basic HTML page you've ever seen before, just a few regular tags, but at the top you have the lightning-bolt icon indicating that this is an AMP page, you have some boilerplate CSS needed to ensure consistent rendering across browsers and platforms, and then finally you
include the AMP JavaScript library, and then you're good to go. That's your first AMP page, and it works. Then, if you want to do things like including an image, you have the amp-img component. If you look at it, it kind of looks like the image element you're used to: it has the source attribute, the alternate text, dimensions, and all of that. But the interesting part is that the AMP runtime can choose to delay or prioritize resource loading based on different things, like viewport position, system resources, connection bandwidth, and other factors. Basically, the runtime manages the image resources for you and makes sure it loads the best, most suitable thing at that specific time. One company called Somatos that works with AMP pages had a four-second page load before, and they're down to one second now, again on really slow networks as well. I mentioned I was going to talk about AMP and Progressive Web Apps and how they connect to each other, and we see two different approaches: one is using AMP as an entry point into your Progressive Web App, and the other is using AMP as a data source for your Progressive Web App. Taking a look at the first one, using AMP as an entry point: AMP's selling point is almost instant delivery of pages, whereas Progressive Web Apps enable much more interactivity and engagement-enabling features. So a good approach here can be to have an instant first load with AMP and then upgrade the user to a Progressive Web App; basically, start fast, stay fast. The way to do it within an AMP page is to use a component called amp-install-serviceworker, which basically allows you, from the AMP page, to warm up a Progressive Web App in the background, so that on the user's next click you can instantly load a fully featured Progressive Web App.
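Before getting into the component's attributes, here is a rough sketch of how the pieces described above fit into one AMP page. URLs and filenames are placeholders, and the mandatory AMP boilerplate CSS is abbreviated to one rule here, so copy the official boilerplate from ampproject.org rather than this snippet.

```html
<!doctype html>
<html amp>
  <head>
    <meta charset="utf-8">
    <title>Hello AMP</title>
    <link rel="canonical" href="https://example.com/hello.html">
    <meta name="viewport" content="width=device-width,minimum-scale=1">
    <!-- Mandatory AMP boilerplate CSS (abbreviated; use the official version) -->
    <style amp-boilerplate>body{animation:-amp-start 8s steps(1,end) 0s 1 normal both}</style>
    <!-- The AMP JavaScript library -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
    <!-- The amp-install-serviceworker component discussed in the text -->
    <script async custom-element="amp-install-serviceworker"
            src="https://cdn.ampproject.org/v0/amp-install-serviceworker-0.1.js"></script>
  </head>
  <body>
    <h1>Hello, AMP</h1>
    <!-- amp-img replaces <img>; the runtime schedules its loading -->
    <amp-img src="/photo.jpg" alt="A photo"
             width="600" height="400" layout="responsive"></amp-img>
    <!-- Warm up the PWA's service worker in the background -->
    <amp-install-serviceworker
        src="https://example.com/sw.js"
        data-iframe-src="https://example.com/install-sw.html"
        layout="nodisplay"></amp-install-serviceworker>
  </body>
</html>
```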
Looking at this component, you have the src attribute, which specifies the URL of the service worker you want to register; then you have an optional attribute called data-iframe-src, which can be the URL of an HTML document used to register the service worker; and finally you need the layout attribute, which must have the value nodisplay. Looking at this combination of Progressive Web Apps and AMP, Wego started working with it, and for their AMP pages they had a previous page load time of about 12 seconds, down to 2.75 seconds now; and for the Progressive Web App shell, they had a loading time of almost nine seconds before, and they're down to 1.63 seconds now. And again, these tests were done on a Nexus 5 on a slow 3G connection. Just look at that improvement; coming back to the user experience, it makes such a massive difference for them. But it's not only fast; this leads directly to actual results. With the new implementation, they now see 74 percent of all visits turning into conversions, and all in all, conversions on the site have gone up 95 percent. Looking at the other option, having AMP as a data source for your Progressive Web App: maybe you started by building something with AMP, for instance, but you haven't built a PWA yet. The idea here is that you can still reuse your AMP content within the Progressive Web App. A common scenario could be that you're building your Progressive Web App as a single-page application that connects to a JSON API through AJAX (and I know it kind of looks like I'm trying to set a world record in acronyms here; I'm sorry about that, it's a lot of fancy words). The thing is, the JSON API will return a number of data sets to drive the navigation and then the actual content of your site. The way you do it is that you start by including Shadow AMP in your Progressive Web App: you load the AMP library in the top-level page, but it won't actually control the top-level content; it will only amplify the portions of the
page you need it to. Then, secondly, as you would normally do, you handle navigation in your Progressive Web App, but the final twist is that you then use the Shadow AMP API to render AMP pages inline. It can look something like this: you have a Progressive Web App shell around everything, but when you start tapping, you get the content of the AMP pages instantly within the Progressive Web App. It's been a really good combination in that way. Another thing I want to mention is the Network Information API. We've had this for a while, but people have been asking for detailed information about bandwidth, to convey the HTTP client's network connection speed. Network speed here is provided as an estimate of the current transport round-trip time and the network bandwidth, and it also takes recent variation into account; as you might know, 4G doesn't always mean the 4G speed you would expect. Still, though, if you're building things and doing a really good job, but it's not fast enough for users at the end of the day, how do you know when to deliver even lighter experiences? One really good option is to check for the Save-Data request header. This is available in Chrome, Opera, and Yandex browsers, and the idea is to deliver fast, light applications to users who have opted in to the data-savings mode in their browser. For instance, in Chrome, if you turn Data Saver on, it will send this header with on as the value. It basically looks something like this, and it's easy to check the header and know what the users want and need. Also, if you're using something like a service worker, you can inspect the request headers and apply the relevant logic to optimize the experience: you can, for instance, return an alternate response, like different markup, smaller image files, smaller video sizes, and much more. And within the Chrome DevTools, if you start inspecting headers, you'll find Save-Data really quickly, and you can adapt your code.
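The two checks just described, the Save-Data header and the Network Information API's effective connection type, can be sketched roughly like this. This is not code from the talk; the function names and the tier mapping are illustrative, and the real browser objects (`event.request.headers`, `navigator.connection`) are only referenced in comments.

```javascript
// Check a Headers-like object (anything with .get()) for "Save-Data: on".
function wantsLightExperience(headers) {
  const value = headers.get('Save-Data');
  return typeof value === 'string' && value.toLowerCase() === 'on';
}

// Map the Network Information API's effectiveType to an asset tier.
// (In a page this would come from navigator.connection.effectiveType.)
function assetTierFor(effectiveType) {
  switch (effectiveType) {
    case 'slow-2g':
    case '2g':
      return 'minimal'; // tiny images, no video
    case '3g':
      return 'light';   // compressed images
    default:
      return 'full';    // '4g' or unknown: full experience
  }
}

// In a service worker you might combine them along these lines:
// self.addEventListener('fetch', event => {
//   if (wantsLightExperience(event.request.headers)) {
//     event.respondWith(fetch(toLightweightUrl(event.request.url)));
//   }
// });
```

`toLightweightUrl` above is a hypothetical helper standing in for whatever "alternate response" logic (smaller images, different markup) your app uses.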
The third part here is about addressing intermittent connectivity. Maria showed us before, in the what-we-learned section, that around 50 percent of users in India are on 2G, so slow or intermittent network conditions are really big challenges both for reaching users and for giving them a good experience. Here we really see the service worker as the key technology to address that. So what are service workers, and how do they work? Basically, a service worker is a programmable proxy sitting between the network and the browser, but it's more than that: when a page registers a service worker, it adds a set of event handlers to respond to things like network requests, push messages, updates to the service worker, and much more. And because it's event-based, it doesn't need to be resident in memory unless it's handling one of those specific events. As a side note, there are two interesting options for poor network conditions, and also for devices with slow disk access. The first is using a timeout for slow networks: if you have a slow or intermittent network connection, you can create a timer in the service worker where you give the network a chance to deliver a resource within, say, one second, but if it doesn't, you fall back to the local resource. A small note here: don't set a timer that's too long, though, because the service worker might get killed or replaced by a new service worker. The other really fun thing is race conditions, because on some devices it might actually be faster to go to the network and get the content than to go to disk, which is not what you would expect, but racing the two is a really interesting way of testing that. And of course, both with the timer and with racing the disk, it's always important to also get the data from the network so you have it locally, just in case. Maria also mentioned Nigeria, where we see two-thirds of users being on 2G.
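The timeout-and-fallback pattern just described might be sketched like this. This is not the speaker's code: `networkFetch` and `cacheLookup` are stand-ins for `fetch()` and `caches.match()` so the pattern is visible (and runnable) outside a browser.

```javascript
// Give the network a fixed window; after timeoutMs, answer from the
// cache instead. Whichever settles first wins the race.
function fetchWithTimeout(networkFetch, cacheLookup, timeoutMs) {
  const timedFallback = new Promise(resolve => {
    setTimeout(() => resolve(cacheLookup()), timeoutMs);
  });
  return Promise.race([
    // If the network errors out, fall back to the cache immediately.
    networkFetch().catch(() => cacheLookup()),
    timedFallback,
  ]);
}
```

In a real service worker you might wire it into the fetch event roughly as `event.respondWith(fetchWithTimeout(() => fetch(event.request), () => caches.match(event.request), 1000))`, keeping the timer short for the reason mentioned above, and still refreshing the cache from the network when it does respond.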
One company there, called Konga, added a service worker to streamline their site and help consumers reach the parts of the site they wanted to browse. You can go through categories, review previous searches, and even check out, all while being offline. And as you can see, the results used much less data than native: 92 percent less data for the first load and 82 percent less data to reach the first transaction. But it wasn't just lighter than native; it was much lighter than their previous mobile website as well: 60 percent less data for the first load and 84 percent less data to reach the first transaction. The fourth part: Maria was talking about how over 20 percent of users, in the countries where the next billion users are coming online, are searching in two or more languages. We have something called Search Console, a set of tools from Google that shows how your site is doing in Google Search, and within it there's something called Search Analytics, which looks like this: it shows you queries, positions, results, clicks, click-through rate, and much more, and you can filter the information in many ways, for instance applying the country and device filters to see which countries your mobile users are coming from. In this example, the majority of the site's users are coming from China, Indonesia, and Malaysia, so it's really worth investigating whether the content is available in a language they understand, and whether they can complete the key user journeys. You can also track performance for specific pages, specific queries, and much more. And finally, guiding your users: it's really about making sure to help users when they come to your website for the first time. We've seen really good numbers from add to home screen here, in this case with OLX: users browse around, looking to see what it's like, and then you get to a certain point where you get to add the
home-screen prompt. They choose to add it to the home screen, and then they get the icon on the home screen, so it's really easy for them to get back. One really interesting number to share with you, which I actually don't think has been shared anywhere else at I/O so far, is that we see users five to six times more likely to use add to home screen than to install the equivalent native app. That's a pretty big difference. And in India we saw Flipkart, with 63 percent of their users on 2G, where again being fast was essential, and they used service workers and all of that to achieve it. But one really interesting part is that they worked hard on when to show the add-to-home-screen prompt, and now they see really great results: 60 percent of their users now come from the home-screen icon, which is fantastic. And it's not only a lot of visitors; it's also high-quality visitors, because people coming from the home-screen icon convert 70 percent higher than the others. So, summing it all up, you've seen how to apply the five principles we learned about: removing barriers by using Progressive Web Apps and the web; optimizing speed by using AMP and Progressive Web Apps together, along with the Network Information API and the Save-Data header; addressing intermittent connectivity, where service workers are really your best friend; making sure to have multilingual support and to understand your users and which languages they're actually using; and then adding features like add to home screen, which has been really successful, to guide users through the journey. With all of this, we hope these are good advice, good numbers, and good learnings for you to go out and build for your next billion users. And if this sounds great, which it does, right, you can go to this website and get a lot of advice and guidance on how to get started. Also, Tal, Maria, and I will be just out around the corner, by the big round fluffy thing in the mobile web sandbox.
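As a footnote on the Flipkart point about choosing when to show the prompt: in Chrome, the page can intercept the `beforeinstallprompt` event and replay it later. The small controller below is an illustrative sketch, not Flipkart's code, and the idea of prompting "at a high-engagement moment" is the talk's advice, not a built-in rule.

```javascript
// Capture the browser's install prompt and defer it until a moment of
// your choosing (e.g. after a completed purchase).
class InstallPromptController {
  constructor() {
    this.deferredEvent = null;
  }
  // Wire up with:
  // window.addEventListener('beforeinstallprompt', e => controller.capture(e));
  capture(event) {
    event.preventDefault();      // stop the default prompt from showing now
    this.deferredEvent = event;  // keep it for a better moment
  }
  // Call at a high-engagement moment; returns whether a prompt was shown.
  promptIfCaptured() {
    if (!this.deferredEvent) return false;
    this.deferredEvent.prompt(); // show the add-to-home-screen prompt now
    this.deferredEvent = null;   // the captured event can only be used once
    return true;
  }
}
```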
So please come over, ask us questions, and, you know, tell us what you're building. On behalf of all of us, thank you, and go out and build great things. I was in HR, working as a recruiter, and I was completely unaware of what I was supposed to do. I used to feel that I just didn't want to be a recruiter; I wanted to be on that other side, talking technology, talking about gadgets and how technology is changing our day-to-day lives. One of our relatives challenged me, saying that you are a girl, and engineering requires a lot of designs and drawings. I wanted to prove him wrong, and that was when I heard about Udacity. I started taking some online courses in Java and Android, very basic things. It is a little difficult, but it's just that I don't want to feel that sense of regret; I just don't want that. So Udacity has actually been a savior in my life. The quality of the projects is very good; after completing them, you feel like you have conquered something. Actually, I finished my degree yesterday, so today is just going to be the celebration. I couldn't be happier to be here to see the launch of the Android skilling program; there are going to be so many great new Android developers here in India. Good morning, everyone. Thank you so much for coming. Some of you know I'm from Delhi; it's always fun coming back and meeting all of you. We can scale up developers, and scale up mobile developer training, to help make India a global leader in mobile app development. Having the universities teamed up with us in the skilling program is going to be a huge opportunity and make a huge difference. Finally, we've launched it; it's been a year since we first introduced this program. Two million developers: I think that's a really achievable goal, and I think it will do a lot to improve the environment in the country in terms of hands-on programming, so I think it's great. It's a massive number, and the possibilities are
immense. India will be the largest developer base globally, and this is just to get everyone to start thinking about Android and developing for Android. We're at the cusp of a revolution. Let's do something big: more games, more users, more success. Yes, everything, more. Developing a successful app isn't easy. To reach a broad audience, you'll need to consider your iOS, Android, and mobile web users, and to build for these platforms you'll need a back-end server to store data and support the apps. Of course, you want to get your users logged in, hopefully lots of users, which means your back end will have to scale. Then, after you've solved your scaling problems, you have to find more ways to spread the word and get new users. But have you found a way to measure all this activity? And, oh no, your app is crashing and causing servers to melt down, and you haven't even made a dime yet. Don't you wish this could be easier? This is why we built Firebase. It has all the tools you need to build a successful app: it helps you reach new users, keep them engaged, and scale up to meet that demand, and it helps you get paid, right from the beginning. With Firebase, you'll have Test Lab and Crash Reporting to prevent and diagnose errors in your app. Your back-end infrastructure problems are solved with our Realtime Database, file storage, and hosting solutions. Acquiring new users is easy with Invites, AdWords, and Dynamic Links, and using the Authentication component you can get those users logged in with minimal friction. Once installed, you can keep your users engaged with notifications, Cloud Messaging, and App Indexing. Then, with Remote Config, you have the freedom to experiment with new features and optimize the user experience in real time. And of course, you can earn money with the same AdMob component that's been monetizing great apps for years. Last, but certainly not least, our all-new Analytics component, designed uniquely for Firebase, brings insight into how well these components are working for you and your users. With Firebase
Analytics, you can measure and optimize your advertising campaigns, discover who your most valuable users are, and understand exactly how they're using your app. All of these components work great on their own and provide a solid infrastructure to build out your app, but they work even better when combined in creative ways. So let Firebase handle the details of your app's back-end infrastructure, user engagement, and monetization, while you spend more time building the apps your users will love. To get started right now with Firebase on Android, iOS, or the web, follow these links for more information. Then, to manage and monitor your apps connected to Firebase, there's a web console to view crashes, set up experiments, track analytics, and a whole lot more. And to learn more about Firebase and all of its components, you can read the documentation right here. We can't wait to see what you build. Thank you for joining us here today. India has come a long way; as I just mentioned, today India is the second-largest country in the world in terms of number of developers, and soon it's going to be number one. What we want to invest in is training the faculty with our content. The potential is so great, and what Google is doing to help catalyze that innovation makes this a really exciting time for these campuses. We are really trying to provide the best possible experience to teachers in these faculty hubs, because the first step to training two million developers is to train the teachers who will teach those two million. Industry, as of now, demands a lot of updated curriculum. On developing two million Android developers: working in a technical university, we can contribute hugely to developing those million app developers. So we're excited that all the raw materials are there to create an innovation revolution in India. I really think the students are going to make some great things, and I can't wait to see what comes out. There's a lot of potential in India, and we need to take it forward
with Google; we can provide rich opportunities to all. That is the essence of the Google program, which I have seen. This is a good move, and this program will definitely be useful to the students, because app development is going to rule the world for the next few years, really. Let's take a quick look at Google Cloud. There are two things I want to check out, and Jenny is here to tell us all about them. Hi, Jen. Hi, Timothy. So, what's this? Because I really want to play with it. This is Kubernetes Whack-a-Pod, a demo in which you battle Kubernetes: you try to take down a service by being the chaos monkey, while Kubernetes, which is a container orchestration framework and an open-source project, tries to keep your web service up. That sounds like fun; I'm totally going to try to take it down. Give it a spin: I'll hit the button, and you'll battle for 30 seconds. Ready? Here you go. You're doing great; Kubernetes is bringing those containers right back up. The yellow mole is special: it'll take out the entire web app in one go. Oh, and it's down, but Kubernetes is already bringing those containers right back up. You're doing great. Three, two, one, and you're done. So, during your 30-second battle, you took out 58 of our pods and caused nine seconds of downtime, but it seems Kubernetes is victorious, keeping about 70 percent uptime. Still, a very good attempt. Awesome, that was a lot of fun. All right, I'd also like to check out Spanner, because I think Spanner is one of the coolest things ever created. Spanner is pretty amazing. I want to check out this visualization; maybe you can give us a quick rundown of what Spanner is and then tell us what's happening. Sure. Spanner is a managed SQL database. What makes it special is that it's massively horizontally scalable and can handle huge amounts of traffic, and you still have the wonderful features you'd expect from a SQL database, like queries and transactions and
other neat, shiny things like live migrations. And it's fully managed, which is pretty cool; who likes managing their own database? This is a view of an example schema on Spanner, the same one being used to power the demo on the screen next to us. You know, I think one of the things I've always really liked about Spanner is the fact that it stays in sync even though things are happening at different times. Yeah, that's a great point; it's one of the special features of Spanner. In this particular demo, we have Spanner running in three different data centers on three different continents, all over the world, and yet, due to some magic involving atomic clocks and very precise timekeeping, we can still serialize all those transactions and maintain that consistency. That's cool. So what is actually happening on this screen? On this screen, we're simulating a pretty amazing ticket-sales event where, apparently, we're selling 137,000 tickets every minute, and so far this demo, which I believe we started this morning, has sold 238 million tickets during its run. You can see they're distributed across the world, and despite all that really high throughput, we're still maintaining about a second of latency on those transactions. That's amazing. Okay, is there anything else you want to tell us about what's happening here at I/O? It's a great show, but other than that, I think we've found the fun spots. All right, thanks, Jen. Thanks so much, Timothy. Okay, I'll be honest: it's not a party if there's not a photo booth. So I found the photo booth, and the guy who built it, Alex, is going to tell us all about it. Hey guys, so this is a talking photo booth. For an operating system it uses Android Things, you talk to it using the Google Assistant, and it uploads photos to the cloud using Firebase. Awesome, should we try it out? Yeah, let's do
Let's walk over there. What about this speaker? Okay. Okay Google, let me talk to the I/O Photo Booth. Sure, here's I/O Photo Booth. Hi, I'm the I/O Photo Booth. Okay, taking a picture: three, two (don't look so serious, you're in a talking photo booth), one. Do you like it? Yes. I can add a style to your photo. Yes. Just a minute, downloading styles. That's 56k, I can tell. Here you go, I'm uploading your photo now. Can I share this on my Twitter too? Got it, uploading your photo, printing a link on the photo now. Don't forget to share your photo with hashtag IO17, and have a great time at I/O. That was awesome. All right, thanks Alex.

Here with Doug from Firebase, and there's a lot going on in the Firebase area: a lot that developers have already been able to play with and know pretty well, but also some new stuff, and I've asked Doug to show us some of that new stuff. Yeah, so announced today, actually yesterday at the keynote, we found out that Firebase now has Firebase Performance Monitoring, which is a set of tools you can use to measure and monitor the performance of your app. So let's take a look at that. All right, this is the dashboard. What we have here is an overview of the performance of your app. We have traces by frequency; you can think of a trace like a window of time that's of interest in your app. We can also look at the latency of your network connections all over the world, and we can see the success rate of your app over time. It looks like things are getting a little bit worse for this app as time goes on, so that might be something to pay attention to. Now, if we drill into traces a little bit, we can see that there are a collection of automatic traces, which we capture for free, automatically, and here are some custom traces that have been defined by the app. So you have the flexibility to write code to get some performance data, or you can just let it do things automatically. Now, if we go to network requests, we can get a breakdown of all the HTTP transactions that are going on in here.
You'll notice that there are some wildcards: we're actually bundling up different kinds of requests that look the same but have different parameters, and you can see, for this set of APIs, the average response time, the success rate, and the number of requests that have gone through the system. You can click through to this and see how it's going. So, you know, you're probably using a lot of different SDKs; you probably want to know how they perform, and you probably want to know how the app is performing from your user's point of view, and this is a great tool to get that done. Awesome, Doug, that's really cool, and it's really neat to see all the data in this way. Yeah, it's definitely very helpful, and it's also very interactive: it's not just a dashboard that you look at, it's a dashboard that you interact with. One of the really cool parts about being at I/O is that if I were just to set this up on my app, I might not have all this data right away, but we're showing a demo here where we've collected a bunch of that data, so you can see what it's like once you've invested for a while. Yeah, definitely; there's nothing worse than opening up a dashboard and having nothing staring you in the face, right? It's nice to be able to see what you're getting into before you actually get into it. Absolutely.

Good morning, everybody. My name is John Pallett, and I'm a product manager with the Chrome media team, and I love working with web APIs. We are here to talk with you about the future of video and audio on the web. We're going to talk a little bit about how we got where we are today, talk a little bit about what you can do today and show you a couple of demos, and then take a look at what's coming in the future. That's an awful lot to cover in 40 minutes, so we're going to get right into it. To give a bit of context, let's start back in 2000, when I bought my first HD television.
The Super Bowl had just been broadcast in HD for the very first time, and I was pretty excited. There weren't any DVRs for HD or anything like that, but it seemed like a pretty good thing: it looked great and it played great. But over the next few years there wasn't a ton of stuff that happened. DVRs came onto the scene; Skype came out, and we were able to do postage-stamp-sized video conferencing on the desktop, which was pretty cool; YouTube came out in 2005; Amazon and Netflix started streaming around 2007. But for most of us, this really meant that we were watching video on the desktop or sitting on the couch watching television at home. So at the end of the day, over 10 years there wasn't a huge amount of change in terms of mainstream user behavior. Well, now let's look at what happened with my daughter. 2010 is when she really started consuming media, and right around that time is when tablets came out, and all of a sudden you could take your video and watch it anywhere. This is also about when video conferencing really did go mainstream, with apps like FaceTime. In 2011, Twitch video game streaming comes online, and all of a sudden people are sending video all over the place. 2013 is when Snapchat came out, 2014 Musical.ly, and suddenly millions of teenagers are creating little music videos and sending them to all their friends. And then in the last couple of years: 360 video, augmented reality, virtual reality. You can see that in the last few years the pace of innovation has really accelerated for video. And really, when you look at the state of the union, as mentioned, 70 percent of the bytes being shipped over the internet today are video, and that's projected by Cisco to go to 80 percent in 2020. This is a really big deal. This is the decade of video; things are happening fast, and it's a really great time to be thinking about video. So if you're wondering what session to be in, this is in fact the right one for this time slot. It's a really, really pivotal point.
But here's the problem: when we look back over that time, where was the mobile web? Some of the apps I mentioned actually have URLs as names, but when you go there, it tells you to install an app. The mobile web didn't play a big part in the innovation that's come so far, which is a little weird, because it's frictionless. You want people to get in, you want them to get out, you want them to be able to see things quickly; these are all things the web is really, really good at. You'd be forgiven if you thought, well, yeah, that's because the mobile web is not very good at video. And it's true: over the past few years, as recently as a year ago, there have been a number of problems with the way the mobile web handled video. Let's look at the first one: buffering. A lot of video publishers were so focused on Flash that HTML5 was a secondary thought, and the mobile web wasn't even a thought. In a lot of cases what you'd have is a giant video file sitting on a server, and when you went to access it over the mobile web, it would take a long time to download and you'd be stuck waiting for it to play. Or they would think about the mobile web and create a very small video file, and now you would download it and it would look awful. So, great, no buffering, but the video quality is just terrible. The publishers who didn't think about the mobile web from a video perspective often would do the layout for the desktop site and not check the mobile sites. Now, to be fair, a lot of the APIs to do this correctly didn't even exist a year ago, so there has been a big change in the web over the last 12 months in terms of the ability to do video, but this is what you were looking at a year or two ago. And of course, if you went offline, your media experience was pretty terrible. I like the dinosaur game, but it's not going to keep me going on an airplane for four hours. So now let's take a look at where mobile web media is today. To show you this, we have a demonstration application written by Paul Lewis and the developer relations team.
Let's go to the application, please, and I'll show you what you can do today on the mobile web. This is a progressive web app called Biograph. When I went to the website for the first time, it asked me if I wanted to install it on my home screen, which I did, and so now when I launch it, you can see it comes up, gives me a nice splash screen, and goes right into the frameless app. If I take a look at the task manager, you can see there's no Chrome frame around this. I'm going to keep re-emphasizing this point: this is a web page. This is what you can do with a progressive web app. It looks great, it doesn't look like it's in Chrome, but under the hood it's actually being delivered by the browser. I can scroll through; you can see it's fast, it's seamless, it's very responsive. If I hit play on a video, it comes up very quickly; we didn't get a buffering event at the beginning. Now, I wish I could say that the Wi-Fi here at the show is so fantastic that you will never experience buffering with video, but that's not what's going on here. In reality, because this is a progressive web app, it's able to pre-cache the first second of video, or a few seconds of video, so when I hit play, it was instantaneous. Let's take a look at some UI elements. If I rotate, I can go full screen; you can see that I've got custom controls allowing me to go back and forward. I can also drag on the timeline and get some great thumbnails. Not really what you'd expect from mobile web playback, is it? If I go back into portrait mode, the device automatically pops back. Let's take a look at what happens if I go to the lock screen. In a lot of cases I might want to hear the audio on the video but not necessarily want to be watching it. You can see here, it's a little dark, but there's a background image filling the entire frame and I have media controls. Media, video in this case but audio as well, is now a first-class citizen on my mobile device.
It lets me know what's going on in the background, and if I unlock and go to the notifications, you can see I have controls there as well. Now, this is all great, but what if I'm getting on that airplane? Well, I have the ability to take media offline in this app. This is using an API that's still in development called Background Fetch. What's neat about Background Fetch is that it's pulling down the video on the device, but if I switch pages or even exit the browser and then come back (let's go back), you see the download continued and finished even while I was on a different page. And this is great, because now if I go into airplane mode and go back to the home screen, the app can actually keep track; you'll notice that some of these video items are grayed out, because it knows they're not available offline. Let's go into this one. I'm going to hit play on this video, which is offline, and you'll notice that after a brief second it starts playing. So you really can do a complete, excellent video and audio experience on the mobile web today. One thing I want to mention about this video: this is actually protected content. Before I hit play, it already had a preauthenticated license, and it played. So when we talk about doing a great media experience, this is something that everybody can do, for all use cases, and it's ready right now. For those of you who are interested in trying this out, the app name is Biograph, and you can access it at the URLs on the screen. It's also open source, and there are a lot of really good learnings in there. Paul has been doing a series of developer diaries explaining a lot of the tips and tricks that he used to build the app, including things like how to get those thumbnails to scrub, and the player controls. There's a lot of really good information there, and we'll come back to these links later in case you missed them. So what did we see? Let's start breaking this down. I think that was a pretty great experience, because there was fast playback, you had the ability to watch anywhere, it had a great UI, and it had really high-quality video.
So this is the anatomy of a great video experience; let's tackle these one at a time. It's been mentioned before that you'll lose users if your page doesn't load fast enough. Akamai did a study showing that if your video doesn't start playing fast enough, after two seconds you start losing about six percent of your viewers for each additional second of delay. So it's not enough to just have a great page load; you've got to be able to deliver the video fairly quickly. And the other thing that other studies have found is that playback needs to be sustained, it needs to be continuous. It doesn't matter if the network goes up and down: any time a video buffers, you're going to lose people. Well, let's start with the playback. The challenge with playing back video is that you have to pull a lot of data over the network, and the network is not constant; the user might go through a tunnel, they might lose bandwidth. So there's a way of dealing with this called adaptive bitrate streaming. What we'll do is encode the video at multiple bitrates. Here I have low, medium, and high; in practice you'd have six, ten, twelve, or more different bitrates that you're encoding to. The next thing I'll do is break the video into segments, in this case six-second segments, and encode the video segments at each of those different bitrates. The reason I'm doing this is that now, when the user hits play, the player can look at a playlist which knows where all of these different segments or fragments are, and as the video plays, it can adapt to the bandwidth, going up in quality and resolution when there are more bits available on the pipe, and going down when the bandwidth goes down. In the demo this was done through the open-source Shaka Player: it reads a playlist, then pulls the appropriate fragments of video and feeds them into the video tag using MSE, Media Source Extensions.
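The per-segment decision an adaptive player makes can be sketched as a small function: given the measured bandwidth, pick the highest bitrate that safely fits. The representation list and safety factor below are illustrative assumptions, not Shaka Player's actual algorithm.

```javascript
// Sketch of adaptive bitrate selection. The three rungs mirror the
// low/medium/high example above; real players would have many more.
const representations = [
  { id: 'low', bitrate: 400_000 },      // bits per second
  { id: 'medium', bitrate: 1_500_000 },
  { id: 'high', bitrate: 4_000_000 },
];

// Pick the highest-bitrate representation that fits within the measured
// bandwidth, with a safety margin so small dips don't cause a rebuffer.
function pickRepresentation(measuredBps, reps, safetyFactor = 0.8) {
  const budget = measuredBps * safetyFactor;
  const sorted = [...reps].sort((a, b) => a.bitrate - b.bitrate);
  let choice = sorted[0]; // never go below the lowest rung
  for (const rep of sorted) {
    if (rep.bitrate <= budget) choice = rep;
  }
  return choice;
}
```

The player would re-run this before fetching each six-second segment, which is what lets quality ride the bandwidth up and down.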
Another thing we need to talk about is startup, making sure playback begins quickly. This is a proof of concept done by THEOplayer, and what they're doing (we'll wait for it to come back in a second) in the first section here is using service workers to pre-cache the first few seconds of video as the user browses the page. So here we go: the user is browsing the page, and eventually they'll select a video and start playing, but they don't wait for that moment to get ready for video playback. Even while the user is here, they've already pulled down the playlist, or the presentation description, and you'll notice that on the left, where it's been pre-cached, the video playback starts right away; on the right, it takes a few seconds. In fact, by pre-caching with service workers, in their example they were able to get the time before the video started playing down from over three seconds to about a hundred milliseconds. So service workers really can make a big difference, and progressive web apps have a lot of powerful capabilities that they bring to video. In fact, you saw this pre-caching in the Biograph demo. Remember when I hit play, it started pretty quickly? That's because under the hood, Biograph had used a service worker to pull both the presentation description and the first segment of video into the browser cache, and when Shaka went to access them, that presentation description and that first segment were served by the service worker from the cache. After that fragment started playing, Shaka then went and started pulling the next fragments of video over the network. What this means is the user gets fast, fluid, instantaneous playback, which really is like almost every other element in HTML5: you want something responsive and quick that happens right away. I cannot overemphasize how useful and important service workers are to optimizing the speed of your site. In one recent case study that we did with Viacom18 on their Voot site, optimizing their mobile web page, their page load times went up by 5x, five times faster.
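A minimal service worker doing this kind of pre-caching might look like the sketch below. The cache name and URLs are assumptions for illustration, not Biograph's actual code.

```javascript
// Sketch: pre-cache the presentation description and the first video
// segment at install time, then serve them cache-first so playback can
// start instantly when the user finally hits play.
const CACHE_NAME = 'video-precache-v1';
const PRECACHE_URLS = [
  '/video/manifest.mpd', // the presentation description (hypothetical path)
  '/video/seg-0001.mp4', // the first few seconds of video
];

// Guard so this sketch is inert outside a service worker context.
if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('install', (event) => {
    event.waitUntil(
      caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
    );
  });

  // Serve pre-cached responses first; fall through to the network otherwise.
  self.addEventListener('fetch', (event) => {
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```

When the player later requests the manifest and first segment, the fetch handler answers from the cache, and only subsequent segments hit the network.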
And this had a big impact. This is a media site, so it had a big impact in terms of return engagement for their users as well as engagement of new users: a 77 percent increase in the conversion from new visitor into actual video viewer, and then a 15 percent increase, across all users, in the number of videos they watched on average. So you can see that a little bit of optimization and a little bit of performance goes a long way in terms of increasing engagement. So that's fast and fluid playback; let's move on to offline. Offline is a really, really important use case for a number of reasons. One is obviously the airplane, which is something that I, because I fly around, of course care about. There are also a lot of cases where you'll have viewers, or potential viewers, who would like to access the video in places that do not have internet access. We've actually seen those users trickle-load videos using the default HTML5 player, waiting until the video gets all the way into the cache, and then taking the device somewhere else, saving the video for later. That's really not the best way to do offline; for anybody who's thinking, oh, that's going to be my new offline strategy, please don't do that. What you can do instead is use service workers, and we saw this in Biograph. What Biograph had done with the offline video is it pulled down the playlist, or actually the presentation description in this case, and all of the media segments at a reasonable bitrate into the cache, so that I could play it back here at the show. Now, you might ask which bitrate you would want to choose: do I want the highest bitrate, do I want the lowest bitrate? That's really up to you. You control the logic in the service worker, you control the quality, or you can give the users control: let them download an HD version if you want, or let them do a low-bitrate one if they want to save more space on the device.
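The offline flow described above, caching the presentation description plus one representation's segments, might be sketched like this. The manifest shape and cache name are assumptions for illustration, not Biograph's real code.

```javascript
// Sketch of an offline download path: list the URLs for one chosen
// quality, then add them all to a cache for later offline playback.
function segmentUrls(manifest, quality) {
  // `manifest` is a parsed presentation description (shape assumed):
  // { url, representations: { low: { segments: [...] }, ... } }
  const rep = manifest.representations[quality];
  if (!rep) throw new Error(`unknown quality: ${quality}`);
  // Cache the presentation description itself too, so the player can be
  // bootstrapped while offline.
  return [manifest.url, ...rep.segments];
}

async function downloadForOffline(manifest, quality) {
  const urls = segmentUrls(manifest, quality);
  // Only touch the Cache Storage API inside a browser or worker context.
  if (typeof caches !== 'undefined') {
    const cache = await caches.open('offline-video');
    await cache.addAll(urls); // fetches and stores each URL
  }
  return urls;
}
```

The quality argument is where the choice John describes lives: hard-code it, infer it, or surface it in the UI and let the user pick HD versus a space-saving rendition.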
I keep talking about video, but this is a really great use case for audio as well. This is something you could do with a podcast; it's also something you can do with audiobooks and a variety of other audio material. Now, I mentioned during the demo that the actual fetching, the pulling into the browser cache, was being done by an API called Background Fetch. This API is still in development, and it's a good example of something that we very much would like you to look at now and give us your feedback on. It's available if you turn on the experimental web features flag, but take a look at the spec: this is the time, if you have feedback on how this should work, to let us know. I mentioned a second ago that there's a decision, either for you or on the part of the user, in terms of how much video you really want to put on the device. Wouldn't it be great if you could get twice as many videos on the device without sacrificing quality? Well, of course it would, and this is where video compression really comes into play. Video compression is what gets used to take video and turn it into something we can actually send over the network, and VP9 is the WebM approach to video compression. It's also known as a codec, which stands for coder-decoder, so if you hear me say the VP9 codec, that's what I'm talking about: it's a compression technology. What's great about VP9 is that, compared with a lot of the other common video compression codecs, it can get up to about 50 percent better compression efficiency, meaning you can cut your file sizes by 30 percent, 40 percent, up to 50 percent while preserving the same quality, or just offer a higher quality level. The other thing you can do with VP9 is actually deliver higher quality, which I mentioned as a key pillar of a great mobile web experience. VP9 is supported on over two billion devices, so if you want to play high-quality video, VP9 is an excellent codec choice to look at. So good, in fact, that when YouTube adopted it, they saw videos starting 15 to 80 percent faster using VP9 compared to some of the other codecs.
They also saw 50 percent less buffering and more HD worldwide. There are some really great gains that come from using VP9: a really great, high-quality experience. The last piece of the anatomy is the user experience, and one of the things you saw was the lock screen. There are a lot of cases where you might not want to watch a video, you might just want to listen to the audio, and if what you're dealing with is audio, then this is absolutely a primary use case for you. Now, the great thing about this API, which is the Media Session API, is that it allows you to put your metadata and images on the lock screen, as well as on wearable devices. It's also great for the user, because they can tell what's going on on their device and they can control it. So let's take a look at what's going on. In order to use the Media Session API, the first thing I'm going to point out is that this is a great example of a progressive feature: we start with an if statement. If the browser supports Media Session, great, we'll use it to make things appear. You simply provide a little bit of metadata: title, artist, album, and then artwork. Typically you might have a longer list of images that you would provide at different form factors; for the sake of brevity, here I have two: 512 by 512, which is the most common Android size for the lock screen, and 256 by 256, which is useful for some older devices. Once you've provided that metadata, you'll also want to be able to respond to the controls: seek back, seek forward, play, pause, next track, previous track. What you do here is set up action handlers, so that when those actions happen, when those events happen, you can take care of them. You may also want to control the playback state, so that if you're doing custom controls, the user gets a reflection of the media session state inside the lock screen or the notification.
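In code, the setup described here looks roughly like the sketch below. The title, artwork paths, and ten-second seek offset are assumptions for illustration.

```javascript
// Sketch of Media Session setup: metadata for the lock screen, plus
// action handlers wired to an assumed <video> element.
const metadata = {
  title: 'Big Buck Bunny',          // hypothetical title
  artist: 'Blender Foundation',
  album: 'Open Movies',
  artwork: [
    { src: '/art-512.png', sizes: '512x512', type: 'image/png' }, // common Android lock screen size
    { src: '/art-256.png', sizes: '256x256', type: 'image/png' }, // for some older devices
  ],
};

// Progressive feature: only wire things up if the browser supports it.
if (typeof navigator !== 'undefined' && 'mediaSession' in navigator) {
  navigator.mediaSession.metadata = new MediaMetadata(metadata);

  const video = document.querySelector('video');
  const handlers = {
    play: () => video.play(),
    pause: () => video.pause(),
    seekbackward: () => { video.currentTime = Math.max(0, video.currentTime - 10); },
    seekforward: () => { video.currentTime += 10; },
  };
  for (const [action, handler] of Object.entries(handlers)) {
    navigator.mediaSession.setActionHandler(action, handler);
  }
}
```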
In terms of implementing these action handlers: it's not that hard. All you're doing is setting controls on the audio or video tag, just like you would if you were doing controls on the web page. So it's pretty easy, and it gives a much better user experience. Here's another key part of the user interface: full-screen mode. This is a good experience: I hit play, and if I rotate into landscape, it automatically goes full screen. This is also a good example of something you couldn't do a couple of years ago. Let's take a look at how you do this with the Screen Orientation API. Again, a progressive feature: if the device supports the orientation feature, then we'll listen for the event that fires when the orientation changes. If the orientation has become landscape, we'll go full screen; otherwise we'll exit full screen. That's it. That's about eight lines of code, and all of a sudden you can make the user experience significantly better for media on the mobile web. So you really can do great media experiences on the mobile web today: fast playback, the ability to watch anywhere, a great user interface, and high-quality playback. And this is really great stuff that's all available today. Anybody who's sitting here remembering the title of the talk is going to be saying, wait a minute, I thought you were going to be talking about the future of audio and video on the web. This is actually the beginning of the future, and a lot of sites are just now adopting this technology; I hope people in the audience are looking at this and saying, we're going to do that as well. So to some degree this is the short-term future, but since all of that is available today, let's look a little bit further out and talk about what's coming afterwards. Let's start with color. There is a new set of standards coming out around video which dramatically improve the realism of what can be displayed, and there is a new set of displays, televisions today, coming soon to mobile and desktop near you, which increase the realism dramatically.
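The "eight lines" of orientation handling described a moment ago come down to something like this sketch, written against the modern unprefixed fullscreen API; the element lookup is an assumption.

```javascript
// Sketch: rotate to landscape -> fullscreen, rotate back -> exit.
function isLandscape(orientationType) {
  // ScreenOrientation types are e.g. 'landscape-primary', 'portrait-secondary'.
  return orientationType.startsWith('landscape');
}

// Progressive feature: only attach the listener where the API exists.
if (typeof screen !== 'undefined' && 'orientation' in screen) {
  screen.orientation.addEventListener('change', () => {
    const video = document.querySelector('video');
    if (isLandscape(screen.orientation.type)) {
      video.requestFullscreen();
    } else if (document.fullscreenElement) {
      document.exitFullscreen();
    }
  });
}
```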
Part of that is color. For people who don't work with photos and videos all the time like I do, it may be a surprise that your display cannot actually reproduce all of the colors you can see with your eye. This curve shows the full spectrum of colors that the eye can see; in fact it doesn't, because that projector and this screen can't display all of the colors your eye can see, but let's pretend it does. Your standard sRGB monitor today can only represent part of it. The new video standards around BT.2020 are dramatically extending that color, so colors like my shoes (these shoes probably are not going to be represented properly on that screen) are colors some of the new televisions are going to get a lot closer to. Another aspect of the new video standards is the ability to show a wider range of brightness, and what I mean by that is brighter brights and darker darks. If you look at a standard monitor today, it's doing what's commonly called standard dynamic range, or SDR, and the range of brightness there, compared to what you see in the real world, isn't really that dramatic: the blacks aren't really that black and the brights aren't really that bright. As we move into high dynamic range, displays are coming out that cover a much broader range. What this requires under the hood is a change in the electro-optical transfer function, the EOTF, which you may have heard of as the gamma function. There is a whole new set of functions for converting digital brightness values into what actually gets displayed on the screen. What this means for anybody working in video is that you need to know whether or not the device can support those EOTFs, and also make sure you understand the characteristics of the device you're playing on. So let's take a look at how we can do that, both for color and for HDR. From a color perspective, this is where CSS Media Queries Level 4 comes into play: the color-gamut query is now
supported in Chrome, and will allow you to query the device to determine the breadth of its color coverage. On the bottom, there are new isTypeSupported strings coming which will allow you to query for VP9, which, by the way, does do wide color gamut and high dynamic range: with VP9 Profile 2, full 10-bit, you can determine whether or not HDR is supported, as well as the electro-optical transfer function. These new advances bring me back to video compression, and there's some exciting news going on here. What I want to talk about briefly is the Alliance for Open Media, or AOM, which is a cooperation between a number of companies to create a new open-source, royalty-free compression format. YouTube, Google, Amazon, Twitch, Netflix, Microsoft, Mozilla, Hulu, and the BBC are all part of this cooperative effort to create a new compression format that will tackle not only HDR and wide color gamut, but also 8K and 4K, 360 video, and, arguably more importantly, providing video in the most demanding low-bitrate situations imaginable, which is important for billions of people around the world who do not have the same level of connectivity that we do. This work is going on now; the name of the codec they're developing is AV1, and just a few weeks ago Netflix came out with some of their first analysis of the codec's performance. What they found is that it's not even done yet and it's already getting 20 percent better compression than VP9. This is not yet available (we're in the future section of the talk), but it's absolutely something to keep an eye on as it rolls out. And when it does roll out, one of the questions you might ask yourself is, great, can my device play it back? This is something that's interesting and unique to the web: not all devices perform the same way. Ultimately you want to give the user the highest possible video quality, but the 1-gigahertz system on the left probably doesn't have the same kind of hardware decode support as the device on the right.
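As a sketch, the two queries described above might look like this. The VP9 Profile 2 codec string follows the vp09 convention of profile, level, and bit depth; the helper and the logged object are illustrative assumptions.

```javascript
// Build a 'vp09.PP.LL.DD' codec string: profile, level, bit depth,
// each zero-padded to two digits. Profile 2 at 10-bit is the
// wide-gamut/HDR-capable profile discussed above.
function vp9CodecString({ profile = 2, level = 10, bitDepth = 10 } = {}) {
  const pad = (n) => String(n).padStart(2, '0');
  return `vp09.${pad(profile)}.${pad(level)}.${pad(bitDepth)}`;
}

// Progressive feature: only run the queries in a browser context.
if (typeof window !== 'undefined' && window.matchMedia) {
  // CSS Media Queries Level 4: does the display cover the P3 gamut?
  const wideGamut = window.matchMedia('(color-gamut: p3)').matches;

  // Can the media pipeline decode 10-bit VP9 Profile 2?
  const codec = `video/webm; codecs="${vp9CodecString()}"`;
  const profile2Supported = typeof MediaSource !== 'undefined'
    && MediaSource.isTypeSupported(codec);

  console.log({ wideGamut, profile2Supported });
}
```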
This is important when AV1 comes out, and it's also important right now, quite frankly, for different video playback capabilities. So let's look at VP9. If I want to detect whether a device can play VP9 today, I'll use the canPlayType function: I'll pass in WebM VP9, ask, can you do that, and it will say "probably", and I'll say, okay, here's some VP9, please go play it. Now, what that doesn't really tell you is: are you going to drain the battery? Are you going to play this back at high quality, or are you going to be dropping frames? So there's a new API coming to address this, called the Media Capabilities API, and it allows you to fine-tune the video playback experience for the user. On the top, again, a progressive feature: if we have this Media Capabilities API, I'm going to pass in the resolution, bitrate, and codec of the video I want to play and ask for the decoding information, and it will tell you, number one, is it supported; number two, will the playback be smooth and fluid; and number three, will it be power-efficient. You can use this information to make decisions about not only what type of video you want to send, but also what resolution of video you want to send, to optimize the viewer experience for your specific use case. But we haven't talked about what I personally think is the area of greatest innovation and growth in video and audio that we are seeing right now, particularly over the last few years. My daughter's ability to make mobile music videos on her phone, and her ability to put things on her face and communicate with other people, is all social. It's all about sharing, it's all about communication. It's hard to remember that this was not mainstream as recently as five or six years ago, but it has quickly become very mainstream. The whole premise with social is that you want people to get in, get out, and be able to do things quickly, which is great for the web.
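The capability query just described might be sketched like this, with an assumed 1080p VP9 configuration and a conservative fallback when the API is absent.

```javascript
// Sketch of a Media Capabilities query. The configuration values are
// assumptions; real code would mirror the streams you actually serve.
const config = {
  type: 'media-source',
  video: {
    contentType: 'video/webm; codecs="vp9"',
    width: 1920,
    height: 1080,
    bitrate: 4_000_000,  // bits per second
    framerate: 30,
  },
};

async function canPlaySmoothly(cfg) {
  // Progressive feature: answer conservatively when the API is missing.
  if (typeof navigator === 'undefined' || !navigator.mediaCapabilities) {
    return { supported: false, smooth: false, powerEfficient: false };
  }
  const info = await navigator.mediaCapabilities.decodingInfo(cfg);
  return {
    supported: info.supported,       // can it decode at all?
    smooth: info.smooth,             // without dropping frames?
    powerEfficient: info.powerEfficient, // without draining the battery?
  };
}
```

An ABR player could feed these answers into its representation list, trimming rungs the device can decode but not decode smoothly.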
So let's talk about the most personal social media: video communications. Web real-time communications, WebRTC, is not new, but what is kind of interesting over the last year or two has been the rapid adoption, both by the browser manufacturers and by the app platforms. There are sites now using it at the scale of tens of millions, approaching a hundred million, users. What is new is that progressive web apps make WebRTC really interesting on the mobile web: you have the ability to give people a website that uses WebRTC on the phone and let them jump in, communicate, jump out, add it to the home screen, and come back. So there is a next step here for peer-to-peer personal communication. That's fine for one-to-one; let's talk about another type of social event, which is live streaming. How is this social? Well, if you're sending video to 10 people, they probably want to communicate with you; if you're sending video to a million people, they probably want to communicate with each other. It really depends on the event. What's really neat is that when you look at all of the platforms that support live streaming today, the web has a pretty unique advantage in terms of the feature set as well as the way people use it. If you want to share something, you simply send somebody a URL; they don't have to download an app in order to watch it. Because this event is happening now, don't make users wait: this is where the web can really come into play. Now, the truth with live streaming is that there's a whole infrastructure challenge under the hood. What I mean by that is, if you're deploying to a million people and you want low latency, you might take a different approach: using files that are getting placed out onto the CDN, or even files that you're reading while they're growing as they're placed onto the CDN, or you might use data-channel WebRTC and do peer-to-peer CDNs to deliver it. I am not going to go into all the details on that, because that would use all our time.
But what I will let you know is that the web does support live streaming, both in terms of WebRTC and in terms of the ability to put out the video segments in the form of files and play them back. In fact, Shaka has the ability to do this by reading dynamic presentation descriptions and then, as the segments become available on the network, knowing the appropriate seek range as well as the end, so that it knows what to play back. All of that's great, but there's one last piece of social media that we want to talk about, and that's creation. If you look at the way that creating video and sharing it with your friends works, it's really a beautiful cycle. It starts with somebody finding something, watching it, and getting inspired, and then they say, now I want to make something; they capture and they create and they encode and they upload, and they share it with their friends, and then their friends say, ooh, that's very cool, and they get inspired, and they want to capture. So this really only works if it's easy to discover, low-friction to watch, and then also low-friction to get into the capture scenario, so that this cycle can go very, very fast, as long as the whole path is very easy. The web has a lot to bring here, and to show you an example of what's coming in this space, I'm going to hand it over to François. Thank you, John. I'd like to share with you a simple web app I've been working on, and use it to showcase some awesome media capabilities coming to the web. May we switch to the Android device, please? So this web app is called Moustache, and you will understand why soon. I have previously added Moustache to my home screen and specified in the web manifest that I wanted to run it in standalone mode, so that the browser UI does not show up. It is quite hard, almost impossible, to tell that this is a web app. So now you can see me with moustaches and wearing a funny hat. Note that there is no lag here; it is perfectly smooth.
smooth may i say creamy okay i said it let's press the record button and that's all at the bottom right of my screen i have a preview video of myself that i can share now click the chevron and let's do it for instance boop or not please and that's all let me describe what you've just seen and may not have realized by walking you through all the api i have been using for that let's start with the basic this is how you get access to a camera video stream on the web today set the video attribute a chassis object to the asynchronous result of navigator dot media devices dot get user media and you're good to go as soon as the video plays not that the only media constraint i pass there is video true but my moustache app actually asks for more the ideal video width height frame rate and facing mode the browser will do its best to accommodate your request so it doesn't hurt to ask my custom draw function is going to be called is going to be called sorry every time the browser asks for a frame to be animated thanks to the request animation from function here since i'm looking for the smoothest experience there that means the draw function is going to be called approximately 60 times per second and each time it's called i'm going to draw a live video frame on my canvas at that point we have our face on the screen and that's not bad but let's go further now using the experimental shape detection web api i was able to draw some moustaches and a hat on my moving face while keeping the app running smoothly at 60 frames per second as a matter of fact that wasn't the case last week so thank you miguel for working hard to get this demo ready for today as you can see this is a pretty simple and easy to use api as hardware accelerated detectors may potentially allocate in whole significant resources i recommend you reuse the same face detector object when doing several detection the fast mode option here is a hint for the browser to prioritize speed over accuracy which is exactly 
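The capture-and-detect setup just described might look like the following sketch. It assumes a page with a `<video>` and a `<canvas>` element; getUserMedia, requestAnimationFrame, and the 2D canvas API are standard, while FaceDetector is the experimental Shape Detection API behind a flag, and drawMoustache is a hypothetical overlay helper, not a real API:

```javascript
// Sketch only: assumes a page with a <video> and a <canvas> element.
// FaceDetector is experimental; drawMoustache is a hypothetical helper.

// Ask for ideal values; the browser does its best to accommodate.
const videoConstraints = {
  video: {
    width: { ideal: 1280 },
    height: { ideal: 720 },
    frameRate: { ideal: 60 },
    facingMode: 'user',
  },
};

// Reuse one detector: hardware-accelerated detectors may allocate and
// hold significant resources, so don't create one per frame.
const faceDetector = typeof FaceDetector !== 'undefined'
    ? new FaceDetector({ fastMode: true })  // speed over accuracy
    : null;

let isDetectingFaces = false;  // avoid overrunning the detection API

async function startCamera(videoEl) {
  videoEl.srcObject =
      await navigator.mediaDevices.getUserMedia(videoConstraints);
  await videoEl.play();
}

function startDrawLoop(videoEl, canvas) {
  const ctx = canvas.getContext('2d');
  async function draw() {
    // Called roughly once per display frame (~60 times per second).
    ctx.drawImage(videoEl, 0, 0, canvas.width, canvas.height);
    if (faceDetector && !isDetectingFaces) {
      isDetectingFaces = true;
      // detect() resolves with an array of faces, computed on-device.
      const faces = await faceDetector.detect(canvas);
      for (const { boundingBox: b } of faces) {
        drawMoustache(ctx, b.x, b.y, b.width, b.height);  // hypothetical
      }
      isDetectingFaces = false;
    }
    requestAnimationFrame(draw);
  }
  requestAnimationFrame(draw);
}
```

The draw loop keeps rendering video frames every animation frame, while detection runs at its own, slower pace guarded by the boolean.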
Let's look at what is happening now in the draw function. Calling faceDetector.detect(canvas) returns, asynchronously, an array of the faces detected in the canvas. Note that the processing is all done on-device, so there is no internet connection required. When a face is detected, I use its position, x, y, width, and height, to draw some elements on top of the video frame. One best practice is to use a simple boolean, like isDetectingFaces, to avoid overrunning the API, as the draw function is called all the time. Pretty cool, right? This API, along with the barcode, QR code, and text detection APIs, will be available to everyone later this summer for testing purposes in Chrome. Note that you can already play with it today by enabling the experimental web platform features flag in Chrome. We love feedback, so if you want more features, such as eye or mouth detection (that's a bad example, we have it), or if you want something else, something more, or if you simply spot a number of bugs, please let us know.

Now, how about media recording? Is that hard? It is not. Really, this is the full code I've used in my Moustache app. Grab the stream from the canvas by calling canvas.captureStream, and use it to instantiate a new MediaRecorder. The mimeType option tells the browser which codec to use for the recording; it can be H.264 or VP9, for instance. If you leave it unspecified, though, the browser will choose the best one, which usually boils down to the hardware-accelerated one, if any. You can also pass the bits-per-second option to customize the video quality of the recording. recorder.start() actually starts recording the media, and each time the recorder delivers some media data, we store it in an array I call chunks. When the user stops recording, we call recorder.stop(), which fires a stop event, allowing us to create a new Blob object containing all the chunks we recorded so far, and upload that video blob to our backend. I could also have chosen to stream these chunks directly to an RTCPeerConnection.
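A minimal sketch of that recording flow, assuming the same canvas as before; canvas.captureStream and MediaRecorder are real APIs, while uploadToBackend stands in for your own upload code:

```javascript
// Sketch of the recording flow. uploadToBackend is a hypothetical
// stand-in for your own upload logic.
function setupRecorder(canvas) {
  const stream = canvas.captureStream();
  const recorder = new MediaRecorder(stream, {
    mimeType: 'video/webm; codecs=vp9',  // omit to let the browser pick
    videoBitsPerSecond: 2500000,         // tune recording quality
  });
  const chunks = [];
  recorder.ondataavailable = (event) => chunks.push(event.data);
  recorder.onstop = () => {
    // One Blob containing everything recorded so far.
    const blob = new Blob(chunks, { type: 'video/webm' });
    uploadToBackend(blob);  // hypothetical
  };
  return recorder;  // recorder.start() begins; recorder.stop() fires 'stop'
}
```

Calling recorder.start() begins recording, and recorder.stop() triggers the stop handler that assembles and uploads the blob.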
By the way, the MediaRecorder API has been in Chrome for a year now, so it is pretty stable and has proven popular: some Chrome extensions use it intensively to record up to one hour of content, such as video tutorials. When I clicked the share button, did you notice that the native Android share UI showed up, and that it included all the native apps on my device that support sharing text? I was able to do that with the experimental Web Share API, which enables sharing data such as text, and soon images, from the web to an installed app of the user's choosing. This is as simple as calling navigator.share and passing it a title, some text, and the video URL. And that is pretty much it: these four lines of code enable seamless sharing, and I personally can't wait for this API to be available to everyone later this year. Thank you.

Thank you, François. That's pretty awesome, right? There are a couple of things in there. One is the MediaRecorder API, which I want to highlight, because what is great about it is the web creating media for the web; that is something we see becoming more and more common over the next few years. And what's really great is that, in a plugin-free world, you can connect these items together like nodes and pipe them together using media streams: camera to canvas to MediaRecorder to upload, or microphone to Web Audio to RTCPeerConnection. There are a lot of different things you can do with these tools. If you want to access the Moustache demo, again, to remind everybody: this is a relatively new API, it's still in development, so you do need to turn on the experimental web platform features flag, and we would ask that you use Chrome Canary for that.
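The share step François described really is just a few lines. A sketch, with the title and text strings made up for illustration:

```javascript
// Sketch of the share step using the experimental Web Share API.
// The title and text values are made up for illustration.
async function shareRecording(videoUrl) {
  if (typeof navigator === 'undefined' || !navigator.share) {
    return false;  // feature-detect before calling
  }
  await navigator.share({
    title: 'Moustache',
    text: 'Check out my moustache video!',
    url: videoUrl,
  });
  return true;
}
```

Outside a supporting browser the function simply reports that sharing was unavailable, which is the usual progressive-enhancement pattern for this API.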
So that was a lot; I told you at the beginning we were going to cover a lot. What did we see today? We showed you a couple of demos, and we talked through a lot of APIs, many of which are available today. In the Biograph demo, which was really about playback, service workers were used heavily, Shaka Player was used for media playback, and then there was a long supporting cast of APIs, including the Media Session API, the Fullscreen API, the Screen Orientation API, and a variety of others. Fortunately, you don't have to remember all of them, because François has written a wonderful article on best practices for mobile web video playback, which is available at the link at the bottom. And if you're looking to build your own progressive web app that does media playback, please do take a look at the sample code provided by Biograph.

On the Moustache side: the MediaStream Recording API, as well as Media Capture and Streams. These are both available today, and at the bottom is the link so that you can access the Moustache demo. Again: Chrome Canary on your mobile device, and make sure to enable the experimental web platform features flag. Which brings me to arguably the most important point: we want feedback. The title of the talk was "the future," and three of the APIs in particular are still being developed: Background Fetch, which allows you to download media even while the user navigates away from the page or closes the browser (it'll resume when they come back); the Shape Detection API, the ability to look for QR codes, text, faces, and other objects; and the Web Share API, the ability to share items socially. This is your opportunity. These are great things coming in the future, but frankly, they'll be better if somebody in this audience looks at them and says, "I would like this," and our developer team, and the other developers on the web working on these APIs, say, "That's a pretty good idea." Please do help us make this a reality by going to these sites and providing feedback; look at the APIs and tell us what you'd like to see. Or you can see us in person: we will be in the sandbox behind the stage, and there are a lot of great APIs we're happy to talk to you about. And frankly, we are just really excited: if you
look at the pace of innovation over the last few years, the web is coming to play. The mobile web is coming to the media game, and we just cannot wait to see what happens next. Thank you, everybody.

So, you've built an amazing mobile app that your users are going to love, but you want to get it into people's hands and let them see just how awesome it is. Well, AdWords helps you do this, putting ads for your app in front of the billions of people that use Search, YouTube, Google Play, and more. You can quickly set up an ad campaign to reach the type of users that might be interested in your app. You only pay if the user clicks on that ad, and you can set the budget and acquisition costs that you're comfortable with. But how do you know you're reaching the right users? Maybe some will install your app and forget about it, while others will make it part of their daily lives. Firebase Analytics helps you here. You can define events that happen in your app that you consider to be important, such as reaching the first level of your game, purchasing a fancy new pair of sunglasses, or returning every morning to check out new products. You can tell AdWords which of these events are most important to you, and then AdWords will display ads to people who are more likely to complete these important actions in the future. You can also build audiences, which are specific segments of users, and have AdWords display your app to them. For example, imagine that you have a group of users who are very active and have added a product to their cart, but haven't purchased yet. You can use Firebase to create an audience of just these people, and then use AdWords to show them specific ads and encourage them to come back to your app and take action. Understanding your users, and engaging with them at just the right time and in the right way, will help you build loyal users for your app. Firebase and AdWords, working together to help you grow your user base. Get started today; your new users are waiting.

Android Instant Apps make it possible
for users to access your app without having to install it first. Imagine users opening your app just by clicking on a link in an email or a text message. We've recently made Android Instant Apps available to all Android developers, and to take full advantage of this, we have some best practices to help you make your instant app's user experience as great as that of your installed app, or maybe even better. You can find all this and more at the URL in the description below.

It's important to keep in mind that by enabling your app to run instantly, without installation, you're not creating another, additional app. We think of instant apps as another way to use the app your users already know and love. It's the same app, just without installation. By adding the ability to access your app directly from a link, a search result, or another app, it's much easier for users to engage with your app and get excited about it. If they decide to keep your app on their device permanently, they can then install it right from within the instant app.

The ability to launch an app without having to install it provides an enormous opportunity. For a long time, app developers have focused on the number of app installations as a proxy for the metrics their business really cares about; without installation, users simply weren't able to engage with the developer's offerings at all. Removing this barrier to entry enables you to think about the metrics your business really cares about: your audience is now just one tap away from engaging with your service. Your instant app is just another mode your app can run in, so don't branch your UI or make any unnecessary changes to the layout, interface design, or experience of your instant app. The transition from instant to installed mode after installation should be as smooth and seamless as possible. Your users should have a rich and full app experience even if they haven't installed your app. Rather than thinking of instant apps as a limiting factor on what your audience can do, think of them as an opportunity to get users to your functionality quicker, and a way to foster your relationship with them.

Avoid prompting your users to install the app when they're in the middle of a task; they'll be much more inclined to place your app onto their device permanently after it has had the opportunity to prove its usefulness. Refrain from bouncing them back and forth between your instant app and your mobile web offerings. As you can probably tell by now, instant apps are all about removing friction for your users and getting them closer to your functionality, so think about ways you can remove further barriers. For example, wait to ask users to create an account and sign in until the value of doing so becomes apparent. Asking users to create an account after installation seems like a small additional ask when they've already gone through the app installation flow and are only just getting started; however, when they're coming from a link, looking for specific content or functionality, being asked to register can feel very disruptive. Additionally, make sure to use available APIs to make your and your users' lives easier: using Google Smart Lock, for example, makes signing up and signing in a much simpler and more straightforward experience. In summary, we really think that instant apps will unlock a lot of opportunity to engage your audience more directly. Users will be able to focus on what it is they want to accomplish, rather than having to spend time maintaining and updating apps on their phone. We're super excited to see what you come up with. Everything I talked about here, and much more, you can find at g.co/instantapps. Thanks for watching.

Firebase makes sending messages to your users easy. Firebase Cloud Messaging is a free service that allows you to send messages to your users' apps across a variety of platforms. Messages can be addressed to single devices, groups of devices, or even topics. Building a notification or messaging
system with Firebase Cloud Messaging is easy. First, you register your user's app instance with the Firebase Cloud Messaging servers. Then, on your server, you write code that addresses these devices by ID, group, or topic, and tells the Firebase Cloud Messaging server to send the messages for you. It's powerful and scalable, delivering hundreds of billions of messages per day, with 95 percent of messages delivered within 250 milliseconds to connected devices across many different platforms. It's that easy, it's that powerful: Firebase Cloud Messaging.

People love to use different mobile devices, in different ways that suit their situation and lifestyle. Michael uses a phone to play games on the go, while Tony enjoys using a large tablet as he relaxes on the sofa, and Jen carries a small tablet in her purse for reading on the bus. But they all want to use your app on the devices they prefer, so you'll want to make sure they each have a great experience, regardless of screen size, OS version, and the features of the app they use the most. It can be taxing to test every one of these situations so that all your users can be happy, and we know you'd rather not have to buy and store stacks of devices to test your app in all these circumstances. That's why we built Firebase Test Lab for Android: to make it easy and affordable for you to test your app on a variety of devices, and be sure it works great for all your users. Our device lab, hosted in the cloud, offers a variety of physical devices ready to test your app, and the selection of devices is always growing, so your tests will stay current with the latest hardware and operating systems. The easiest way to use Firebase Test Lab is to run a Robo test: an intelligent, automated test that crawls your app to discover and exercise its features. You won't need to write any additional code to make use of a Robo test. For more advanced testing, you can also script the interactions with your app, to simulate specific use cases and verify that everything works as expected.
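Looping back to the Cloud Messaging flow described above, the server-side send can be sketched against FCM's HTTP v1 endpoint. The project ID and topic are placeholders, and obtaining the OAuth access token (for example, via a service account) is left out as an assumption:

```javascript
// Sketch of a server-side send using the FCM HTTP v1 API.
// projectId, topic, and accessToken handling are placeholders.
function buildTopicMessage(topic, title, body) {
  return {
    message: {
      topic,                          // or token: '<device-registration-id>'
      notification: { title, body },  // what the user sees
    },
  };
}

async function sendMessage(projectId, accessToken, message) {
  const res = await fetch(
      `https://fcm.googleapis.com/v1/projects/${projectId}/messages:send`, {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${accessToken}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify(message),
      });
  return res.json();
}
```

Addressing a single device instead of a topic just means swapping the `topic` field for a device registration `token`, as the talk describes.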
Test results include a detailed report for each device used, including screenshots, device logs, and any crashes that occurred during the test. This allows you to verify that your app is working correctly on the variety of devices and configurations you selected. It's easy to make Firebase Test Lab a part of your daily development routine, and we have multiple ways to help you test regularly and spot errors early. First, you can use the Firebase console to upload and test your app. There is also a command-line interface for testing with continuous integration servers, so you can automatically test every build. During Android development, you can deploy your app directly to Firebase Test Lab from Android Studio 2.0. And finally, in the Play Store developer console, there is a special automated launch test that runs for Android apps published to an alpha or beta channel. To get started with Firebase Test Lab, and to learn how to regularly test your app on different devices and configurations, start with the documentation available right here. Happy testing!

When I was 10 years old, my dad got me a book on Pascal, and just a few chapters in, I immediately realized I could create some amazing things all by myself. My name is Shruti Sarath, and I'm a founder and game developer at Blacklight. I always remember my mom playing solitaire on a computer, but when she moved to mobile, she struggled to find a good solitaire for her phone. So she insisted that, being a game developer plus her daughter, I should develop a solitaire for her, and I took the idea to Sumit. I'm Sumit Soni, and I'm a co-founder at Blacklight. I'm 32 years old, and I have 25 years of game experience: playing them, and now designing them. We played solitaire for hours; we even bought a deck of real cards to understand the game better. Sixty percent of our users are mommies and grannies, so we designed our solitaire keeping them in mind: big, bold cards, soothing colors, and minimal design, taking a lot of cues from Google's Material Design guidelines. It took us six months of research and design to make our solitaire look great and be really fun to play. And then we noticed that a lot of visitors were visiting our page, but very few of them downloaded the game. We thought maybe the logo was the problem, so we experimented on Google Play with a modern design and a very old-fashioned one, and wow: the old-fashioned logo had 54 percent more downloads, and after we adopted it, our total downloads went up by 90 percent. Plus, we moved to Android Studio, which allowed us to integrate Google Play Games Services, and the emulators helped us design the user interface for different screen sizes and varied pixel densities. The feeling that something you have created is adding fun and value to so many people around the world, most of them from places you have never been to, can be quite profound. Shruti's love for coding and her mom's love for solitaire led to this. Today we have five more games in the pipeline, and we've hired five more Android developers to work on them. It definitely wasn't a smooth journey, but through hard work and great use of feedback, we are where we are today, and we can't wait to see what we do next.

Android Wear 2.0 is the most significant update to Android Wear since its launch in 2014. I'm Hoi Lam, and I'm here to take you through three highlights of Android Wear 2.0. First, Material Design has arrived on Android Wear. The new interface uses a darker color palette, which helps the watch blend in better in social environments; it also helps save battery on OLED displays. We also suggest that developers adopt vertical layouts and reserve the horizontal swipe for activity dismissal; in our research, this makes user interfaces easier to understand. To help developers implement these new design patterns, we have introduced new user interface components, such as WearableDrawerLayout and WearableRecyclerView. Check out the Material Design for Android Wear site for more details. The second highlight of Android Wear 2.0 is watch face
complications. Before Android Wear 2.0, if a watch face wanted to display data from another app, the two needed a one-to-one agreement, in both business and technical terms. To alleviate this complexity, with Android Wear 2.0 we have introduced the Complications API. "Complications" is a traditional watchmaking term for the areas of a watch face that display information other than the time, such as the date or the phases of the moon. We have extended this concept to smartwatches, so any app can publish data for watch faces to consume, and watch faces can display this data in a style that fits their unique designs. For app developers, this helps increase user engagement and mindshare; for watch face makers, it adds utility to the watch face. Users can now choose whichever watch face they want, as well as getting the data they need. Check out the watch face and complications sections of the developer documentation for more details. Last but not least: standalone functionality. Previously, Android Wear 1.0 apps required a phone app to communicate with the cloud. With Android Wear 2.0, Wear apps can access the internet directly, without a corresponding Android phone app installed. This means the Wear app can access the internet even if the user has an iPhone. To help with app distribution, we have also put the Google Play Store on the watch, so users can now download apps directly on their wrist. Apart from these three areas of improvement, Material Design, complications, and standalone apps, there are numerous other enhancements in Android Wear 2.0; check out the Android Wear developer site for more details. I'm Hoi Lam. Happy coding!

Have you played App Ship 3000? No, but I'm ready to, and I'm standing here with Abe, who helped build this fantastic experience. Abe, tell me all about it. Yes, this is App Ship 3000, a collaborative Firebase trivia game where three people work together to launch their app ship and hit their moonshot. It's pretty crazy, played with arcade controllers, out here in this very loud situation
where it makes you yell at each other and get all hyped up. It's a lot of fun. Awesome. So we're gonna play it, and maybe I'll give you the microphone and you can narrate what's happening, and talk a little bit about the tech. Great. So, for my fellow team players, we've got Doug over here. Doug, take a bow. And Todd over here, who you may know from such things as everything Todd does. All right, I'm gonna give you the mic. Yep, everyone's already logged in, and they're sitting with their rockets primed for ignition. In order to take off, everyone has a specific color they need to hit, so let's give it a countdown: three, two, one, takeoff! There we go, there we go. Now, in the game of App Ship 3000, the very first thing that happens is you get prompted: have you played before? Everyone here has, so we'll dive right into the game. The very first question, who has it? There you go, there you go. Those blinking things up at the top are asteroids, which will absolutely destroy you. Good, good, good, good. I'm amazed you knew that, good job. How many Google engineers does it take? Good call, good call. Yeah, you cannot be a true Firebaser if you think pineapples are ugly. All right, while this game's running, I'll talk a little bit about what's happening, though it's a little hard since they're all shouting all the time. Somebody, pick something, you're running out of time! So, while they're playing this game, every move they make is being sent off to the Firebase Realtime Database, and the Realtime Database is taking that, doing some analysis, updating high scores, and things like that. Absolutely every move they make is getting sent off to Google Cloud and Firebase for analysis. Oh, you've got to watch out for those asteroids, guys. No, it is not. It's definitely not. Up! We're low on fuel, low on fuel. If you miss this next question, you are done. Oh, you're so close. That is the end: 754 feet. That is an okay score. That is supremely okay. Good job, everyone; thanks for playing. All right, Abe, thanks so much for that. Absolutely, anytime. So we're actually going to play another round, so, uh, bye!

All right, good morning, Google I/O. I'm Noah Fiedel, and I'm going to speak with you today about how to go from research to production with TensorFlow Serving. I'm going to start by sharing a few stories from my past, observing industry. For the first one, show of hands: who's taken a commercial flight in the last 10 years? All right, everyone's awake and you've taken flights, so this is good. You might be shocked to know that, circa 2005, I was working on the flight planning software that plans the majority of the world's flights. It does things like compute how much fuel to take on the plane, which you probably care about. But before 2005, that system did not use source control, and for those of you who know, that's pretty scary. It does now. I raise this just to show how, even though we know what best practices are, and we publish them and blog about them and talk about them, it sometimes takes a while for them to be adopted across an industry. Fast-forward to 2010: I was at a startup, we had a mobile photo sharing app, and we tried to do all the right things. We had source control, and continuous integration using Jenkins. Then one day we decided to add machine learning to our app. We trained up a model that could do face detection and auto-crop to faces. This was really great: our users loved it, and our investors really loved it, in 2010. But for machine learning, what is source control and continuous integration? Those best practices didn't really exist. So I trained that model on my workstation, and if I got sick, or if my workstation crashed, no more training for us; we'd have to start over. So here we are in 2017, and at talks like this, and other conferences around the world, we have all kinds of great tools and best practices for machine learning. We have TensorFlow and a whole ecosystem around it, and many other tools. But there are many
areas still to be defined. Just as one example of many: what does a continuous integration test look like for machine learning? Say you're deploying models every day. When you start doing this, you might make a test that runs some sample data from today through your model, and it does that all the time. But what happens when your users' behavior changes? What happens if they look at bigger objects, or different kinds of objects, or the distribution changes? There are many of these things we're still figuring out, and I'm going to make a bold statement: I think machine learning is 10 to 20 years behind the state of the art in software engineering, so we have a ways to go. One of the ways, and there are many, in which TensorFlow Serving is helping is by making it easy to safely and robustly push out multiple versions of your models over time, as well as the ability to roll back. This seems pretty simple, but we've seen people, inside and outside Google, who don't have that capability. There are many other things that are part of this as well.

All right, so here's the agenda for today's talk. For starters, TensorFlow Serving is a flexible, high-performance serving system for machine-learned models, and it's designed for your production environments. Before I dive into the details of TensorFlow Serving, I'm going to describe what serving is, for those of you who might not be familiar. It's pretty simple: serving is how you apply a machine learning model after you've trained it. Many of the talks on machine learning, both in academia and industry, focus on the training side; the serving side is kind of left as an exercise to the reader. So that's where we come in.

So, on the right side of this slide, you're hopefully all familiar with the orange boxes; if you've been to a TensorFlow talk, you should be pretty familiar with these. You have a pile of data, you have somebody in the role of a data scientist, and you're training a model, and you'll use something like TensorFlow to do that. Now you have your application on the right side, and for the sake of example, let's say that you are ranking videos. You have an app where users come home from work, and they want to look at some fun videos and just relax. So your application has a list of candidate videos, and your model is sitting over there on disk. How are you actually going to apply it? The really straightforward and common answer is that you're going to have an RPC server. This server is going to take your model, load it off of disk, and make it queryable by your application, so your application can send a list of videos along with their features, and the server will reply back with, say, the probability of each of those videos being clicked on.

Moving on to some goals for serving, in particular for serving machine-learned models. The first one, and this is where serving differs quite a bit from what we're used to when talking about training, is that requests are coming in all the time, asynchronously, from users. You can't control them; you're not reading them off of disk. Whenever a user sits down on their couch and loads your app, that's when you're going to get a request. We want to answer these requests online, at very low latency, and consistently low latency, for all of your users. The next ones are also subtle departures from training. The first is that you might want to have multiple models served at the same time. When many groups start with machine learning, they'll have one model that might, as this example goes, serve rankings for videos. But what happens when week two comes along? You have a model working well in production, but now you want to launch a new one, and you're not sure if it's going to work, so you might run an experiment. Now you want to run two models, and maybe you have a couple of different experiments you want to run; that's really common as well. And lastly: who's familiar with mini-batching?
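The ranking RPC just described can be sketched as a simple request/response exchange. The endpoint shape below follows TensorFlow Serving's REST predict API (a later addition to the system); the host, model name, and per-video feature layout are assumptions for illustration:

```javascript
// Sketch of the video-ranking RPC. Host, model name, and feature
// layout are assumptions; the endpoint shape follows TensorFlow
// Serving's REST predict API.
function buildPredictRequest(videos) {
  // One instance per candidate video; the server replies with one
  // prediction (e.g. a click probability) per instance.
  return { instances: videos.map((v) => v.features) };
}

async function rankVideos(host, model, videos) {
  const res = await fetch(`http://${host}/v1/models/${model}:predict`, {
    method: 'POST',
    body: JSON.stringify(buildPredictRequest(videos)),
  });
  const { predictions } = await res.json();
  // Pair each candidate with its score and sort best-first.
  return videos
      .map((video, i) => ({ video, score: predictions[i] }))
      .sort((a, b) => b.score - a.score);
}
```

The application stays simple: it sends features, gets back scores, and orders the candidates.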
Okay, just a few of us, so I'm going to describe it, since it's an important part of both training and serving. When you take a neural network graph and process data through it, there's some overhead that comes with all the nodes of the graph and scheduling the work across them. To get good efficiency, almost all of the training libraries out there produce what's called a mini-batch: instead of putting one video through your trainer at a time, they come up with batches of 32, 64, 128, push them through the graph together, and you get massive throughput improvements by doing that, in particular on CPUs, GPUs, TPUs, and so on. We aim to achieve that efficiency of mini-batching, but at serve time, when all the requests are coming in asynchronously and you don't have nice, neat 32-size batches off of disk. Multiple-model support and efficiency are neat challenges on their own, but what makes them particularly interesting is doing them while also maintaining those standards for low latency, all the time.

So, just to say it again, and throw it up there for you all to read: TensorFlow Serving is a flexible, high-performance serving system for machine-learned models, and it's really designed for your production environments. TensorFlow Serving has three major pillars. The first one is a set of C++ libraries. These include standard support for the saving and loading of TensorFlow models, which is pretty straightforward, something you'd want from a serving system. We also have a generic core platform, and this is truly generic: it's not tied to TensorFlow, although TensorFlow has first-class plugins for everything you might want to do. So let's say you have a legacy ML system in-house and you want to mix and match it for some period of time with, say, TensorFlow. You can do that: you can write adapters that plug in different ML platforms and run them in TensorFlow Serving.
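To make the serve-time batching idea concrete, here is a toy batcher, purely an illustration and not TensorFlow Serving's actual batcher: it queues asynchronously arriving requests and flushes them through the model together once the batch is full or a short timeout expires.

```javascript
// Toy serve-time mini-batcher (illustration only). runBatch processes
// an array of inputs at once, e.g. one model invocation per batch.
class MiniBatcher {
  constructor(runBatch, { maxBatch = 32, maxDelayMs = 5 } = {}) {
    this.runBatch = runBatch;
    this.maxBatch = maxBatch;      // flush when this many are queued
    this.maxDelayMs = maxDelayMs;  // ...or after this long, for latency
    this.pending = [];
    this.timer = null;
  }

  // Each caller gets a promise for its own result, even though the
  // work is done in a shared batch.
  enqueue(input) {
    return new Promise((resolve) => {
      this.pending.push({ input, resolve });
      if (this.pending.length >= this.maxBatch) {
        this.flush();
      } else if (!this.timer) {
        this.timer = setTimeout(() => this.flush(), this.maxDelayMs);
      }
    });
  }

  async flush() {
    if (this.timer) clearTimeout(this.timer);
    this.timer = null;
    const batch = this.pending.splice(0);
    if (batch.length === 0) return;
    // One invocation for the whole batch; fan results back out.
    const outputs = await this.runBatch(batch.map((b) => b.input));
    batch.forEach((b, i) => b.resolve(outputs[i]));
  }
}
```

The tension the talk describes is visible in the two knobs: a bigger `maxBatch` buys throughput, while a smaller `maxDelayMs` bounds the latency any single request can pay for the privilege.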
of our libraries we have binaries and these incorporate all of the best practices that we've learned out of the box so they make it easy we have reference Docker containers and tutorials and collabs for running on Kubernetes with auto scaling and so on and the third pillar is a hosted service with Google Cloud ML as well as our internal instance that many Google products are using lastly I want to highlight that across across all three of these different offerings your models are portable so we worked really hard to make sure that one model format will work in any of these environments so you can take your model you know try it on a binary you want to do something custom try it in a library and so on so you can seamlessly migrate back and forth all right so super excited to show for the first time that we're used very very broadly across Google so these are just some of our key customers inside Google and for the first time I can say that we have over 700 projects inside Google using TensorFlow serving so whoo thank you all right so I'm going to dive in a little bit to the libraries so I mentioned the generic core platform I wanted to mention also that the components are all a card and this means that you can mix and match them to suit your needs so if you're doing something really advanced and we have some you know incredibly advanced internal customers you can mix and match these components and use just the ones that accomplish your needs you don't have to buy the entire set of libraries you can take the ones you want let's see there's the batcher for inference performance this gives us that mini batching performance but at serve time and lastly you can plug in different sources if you have different storage so maybe you have a cloud storage of your choice you can write a plug-in for that all right so we're going to go through a pretty big slide here I'll try and make this digestible for you so let's imagine and this is zooming into the libraries as they would 
exist inside our server, in the binary. Say your server is up and running and it's serving your video ranking model, version one, and it's cruising along, everything's working well, and say you've trained version two and you want to deploy it. So I'm going to walk you through how that would work. On the bottom right of the slide here you'll see the green Source API with the yellow-orange file system plug-in. It's a really straightforward plug-in: it simply monitors a file system, observes the new versions of your model, in this case version two, and the source is going to emit a loader, in this case a loader of a TensorFlow SavedModel. It's important to note that the loader doesn't immediately load that TensorFlow graph. It keeps track of the metadata, it can estimate the RAM requirements and other resources used by the model, and then it can load the model when it's asked to, and this is really an important differentiator from a straightforward model server that you might build yourselves. So this loader, which is very lightweight, is emitted over to the manager. Now, the manager actually knows the state of the server, how much RAM is available, and other resources such as GPU or TPU memory, and only when it's safe to do so is it going to ask the loader to go ahead and load version two of the model. So let's say that the manager has plenty of memory; it goes ahead and loads version two of the model. You might think that the first thing it's going to do immediately is unload version one, since we don't need it anymore, but there's another important detail here. I see some smiles, so you probably know this: say you have client request threads, on the top left here, and they're actually still processing some queries on model one. So what you actually have to do is keep track of those. In this case we have a handle on top of that TensorFlow SavedModel, and it keeps track, with a high-performance ref-counted pointer mechanism, of exactly how many clients are still outstanding and processing requests.
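The load-then-drain handoff just described can be sketched roughly like this. This is a toy Python sketch of the idea, not TensorFlow Serving's actual C++ classes; the names `ServableHandle` and `ModelManager` are invented for illustration:

```python
import threading

class ServableHandle:
    """Toy stand-in for a ref-counted handle on a loaded model version."""
    def __init__(self, name, version):
        self.name, self.version = name, version
        self._refs = 0
        self._lock = threading.Lock()

    def acquire(self):
        with self._lock:
            self._refs += 1
        return self

    def release(self):
        with self._lock:
            self._refs -= 1

    def in_use(self):
        with self._lock:
            return self._refs > 0

class ModelManager:
    """Keeps the old version alive until its last in-flight request finishes."""
    def __init__(self):
        self.active = None
        self.draining = []

    def load(self, handle):
        if self.active is not None:
            self.draining.append(self.active)  # don't unload the old version yet
        self.active = handle

    def sweep(self):
        # unload old versions only once no client still holds them
        self.draining = [h for h in self.draining if h.in_use()]

mgr = ModelManager()
v1 = ServableHandle("video_ranker", 1)
mgr.load(v1)
req = mgr.active.acquire()                    # a client starts a query on v1
mgr.load(ServableHandle("video_ranker", 2))   # v2 arrives
mgr.sweep()
print(len(mgr.draining))   # 1: v1 is still draining
req.release()
mgr.sweep()
print(len(mgr.draining))   # 0: v1 can now be unloaded
```

The same shape shows why the manager, not the loader, decides when memory is reclaimed: unloading is gated on the ref count reaching zero, not on the arrival of the new version.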
Only when all of those requests have quiesced will the manager go ahead and unload version one. All right, so I'm going to cover some strengths of the libraries; I'll just highlight a few aspects of this slide. The one I'm going to highlight here is the fast model loading on server startup. This is another really subtle detail, but it's really helped a lot of our users. So let's say you're starting up a server. There's a couple of reasons you might want to do this: let's say you're running an experiment on your workstation, you don't want to wait a long period of time, you want that to load as quickly as you can. Another example where it's even more critical is autoscaling. Say your users all get home at six o'clock, they all sit on their couch, pull out your application, and send you a big traffic spike of video ranking requests. Using a system like Kubernetes, you're going to want to autoscale as quickly as possible, and you want those servers to start up quickly. Now, most of the time with servers you only want maybe one or two of your threads to do model loading; you want all of your threads and CPUs to be performing inference for your users. But at startup, say on a 16-CPU machine, why not use all of those cores and all of those threads for model loading? So it's just a small optimization, and one of many that we do to make it easier to autoscale and run your actual models in production. Let's see: we use the read-copy-update pattern, which is a high-performance pattern for doing concurrent memory access to these models; I mentioned the ref-counted pointers; and the simple plug-in interfaces, so it's really easy to extend to support your own data store, cloud storage, or even a database of your models. All right, another show of hands: who has used TensorFlow Serving's libraries? Okay, a few advanced users. So especially for you folks, but also for anybody who's thinking that you might want to use our libraries, definitely take a look at ServerCore. So what we observed was
that our libraries were low-level, very powerful, and very configurable, but for most people, you really wanted sane and sensible defaults. You wanted to load some set of models, you wanted to load new versions over time, and so we made the class ServerCore, which does this for you, and it's configuration driven. So if you move to it from our libraries, you can probably remove about 300 lines of code and just have a nice little config, so give that a try. All right, so moving on to our binaries, and this is what we recommend for most users. I mentioned a few times there are things like sensible defaults, how many threads to use, and so on; the binaries come out of the box with those enabled, so you only need to specify the minimum amount of configuration. They're very easy to configure (I'll show you on a coming slide how easy it is to launch these), and they're based on gRPC, which is Google's open source high-performance RPC framework. All right, so hopefully you can read these code samples. The line at the top is how you build the model server: it's a one-liner with Bazel. The second line is how you run the server for a single model, in just a one-line command, no config files needed, just three flags. You specify the port, the name of your model, and the path to the model on disk, and we call it a base path because you might have multiple versions over time. So over time you can just drop more and more versions in that directory, and they'll be loaded. And lastly, we have the command to run the model server with a config file, and this could have as many models as you want, as many as will fit on your server. All right, so this is great, we have a server; how do we actually talk to it? I'm going to speak a little bit about our inference APIs that are supported by the model server. The first one is called Predict. It's very, very flexible and powerful, and it's tensor oriented: you can specify one or many input tensors and output tensors.
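Stepping back to the launch commands just described, they look roughly like this. This is a sketch from memory of the commonly documented invocation, so treat the exact Bazel target, flag names, and paths as assumptions rather than a verbatim copy of the slide:

```shell
# Build the model server binary with Bazel: a one-liner.
bazel build //tensorflow_serving/model_servers:tensorflow_model_server

# Run it for a single model: no config file, just three flags.
tensorflow_model_server \
  --port=8500 \
  --model_name=video_ranker \
  --model_base_path=/models/video_ranker  # drop new version dirs in here over time

# Or run it with a config file listing as many models as will fit on the server.
tensorflow_model_server --port=8500 --model_config_file=/models/models.config
```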
So you can do basically anything you could do with a TensorFlow session; if you've been playing with that in, say, Python, you can do just about anything there with the Predict API. Now, moving up a layer of abstraction, we've observed that the vast majority of machine learning used in production is actually doing classification and regression, so we have two APIs, Regress and Classify, and these use TensorFlow's standard and structured input format called TensorFlow Example. It's a protocol message, and it's feature oriented, so you can have different values for different features, and a nice thing about these APIs is you actually don't have to think about tensors at all. So if you're new to machine learning and you just want to try something out, try a codelab, or even if you want to deploy in production (and we see many production users inside Google using these APIs), go ahead and try the high-level APIs first. Next up, and I'll talk about multi-inference a bit later in the talk, multi-inference allows you to combine multiple regressions and classifications into a single RPC, and that has some really cool benefits. All right, here's a totally toy model for you, and this is just to show what the syntax looks like, since I've talked about these APIs and you may not have seen them before. We have a model spec, and this specifies the name of the model, because your server could have multiple models running at the same time. We'll have one feature; in this case the key for it is "measurements", and we have three floating-point values. And we have a structured result, so again you don't have to inspect tensors: you'll actually get a structured result called regressions, which has one score. So, just an example. All right, I'm going to move on to some of our key challenges, which for the most part we've solved, but we still have quite a ways to go. Okay, so this is a story of isolation and latency. Anybody who's ever looked at a
latency graph before can probably tell that those spikes are really bad; you don't want to have those, right? So those spikes were happening pretty much Monday through Friday around 12 or one o'clock. Does anybody have an idea, just shout it out, what caused those latency spikes? Lunchtime: strongly correlated. So we had this fantastic engineer we were working with on one of our largest client teams. They were serving multiple models, these were large multi-gigabyte models, and they were serving several hundred thousand queries per second on a fleet of servers. And what was happening is this great engineer, a really close collaborator, would go to lunch, but right before he went to lunch he would push a new version of the model. So you can probably pretty quickly figure out what's going on here. Every time a new model, a particularly large one of about five gigabytes, was loading, inference would slow down at the upper percentiles. You might wonder why this was going on; once we looked, the cause was pretty easy to figure out. For most of machine learning, you have one model in a server at a time, whether you're training or serving, until now, and so most systems are optimized around throughput, not latency, and all available threads are available for any computation that's ready to run. In this case, model loading is parallelizable in TensorFlow, and so TensorFlow would gladly use all the threads available and go load that new model, starving inference of threads. So here's the after slide, and definitely note that the axis on the right side of this slide just dropped by 10x: we went from over a second in terms of our latency spikes to about a tenth of a second. This fully met the needs of the customer; they're meeting their serving SLA, they're happy. And the way that we achieved this was we added an option to TensorFlow that lets you explicitly specify the number of threads to use for any call to your graph, and in this case we wired them up so that there's one thread pool for model loading and one for inference.
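A rough standard-library sketch of that fix: one small pool dedicated to loading and a separate pool for inference, so a big model load can never starve the request threads. This is a pure Python illustration of the idea, with stand-in functions; TensorFlow's actual option wires thread pools into graph calls rather than using Python executors:

```python
from concurrent.futures import ThreadPoolExecutor

# Inference gets (nearly) all the threads; loading gets just a couple.
inference_pool = ThreadPoolExecutor(max_workers=14, thread_name_prefix="infer")
loading_pool = ThreadPoolExecutor(max_workers=2, thread_name_prefix="load")

def load_model(version):
    # stand-in for the parallelizable multi-gigabyte graph load
    return f"loaded v{version}"

def rank_videos(query):
    # stand-in for a latency-sensitive inference call
    return sorted(query)

# A new model version loads in the background without starving inference.
load_future = loading_pool.submit(load_model, 2)
results = list(inference_pool.map(rank_videos, [[3, 1, 2], [9, 7, 8]]))
print(results)               # [[1, 2, 3], [7, 8, 9]]
print(load_future.result())  # loaded v2
```

At startup, before any traffic arrives, you'd size the loading pool up to all the cores instead, which is the fast-startup trick mentioned earlier.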
So, pretty straightforward. By default, since all of our users want to use all the threads for inference all the time, we actually only need users to configure the number of threads for loading, so you usually specify one or two and then you're good to go. All right, so this is going to be several slides, moving on to batching, which I mentioned earlier in the talk, and this is a really key challenge for getting great performance and throughput. So let's look at the right side of the slide: you have these Tetris blocks that are falling down. Imagine these are requests from your users; maybe those blocks represent individual videos that are being ranked by your server, and their different heights represent the times at which they arrive at your server. What we do is we wait for a very small and tunable period of time, we wait for a few queries, we aggregate them together, and we stitch them into one set of tensors that we feed into the TensorFlow graph. This enables you to get very efficient use out of GPUs and TPUs, which have really good batch parallelism. One external paper, the spin paper, saw that moving to batched inference on GPU led to a 25x speedup. I'm also excited to share that on CPU we've seen, for some models, up to a 50 percent improvement in throughput. So it's a tiny little tuning knob that can save you quite a bit of your inference costs. Another thing is that once you're doing batching on top of custom hardware, hardware accelerators and GPUs, you'll typically only want to have one unit of work being done on that chip or device at a time. And one thing we noticed is that when you have multiple models and multiple versions, but perhaps one piece of hardware, you're going to need to schedule the work carefully across those models, and so we have a shared batch scheduler that lets you do just that. Lastly, the batching capabilities are all available in both library
form as well as configurable in the binary, so you can access them very easily in either approach. Now we're going to dive into what happens inside one of these session run calls. All right, so hopefully you can parse this utilization graph; there are two different kinds of utilization I'm showing here. The first is the blue dashed line: this is the utilization of your CPU, and you'll notice it's not used through the whole period of this request. The orange line represents the GPU or TPU utilization. Walking through, we'll start with some single-threaded, CPU-bound work, like, say, parsing of your input; it's really common, pretty much all models are going to do this. Then you might do something else that's single-threaded, like a vocab lookup or an embedding lookup. Okay, you're done pre-processing your data; now you're going to shift over and do computation on the GPU or TPU, so your CPU basically goes idle, your GPU or TPU is maxed out for some period of time, it returns, and then again you're going to do some post-processing on CPU. The reason I show this is that, naively, if you just ran one inference at a time on this set of CPU and GPU, you're going to be vastly under-utilizing both pieces of hardware. One of the ways that we solve this is by having multiple threads, very common in serving systems: you can have a good number of threads, limit how many are running at the same time, and make it so that you're constantly using your GPU and TPU and, most of the time, your CPU. But the key takeaway from this slide is that there's this queuing time that happens at the beginning of each request. Once you enable batching, as I mentioned earlier, there's this tunable window of time where you're going to wait for a few requests to come together, and then you'll process them. This totally makes sense for the GPU and TPU work, but for the work done on CPU there's no point in waiting; all this does is add latency to that work.
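That tunable batching window can be sketched with the standard library. This toy batcher is a stand-in for TensorFlow Serving's actual batch scheduler, with invented names and made-up numbers for the batch size and window:

```python
import queue
import time

def batcher(requests, max_batch=4, timeout_s=0.01):
    """Collect up to max_batch requests, waiting at most timeout_s
    after the first one arrives (the small tunable window)."""
    batch = [requests.get()]                  # block until the first request
    deadline = time.monotonic() + timeout_s
    while len(batch) < max_batch:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break                             # window expired: ship what we have
        try:
            batch.append(requests.get(timeout=remaining))
        except queue.Empty:
            break
    return batch

q = queue.Queue()
for video_id in range(6):                     # six requests arrive back to back
    q.put(video_id)
print(batcher(q))   # [0, 1, 2, 3]  (hit max_batch)
print(batcher(q))   # [4, 5]        (window expired first)
```

A real scheduler would then stitch each batch into one set of tensors and run a single graph call, which is where the GPU/TPU batch-parallelism win comes from.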
All right, so we have some challenges here. There's the scheduling and utilization of both the hardware accelerator and the CPU. There's the issue of saturating one resource: maybe you've tuned your model, it's working really great, you're using all of your CPU but only half of your GPU; that's not ideal, and vice versa. I just mentioned queuing latency: you really don't want CPU single-threaded work to wait to be batched together, since all that does is add latency. And the last one: for sequence models, where you have input data of variable length and the computation has variable cost, it's really challenging to batch these pieces of work together. One of the common approaches is to add padding to the shorter sequences so they all match, but this means that your GPU and TPU will be doing work over padding, which is wasteful, so there are challenging tradeoffs there. So, we think we know the way forward. I'm going to share with you some work that we're currently doing; it's ongoing, but still in the experimental phase. We are moving batching inside the graph, and we think this is going to be huge for throughput and performance, in particular for challenging models. Moving batching inside the graph enables you to surgically batch the specific subgraphs that benefit the most from your custom hardware, your CPUs, and so on. For sequence models, where you have things like a while loop with a single step of code being run inside the while loop, you can actually run the batching only inside that while loop. So it's really neat: you're never doing batch inference or batch computation over padding, and early results look super promising here. Another area is complex models: for example, a sequence-to-sequence model might have an encode phase and a decode phase that have different costs, and this would let you batch those subgraphs separately and tune them separately.
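The in-graph batching work itself is TensorFlow-internal, but the padding problem it attacks is easy to illustrate. A common mitigation outside the graph is to bucket sequences by length, so each batch carries as little padding as possible; `bucket_by_length` here is an invented helper for illustration, not a TensorFlow Serving API:

```python
from collections import defaultdict

def bucket_by_length(sequences, bucket_size=2):
    """Group variable-length sequences by a rounded-up length boundary,
    so batches formed within a bucket need minimal padding.
    Padding work on a GPU/TPU is wasted compute."""
    buckets = defaultdict(list)
    for seq in sequences:
        # round length up to the nearest bucket boundary
        boundary = -(-len(seq) // bucket_size) * bucket_size
        buckets[boundary].append(seq)
    return dict(buckets)

seqs = [[1], [2, 3], [4, 5, 6], [7, 8, 9, 10]]
buckets = bucket_by_length(seqs)
print(sorted(buckets))   # [2, 4]: lengths 1-2 batch together, 3-4 together
```

Batching inside the while loop, as described above, goes one step further: each loop step batches only real work, so no padding is ever computed at all.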
All right, next up I'm going to share some new technology that we recently released as part of TensorFlow 1.0. The main piece of tech here is SavedModel. SavedModel is the serialization format for TensorFlow models. It's available for very broad use; it's not specific to serving. You can use it for training, you can use it for offline evals, and you can use it for other interesting areas I'll get into. SavedModel brings two new features to TensorFlow. The first is that you can, in one SavedModel, have multiple metagraphs that share the same assets, such as vocabs and embeddings, as well as the same set of trained weights. You might ask, what would I want that for? I've just been training models. Well, it turns out that the metagraph for serving is very, very commonly different from the one for training, and now let's say you want to have a third metagraph that's going to be used on GPU: that's also something you would want to customize. So the multiple-metagraph support lets you have a metagraph for training, one for serving on CPU, and one for serving on GPU. All right, the second major feature of SavedModel is SignatureDef. A SignatureDef defines the signature of a computation supported by a TensorFlow graph. You can think of it like a function in any programming language: it has named and typed inputs and outputs. As humans, we're really good at looking at that graph; you can probably all guess what the nodes in that graph are. I guess if we polled us, all of us would figure out that the middle node on the left side of this graph is where you feed the input. But if you want to take your model and have a whole ecosystem of tools, such as serving systems, that can interpret graphs they had never seen before, you would need to annotate that graph in some way, and that's exactly what SignatureDef does. So in this case, the middle node on the left is where you feed your inputs, the top right node outputs a set of classes, and the bottom right node outputs the scores. All right, so building on top of SignatureDef is multi-headed inference. This
is also known as multi-task and multi-tower inference. Another show of hands: who's familiar with multi-headed, multi-task inference? Oh wow, okay, so this is pretty new. There's some emerging research and great work going on here; I think you'll be really excited about this. Let's take that example from earlier: you're serving video rankings, your users are using your app, you get lots of content creators, you're successful now, and so users are drawn to your platform, and maybe some not-so-good people are drawn to your platform. Maybe they're uploading clickbait; we've seen this in many places. Your first model, the orange model, computes the click-through rate, and these bad guys are training models optimized to get clicks. So you might decide, well, let's define a new metric and train on that. Let's call it conversion rate, and it's going to track users watching the majority of a video, so we'll call it CVR, and now we're going to go train a new model for this. But wait a second: all of the careful feature engineering and other things that we did to get our data ready for the first model apply to the second model. It's very, very likely, and in many cases a certainty, that those features are usable in both contexts. So you can actually train one model for multiple objectives, in this case CTR and CVR. I've listed some infrastructure benefits on this slide, and there are some really, really big ones: your inputs are only sent once over the network (that's already a pretty big one), you only have to parse the data in your model once, and you only have to compute your hidden layers one time. So this is going to save you bandwidth, CPU, latency, and even RAM overhead on your servers. This is great; I could stop right there, but there's another really important attribute of these models, and why they're becoming more and more exciting in the research that's going on. You're likely in this example to have many more clicks than you'll have conversions, so when you go to train these models, you have much more labeled training data for your CTR objective than for your conversion objective.
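The shared-computation win can be sketched abstractly: compute the expensive trunk once, then evaluate each cheap head on top of it. This is a toy numeric sketch, not the Estimator API; `shared_trunk`, `ctr_head`, and `cvr_head` are invented stand-ins for the parsed features, hidden layers, and trained heads:

```python
def shared_trunk(features):
    # stand-in for the expensive part: input parsing plus hidden layers,
    # computed exactly once per request
    return [f * 2.0 for f in features]

def ctr_head(hidden):
    return sum(hidden) / len(hidden)   # click-through score

def cvr_head(hidden):
    return max(hidden)                 # conversion score

def multi_headed_inference(features):
    hidden = shared_trunk(features)    # one trunk evaluation...
    return {"ctr": ctr_head(hidden),   # ...feeding both heads
            "cvr": cvr_head(hidden)}

print(multi_headed_inference([1.0, 2.0, 3.0]))  # {'ctr': 4.0, 'cvr': 6.0}
```

Serving both objectives from one call is what saves the bandwidth, parsing, hidden-layer compute, and RAM mentioned above, compared with running two separate models.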
So what happens when you actually train one model for multiple objectives? We've actually seen some early promising results, where in one case we were able to see a 20 percent improvement in a key quality metric by moving an existing separate model into a multi-objective model with another objective. So there are wins here on infrastructure and on quality. Multi-headed inference is available in the TensorFlow Estimator API, so you can train models with multiple objectives today, and you can serve them as well, so we're really excited about this. All right, I wanted to show really quickly what a multi-headed inference request looks like. It's pretty simple: you can specify one or more inference tasks, and they each have the name of a model as well as which signature to use. Really simple. Okay, all of this power and functionality sounds really great, but from early adopters we also heard that it's challenging to get right. I might want to do things like inspect a model and see what's actually going on with it, run a sample query, look at the metadata, and so on. So we're introducing the SavedModel command-line tool, and it lets you do all these things with a model. The first query you can do with the SavedModel client is look at what metagraphs are in the model. Pretty simple: in this case we have a SavedModel with two metagraphs, one for serving and one for serving on GPU. Now let's say we wanted to look at the serving metagraph and see its metadata. In this case, the metagraph contains two signatures: one is called classify_x_to_y, and the other is called serving_default. serving_default is a documented key, and it's a constant in TensorFlow, and what it does is say: if the user has not specified which signature to use, just use that one. In many cases, for people getting started, you have just one signature, so it's
really easy to get going. All right, now say you wanted to actually look at the input and output tensors in the serving_default signature. You add one more flag to the client, and you'll see the inputs (I skipped the outputs for the slide here): there's one input tensor called x, of type float. Note the method name on the bottom. The method name here, tensorflow/serving/predict, is kind of like a type in a programming language; it's informing another system, in this case TensorFlow Serving, that this signature was intended for use in the Predict API. We have similar ones for regress and classify, but you can override these: maybe you have an in-house offline evaluation system, and you can make your own method names and check for them in your models. All right, and lastly, let's say you would like to run a graph. You have some input data; maybe you're debugging the model, or you just want to try it out for fun. I'll highlight here that all of the params are the same, except instead of show we've asked to run the model. We also have two ways of expressing input that you can mix and match: maybe you have a NumPy file, and you can specify the path to the NumPy file and it'll just be read; or you can specify a Python expression, and it'll be interpreted. All right, so in closing, I wanted to say that collaboration is very, very welcome on this project. We sync our internal repository with GitHub about weekly, we have a developer on-call rotation that includes facilitating your pull requests, answering questions, and so on, and we have lots of open-ended research problems, so feedback is encouraged on APIs, techniques, anything I've talked about, and any challenges you have as well. You can reach us at discuss@tensorflow.org. For some links on how to get started, just search for TensorFlow Serving; it's very easy to find. We have a great Kubernetes tutorial; this will let you launch an Inception model server, and it's going to run and
autoscale for you. We have Google Cloud ML, as well as, as I mentioned, our mailing list, and you can also use our hashtag on Stack Overflow: #tensorflow. All right, let's go to questions. Thanks very much.

Hi there! iOS developer interested in getting started with the Firebase platform in your app? Well, you've come to the right place. There are two main parts to getting the Firebase platform up and running: adding your app to the Firebase console and installing the SDK. Let's go over these one at a time. For starters, let's go to the Firebase console at this URL here. Depending on when you're watching this video, the UI might look slightly different, but the general concepts should remain the same. Now, depending on your situation, you might see a blank "create a new project" screen, or you might see a list of existing projects. Oh, before we go further, let me take a moment to explain the difference between projects and apps. A project consists of one or more apps. Now, all the apps in the same project use the same Firebase database backend, and if you want, you can use features like Firebase Cloud Messaging to talk to all of them at once. You don't have to, but you can, which is sometimes convenient. So if you're a developer that has a cross-platform app, you generally want to put the iOS and Android versions of your app in the same project. That'll give you some nice cross-platform benefits: your user can access the same data if they switch back and forth between the iOS and Android versions of your app, things like Dynamic Links will work across both platforms, you can send one notification to all versions of your app, and so on. On the other hand, completely different apps? You should put those in completely different projects. There's nothing gained by cramming them into the same project except tears and heartache, I guess. So if you're working on a cross-platform app and your Android or web team has already created a Firebase project, you should probably select that project and connect
your iOS app in there. Otherwise, if you're the first one to be adding Firebase-y capabilities to your app, you can be the one to create the new project. In my case, I'm the first app associated with the project, so I'm going to create a new project here. I'll give it a name, and there we go. Once you've selected or created a project, you're going to want to connect your client app. I'm going to select the iOS button here, and I'll give it my app's bundle ID. You are eventually going to need to add your App Store ID here if you want features like Firebase Invites or Dynamic Links to work, but you can leave this blank for now and change it later. Now, when you click continue, your browser should automatically download this GoogleService-Info.plist file for you. Note that it needs to be named exactly this, so if you get that little "(1)" after the name like I just did, you're going to need to do a little bit of renaming in Finder. Okay, next up, drag the file into your Xcode project, like so, and let me go back to the console and hit continue here. It's telling us that this would be a good time to install the Firebase CocoaPods. Now, I'm assuming you know something about CocoaPods, but if you don't, here's a little video for you to check out; it's fun. So I'm going to jump into my project directory here and do a little pod init. We'll open up the file, and I'm going to uncomment this line, because I am using Swift, and this line, because my app happens to have a base SDK of 8.0, although at the time of this recording Firebase is supported as far back as 7.0. Next up, let's add some pods. Now, it's important to remember that to keep your app nice and svelte, you should only install the CocoaPods for features that you need. In fact, there is no all-encompassing über Firebase CocoaPod that installs everything for you. You're going to need to pod install each individual feature, and you can find a full list of the CocoaPods and the features they correspond to over here.
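A Podfile along the lines of what's being described might look like this. The target name and the commented-out pods are illustrative assumptions; the point is the only-what-you-need principle from the video:

```ruby
platform :ios, '8.0'
use_frameworks!   # uncommented because we're using Swift

# Hypothetical app target name for illustration.
target 'MyFirebaseApp' do
  # Only install the pods for features you actually use.
  pod 'Firebase/Core'       # the basics, plus Firebase Analytics
  # pod 'Firebase/Auth'     # add later if you need sign-in
  # pod 'Firebase/Database' # add later for the Realtime Database
end
```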
Now, for starters, I'm just going to add Firebase/Core, which includes everything needed to get the basics up and running and also enables Firebase Analytics. So now I'll make sure my project is closed, and I'll run pod install, and then let's open up the generated workspace. All right, we'll build it and make sure everything compiles okay, and it does, so we can move on to the next step. Okay, looks like the last step here is to add some initialization code. I recommend putting this in your app delegate's didFinishLaunching method. First things first, let's import Firebase in my app delegate. Note that this is usually the only thing I'll ever need to import, no matter what we've installed; we're doing some pretty nifty work behind the scenes to make sure this works properly, and I know this sounds like the exact opposite of my "only pod install what you need" advice from, like, two minutes ago, but trust me here, this makes development a whole lot easier. All right, so next up we'll add the line for the app configure call to make sure Firebase gets set up properly, and that's actually all you need to do. This configure method will take a look at what libraries you have installed and initialize them, grabbing all the appropriate constants from that GoogleService-Info file that you dragged into your Xcode project earlier. So we'll give it a quick run, and if everything is set up and working correctly, you should see a few lines in your console about how Firebase Analytics is up and running. All right, so congratulations, you are now up and running with Firebase. There are a lot of places you can go from here: you can add sign-in using Firebase Auth, or get your app talking to the Realtime Database, or start tracking more of your users' usage with Firebase Analytics. You can check out these links to get started, and have a little fun.

What's up, everybody? David here, and today I have a quick and easy Firecast for you. We're gonna get up and running with Firebase and the web, and this is
actually going to be the first of many screencasts in a series, so make sure you subscribe to get notified of tutorials on authentication, storage, hosting, and web push notifications with Firebase Cloud Messaging. Also, if you're a fan of JavaScript frameworks, I'm going to be dropping videos for Angular 1 and 2, Polymer, React, and Ember, so you better subscribe, because you don't want to miss those. But today we're going to start with the very basics. I'm going to show you some mad copy-and-pasting skills by getting the project initialization code from the Firebase console, and then we're going to set up a small web app, so let's go and dive in. So I'm in the Firebase console at console.firebase.google.com. You can see I'm logged in as myself up here, just smiling at you, but to get started I'm going to create a new project. So I'm going to click "create new project", I'm going to call it "web quick start", and then we'll create it. My project is now created, so I'm going to click "Add Firebase to your web app", and this brings up a little modal with all the initialization code I need to get started. It has things like my API key, auth domain, database URL, and storage bucket. So now I can go to the bottom right, and then I can click copy, and that's all the code I need to get started. But just as a little FYI, you can access all of this information by clicking "Auth" and then going up to the top right, where there's "Web Setup". But now, to the editor. So here in my editor I'm going to get crazy: I'm going to create this web page from scratch. I'll start with my basic HTML boilerplate, give it a title, and now I can just paste in all the code from the console, and this is all you need to get started. And just to prove that it works, I'm going to use the database as a little tiny demo. So I'm going to create an h1 and give it an id, and every single time the value changes in the Realtime Database, I'm going to sync it to this h1. So the first thing I need to do is get that h1 by its id, and then
I'm going to create a database reference using firebase.database().ref() and then create a child location at the "text" location, and now I can synchronize any changes using the on() function, and then, using ES2015 arrow functions, I can just do it all in one line. So to the left right here I have my project in the Firebase console, and to the right is just my blank page. To use the database, I'm going to remove all security: I'm going to click "rules", and then I'm going to say read is true and write is true, and click publish. And you should totally know that you only do this while you're developing, because that means anyone can read or write to your database. So now I'm going to give my browser a refresh, and then I'm going to add a "text" location, and it synchronizes to the browser, and so I can change it and then it changes as well. So keep in mind that the Realtime Database is just one of the many features Firebase offers for the web; you can also use authentication, storage, hosting, and even Firebase Cloud Messaging. So that's all it takes to get started with Firebase on the web, and if you want to go and learn more, then check out the link in the description for our official documentation. And if you're super excited to learn more about Firebase on the web, then please subscribe to our channel, because we're gonna have tons of more content. So that's all for this time, and I will see you all later.

We started is on a living room couch, and we really started because of the problem that we had, which was asking the same question to our closest friends: where are you, what are you doing? We were baffled by the fact that there wasn't a solution that solved this problem, and we felt like we could build one that was better. The value that is drives for all users is knowing which of your friends are nearby. So if you look around where we are right now, an arena: how many times have people gone to a basketball game, hockey game, or a concert and found out the next day that they had friends at the same event?
Think about all those moments that are missed because they didn't know they had friends there. What we're solving is letting people know who's nearby and making those moments matter. My name is Diesel Pelts, and I'm the founder and CEO of is. I'm Mark French, co-founder of is. We felt there was no reason users should manually go fetch data. When I get a text message, there's no reason for me to tap refresh, and we felt, why should this be any different? Firebase let us solve that. Firebase really allowed us to enhance the user experience by making it real time, simplify the UI by not having a refresh button, and cut down on development time. Like any startup, the most valuable assets you have are your team and your time, and Firebase has allowed us to save 50 percent in terms of time by moving that much quicker with the product. It's a game changer. We're using eight features from Firebase right now: analytics, Remote Config, Dynamic Links, the Realtime Database, and more. Traditionally those would have been in eight different places, and now we go to one place, the Firebase console. We're eager to launch this product in a big way. We're seeing how people are using the product and how they're inviting more and more friends, and we're growing very, very quickly, so we sleep a lot easier at night knowing we've got Firebase there to build that infrastructure. If you're a developer, use it. We love it, and it's enabled us to focus on developing the user experience and not have to worry about the things in the background.

With NPR One we are reimagining what a listening experience could be outside of the radio. It's the radio, but better: it has all of the great stuff that we've spent 40 years perfecting. With NPR One we see the opportunity of reaching a more diverse audience that has a device in a pocket all the time. My name is Might Safe Lahi, and I'm the lead mobile developer for NPR Digital Media. My name is Nick DePrey, and I'm the innovation accountant at NPR. My name is Tejas Mistry, and I'm the senior product manager of NPR One. Some of the biggest challenges in any mobile app are in that first impression when the user first installs the app: you've got a very limited amount of time to convince them to keep the app and get engaged in the experience. Trying to figure out how we can get users into the content as quickly as possible was the real focus of integrating Firebase and Dynamic Links. Using Dynamic Links, we were able to shorten the number of interactions it takes for a user installing the app to get from the promoted content to the content itself from 20 to 3, so that user is able to get right into the content. We're driving more and more listening for each user every week; it's really astounding. We're creating playlists of content that are configured by the podcaster, by a member station, or by us internally, and with Firebase we have that at our hands. Having the analytics product interact with things like Dynamic Links, Remote Config, and Cloud Messaging adds a real multiplier effect, and with the integration with the broader Firebase suite, I don't have to go outside the platform to figure out what's working. So it's not just about shipping the product faster; it's about analyzing the results faster, and with the integration with all the other Firebase products, we're really excited about all the things we can learn.

Raizlabs is a company focused on building excellence in software, technology, and design. We do that through our work on mobile applications, websites, and technology in general. My name is Greg Raiz; I'm the CEO and founder of Raizlabs. We really want to understand the human problem. Oftentimes the hard problems in software aren't just the technology problems, the APIs, the how-do-you-connect-these-things, but really getting at the heart of what people are trying to accomplish in their day to day. My name is Ben Johnson; I'm the managing director
at Raizlabs in Boston. We decided to put our hat in the ring for the Google Certified Agency Program. The first tier is just having access to a lot of what Google is doing today: access to design reviews, invitations to events. That's the base level, and I think it's hugely rewarding even in and of itself; having Google review your app from a design perspective is amazingly helpful. So that's the first tier. The second tier comes with certified status. There's a long application process for that, and once you have it, it's something you can really point your clients to, something that gives them comfort that we're a reputable firm building great software in a way that Google believes in. The certification is a higher bar for us, a way to really differentiate ourselves from many of the other companies out there. It required us to really dig into what it means to be truly world-class, and we wanted to set that bar for ourselves as well. My name is John Green; I'm VP Creative at Raizlabs. The Google Developer Agency Program allowed us to have access to engineers on the Maps team and the design team, to figure out how we can actually do some of these things, and we could reach out to them when we needed to. It also allowed us to say, we can make this a success: they might look closer at this app because we're part of this program, which has been super helpful. Some of the challenges in building the Six Flags app, which touched on some of these, are certainly mapping technology and payment technology, Material Design, and the APIs. Having access to the Google team to really ascertain how we're approaching certain software, and to ensure that we're building technologies the right way, makes for a smooth development process. We set off to build the Six Flags app with a pretty lofty ambition: to bring in-park navigation and commerce to the app. And the comfort of knowing that Google is there to help us understand where they are
heading as an organization, and that we are along for that ride, is a really helpful thing to know. As a business, we know that going forward we're going to be at the cutting edge of whatever Google is doing, through access to programs and through collaboration with their teams. It's really helpful for us to know that six months, nine months down the road we'll still be a part of that process and we'll still be working with them to figure out what's next.

It's going great. This is Brad Abrams; Brad is a product manager on the Assistant. That's right, yep. How's it going, Timothy? It's going great. Now that you're here and I'm here, I think we should talk about the Assistant. What do you think? I think that sounds like a great idea, because it's the only thing I know how to talk about. Perfect. All right, let's start with square one: what is the Assistant, and why is it cool for developers? Well, the Assistant is really a conversational interface to Google. It's the kind of smart brain in your Google Home or on your Android phone, and actually at this event we just launched it on iOS as well. You can have a real conversation with Google to help you get things done. That's awesome. So what are the different surfaces for developers to integrate with? Great question; there are a couple of different ways. One, you can extend the Assistant by adding your own custom apps to it, and two, you can host the Assistant on different devices, so if you're building your own hardware, you can host the Assistant there. And that's the difference between Actions on Google and the Assistant SDK, right? Exactly: the Assistant SDK is where you host it with your hardware, and Actions on Google is extending the Assistant. Okay, well, we're at I/O and I know some new stuff has been announced; why don't you tell us about some of your favorites? Oh man, it's hard to pick, there are so many. I'm really excited about Actions on Google coming to phones, to Android and iOS.
Up until this event you could only build Actions on Google for Home, for a voice-only, eyes-free experience, and now we have Actions on Google for the phone as well. So we have suggestion chips, lists, and image carousels; they can really give you a deep multimodal experience. Oh, you mean like seeing and hearing? That's right, you can see and hear. We also launched transactions, so you can buy things via the Assistant. I think that'll help users a lot; I know I want to buy a pizza from my phone. I can just say, okay Google, I want a pizza, and it can connect me with somebody to go do that. And for developers it also gives a financial incentive, because I hear some developers like to get paid. Yes, indeed they do. And then, of course, dear to me is developer experience. We launched the platform first in December, and we've done a lot of work on developer experience since: we launched a new version of the developer console, which we call the Actions console. Developers can use it to work better together in teams and to get deep analytics about their application, and it integrates really well with the Firebase and Google Cloud consoles. Awesome. Can I do one more? Absolutely; see, I can just add them. So one more would be discoverability. I think that in the whole chatbot space generally, having people be able to discover the apps has been a problem, and we've done a lot of work in that regard. The first thing is that we have a new directory for our apps, and it's right there on the Assistant surface, just one tap away; you can see a category view of all the Assistant apps that are there. We also gave you a way to get a web URL into that, so if you wanted to promote your app via your enormous Twitter following, you could totally go do that, which is awesome. And of course people who love your Assistant app can share it with their friends via the deep link. And finally, we did some work
to help discover when to invoke your Assistant app. From information in the directory and other information provided by the developer, we can respond to queries like "play a game" and actually find the right Assistant apps to trigger in that case. That's totally awesome. Yeah, it's really fun; it's kind of a big release for us here at I/O. Yeah, I mean, there's a lot of different stuff. There certainly is, but I think it comes together in a really cohesive package, because if you're going to do transactions, you're going to buy things, and a lot of the time having a visual surface to see the different options, to see the card, really helps with that. And then, of course, if you're going to have a financial interest there, being able to have people discover your Assistant apps really helps. Awesome. Okay, so let's take a step back; I would love to hear your thoughts on conversational interfaces in general. Yeah, I mean, think about this evolution that's been happening. When I started using computers, it was mouse and keyboard; it was very indirect. Wait, you had a mouse? I did have a mouse; you don't think I had a mouse? Well, I mean, you start with the command line, right? Yeah, okay, before that you had a keyboard, and then we added a mouse, but these are indirect: you do something with your hands and something happens on the screen. And then we got capacitive touch, so you could actually touch the screen; my kid, when he was two, knew how to touch the screen. That was very intuitive. And I think we're on the verge of making the next step, and that is, rather than just being able to touch the button, what if I could just express what I want? Why do I have to dig through a bunch of menus, find the right button to press, not really sure what's going to happen when I press that button? What if I could just express my idea: this is
what I want to happen, I want to play this game, I want to order this pizza, I want to do whatever. That's what I think this conversational interface is enabling. Awesome. And I do want to say that I think one of the things that's been holding us as developers back from that is the right tooling to do so. As I describe that, I think developers are going, sure, but how do I do that? How do you think about the huge number of ways people could say they want a pizza, or order toppings on the pizza? How can you turn the huge vagaries of human language into something a developer can understand? And that's where tools like API.AI really come into play. API.AI is our own conversation-building tool, and what it does is let you as a developer define intents just by giving us some example phrases of what a user would say. Then we can pull the entities out of that and do the natural language understanding, so that we can do slot filling and actually figure out what users are saying. So you might give an example like "I want a large pepperoni pizza with tomato sauce." We know "large" is an entity, "pepperoni" is an entity, and "tomato sauce" is an entity; we can pull those out and then recognize other queries like "I want a small pizza with tomatoes and green olives, with no sauce." Because we can do that, it makes it much more tractable for developers to actually understand human language. Seriously, how many times have you ordered a pizza with no sauce? Well, that's pretty rare, but actually I'm for extra sauce, just because it dribbles down your... oh, okay, that's good. All right, is there anything else you want to say about the Assistant or any of the developer tools before we go? No, I think we pretty much covered it. All right, Brad, thanks so much. All right, thank you.

Is this the clicker? Everyone, welcome, and thank you for coming.
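Before moving on, the slot-filling idea from the Assistant interview above can be illustrated with a toy matcher. This is only a sketch of the concept, not the API.AI SDK, and the entity lists are made up for the pizza example:

```javascript
// Toy slot filling: match known entity values in a user phrase.
// Real NLU generalizes from example phrases; this just does lookups.
const entities = {
  size: ['small', 'medium', 'large'],
  topping: ['pepperoni', 'tomatoes', 'green olives', 'mushrooms'],
  sauce: ['tomato sauce', 'no sauce', 'extra sauce'],
};

function fillSlots(phrase) {
  const text = phrase.toLowerCase();
  const slots = {};
  for (const [slot, values] of Object.entries(entities)) {
    const found = values.filter(v => text.includes(v));
    if (found.length) slots[slot] = found;
  }
  return slots;
}

console.log(fillSlots('I want a large pepperoni pizza with tomato sauce'));
// e.g. { size: ['large'], topping: ['pepperoni'], sauce: ['tomato sauce'] }
```

The same dictionaries then recognize the second query from the interview, pulling out "small," "tomatoes," "green olives," and "no sauce" without any new examples.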
We are members of the Android UX team, and we're excited to talk to you today about the new notifications features in O from a UX perspective. I'm Julie, a researcher on the Android team, and I'll first tell you about the research that inspired the new notifications features in O. In November of 2015 our team went on what we call a UX expedition, to learn more about how people experience notifications. Our team of researchers, designers, engineers, and PMs all spent a week in New York and New Jersey speaking to and observing users. We approached the topic as attention management, to include interruptions from devices, other people around us, ourselves, and everything in the environment; we didn't want to limit our scope to just on-screen smartphone notifications. We used multiple methods in the research. We started by interviewing people, but then we also observed them, because notifications are so contextual and in the moment: we wanted to watch people receive notifications to see what they would do and how they would react. We also had group discussions among friends, who did design exercises together, and lastly we synthesized the data as a team, held a workshop, and came up with the ideas that turned into the features in O. So what did we learn? The most striking observation we made is that there was an addiction to phones. People were bound to their phones all day because they were afraid they might miss something important. Phone notifications were a major source of stress; people were hypervigilant to any kind of signal they could be getting from their phones, afraid to miss something important, and most notifications were actually not important to them; only a few were necessary. We heard lots of things like this from our participants. Denise said, "If I don't have my phone for two seconds, I freak out. What if I miss something?" She was a busy lawyer, and I observed her in her house the whole day, constantly going like,
"Where's my phone? Where's my phone?" She'd leave the room and come back looking for her phone, really anxious all day, just in case her phone wasn't with her. And then there's Alice, who's a professor. She said, "I wish I could have something like an Amber Alert for my parents, my ex-husband, and my son's school, so that those would override everything else." She gets lots of notifications from different sources, but the only ones she cares about are these, and she wishes they would always override the others. We observed different coping strategies users adopt to block out all of this noise. The first is to physically put the phone away: people would force themselves to put their phone in a purse or in another room; someone even put it in a box in another room, anything to prevent themselves from checking it. The second strategy was to not let the device make noise: let it just vibrate, or don't even let it vibrate at all. Another strategy was to not let sources contact you, meaning blocking all notifications from an app. And the last one is to do nothing, but be constantly annoyed, every day and all day. Stephen said, "A particular social networking app sent me way too many notifications, so I deleted the app." He got fed up and deleted it, and now he uses the service in a mobile browser multiple times a day, purely so he won't receive notifications. In fact, the pattern we're hearing now is that people get so annoyed with notifications that whenever they download a new app, by default they'll just block all its notifications without even giving it a try. This is obviously something we all don't want, so we need to be really careful about the cumulative effect of all of our apps' notifications. People often want only some notifications from an app, not all of them; even if they want just one type of notification, they still receive a bunch of unwanted ones. And many people don't adjust their settings even if they
know they can. Even if they know there's a way to do it, they might not know how, or it might feel like a hassle, so they just live with the problem. Angie said, "Notifications are like junk mail: you can take the time to set up filters, or just hit delete. I just deal with them." She deals with it every day. Stella said, "If you allow push notifications, I know there's a way to go into settings and adjust it, but I don't know how to do that." So what else did we learn? There were only two types of notifications that people needed. The first: above all, notifications from other people are most important. These include SMS, messaging apps, emails from people, phone calls, anything that came from a person on the other end; these were most valued. Flora said, "Notifications from other people are most important. I always respond." Those are the only ones she pays attention to. And then there are the more important people: notifications from the inner circle matter even more. These are people like your spouse or significant other, parents, kids, and closest friends; they can reach us at any time, and we keep channels open and reserved for them. Andrew said, "Work emails and texts from my wife are the only things that matter, basically things I need to tend to so I don't get in trouble." Those are the two types he cares about above everything else. The other kind of notification that's important to people is reminders for things they need to do. We observed people having things they needed to remember throughout their day: picking up the dry cleaning, calling a credit card company to follow up on an issue, cleaning a chinchilla cage if that applies to you, setting up a kid's play date, making new keys for the car. We all have things like this in our lives, and people need to remember them. David said, "Sometimes I get a notification for something that I actually want to do, but just not at that moment, and then I forget about it later because I don't
see it again." And Shelly said, "I know I have to pick up my dry cleaning, but telling me right now is not going to help me. It's like nagging. Tell me when I'm near the dry cleaner." When a notification isn't addressed right away, it's very easy to forget: out of sight, out of mind. We realized we need to make it easier for people to remember these tasks. While this research, as I mentioned, was conducted only in New York and New Jersey, we've observed similar patterns in many other studies across many locations since then. Whatever topic we're studying, from productivity to wearables to driving, we keep hearing over and over again about the stress and the burden of all the notifications people are being bombarded with. Here are some of the other locations where we've heard similar patterns, across many US cities and internationally. Based on this research, our team felt it's important for us to respect our users' attention and to allow them more control, and this inspired some major changes in O this year. Now here's Rachel to give you more details about the features.

Hi, I'm Rachel Garb, and I'm an interaction designer and manager on the Android UX team. The first new feature I want to talk about is channels. A show of hands from the smartphone users in the room: who here has ever wished they could get some of the notifications from an app, but not all of them? That's almost all of you. So, knowing that, we definitely wanted to add support in the Android system so that users could have more granular control over an app's notifications. In O we're introducing channels, and here's how it works: you define a channel for each notification type that you want to send, and then your users are able to make decisions about notifications at the channel level. Let's look at an example. Suppose that we are all designers of an app for Crane Air, a fuss-free corporate airline that caters to the frequent flyer, and here are all our existing notifications.
We've got a pretty sweet program for our frequent flyers, or as we like to call them, executives: a set of notifications for our customers about the points they've earned and their status in the program. All of these notifications have to do with the executive program, so let's make a channel and call it Executive Program. We also occasionally offer discounts and deals to incentivize our customers to book more flights, so let's group those notifications together in a channel called Discounts and Deals. When we categorize notifications into channels, we want to look for similar subject matter and similar importance to users. Here we have a number of notifications that could be sent in the hours and minutes leading up to a flight, so they're pretty important; let's create a channel for them called Flight Updates. Now, with this example in mind, let's shift over to how users experience channels. The UI design for channels is optimized for the number one most important use case: I don't want this type of notification from this app. Without channels, any support for this use case has to come from within the app, with settings to block notifications by type. How does a user find out whether an app provides this? They have to open the app and look at the settings, and it's not like all apps do it the same way; a lot of the time there won't be any way to block notifications by type, and at that point the user has to decide what to do. They can ban the app from sending any notifications at all, they can uninstall the app, or they can just live with it, and none of these are ideal. But with channels it's a different story, one that's straightforward and consistent. Let's check out the UI. We at Crane Air know that a segment of our users love to celebrate when they earn points or reach a new status, so whenever they land, we notify them about how many points they earned on the flight and how close they are to the
next status level. But maybe not all users want to be notified like this. Some might not want any notifications except the ones essential to functioning in their daily lives, and they know they can get this information in their monthly statement, so they're fine waiting for it. To stop this type of notification, all a user has to do is touch and hold it, which lets them see the channel; then they turn off the switch and tap Done. No more Executive Program notifications from Crane Air, but they'll still keep getting the others, like those handy flight updates. Our aim with channels is to make it possible for users to get most if not all of their notification control needs met inline, or one step away in settings. In settings, users can decide to block all notifications from an app. New for O, an app can show a dot on its icon when it has notifications waiting for the user, and if the user doesn't ever want to see a dot for that app, they can prevent the app from showing it. Users can see all of the channels at a glance and decide which ones they want; they can view and edit settings for a given channel (more on that to come); and if the app wants to provide other settings beyond what's here, it can link users back to a screen in the app. This approach, bringing more notification controls into the system, makes the user experience more convenient and consistent. So that's an overview of channels. Next I'd like to talk about the new behavior model for notifications in O, and I'll start with the research that informed it: some people are putting up with unwanted notification behavior because they're reluctant to use settings; you heard about this from Julie. The opportunity here was to make it easier for users to understand and control notification behavior. Let's compare the current behavior model with the new one for O. In the current model, the app developer uses a field called priority and sets a priority level for each notification.
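The channel model just described can be sketched as a toy in plain JavaScript. All names here are invented for illustration; the real API is NotificationChannel in Android's Java/Kotlin SDK, and the behavior table is only an approximation of the importance levels discussed in this talk.

```javascript
// Toy model of notification channels (illustrative only).
// Each channel has one importance level, so every notification
// posted to it behaves the same way.
const behaviors = {
  high:    { sound: true,  peek: true  },
  default: { sound: true,  peek: false },
  low:     { sound: false, peek: false },
  min:     { sound: false, peek: false }, // BTW-style, shown silently
};

const channels = {
  executive_program: { importance: 'default', blocked: false },
  discounts_deals:   { importance: 'min',     blocked: false },
  flight_updates:    { importance: 'high',    blocked: false },
};

// The user's "touch and hold, then switch off" gesture:
function blockChannel(id) { channels[id].blocked = true; }

// The system drops notifications from blocked channels and applies
// the channel's behavior to the rest.
function deliver(notifications) {
  return notifications
    .filter(n => !channels[n.channel].blocked)
    .map(n => ({ ...n, ...behaviors[channels[n.channel].importance] }));
}

blockChannel('executive_program');
const shown = deliver([
  { channel: 'executive_program', text: 'You earned 500 points!' },
  { channel: 'flight_updates',    text: 'Gate changed to B12' },
]);
console.log(shown); // only the flight update, with sound and peek
```

The point of the sketch is the per-channel decision: blocking Executive Program silences every points notification while flight updates keep their full behavior.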
That level maps to a set of behaviors, some of which are customizable by the app. For example, if you choose priority level high as the app developer, you can have your notification make a sound or not, have it vibrate or not, have it peek or not. It's a lot for a user to follow. In the new model, priority is deprecated; instead, the app developer sets an importance level for each of their channels, rather than for individual notifications, and there are fewer customizations the app can make on that behavior. Here you see the inner workings of the priority levels; I won't read this out to you, but let's compare it with the importance levels. As you can see, we've gone from five levels down to four and reduced what can be customized. We are intentionally trading flexibility for simplicity, so that users have a better shot at understanding what the heck is going on. We're letting go of allowing each notification to behave differently, so that we can guarantee users that all notifications from the same channel behave the same way. In the current model, a user can control an app's behavior but has no insight whatsoever into the app's intentions as they make decisions; in the new model, the user gets a clear picture of app behavior as they make decisions. It's the difference between "I can make sure this app won't be able to do X" versus "I can see this channel does X, but I'm going to have it do Y instead." I'll show you what I mean by comparing the UI. In the current UI, if "show silently" is turned on, then it's predictable, but otherwise the user has no clue how notifications from the app will behave. In the new UI, the user just needs to look at the importance setting to see how the channel will behave, and if they don't like it, they can choose a different setting. In this new model, Android creates a settings screen for every channel from every app; this is the place where users go to see and control its behavior. The main settings are to block the
channel and to change its importance; if the importance level is high enough, the user has sound and vibration options too, along with more advanced options. You can also optionally provide a brief description of what the channel does, if it's not obvious, so that users can make more informed decisions. If your app already offers channel-like behavior, please leverage Android's solution, so that users don't run into a discrepancy between the app and system settings. You can link to the system settings screen either at the channel or the app level, and if you want to offer channel controls beyond what the system offers, for example letting users specify notification and email preferences in the same place in your app, you can have a link back to your app settings. And that's an overview of the new importance model. Next I'll turn it over to Justin to talk about the other new things in O.

Thanks, Rachel. I'm Justin Barber, and I'm a visual designer on Android. I have to start with an apology, because many of you probably showed up for this session hoping for a Justin Bieber concert, and it's not happening; but I do have some pretty interesting stuff to talk about. Visual hierarchy is the focus of O. If you remember, last year in N we completely redesigned the notification templates to be more flexible, modern, and much cleaner, and the new features in O that I'll be covering were built on the foundation we set in N, and also on Julie's amazing research. So that's where I'll start, with the research insights that really led to our approach in O. The first insight was that people are overloaded with notifications, and it's hard to find the important ones. I'm pretty sure we've all been there before. In Julie's research we uncovered that users were building associations to determine what was important to them: if the phone made a certain vibration tone, or they saw a certain color of light blinking, or they'd see
a color in their notification, they were making associations to determine whether or not it was important enough to spend time on. But that can be frustrating, misleading, and above all time-consuming. So our goal for O is to present notifications in a way that helps users sift through the noise, not just to build more associations, but to build consistency and trust: you should know where the important stuff is going to be before you have to go looking for it. The second insight that led to our approach was that people are the VIPs of notifications in absolutely all scenarios: who is more important than what or when. The opportunity here is pretty simple: make people notifications the VIPs of the drawer. People matter the most, and the notification drawer is really a communications hub, so we want to emphasize that in this space. As I mentioned before, N focused on individual notification templates, and in O we've zoomed out to see the bigger picture, how they all fit together in one experience, specifically by improving the hierarchy of notifications that appear in the drawer. We did this by designating four different types of notifications and building four different buckets for them, represented by these yellow blocks. Before I get into the detail of each bucket, I'll give you the high-level overview. The first bucket is for major ongoing notifications; these are things like a phone call. The next is people to people, like a message from a friend. The third is general notifications, the best of which are timely reminders. And the fourth are the "by the way," or BTW, notifications. These are the four buckets of notification types and how they're organized within the drawer, and the purpose is to ensure that the notifications most critical to read first are shown first. It also creates consistency: when you pull down the drawer to find a specific notification you're looking for, you'll already know which area to look at, because you'll become familiar with
these sections. So we're enabling users to build associations through this ordering. And lastly, we're ranking notifications within each section; this means that if you have two message notifications, they're not going to be separated by something else in the middle, they'll always appear together, so it'll be a lot easier to scan the notification drawer. Now that you have the big picture, let's drill down into the details of each bucket, starting with the first one. Major ongoing notifications are for time-sensitive content users should actively be aware of: things like a phone call, driving directions, a timer, or music. As a developer, to send this type of notification you need to be running a foreground service with importance higher than min, or using MediaStyle with a MediaSession for a music notification. This type is for content that's using device resources and that directly impacts real-life context, and the list of things that match is really, really short; this type of notification will rarely appear in the user's drawer, because the context that requires it is rare. In N, this is what these types of notifications look like: they're not very visually distinguished, they don't necessarily appear at the top of the drawer, and they can be lost amidst the other notifications; if you wanted the proper prominence to really build that association for the user, you had to go with customization, which led to inconsistency with the rest of the notifications. In O, we're pulling the color from the app, filling the background with it, and guaranteeing a top spot in the notification drawer, so these will always appear at the very top. But again, because they're so prominent, misusing this type of notification for something it's not meant for will be very obvious to users, and with the channels Rachel just described, it'll be really easy for them to block this type of notification if it
doesn't align with what it's really meant to be. Falling into this category is also music. So in N you can see that the music template, the media template, is the most different template that we have compared to all the others, but it still feels really generic and it doesn't really look different from other notifications. So in O we've integrated the album artwork into the notification and drawn out the most prominent colors for the background and as an accent. The result is a beautiful, lively notification that really celebrates the artwork and the artists behind the music that we really love. And I found myself pulling down the notification drawer as I listen to music just to see what kind of colors that song generates. Now for the next section, people to people. This is really for content from a person, like a message or a missed call. And as Julia mentioned, people are the VIP notifications, so we're going to put them right at the top of the drawer, in the number one position, since major ongoing is going to be pretty rare. To get this type of notification you should be using messaging style with direct reply or attaching a contact to it. So as it stands today, a message notification is really not that valuable until you interact with it. There's like one line of the message, so you don't know if it's worth actually expanding and reading, and then you start to make the decision if you should reply to it, and you may not be able to reply to it directly within the drawer; you have to jump out of the drawer. So it's really just not that valuable. So in O we've made one really subtle change that has actually made a really big impact on the messaging experience, and that is that we're exposing more lines of content in the collapsed state of the notification. So now you can read up to three lines of the message without having to spend the time expanding the notification and deciding what you want to do with it. And we're continuing to invest heavily in
this area of focus, and we're looking forward to making even more improvements, especially around direct reply, coming soon. This next bucket is for general notifications, and the best of these, as we documented in the material guidelines and Julia's research backs up, is really well-timed and informative task reminders. In O there's no behavior or visual change in the representation of these notifications, but it does play into the next bucket of BTW. Today there are a lot of notifications being sent in this general bucket that really should be sent, in O, as a BTW. BTWs, by-the-way notifications, are meant for contextual or informative but non-urgent content. We heard a lot of stories from Julia's research about how people felt so overwhelmed. If content in the drawer isn't urgent, it should be sent as a BTW: things like weather updates or traffic or suggestions and promotions. So earlier in this presentation, Rachel used Crane air as an example of how to organize your app's notifications into channels. If you remember, there were three different channels: executive program, flight updates, and discounts and deals. And that last category of discounts and deals is an excellent example of what should be sent as a BTW notification. And it's pretty simple to send one: you just need to set the importance of the notification to min. So in N, the visual representation of these notifications is pretty much equal with all the other notifications; the only difference was a slightly different colored background. It's a pretty heavyweight presentation for content that's actually very lightweight; it's bothersome to users and it takes up a lot of space. So what we've done in O is we've taken the app name and the first line of the notification and simplified that down into a single line. The new presentation is really meant to feel lightweight, so that users don't feel bothered by them. They don't need to manage them because they're out of the way, and it's okay if they glance at one and don't want to
take action on it right now. You can also set your notification to automatically expire, so that users don't have the burden of having to manage them as well. The advantage of this is that instead of risking annoying your users by sending a notification in that general bucket, you can send it as a BTW instead. This way you don't have to hurt users' trust by risking anything; you can just put it in the BTW section. Also, the full functionality still remains: if the user taps on a BTW, it expands to the full notification, you get all the color, and if the notification has actions, those will appear as well. So when you put it all together, it comes together really nicely. You can actually tell there's a very intentional order within the drawer, and when we compare that with N, that's when you can really see the differences come out. If you squint your eyes and look at N, which is on the left in the black phone frame, you can't really tell which notification is which at a glance. If you look at O, it's really clear: you can see the major ongoing at the top, you can see the BTWs at the bottom, you can even kind of tell that's a people-to-people, a message notification, because it's taller than the rest. Now you can also see in N how, when there's an overflow of notifications, those notifications got stacked behind the ones that were visible. This doesn't match up super well with the spatial model of the full-width notifications. So in O we've improved that spatial model and made it feel much more unified, with a really beautiful animation. And the point of this is to correlate the notification icons, as they appear in the status bar, with the notifications that they are in the drawer. And yeah, you think that was pretty cool, right? This is something we're really, really proud of, and again, it wouldn't have been possible without those template changes that we made in N. So we're taking steps to unify the whole experience. You can also
see in O, instead of stacking at the bottom, overflow notifications fit into a shelf, and we have a really beautiful experience going on. I'll just let it play one more time. Whoo! All right, so this is the last feature I'll be covering, pretty briefly, and that's snoozing. This feature came directly out of user research. The insight was that people need to remember the things they need to do, and today you only have two options if you can't deal with the notification right now: you can either just dismiss the notification and count on your memory, which if you're me never works (ask my wife), or you can leave the notification in the shade, where it can get lost amidst a ton of other notifications. It's either-or. So our opportunity was to give users more ways to deal with this in the drawer. Let's say you're in a meeting and you get a text from a friend asking you about dinner. Now, you're in the meeting, so you can't reply right away, but you know as soon as you leave the meeting your mind's already going to be on to the next task and you're not going to remember to reply. So instead of letting the notification linger in the drawer, or dismissing it altogether, you can snooze it. And this is how: you can do a partial swipe, which reveals two different icons. You'll notice there's an additional icon next to the gear that we introduced in N, and this is the button for snooze. If you tap this icon, the notification will automatically be snoozed, and you'll get a confirmation of the snooze with an option to undo it if you need to. Snoozing the notification will set it to come back in that set duration with the exact same interruptive qualities as before. So if it made a vibration or tone when it came in the first time and you snooze it, it'll vibrate again the second time when it comes back. If you want a different time than one hour, you can just tap that arrow and you'll get a few different options to change that duration, and the option that you select then
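The snooze-duration behavior being described, an initial one-hour default that's replaced by whatever duration you last picked, can be modeled in a few lines. This is a toy sketch of the interaction only, not Android code; the `Snoozer` class and its methods are invented for the example:

```java
// Toy model of the snooze flow: snoozing with no choice uses the current
// default (initially one hour); picking a different duration both snoozes
// for that long and becomes the new default next time.
public class Snoozer {
    private int defaultMinutes = 60;      // initial default: one hour

    public int snooze() {                 // one-tap snooze
        return defaultMinutes;
    }

    public int snooze(int minutes) {      // user picked from the options
        defaultMinutes = minutes;         // remembered as the new default
        return minutes;
    }

    public static void main(String[] args) {
        Snoozer s = new Snoozer();
        System.out.println(s.snooze());   // 60
        System.out.println(s.snooze(30)); // 30, and 30 becomes the default
        System.out.println(s.snooze());   // 30
    }
}
```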
will then be set as the default duration the next time you snooze a notification. Pretty simple. That's what we have to show you for the new notifications in O, and if you're interested in finding out even more, especially from a more detailed engineering perspective, there's an earlier talk, What's New in Notifications, Launcher Icons, and Shortcuts, that we'd encourage you to check out. We're really excited about these improvements and believe that these features are not only beneficial to users but will also increase the quality of engagement with their apps. Thanks so much, and we have a few minutes left for questions. How do you expect inheritance for existing... Hello, I'm Timothy Jordan, and this is your update about the coolest developer news from Google in the last week. The Advanced Android App Development online course has been updated, improved, and extended. With it you can build a portfolio of apps as you improve your Android dev skills. The course is linked from the post in the description below. We recently launched AIY Projects: do-it-yourself artificial intelligence for makers. With it, makers can use artificial intelligence to make human-to-machine interactions more like human-to-human interactions. We'll be releasing a series of reference kits, starting with voice recognition. More details and links are in the post. Chrome 59 beta is now available, with headless Chromium, native notifications on macOS, service worker navigation preload, and more. All the details are in the post. Google Cloud Launcher has more Google-maintained containers, including Cassandra, Elasticsearch, Jenkins, MySQL, and more. Google container solutions are managed by Google engineers, and since we're maintaining the images, the containers available on Google Cloud Launcher will be current with the latest application and security updates. There are two announcements from Google Cloud Next London that I wanted to tell you about: first, the Google Cloud Natural Language API is adding support for new languages and entity
sentiment analysis, and second, Cloud Spanner is now generally available. Check out the details of both announcements in the post. Google I/O is just around the corner, and if you're like me, you like to go in prepared, which is why we have Android, iOS, and web apps to help you customize your I/O schedule and get around the developer festival. Check out the screenshots and find the download links in the post in the description below. Please subscribe and share. I'm Timothy Jordan, and I'll see you next week. As a lot of developers know, there's more to having an app succeed than just building a great app. You want your app to be dynamic and responsive, delivering fresh content to users and quickly reacting to their changing needs. You want to test out major decisions to make sure you're doing the right thing before you push them to your entire audience. And ideally, you want to provide a tailored experience for each user, so your VIPs feel like, well, VIPs. But let's be honest, that can be a lot of work, and if you're a developer without a ton of resources, that's time you'd rather spend on other things, like building your app. That's where Firebase Remote Config comes in. Firebase Remote Config is a simple key-value store that lives in the cloud, but don't let that simplicity fool you. Because it lives in the cloud, you're able to deploy changes that your app can read within a matter of minutes. For instance, say you've just pushed your app out to the world and you suddenly discover that your Swedish text contains some offensive language. How were you supposed to know? You don't speak Swedish; I don't blame you. But fixing that text the old-fashioned way would mean creating a new build and going through the entire publishing process again. That's something that could take days, which is an awfully long time to have 9.2 million people cursing your name. But if your app uses Firebase Remote Config, you could change that text in the cloud through the Firebase console, kind of like this. The next
time your users fire up their app, Remote Config will grab the latest values, update your app's text, and just like that, you've averted a major international crisis. Or let's say you've got a puzzle game and you're hearing complaints from your players that level five is too hard. If you've configured your app using Remote Config, you could tweak those settings to give your players a few more turns and push out that change to the world. But hang on, are you sure that's the right thing to do? What if the silent majority of your users actually enjoy the challenge of a more difficult level, and by making it easier you're going to turn away your most hardcore and potentially highest-paying customers? How could you test whether or not this change is a good one? Sounds like you need an A/B test. That's where Remote Config's audience segmentation feature comes in. This allows you to deliver different configurations to different groups of users simultaneously, so you can try out your new level settings with half your users while keeping the old settings with the other half. But audience segmentation isn't just great for A/B testing. Maybe you've got a feature change that could have a major impact on your in-app economy, or maybe you just want to double-check that some new networking code isn't going to set your servers on fire. You can use Firebase Remote Config to gradually roll out these changes, trying them first with a small percentage of your users before pushing them out to your entire audience. Remote Config can also deliver different configuration sets to your users based on all sorts of different factors, from device type or locale to any audience segment you've defined in Firebase Analytics. So you can send out one welcome message to your New Zealand customers and another to your Australian ones, or only show your review-this-app button to people who use your app every day, or change your home screen experience for your customers who have spent large amounts of money on in-app
purchases, so they feel special. Remote Config is backed by a client library on iOS and Android that handles important tasks like caching, dealing with flaky connections, and keeping network requests lightweight, which is always a good thing. To give Remote Config a try, check out our documentation here. We can't wait to see what you build. And with NPR One, we are reimagining what a listening experience could be outside of the radio. It's the radio, but better; it has all of the great stuff that we've spent 40 years perfecting. With NPR One, we see the opportunity of reaching a more diverse audience that has a device in their pocket all the time. My name is Might Safe Lahi, and I'm the lead mobile developer for NPR Digital Media. My name is Nick DePrey, and I'm the innovation accountant at NPR. My name is Tejas Mistry, and I'm the senior product manager of NPR One. Some of the biggest challenges in any mobile app are around that first impression, when the user first installs the app; you've got a very limited amount of time to convince them to keep the app and to get engaged in the experience. Trying to figure out how we can get users into the content as quickly as possible was the real focus of integrating Firebase and Dynamic Links. Using Dynamic Links, we were able to shorten the number of interactions it takes for a user installing the app to get from the promoted content to the content itself from 20 to 3, so that user is able to get right into the content. We're driving more and more listening for a user every week; it's really astounding. Creating playlists of content that are configured by the podcaster, or by a member station, or by us internally, with Firebase we have that at our hands. Having the analytics product interact with things like Dynamic Links, Remote Config, and Cloud Messaging adds a real multiplier effect, and with the integration with the broader Firebase suite, I don't have to go outside the platform to figure out what's working. So it's not just about shipping the product faster
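The 50/50 audience split described in the Remote Config section can be sketched with a stable hash, so a given user always lands in the same group across sessions. This is an illustration of the idea only, not the Firebase Remote Config API; the function and variant names are invented for the example:

```java
// Toy sketch of a deterministic A/B split: hash the user id into one of
// 100 buckets, then give half the users the new config and half the old.
// Because the hash is stable, the same user always sees the same variant.
public class AbSplit {
    static String variantFor(String userId) {
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < 50 ? "more_turns" : "original_turns";
    }

    public static void main(String[] args) {
        System.out.println(variantFor("user-123"));
        System.out.println(variantFor("user-123")); // same user, same variant
    }
}
```

The same bucketing idea extends to gradual rollouts: change the threshold from 50 to, say, 5 to try a change with a small percentage of users first.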
it's about analyzing the results faster, and with the integration with all the other Firebase products, we're really excited about all the things we can learn from it. We all know from experience that people love to share things about themselves, such as photos, videos, and GIFs that express their feelings. So what do you do to let them store and share these files through your app? That's where Firebase Storage can help. Our storage API lets you upload your users' files to our cloud so they can be shared with anyone else, and if you have specific rules for sharing files with certain users, you can protect this content for users logged in with Firebase Authentication. Security, of course, is our first concern; all transfers are performed over a secure connection. Also, all transfers with our API are robust and will automatically resume in case the connection is broken. This is essential for transferring large files over slow or unreliable mobile connections. And finally, our storage, backed by Google Cloud Storage, scales to petabytes, that's billions of photos, to meet your app's needs, so you will never be out of space when you need it. So give your users space to share their lives with Firebase Storage, available right now for iOS, Android, and web applications. And to learn more about Firebase Storage, check out the documentation available right here. Right, let's have a little chat about a brand-new API that's just coming out on the platform, called the Media Session API. I heard about it when I started doing this app; it was mentioned to me by a couple of the Chrome engineers. François, who's on my team, has a brilliant Google Web Developers update where he explains in complete detail how to set up the Media Session API, so check the notes below; we're going to pop in a link for that. But I will just show you briefly what it actually looks like in the context of the app on the phone here. In fact, because you probably can't see it, let's switch across to the direct screen cam. There you
go, and that's what it looks like on the phone. So as I start playing a video like this, you see that if I swipe down from the top, we actually get a notification which has an icon here, and it has play and rewind and fast-forward buttons, and you get to configure those yourself. In fact, let me go into another one of these videos where I think I've actually set it up to load some custom album art as well. So there's me and Jake, and you see here now we've got custom album art, which is the picture of Jake, and the previous and next, the fast-forward and rewind buttons, are actually set to be skip forward 30 and go back 30. So you'll be able to tap that and go forward 30, which you can see there. Oh, it's just skipped right to the end, whoops-a-daisy, but I can replay the video, don't worry. The other thing that it actually does, which I really like, is if I turn off the screen, then on again, you can actually see that the album art is now the picture for my phone in the background. That is very exciting, isn't it? So let me show you a little bit of the code since we're here; it's very straightforward. We have a quick check whether we support the Media Session API, which is, let me show you, actually, just simply looking for mediaSession in navigator, and if we have mediaSession in navigator, then we consider ourselves as having the Media Session API. And what we do is we say navigator.mediaSession.metadata, and then we create one of these new MediaMetadata thingamajigs. Very exciting. ESLint doesn't like it, it doesn't think it's a real thing; it is a real thing, you can totally use it. And you give it things like the title, the album, the artwork. I've only set the 512 and the 256, but very much like your manifest files for progressive web apps, you can set as many of these as you need, and the user agent will choose whichever one it thinks makes the most sense for the device that it's on, and it will
upscale and downscale as necessary. But I currently am just setting a couple of them; I may need more as time goes by. And then afterwards, after we've set up the metadata, we set some action handlers for things like the play, the pause, the seek backwards and the seek forwards. The thing to bear in mind here is that there are other ones that you can set as well, and I forget which ones they are, but check out François's post, he explains the whole kit and caboodle: any that you don't set won't appear, and any that you do set will have buttons appear in the notification, and then somebody can control your stuff from the lock screen or by just dragging down from the top. All very good, isn't it? And very straightforward code to be writing. So a brilliant little progressive enhancement thing that you can check on, and that I have checked on in my media app. Cool. Hey gang, want to see something neat? Check out this awesome hidden feature I found in Firebase Analytics. So I'm over here looking at all my reports in the Firebase Analytics dashboard. Here, for instance, I've got my active users for the last 30 days, and while these graphs sure are pretty, I'm thinking it'd be kind of nice if I could get these numbers into, like, Google Sheets, or maybe Excel, so I could analyze them a little better, right? Well, watch this. I'm going to select my graph here in the Firebase console. It's kind of hard to tell, but you can see by the highlighted text here that my graph has been selected. Then I'll hit command-C to copy it, and then I'm going to switch to a blank Google spreadsheet and hit command-V to paste, and look at that: all my values are right there in the spreadsheet for me to analyze. So you can see here, in the leftmost column I've got the date, and then all the actual numbers are in the columns next to it. Now, you might notice that I seem to have two columns of what looks like the same
data, right? I've got monthly active users here, and then right next to it I've got this monthly users column, and the same goes for my weekly actives, and same for my daily actives. So basically, that first column is for the value that corresponds to the date here on the left; the second column is for that corresponding day in the previous 30-day time period. Basically, it's the values that belong to this dotted line here in the graph that I copied. Make sense? Okay, and then I can do the same thing for a bunch of these other graphs. Here, I can copy and paste my daily engagement numbers; let's get these into a new sheet here. And again, you can see I've got my engagement numbers from this time frame in this first column, and then those same numbers for the previous 30 days in this second column. Better yet, I can jump over to an individual event, like this completed-five-levels event, and copy all these graphs here at the top, and you can see I'll get event counts, user counts, events-per-user counts, and values for every one of my events that I am recording in Firebase Analytics. And this lets me do some pretty nice calculations right here in Google Sheets. For example, let's say our game designer is curious how often people are failing a level in our game. Well, for starters, I've got my level-start graph here to show when people are starting a level in my game, so first I'm going to copy it into a new sheet. Okay, great. Then I'm going to do the same thing for my level-fail graph, which shows when people have failed the level, so we'll copy from here and we'll paste them right in next to my other numbers. And once I've copied and pasted these values into Google Sheets, I can then calculate my average failure rate per game by dividing this number here by this other one. I'm going to copy this formula down for all of my dates, let's give it a percentage format so it looks nice, and maybe we'll add an average at the bottom here. Let's do
an average for all these numbers, and it looks like we get an average failure rate somewhere in the low 30s, which sounds like it's just challenging enough for our players, so our game designer is happy. Now, a couple of disclaimers here. First, this doesn't work on all the graphs; I've tried some of them, and they just don't seem to copy and paste as well as others, but it does work on a surprising number of them. You'll just kind of have to try them out and see if they work. And second, this will never be a replacement for the awesome and sophisticated data analysis capabilities you get by exporting your raw data to BigQuery, and you should totally go watch this video if you want to find out more. But if all you want to do is maybe compare two graphs to each other, or calculate some standard deviations or averages on a particular event, this trick can work surprisingly well. So give it a try yourself, have fun with it, and we will see you soon on another episode of Firecasts. It's been just over a year since we open-sourced TensorFlow, and we've been thrilled to see the adoption by the community and the pace of development, both here and all around the world. TensorFlow is really the primary tool that we're using for a lot of our machine learning work in all of our products. Towards the end of last year, we actually rolled out a completely new translation system that was based on deep neural nets. In Gmail, we were actually able to roll out a TensorFlow model that, by understanding the context of the message you just received, can predict likely replies; this is a feature we call Smart Reply. Diabetic retinopathy is the fastest-growing cause of blindness; it's a complication of diabetes. We gathered a very large data set and had doctors grade the images; then, using TensorFlow, we trained a neural net that does a pretty good job of predicting whether or not there is diabetic retinopathy in the image. Can we use something like TensorFlow to make music, to make art, and to allow us to
communicate better with each other? With TensorFlow, we're able to think abstractly, almost at the level of improvisation. With machine learning, we're able to try new things, to chain models together in ways that were impossible before we had that kind of expressivity. Dugongs are classed as vulnerable to extinction globally, so we do a lot of aerial surveys using drones. Once you've done a survey of a really large area, you end up with tens if not hundreds of thousands of photos. The goal was to find a way to automate that whole process, and that's where we've been using TensorFlow. One of the things that we've been focusing on this year with TensorFlow is performance. We've been especially excited to release support for distributed training. We want to make it easier for people to use, so they don't necessarily have to know all of the underlying internals in order to get the distributed performance the best it can be. XLA is something that can compile down TensorFlow: maybe you want to compile your graph ahead of time and get it down to something much more compact in terms of memory size, so that you can easily load it and execute it on something that might not have as much storage space, like a mobile phone or some other portable, smaller device. When we introduced the Hexagon Vector Extensions, what we had in mind was enhancing user experiences with imaging features. The TensorFlow team said that you only needed low-precision multiplies to be able to execute these neural networks efficiently, so we did some tests, and on the same graph, Inception v3, we were eight times faster and four times lower power than running on the CPU. TensorFlow is great to work with, easy to work with, lots of capability, so with our engineering teams and their engineering teams working together, we were able to do something very exciting. This is just the beginning of what will end up being a long evolution of some great things we can do with machine learning and image processing. In
addition to sharing TensorFlow, Google has also shared an ecosystem of tools which contains everything you need to go all the way from research to production. One such tool is TensorFlow Serving, an open source, high-performance serving solution. Another great tool, which is actually quite beautiful, is the embedding visualizer, and you can use the embedding visualizer to interactively explore high-dimensional data sets. On the education side, General Assembly has done great work teaching TensorFlow. For my final project, I was really interested in doing lyrics generation, and TensorFlow was a really great match for that, because it allowed me to build out and utilize the models that I needed to be successful. The TensorFlow community is thriving around the world, and we're excited about as many people as possible being part of it. TensorFlow is an open source project for everyone. We're looking forward to building this into something even better, more useful, and more powerful, in collaboration with the whole worldwide community. Here we are in the codelabs area. This is a special part of Google I/O where, if you've always wanted to learn a new API or tool or service, you get to come here and work on a codelab, and if you have any issues or you want to go more in depth, you can ask a Googler for their help with any question you happen to have. I'd like to learn a little bit more about what it feels like to do a codelab here at I/O, and to do that: Claire, how are you doing? I'm great. It's been a long couple of days, but a fun couple of days. I hope that means a lot of people learning a lot of things. Oh, most definitely, we've had lines waiting pretty much the entire day. That's very cool. So what's the setup like here? The setup here is a bunch of computers that are all preconfigured with the steps to learn pretty much whatever you want. You can tell what's going on by the status of these lights up here: if they're green, it means the computer's ready to go, ready for you to come up and
learn. If they're blue, it means a codelab's in progress. And when they turn red, it means that someone needs a little bit of help or has a question, and one of our staff will zip over right away and help them out with that. That's totally cool. Now, this is a special codelab station that actually has a whole bunch of devices; what's going on there? That's right, so right now we're in our special IoT codelab section. We've got a couple of different devices over here that people can run various of our codelabs on, and they'll actually get to interact and play with them. This one's got some buttons and lights and numbers; it's got a button that's very fun to play with. It's a good time. All right, is there anything else you want to say about the day, how's it been going? Really good. We've had a lot of people with very insightful questions, a lot of people showing a lot of enthusiasm for the subjects, which is always my favorite thing to see. Awesome, thank you so much. Thanks, Timothy. And that's the codelabs area. Thanks so much for joining me. Remember, I'm Timothy Jordan, and if you'd like to catch some more of these videos after I/O, head on over to g.co/io/guide. Hi everyone, good afternoon. My name is Garen, and I'm a product designer who works on emerging markets projects here at Google. And I'm Tracy, a researcher who works on emerging markets at YouTube. We're super excited that you're joining us this afternoon to talk and learn about designing for new internet users. Most recently, Tracy and I spent the last few years working on YouTube Go, which is a new version of YouTube, a new app that's been designed and built from the ground up for new internet users in countries like India. And you may have heard about this in the keynote back on Wednesday, when it was announced as part of the Android Go operating system. So if you're interested in designing apps that work great on Android Go, or just work great for new internet users in general, then we're glad you're here;
this talk is for you. This is important because the majority of people coming online, and actually the majority of people online right now, are in emerging market countries like India, Indonesia, Brazil, and Nigeria. So if you're interested in going global, if you're interested in scaling any projects or products that you work on, it's important to consider their experiences and their contexts. Now, you might be thinking: well, Android is doing really well, there's this announcement that Android has over two billion users, why do we have to do anything differently? We know from a host of qualitative and quantitative data that these internet users have very different experiences from those of us in this room. If you think about your first computer experience, it was probably a couple of decades ago. You probably started on a desktop, and you probably started connecting to the internet on that desktop, over a connection that was fast, reliable, and not too expensive. Right now, you probably have a cell phone that's pretty good: it has a good screen, it's pretty fast. You're on wi-fi now, or you're on wi-fi at home, at work, or at events like this, and when you're not on wi-fi, you've got pretty good access to mobile data, so you're online all the time. Compare this to a typical persona of a new internet user. These folks are coming online with their first computer, their first internet experience, being a smartphone, and it's oftentimes not the kind of fancy smartphone that we carry around; it's a lower-end smartphone, a sub-$100, sub-$50 smartphone. These phones don't have great screens, don't have great contrast, they've got different kinds of touch feedback, and they've got low storage and low RAM. Oftentimes they're running older versions of the operating system, they're not updating their apps frequently, and they're using the internet on slow, expensive, flaky connections; rarely, if ever, are these folks using wi-fi. So while we still believe that
universally we are all human regardless of whether somebody is in india indonesia nigeria the united states or london we're all human we do think it's really important that we consider the context the different cultures the different constraints and different mental models that people bring to their internet experience so over the past few years we've invested a lot in thinking and figuring out what to do within google and we're excited to share a framework that we use within the company for building for new internet users so in the course of this talk we're going to talk about what it means to make your product usable useful and engaging for these types of users we're going to give you very quick very focused tips for each one of these components so it's not going to be really nebulous or very designy it's going to be very applied and tracy do you want to kick us off sure so our first bucket is usable which focuses on a question which is simple but challenging can your product be used and this bucket contains three components cost connectivity and compatibility so the first will be cost we can start with a comparison to afford one gigabyte of data in the united states you would need to work a little bit over 20 minutes on average but to afford the same exact thing in indonesia one gigabyte of data you would have to work 316 minutes so nearly five and a half hours so in markets like indonesia data is much harder to come by and for that reason people savor every mb that much more also in these markets most of the data is prepaid specifically in india 95% of it is prepaid and prepaid is where people buy fixed amounts of data up front it's not an ongoing commitment like we might be familiar with so imagine going into a store and you buy a small paper card with a scratch code on the back maybe you buy it for 40 or 50 rupees and you get 100 megabytes of data that might last one day five days or even up to 28 days so the behavior we're seeing is
that people are buying denominations based on what they can afford and what they foresee actually needing and then they budget out their data use over that time period as necessary so overall data just feels like money in these markets and compare this to what happens in the us we usually buy a pretty high data cap plan and pay month to month but we pretty much commit for a long time and we're often connected to wi-fi so we don't really worry about running out of data maybe only at the end of the month so what is wi-fi really like in these markets wi-fi is actually rarely connected to because it's not that accessible globally there's one wi-fi hotspot for every 150 people and in india it's a lot scarcer there's about 31,500 hotspots overall in the country so that means one hotspot for every 3,900 people so if india wanted to meet the global average they would have to install eight million more hotspots and the point here is that public wi-fi access is rare making data again that much more precious and expensive so now you might be thinking wi-fi at home home is where we have the comfort of wi-fi we're wrapped in the comfort of our own connection our nice security blanket we're not spending any of that mobile data but this is not necessarily true in these types of markets currently less than two percent of the entire population in india subscribes to a fixed line broadband internet connection in their home and if they are accessing internet at home it's mostly through their mobile data on their mobile device so in sum wi-fi access is still very rare and we can't be assuming that wi-fi is going to come along at certain cadences for these users to fix whatever blockages or barriers they may hit in other connectivity states so our tip here for the first component of cost is for data heavy tasks provide transparency and control over data consumption the example on the right is actually from youtube go after selecting a video users are given the control to
select their quality before watching so they know upfront how much data they'll spend before committing to an action to either play or save the video and we specifically say for data heavy tasks because this type of transparency might be less important for smaller amounts that could be considered negligible like 10 kb so the next component of the usable bucket is connectivity we know that the internet is much slower in different parts of the world the statistic is that in india on average the internet is three times slower than it is here in the united states and most of the folks that are coming online for the very first time are coming online on the slowest connections on cheap 2g and 3g connections or overloaded free public wi-fi networks but on top of it being slow it's also important to consider the flakiness and the intermittency of the connections so only around 65 to 80 percent of the time will a 3g connection actually work there will be uptime only for that amount of time so you can imagine for seconds for minutes for hours at a time even though you might have data and you might not be on airplane mode the internet still might not be working for you so it's crucial for those of us that are creating software to acknowledge and actually embrace these states of intermittency as designers we usually approach designing for these types of connections by looking at states so typically these are the states that we're often used to designing in silicon valley or in well-connected tech company environments there's the online state which is considered the ideal state sometimes we think about partial loading states and oftentimes we'll design error states as well so a typical flow of mocks or designs might look like this for us you might move from one online mock to another online mock to another online mock and that might be how we design how an app experience actually
is maybe you design an error case as well but this isn't really comprehensive this doesn't actually portray the experience of a new internet user so we've actually added a few other states to make this more comprehensive in addition to considering the online state as ideal we also think about the offline state as ideal both because connections are flaky and because connections are expensive so people turn off their mobile data frequently in addition to loading being in progress partially loading we also have loading started just to let folks know that something is actually happening in addition to having an error state we also have a state that portrays when you're offline and there just is no content it's not a problem if you don't have connectivity and it's not a problem if nothing is cached on your device so if you use this kind of framework and you're walking through the design of a product you end up with a life cycle that looks kind of like this you're offline you don't have any content you switch it on the loading starts there's partial loading that's happening then you're in a full online state and then maybe you switch off your connection because it's flaky and then you go into an offline state and we consider all of these states first-class citizens so our tip for connectivity is to expect latency by designing offline loading retry and success experiences make great connectivity an edge case so oftentimes we're used to designing no network and slow networks as edge cases instead have no network and slow networks be your primary use cases and have great connectivity be the edge case the example that we have here is chrome's offline page which has recently added a download page later button so not only does chrome work offline and give you help figuring out what's going on but there's also an action that you can take on this page on this piece of content
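The connectivity lifecycle described here — offline with no content, loading started, partial loading, fully online, then back offline with cached content — can be sketched as a tiny state machine. This is an illustrative toy model; the state names and transitions are my own invention, not any real Android or Firebase API:

```python
# Toy sketch of the connectivity-first state lifecycle from the talk.
# All states are treated as first-class, including the offline ones.

OFFLINE_EMPTY = "offline, no cached content"      # not an error state
LOADING_STARTED = "loading started"
LOADING_PARTIAL = "partially loaded"
ONLINE = "online, fully loaded"
OFFLINE_CACHED = "offline, showing cached content"

def next_state(state, connected):
    """Advance the UI one step given the current connectivity."""
    if not connected:
        # Going offline is expected and designed for: show whatever
        # content we already have, or an honest empty state.
        return OFFLINE_CACHED if state in (LOADING_PARTIAL, ONLINE) else OFFLINE_EMPTY
    transitions = {
        OFFLINE_EMPTY: LOADING_STARTED,
        OFFLINE_CACHED: LOADING_STARTED,
        LOADING_STARTED: LOADING_PARTIAL,
        LOADING_PARTIAL: ONLINE,
        ONLINE: ONLINE,
    }
    return transitions[state]

# Walk the lifecycle: offline -> loading -> partial -> online -> flaky drop
state = OFFLINE_EMPTY
for connected in (True, True, True, False):
    state = next_state(state, connected)
print(state)  # offline, showing cached content
```

The point of modeling it this way is that the offline branches are ordinary transitions, not exceptions bolted on at the end.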
another example for this tip is google search which has launched a new feature where when you're offline or when you're on a flaky connection and the connection to google doesn't work you can search for something and then google search will give you a notification of the content when it's ready so you can turn to google search in great connectivity flaky connectivity or no connectivity and still count on getting an answer back the final component of the usable bucket is compatibility and oftentimes when we talk about compatibility for emerging markets or for new internet users we talk about operating systems hardware software and all of those are very important for all the developers in the audience these are important things to focus on but i want to talk about something that's actually design related which is that half the phones in india right now are being sold for less than a hundred dollars and actually many of them are 30 40 50 dollar phones as well and as you can imagine a 50 dollar phone doesn't really have the same type of screen doesn't have the same touch feedback doesn't have the same contrast doesn't have the same resolution as an expensive premium device screen does it's also got lower processing on top of that because phones are expensive people keep using broken phones just because a phone breaks doesn't mean they go out and buy a new phone oftentimes broken phones are used and when a new phone comes the broken phone will still be used by another family member or by somebody in a more remote or more rural location so it's important that we understand that low cost and broken phones are actually part of the ecosystem in which people are using our products material design provides a variety of affordance options and it's always being revised and there's more affordances that are being added so we encourage you to choose the types of affordances that make the most sense
for these types of phones the tip here is to design clear larger farther apart affordances to prevent mistaken taps caused by bad and broken screens and to stay legible in bright high glare environments you'll see the example here is from youtube go there's a modal with two buttons the buttons are not just text buttons we didn't choose to use that affordance option we chose to use big buttons with high contrast and big backgrounds and this ends up being easier to see when you're in the sunlight or when you're using a screen that isn't as great yeah and to add to that what we've learned really strongly in our user research is that pairing together icon and text can really help people understand and remember what an affordance does you can consider each item the icon and the text as compensating for the other for any little bit of each that might inevitably get lost in translation so now we move on to the second bucket of useful this contains the second set of components culture content and commerce evaluating whether your product is valuable not just whether it can be used but whether it is also providing good use to the users so the first is culture and culture manifests in numerous ways but we're just going to explore one branch of it for time purposes so before search engines existed there were sites like dmoz which was like a portal to the internet a directory of the internet's top level contents and whether you knew what you wanted to search for or didn't you'd start with a related category and go from there but now we've evolved more into this format of searching that's very familiar to all of us which is arguably a little less guided but this works for us because we had portals like dmoz help us build a really strong mental model for what lived on the internet and how to find it with the high level categories and then the subcategories and studies have actually shown that your mental models and
your visual preferences are heavily shaped and affected by the physical landscapes and the environments you grow up in so why can't this be applied to digital environments too we can't assume that everyone coming online who hasn't had this dmoz type background has the same mental model of searching it can be really hard to know what you want to search for and also to formulate a query imagine how intimidating it could be to form a query in your second third or fourth language and then hope to find some semblance of a relevant result come up afterwards so our tip for this bit of culture is to show the breadth of what's possible in your product don't assume an existing mental model the example we have here is actually google search itself on a mobile device so there are options to discover not just particular categories but you can also dive directly into news content you can find out the current weather before even worrying about typing and typing can still be very hard on mobile devices another example is categories within the google play store again this screen guides users to show what the play store contains and it clearly shows the various categories with matching icons on the side the next component is content and one way just one way that content manifests is with language statistics on English speaking ability can be pretty unreliable but it's generally accepted that about 30% of the population in India can speak English and a third of that have some semblance of reading and writing aptitude so 10% overall and this is a really interesting statistic to consider alongside the fact that a lot of users in these markets and globally use their android os in English despite it not being their first language and despite not fully understanding written English so another thing we can't assume is that users in these markets will use interfaces in their native language if that's even offered as an option and we also can't assume that they want
to type in their native script so we need to work on improving our English UIs our tip here is to simplify language short simple direct for maximum clarity and with this to avoid very culturally specific phrases like shucks most of the examples we've been showing so far are examples of what to do and we want to show a quick example here of a screen that may not work so great in other markets this is a past youtube onboarding flow that said new awesomeness awaits and the word awesomeness doesn't really mean awesome or really anything in other vernaculars finally the last component of the useful bucket is commerce and oftentimes when we talk about commerce and new internet users we talk about forms of payment because so many economies run on cash and that's important for your product to consider from your business standpoint as well but we want to talk about a UI specific design example and that's around sachet economics sachet marketing so sachets are small little packets of consumer goods like shampoo or detergent and they're really cheap so people can go and buy them and this allows folks to get a sense of what the product is and try it out at a very low cost they can try the experience a little bit before they commit to buying something even more so instead of buying a big bottle of shampoo they'll buy a single one-time use sachet of shampoo and we can actually bring this learning back into how we design digital experiences as well and our tip here is to allow people to try before they buy providing tastes of experiences before you want them to commit to them the example here that we're showing is the website of flipkart flipkart is a big e-commerce company in india and what we're not showing here is the flipkart android app a great android app but flipkart is heavily invested in a web presence as well so for folks that don't really know what flipkart is and don't want to invest in the data and the
storage and the battery to download an app they can go poke around the website to get a sense of what the app is before they commit to downloading the app another example of this is within youtube go where when you tap on a video before you choose to play or download it we show you a little preview of what's inside we're not going to give away the entire video but we show you a sense of what you'll be getting if you choose to spend the mbs to download it keep in mind like tracy said data feels like money for a lot of folks so it's not just spending money that feels like money it's also spending data spending the limited storage that folks have on their devices spending limited battery so the final bucket of this framework is engaging does your product delight by being social sensorial and surprising in this context social touches on the fact that the cultures in these types of emerging markets are collectivistic not individualistic people rely on human networks to fetch information to learn new things to discover friends are their search engine for us google is our search engine and there's nothing wrong with that but while fetching information with search engines means we are leveraging other people's knowledge on the internet it's still very often an individual experience additionally content like a funny video is very valuable social currency in these types of environments where new online content is a lot harder to search for and therefore a lot harder to access our tip here is to encourage the sharing of experiences don't just design for the individual design for the community as well if you allow people to show off to their friends and family it really bumps up the social currency of your entire app in this example youtube provides a pretty clear share affordance centered underneath the video and the title so it makes the action really easy to discover and act upon at the moment you realize you want to share this content this is
just one way though that a product has encouraged sharing our second example is the peer-to-peer sharing feature within youtube go which allows people to transfer videos or social currency to any friend nearby with very minimal use of data the second component is sensorial so not all cultures embrace what we deem this minimalistic aesthetic that seems to be quite coveted in western cultures and why it's not always coveted in other cultures could be due to a lot of reasons be it traditions resources beliefs or what have you but in these markets and cultures you'll see a lot of different uses of color patterns and designs that can connote a feeling of vibrancy or energy to our western eyes so for instance here we see a comparison of clothing and textiles across three different countries that differ vastly from each other and also from western markets be it the context in which they're wearing the clothes how they're crafted the colors the meanings behind the patterns and many more things another really interesting example is information density with respect to high context and low context environments the center image is from a really interesting study that showed that users in indonesia preferred icons that depicted more of the surrounding context of the object and more detail in the icon depictions whereas users in the united states preferred the more abstract simple icons or more low context designs the images on the sides are common scenes that you'll see walking the streets in india very dense signage or stores that are organized maybe how you would consider your storage room to be organized and the main point is that what's considered clutter to us or not clean and simple design could be the preference for others a lot of these differences can also be really overwhelming but ultimately we say they should be embraced so our tip really is don't westernize everything that's not really what people want nod to local aesthetics by drawing inspiration from
regionally popular products or hiring local designers this will make your product feel more relatable and show that you care and pay attention to the cultural aesthetics a really great example here is google allo stickers that team hired local designers in india to create these downloadable sticker packs and not only can you see the pretty clear inspiration of local aesthetics but also the addition of common transliterated phrases and our last component for the engaging bucket is surprising this is the one that's the most nebulous but what we strongly recommend is to surprise and delight by doing one thing or just a focused set of things very very well it's really easy to think of emerging markets to think of new internet users as having a lower tolerance or being less savvy and that's absolutely wrong just because they're new to the internet does not mean they're new to life does not mean they're willing to settle for less they've got different mental models different desires but they desire and deserve great experiences and because many of them are coming online for the first time they're trying to evaluate how they use their technology they'll try something out and if it doesn't make sense they'll move on to the next thing so at an umbrella level it's absolutely critical that you keep your product focused not have too many gimmicks and just focus on doing this one thing delivering the value very clearly people are critical they're looking for great experiences that resonate with them and we think that this is the way to do it this is the way that a lot of product development happens at google as well where we focus on figuring out what value makes sense and just being very focused on delivering that so thank you for following along as we've discussed these different components of these different buckets and making your product usable useful and engaging in india alone over the next
couple years a hundred million people are going to be coming online every single year so let's not forget that while we're all human we all care about our families we have similar aspirations for our families and our lives there are many different cultures contexts constraints and mental models that we can accommodate as well we think that they deserve great internet experiences and we're excited that you're here and that you're thinking about this and if there's one piece of advice that we can give you other than following this framework it's to listen to your users whether that means going to a place where your product is popular or where you want to be popular to conduct some user research or building feedback mechanisms into your product even just looking at the google play reviews and looking at where the feedback is coming from which geographies it's coming from we think that focusing on the user is the way to help understand how to deliver that focused value thank you very much and this framework is published online in the acm interactions magazine so you can go ahead and check that out at this short link we also have a website developers.google.com/billions and we'll leave you with this framework and these tips all on one slide and we invite any q and a as well so please come up to the microphones in the aisles thank you thank you hi there hi there uh any plans of introducing dcb in india could you repeat the question direct carrier billing in india introducing which in india direct carrier billing oh direct carrier billing yes um there are some teams within google that are experimenting with it in different parts of the world and it's all kind of business discussions that we're not necessarily privy to but um i think this is on the mind because that is one of the main issues over there uh people don't hold great
cards so it's one of the big issues over there when selling your app okay yeah thank you for that feedback we'll pass that along we'll let them know that you asked the question thank you thank you hey guys um a question regarding the density point that you mentioned uh how much of it do you think is a preference or is it just environmental where they have a lot of shops around and they just feel the need to have like all the options available and it just becomes dense as a result of it oh that's an interesting question i'm not sure that we're sure yet of the ratio of how much each plays into it i think more of the point was that um we see that products can force a more simplified as a lot of developers know there's more to having an app succeed than just building a great app you want your app to be dynamic and responsive by delivering fresh content to users and quickly reacting to their changing needs you want to test out major decisions to make sure you're doing the right thing before you push them to your entire audience and ideally you want to provide a tailored experience for each user so your vips feel like well vips but let's be honest that can be a lot of work and if you're a developer without a ton of resources that's time you'd rather spend on other things like building your app that's where firebase remote config comes in firebase remote config is a simple key value store that lives in the cloud but don't let that simplicity fool you because it lives in the cloud it means you're able to deploy changes that your app can read within a matter of minutes for instance say you've just pushed your app out to the world and you suddenly discover that your swedish text contains some offensive language how are you supposed to know you don't speak swedish i don't blame you but fixing that text the old-fashioned way would mean creating a new build and going through the entire publishing process again that's
something that could take days which is an awfully long time to have 9.2 million people cursing your name but if your app uses firebase remote config you could change that text in the cloud through the firebase console kind of like this the next time your users fire up the app remote config will grab the latest values update your app's text and just like that you've averted a major international crisis or let's say you've got a puzzle game and you're hearing complaints from your players that level five is too hard if you've configured your app using remote config you could tweak those settings to give your players a few more turns and push out that change to the world but hang on are you sure that's the right thing to do what if the silent majority of your users actually enjoy the challenge of a more difficult level and by making it easier you're going to turn away your most hardcore and potentially highest paying customers how could you test whether or not this change is a good one sounds like you need an a/b test that's where remote config's audience segmentation feature comes in this allows you to deliver different configurations to different groups of users simultaneously so you can try out your new level settings with half your users while keeping the old settings for the other half but audience segmentation isn't just great for a/b testing maybe you've got a feature change that could have a major impact on your in-app economy or maybe you just want to double check that some new networking code isn't going to set your servers on fire you can use firebase remote config to gradually roll out these changes trying them first with a small percentage of your users before pushing them out to your entire audience remote config can also deliver different configuration sets to your users based on all sorts of different factors from device type or locale to any audience segment you've defined in firebase analytics so you can send out one welcome message to your New
Zealand customers and another to your Australian ones or only show your review this app button to people who use your app every day or you can change your home screen experience for your customers who have spent large amounts of money on in-app purchases so they feel special remote config is backed by a client library on ios and android that handles important tasks like caching dealing with flaky connections and keeping network requests lightweight which is always a good thing to give remote config a try check out our documentation here we can't wait to see what you build so you've built an amazing mobile app that your users are going to love but you want to get it into people's hands and let them see just how awesome it is well adwords helps you do this putting ads for your app in front of billions of people that use search youtube google play and more you can quickly set up an ad campaign to reach the type of users that might be interested in your app you only pay if the user clicks on that ad and you can set the budget and acquisition costs that you're comfortable with but how do you know you're reaching the right users maybe some will install your app and forget about it while others will make it part of their daily lives firebase analytics helps you figure this out you can define events that happen in your app that you consider to be important such as reaching the first level of your game purchasing a fancy new pair of sunglasses or returning every morning to check out new products you can tell adwords which of these events are most important to you then adwords will display ads to people who are more likely to complete these important actions in the future you can also build audiences which are specific segments of users and have adwords display your app to them for example imagine that you have a group of users who are very active have added a product to their cart but haven't purchased yet well you can use firebase to create an audience of just these people and then
use adwords to give them specific ads and encourage them to come back to your app and take action understanding your users and engaging with them at just the right time and in the right way will help you build loyal users for your app firebase and adwords working together to help you grow your user base get started today your new users are waiting every time your mobile app crashes it's an invitation to your users to rate it poorly and uninstall it this can spell disaster for the new app that you just launched if you're an app developer you need to know exactly where your app is having problems and you need this information quickly so you can correct the issue before it affects too many of your users this is where firebase crash reporting can help our crash reporting tool collects information about crashes that your users are experiencing and sends that data as quickly as possible to be tracked in your dashboard with the dashboard you can monitor the overall health of your app here you can see the top crashes and track the recent history of crashes in your app crashes are grouped by similarity and ordered by the severity of impact on your users so you always know which issues to address first in order to best increase the quality of your app each instance of a crash comes with detailed information surrounding its circumstances including the stack trace device type and other important details about the device at the moment of the crash to further enhance these details you can log additional information as the app is running all recent log messages are captured for every crash to help your diagnosis in the event that you're able to handle and recover from an error in your code but want to report that event for analysis as well there's an api to send these non fatal errors for display in the dashboard it's easy to get started with firebase crash reporting on android the sdk is enabled simply by integrating the firebase gradle plugin into your build with no additional 
lines of code required and on ios there's a cocoapod which requires a few lines of code for initialization when the app launches to learn more and get started with firebase crash reporting today be sure to start with the documentation available right here we can't help you write perfect code but we can help you fight fires with firebase we are in the era of progressive web apps browsers are more performant and capable than ever and front end javascript frameworks like angular and polymer have simplified development of rich app-like websites you can now build an entire application purely with static files like html css and javascript firebase hosting is tailored for front end web applications firebase hosting is a developer focused static web hosting provider that is fast secure and reliable no matter where the user is the content is delivered fast files deployed to firebase hosting are cached on ssds at cdn edge servers around the world from san francisco to stockholm to seoul your users get a reliable low latency experience and every site is served over a secure connection firebase hosting automatically provisions and configures an ssl certificate for each site deployed so you can get that green lock of confidence deploying your app from a local directory to the web only takes one command so whether you're building a single page web app a mobile app landing page or a progressive web app firebase hosting has you covered to get started with firebase hosting check out our quick start to get you up and running in minutes happy deploying analytics we all know they're important to building a successful app which is why there are many different kinds of analytics tools for app developers to use there are in-app behavioral analytics which measure who your users are what they're doing and so on and then you've got attribution analytics which you can use to measure the effectiveness of your advertising and other growth campaigns not to mention push notification analytics and
crash reporting but quite often this work is being done by completely different analytics libraries which means you've got reports living in various tools across the web and trying to understand trends across these different reports much less get them to talk to each other isn't always easy that's why we've created firebase analytics firebase analytics is built from the ground up to provide all the data that mobile app developers need in one easy place and it starts by giving you free and unlimited logging and reporting that's right no quotas no sampling and no paid tier to worry about simply by installing the firebase SDK analytics automatically starts providing insight into your app you receive demographic information on who your users are how regularly they visit your app how much time they've spent using it and how much money they've spent in your app but not all apps are alike and you can get detailed information about what your users are up to by logging events specific to your app these can include common events that firebase analytics has already defined like when your users add an item to their cart and there's also support for custom events you create yourself like when a user completes a workout in your fitness app or when they take a selfie in your photo app geez but it's not just about seeing what your users are doing it's also about discovering who your users are so in addition to demographic information you can also discover how your different groups of users behave by setting custom user properties have a music app and want to find out whether your classical music fans are browsing more albums than your jazz fusion fans that's the kind of data you can easily break out thanks to custom user properties and firebase analytics doesn't just measure what's happening inside your app it lets you combine your behavioral reporting what your users are doing with attribution reporting or what growth campaigns are bringing people to your app in the first place 
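The custom event and user-property logging described above might look roughly like this in code. This is a hedged sketch: `analytics` stands in for an initialized firebase analytics instance whose `logEvent` and `setUserProperties` calls mirror the SDK, and the event and property names are invented for illustration.

```javascript
// Hedged sketch — `analytics` stands in for an initialized Firebase Analytics
// instance; the event and property names below are invented for illustration.
function trackWorkoutCompleted(analytics, durationMinutes) {
  // a custom event, like the fitness-app example above
  analytics.logEvent('workout_completed', { duration_minutes: durationMinutes });
}

function tagFavoriteGenre(analytics, genre) {
  // a custom user property, used to break out classical vs jazz fusion fans
  analytics.setUserProperties({ favorite_genre: genre });
}
```

Once properties like `favorite_genre` are set, reports can be broken out by that property in the dashboard, which is what makes the classical-versus-jazz-fusion comparison above possible.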
so if you want to know which ad campaigns are bringing you the users who spend the most money or are sharing the app with their friends or have unlocked the last level in your game and are ready for the sequel you can do all of that in firebase analytics but don't stop there once you have all this information you can take action on it using firebase analytics audiences firebase analytics gives you the power to build up groups of users or audiences out of just about anything you can measure in your app want to target users in brazil who have visited the sport section of your in-app store it's as easy as a few clicks in the firebase console once your app has built up this audience you can send them notifications using firebase notifications or you can modify their in-app experience using firebase remote config or you can target them through adwords google's ad platform and then because that impact can be measured using firebase analytics you can confirm you're getting the outcomes you expect firebase analytics already comes with a dashboard that lets you view answers to common questions but if you need more specialized analysis you can export all of your data into bigquery google's data warehouse in the cloud where you can run super fast sql queries to slice and dice this data however you'd like you can even combine it with other analytics data that you might be capturing and this is just the tip of the iceberg of what firebase analytics can do for you to find out more check out our documentation here and give firebase analytics a try launching a great app requires dedication and vision but growing one takes revenue how about a monetization solution tailored specifically to your app one that has rich and engaging ads one that works with firebase to give you the insights you need to grow and one that uses mediation to connect you with networks all over the world well that solution is admob trusted by more than one million apps admob offers developers everything
they need to implement first-class monetization strategies and when paired with firebase it's even better admob is included with the firebase SDK and its apis are built to make adding banners interstitials and video ads to your app simple plus admob automatically selects the ads that pay you the most so you can sit back and watch your revenue grow and as your business grows you can benefit from admob's advanced features say version two of your app has a slick new design and now you need an ad format that fits naturally with your content with admob's native ads you can create css templates designed specifically for your user experience we'll style the ads to match and display the result in a native ad view that fits your app like they were made for each other and it doesn't stop there admob helps you earn in-app purchase revenue too admob can determine which of your users is most likely to make a purchase and target those people they'll see an ad you design and they can make purchases right there now with your app's slick design and in-app products it's become a worldwide sensation but how can you make sure you're maximizing the revenue generated by each user with admob you can connect to ad networks around the world bringing in even more advertisers to compete for your impressions and because you're using firebase you get access to free and unlimited analytics imagine a big-time blogger in tokyo posts about your app and overnight your japanese audience quadruples with firebase analytics you can easily spot the trend and then switch to your admob settings to tweak mediation configurations or start a campaign targeting your new fans that's admob with firebase it's as easy as you want and as powerful as you need the firebase notifications console lets you re-engage your users quickly and easily with it you can manage and send notifications to your users easily with no additional coding required messages can be addressed to single devices firebase cloud messaging
topics or devices that you select using powerful analytics tools so for example you can send a message to all of your users who've made an in-app purchase giving them a special offer allowing you to re-engage with them the firebase notifications console integrates with analytics so you can measure the effectiveness of your messages and explore insights based on your users activities so you can grow your application by easily engaging your users through the firebase notifications console we all know from experience that people love to share things about themselves such as photos videos and gifs that express their feelings so what do you do to let them store and share these files through your app that's where firebase storage can help our storage api lets you upload your users files to our cloud so they can be shared with anyone else and if you have specific rules for sharing files with certain users you can protect this content for users logged in with firebase authentication security of course is our first concern all transfers are performed over a secure connection also all transfers with our api are robust and will automatically resume in case the connection is broken this is essential for transferring large files over slow or unreliable mobile connections and finally our storage backed by google cloud storage scales to petabytes that's billions of photos to meet your app's needs so you will never be out of space when you need it so give your users space to share their lives with firebase storage available right now for ios android and web applications and to learn more about firebase storage check out the documentation available right here hello everyone and welcome to the 2017 tensorflow developer summit and i'm delighted to see all of you here today today we are excited to announce tensorflow 1.0 tensorflow's philosophy has always been to give you the power to do whatever you want but also make it easy and this makes it even easier we really were hoping to 
build a machine learning platform for everyone in the world that was fast flexible and production ready the point of tensorflow was to figure out how we can give this back to the community and be able to use tensorflow to further whether it's the research or the production needs it's how we express our ideas and it's the piece of software our engineers and scientists spend most of their time interacting with so tensorboard is a really exciting tool it's something that will let you take the confusing world of tensorflow and start to dive into it it's just a really amazing time to be an AI researcher one of the projects that we've been working on is using deep learning for retinal imaging can we use deep learning and reinforcement learning to generate compelling media but this is just the beginning the tensorflow community is truly global we want to see all the amazing things that you guys can do with tensorflow thank you very much good morning berlin it is an absolute pleasure to be here with you here we go we're live we have a lot of experience building some of the world's most popular applications and we've learned a thing or two about what it takes to build an app and we found that it's a pretty difficult process a lot of your time goes into running infrastructure instead of building the features that make your app your app there has to be a better way that better way is using firebase we're now up to over 750 000 developers using the product if you use firebase your app's code talks directly to our powerful managed backend services we take care of security and of scalability so that you can focus on building the features that your users love today we're launching firebase ui 1.0 it's an open source library it has customized theming and it works for web and android and ios so you can go ahead and drop that in and you'll have all of the uis that you'll need is my app set up correctly which events are being captured by the sdk are you receiving my events and
parameters we've built the ideal tool to answer all of these questions and address these pain points app quality leads to better user retention the better your app is and the more stable it is the more likely users are to come back and for your business to be successful and sustainable and that's where we come in so we're really looking forward to getting feedback from the community as always to help us continue to refine the product and to work together to help you build a better app and we want you to be able to spend all of your energy on bringing innovation and creativity something new to the world that's really what we're trying to achieve here is making all the infrastructure pieces simple for you and i'm really excited for you to engage with firebase and see how it can make you more successful all right let's get back to the code in the experiments tent and there are so many cool things to play with before we get to that which is going to be in just a moment i just want to ask you what experiments are all about thanks so experiments are platforms that we bring out as websites each one is a kind of playground that invites everybody to come in and send in interesting playful things that they're doing with our technologies we have chrome experiments android experiments webvr experiments and ai experiments they're all there and new platforms will come out and we're very excited about the things that people are doing with them and a lot of them are open source right yes a lot of them are open source which is amazing because i mean experimentation is a wonderful way for people to play and discover new things for themselves but when they share with the community then everybody gets to see what's possible with ai what's possible with webvr and it's really amazing and it's also amazing for us at google to see the surprising things that sometimes happen when people just have fun and play with these technologies and a lot of them are built by people who aren't googlers oh
yeah absolutely i mean there's a submit button down there and you can just go and send us the stuff that you're doing in the garage the kind of passion project that you have and you'll be surprised how helpful some of these projects could be for everybody else especially as we all get to get inspired and imagine new things with these technologies all right let's check some of them out first off is quick draw and i'm actually going to play do you want to like narrate for everybody at home what we're doing yeah absolutely so quick draw oh thanks so you might have actually played this game at home but this is the multiplayer version of the game what's happening here it's a machine learning game so you're gonna draw something and yep you're gonna draw something and the machine is going to try to guess what it is that you're drawing and if the machine can guess right you get a point all right so here we go we got timothy adam and nick three two one go so the machine is going to tell them something to draw in this case for example timothy is asked to draw water and you can see over there that he's trying to draw water and if the computer vision algorithm can guess it's actually the handwriting api if it can guess that he got water right then he gets a point i think nick is killing it over there he's drawing a train adam seems to be in first place at 14 after we got nick with 12 and tim up there we go we're done time's up third place timothy but it's okay how do you draw water i just drew the surface some things are perceived differently right like people think water drops versus rivers water is shaped differently every time yeah awesome thanks guys all right let's go check out something else there's something that makes music over there right yes okay so this is a really exciting one the artist giorgio moroder who used to be or is actually
no he is the king of euro pop created this amazing soundtrack and what we've done is we're using the computer vision api to go and have the phone identify different items and then the phone is actually narrating itself as a soundtrack to the music telling us about what it's seeing what it thinks it's seeing and how it's identifying these objects and that actually becomes part of the song so it's kind of that's really cool i think it's better to just listen to it right all right yeah let's go check it out you guys ready all right which one should we do first we should do recognition take a picture and they'll tell you what i see leave this with you guys thank you that was really cool i got a couple more things that was super fun so this is satellite imagery yes this is satellite imagery it was created by the artist zach lieberman and the data arts team so the idea here was really simple what if you could search the world using a simple gesture right and notice here how he takes you to different places all over the world every time you draw a gesture so let's just give it a shot let's see where we go spain so it's an experiment it's an experiment it's just a really wonderful experience right it's not like you would be able to find your way to a specific place by drawing a gesture that's not the point the point is really this is optimized for a really joyful and fun experience do you want to try the other mode as well yeah so what it's going to do here is build this imaginary city built around these tiles as you start dragging in a certain direction it'll start building a road by combining all these tiles there's about 50 000 of them that the team has processed using various algorithms to go and create this experience and this is a chrome experiment you can check it out at chromeexperiment.withgoogle.com i know sorry chromeexperiments.com really cool it's really joyful and rather pleasant it is yeah it's very meditative i mean
you know we've been standing here and seeing people play with it and we find ourselves just starting to like yeah yeah it's mesmerizing zoning out on the satellite imagery oh absolutely i mean the world's a big place it's a big place it gets a little smaller when you get to travel it this way okay i think we have time for one more thing what should we check out yeah let's go check out actually a submission by a former student he's graduated now from Denmark and he's created a wonderful piece using machine learning a very unexpected interesting piece so this is objectifier with spatial programming and it's a really cool machine learning device that takes visual input processes it through tensorflow and ultimately triggers if-this-then-that statements so you could imagine putting your favorite reading glasses on would trigger a reading lamp pulling the sheets up at night getting into bed will turn all your lights off it can happen on device it was built by a developer Bjørn out of Denmark and if we had the proper european adapter we could do it all right within this physical hardware there's a raspberry pi zero in there and then it can ultimately connect to anything you can plug in for the trigger all right well i was playing with this earlier it was very cool i was able to hit the button and do a gesture and then using that gesture we could turn it off or on that's awesome so you wanted to say one last thing about quickdraw one last thing we're very very excited today to announce that we're releasing open sourcing 50 million images these are vector drawings with the actual ordered draw strokes of images that people have drawn in quickdraw it's available now for developers to go and train their own models on which is very exciting researchers can go and look at it and find some interesting patterns and artists can go and do interesting and weird things and submit them back to us as experiments so we're very
very happy to share all of this data with the world that's awesome thank you so much and thank you for showing us around the experiments tent oh absolutely thank you Timothy all right so that was experiments now if you want to see this video again or you want to see any of the other videos we're recording at the sandbox areas here at google i o 2017 head on over to g.co slash i o slash guide i'm timothy jordan and i'll see you at the next one good afternoon thank you for coming i'm justin novosad i'm a software engineer on the chrome team and this afternoon i'll be talking to you about ways of cranking up the performance in your graphics intensive web apps and games for those of you who haven't read the abstract just to make sure we're clear this talk is going to be mostly about canvas and a few other things here's what we're going to be covering today we're going to look at ways to get smoother image loading this is because image loading is often a source of jank in web apps and also we'll look at the problems that we face when we're aiming for 60 frames per second and how to solve these and we'll look at moving web apps closer to the performance that we get with native apps we'll also be looking at ways to do memory efficient and cpu efficient background rendering and finally we'll look into streamlining some important use cases with webgl in particular multi-view rendering and webvr the first topic synchronous image decodes so the current practice when we load an image asset into an app is to start using it as soon as we get the signal that the image resource is loaded so this is what we do with the onload handler right so onload guarantees that trying to use the image is going to work if we use it immediately but it does not guarantee that it's going to be fast so the first time that we use an image resource usually there's a delay due to the decode overhead and that can cause apps to be janky
especially if we start loading images in the middle of an animation examples of use cases are the first draw of an image to a 2d canvas using the drawImage method and also a texture upload to a webgl context using the texImage2D call so the solution to this is to decode the images in the background so image decode is not something that web developers are used to controlling because we kind of just leave that up to the browser and it just happens but you can take control of this using a new api that's called ImageBitmap so this was released earlier this year it's available in chrome already and it's also available in firefox here's how the spec defines what an ImageBitmap is there's a few words in there that are a little interesting towards the end of that sentence undue latency what exactly does that mean it's kind of subject to interpretation but what is clear is that this is a performance primitive and what we do know is that image decode is a source of latency that we want to get rid of so in chrome we interpret this as we want to get rid of decode latency so here's how it works the use case we're looking at here is we have a web app that runs an animation and while we're in the middle of the animation we have a new image that we want to load from the network and start using so we can do that fetch asynchronously using the fetch api or using XMLHttpRequest and while that fetch is pending the animation loop can keep running and then once we get a blob back we can call createImageBitmap this new api and that will initiate the image decode and the decode is going to happen on a separate thread inside the browser so your animation loop can keep running while that's happening and finally when we get the image back you can use it in your webgl context or in your 2d context and when you're done using an ImageBitmap like once you've uploaded it to a webgl texture it's a good idea to call close this is not
mandatory but if you close the object it's going to free the resources that are behind it immediately instead of waiting for garbage collection so this can help keep down the memory footprint of your app it can also help reduce the frequency of garbage collection because if you have all these dangling large objects they're going to have to be collected more frequently but once you clean up an ImageBitmap object it has a tiny footprint and you can collect these over many frames so here's the old way of doing things this should look pretty familiar just using the image interface and loading it and once it's loaded we use it the new way is not a lot more complicated it's just maybe an additional line of code in this example we're using the fetch api but you could also use xhr and there's nothing new about how we fetch the blob here just once we have it we call createImageBitmap and that function returns a promise and the promise resolves once the image is completely decoded so here's some videos i'll just let you watch them and let them kind of speak for themselves so in the left one there's a bit of a pause when the image changes and that's the decode that's causing that jank and the right one is smoother these were captured on a nexus 5x device so this is a phone from last year and we were using one megapixel images they're not that large just one k by one k now let's look at the dev tools timeline and see what was going on so in the first case where we're using an image element there's this long image decode and on the top row of this graph we see frame times what's interesting is there's one frame that took 233 milliseconds to render that's terrible that's definitely noticeable by the user so what's causing this is the synchronous decode inside the call to texImage2D so that's what this flame chart is showing the image decode is nested inside and this is what happens the first time you use
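The fetch-then-createImageBitmap flow described above might be sketched like this. `gl` is assumed to be an existing webgl rendering context and `loadTextureSmoothly` is a hypothetical helper name.

```javascript
// Sketch of the createImageBitmap flow described above.
// Assumptions: `gl` is an existing WebGL context, `url` points at an image asset.
async function loadTextureSmoothly(gl, url) {
  const response = await fetch(url);  // async fetch; the animation loop keeps running
  const blob = await response.blob();
  // the decode happens on a browser worker thread, off the main thread
  const bitmap = await createImageBitmap(blob);
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // with the synchronous decode gone, this upload is now cheap
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
  // not mandatory, but close() frees the decoded pixels without waiting for GC
  bitmap.close();
  return texture;
}
```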
the image resource if we were using a 2d context with the drawImage call the chart would look essentially the same it would just be drawImage instead of texImage2D and we have the same problem but the second time we use the same image resource the texture upload would be a lot faster but in webgl you don't really acquire an image and then upload it multiple times you really just need to do this once so we're not really taking advantage of the decode cache and also images are not decoded in advance for a really good reason people often ask this like how come when I load an image clearly I'm expressing intent that I want to use this image so why doesn't the browser just go ahead and decode it right away so that it's ready for when I need to use it but if the browser were always doing this with all images it would run out of memory because a lot of web pages have tons of image resources and we can only really afford to decode them as they're used so that's why we're stuck with this problem now this is what happens when we use the createImageBitmap api on the top row of the main thread you see there's a bunch of small little blue lines that's the overhead for fetching the blob from the blob store and the big green rectangle at the bottom that's the image decode and the image decode is happening on another thread that's called worker pool and what's important here is that while this decode is happening the main thread continues we can see all those little animation frame ticks so it keeps doing its work while this long task is happening on the worker pool and now this little blue arrow you see right there that's pointing at the texImage2D call so this thing that was the big source of jank before is now reduced to about three milliseconds so it's no longer a big issue now another use case where we experience jank from image decodes is image injection when you have a page
that's just adding images to its existing content by injecting them well it's not really a good idea to start using ImageBitmap for that and examples of kinds of pages where this would be an issue are really long scrolling web pages with lots of images like a news website or a social media feed if we started using ImageBitmap for all the image resources we could run out of ram because there could just be too much content in there especially if we're decoding them on a mobile device so there's another solution for that use case this is currently under development so it's not yet in the browser but expect to see it soon as an experimental feature in chrome this is a new function on the html image element interface we're adding a decode function when you call this what happens is it triggers a decode that can also happen on a separate thread and when it's done there's a promise that gets resolved and that tells you now's a good time to use the image to for example append it to the document and that won't cause jank so the reason we want to use this instead of ImageBitmap is that this api does not pin the image resource in ram the decoded version of the image is evictable so it's a lot friendlier for situations where we're under memory pressure so expect to see that soon behind a flag in chrome so next topic animation smoothness so ui performance is always a concern in all types of applications for graphics intensive apps and for games it's a constant battle we add features we make things prettier and our rendering time increases the app gets slower we lose our beautiful 60 frames per second so we have to reel it back in on mobile it's even harder we're dealing with slower cpus and gpus less ram and also the touch interfaces make lag more perceptible to the user so this is what it looks like when we're trying to drive animation using the dom so we
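Going back to the image decode function described above, a sketch of the injection use case might look like this. `injectImageWithoutJank` and `container` are hypothetical names, and the api was still experimental at the time of the talk.

```javascript
// Sketch of the image decode() flow described above for the
// long-scrolling-feed use case. `container` is any element in the page.
function injectImageWithoutJank(container, url) {
  const img = new Image();
  img.src = url;
  // decode() resolves once the pixels are ready so appending won't jank,
  // and unlike an ImageBitmap the decoded copy stays evictable under memory pressure
  return img.decode().then(() => container.appendChild(img));
}
```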
have our 16.7 millisecond budget this is if you're targeting 60 frames per second and there's the blue rectangle there the javascript slice that's the time your app has to do its work whatever it needs to do at each animation frame and we also have to keep a big chunk of time at the end where we're not planning to do anything this is just idle padding and we need this because there's a lot of stuff that your app cannot plan for that the browser just does for example a garbage collection could just happen at any time or there could be thread scheduling issues if you're on a cpu that doesn't have a lot of cores and there's a lot of work to do the main thread could get descheduled and all of a sudden you have one frame where everything takes a bit longer to execute and also you could have event listeners and a burst of events that happens all of a sudden and you have to deal with it and this is tricky so you need to have some padding if you really want to be able to keep that 60 frames per second rate then there's the browser overhead there's style and layout this is just the cost you have to pay whenever you're touching the dom directly in javascript and then there's the painting and compositing overhead this is the cost of just re-rendering the content of the web page and there's no way around this on mobile the fixed costs that are imposed by the browser get stretched because you're on a slower device so that squeezes your javascript time slot even more and that makes your frame deadline even harder to respect now when we're animating with canvas that orange rectangle that we had for style and layout just goes away because we're not touching the dom so things get a little bit better this timeline is also valid for use cases where we're using css animations well certain types of css animations like if you're animating opacity or
simple 2d transforms the style and layout calculations don't need to be recomputed at each frame then there's the painting and compositing step it's usually a little bit lighter when we're using a canvas because the browser has less work to do so in general when we're doing sprite based animations we don't really need all that extra support that the dom and css give us and you can get a significant performance boost by using canvas and this is what a lot of 2d game developers have been doing since html5 canvas appeared several years ago now this is what your timeline looks like when you're making a native app first of all you have more predictability therefore much less idle time is required because you know you won't get any surprise garbage collections the threads are under the control of the developer so you have control over how many threads you have and what workload they have so you don't have to worry so much about scheduling if you're doing things right and also your events like your input events can be coalesced so the time that they consume is more predictable so that's nice now how do we get the web app experience closer to what we have on native well the first thing we want to do is get rid of some of that overhead now the painting and compositing overhead is necessary with canvas animations because we have to go through the same presentation process as the dom does because we're keeping all of our content synchronized but many apps that use canvas are not touching the dom so what if we just broke that synchronization constraint in order to get rid of that step well there's a new api for that and it's called off-screen canvas so off-screen canvas is kind of like a canvas element except that it's not a dom element it's just an interface it has no style no layout it's not directly paintable it's not compositable so
this means it can't be a part of a document by now you're probably wondering so if it's not paintable how do i do anything useful with it the whole point is i'm animating something that i want to show on-screen so how do we display it well we use a placeholder a placeholder is a regular canvas element that lives inside the web page and the off-screen canvas is connected to it so we can inject the content that we rendered in the off-screen canvas into this placeholder location on screen that's reserved by the canvas element and the commit process is very lightweight it bypasses the dom update mechanisms and in chrome's implementation the presentation all happens in the browser process so it doesn't consume any time on the main thread which is where we're trying to save time for your application this is what it would look like to use off-screen canvas in javascript so first of all we're not constructing the off-screen canvas directly we're calling transferControlToOffscreen this is a method on canvas elements that creates that placeholder connection you get a new off-screen canvas that's connected to the placeholder canvas then we create a rendering context the same way you would on an html canvas and then in the animation loop things look a little bit different there's this commit method that we call and this is what pushes the content update to the placeholder and you'll notice that there's no requestAnimationFrame that's because requestAnimationFrame is tied to the dom graphics update mechanism and we're trying to bypass that so we're using something else commit returns a promise and the promise will resolve when it's time to render the next frame so that's how this works commit also plays the role of requestAnimationFrame at the same time so let's see how using this impacts our cpu usage on the main thread this graph shows two devices a
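The slide walkthrough above might be sketched like this, mirroring the commit-based loop as the talk describes it (the api was still evolving at the time, so treat this as an illustration rather than a final api). `startOffscreenLoop` and `drawFrame` are hypothetical names for your setup function and per-frame rendering code.

```javascript
// Sketch of the transferControlToOffscreen + commit() loop described above.
// `drawFrame` is a stand-in for your per-frame rendering code.
function startOffscreenLoop(canvasElement, drawFrame) {
  // not constructed directly: transfer creates the placeholder connection
  const offscreen = canvasElement.transferControlToOffscreen();
  const ctx = offscreen.getContext('2d');
  function loop() {
    drawFrame(ctx);
    // commit() pushes this frame to the placeholder, bypassing DOM presentation;
    // its promise resolves when it's time to render the next frame, so it
    // also plays the role requestAnimationFrame normally plays
    ctx.commit().then(loop);
  }
  loop();
}
```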
So let's see how using this impacts our CPU usage on the main thread. This graph shows two devices: a powerful workstation and a typical phone. The orange bars represent the time it takes to present a 300 by 300 pixel canvas to the screen, and what we see is that in both cases, the time it takes when we're using an OffscreenCanvas is faster by about an order of magnitude, regardless of the device. On mobile, we were taking 2.7 milliseconds out of our 16 millisecond budget just for pushing pixels to the screen, and that overhead is not necessary in most cases. So this kind of seems obvious: we should just always use OffscreenCanvas. But it's not that simple; it's not a free win, because we lose synchronization with the DOM, and some apps really need that. But if you're doing a game that runs in a full-screen canvas, for example, you most likely don't care about that, so you can just use this and get a perf win for free. Now, what happens if your main thread is busy? For example, there's a lot of event traffic happening on the main thread, and certain events have to happen on the main thread, because that's where the interfaces are. Well, OffscreenCanvas can be used in a worker. This way we give the animation loop its own CPU core, and it can just run without being disturbed, and we're reducing the idle time that we need, so that's another step that brings us closer to native performance. We're giving our rendering loop an isolated execution environment, so we get more predictable run times for each animation iteration, and therefore less idle padding. Here's how it works. The first part, creating the OffscreenCanvas, is the same as before: we just call transferControlToOffscreen. And at the bottom of this page, the animation loop is also the same as before, except that it's running in a worker. The difference is what happens in between: after we create the OffscreenCanvas, we need to send it over to a worker, so we use postMessage for that, and you'll
notice the way postMessage is called here: we're using the second argument, the transferable list, and that's really important, because the OffscreenCanvas object is not cloneable, but it is transferable. And when you transfer it, it preserves the connection with the placeholder, so even though we've moved the object into a new realm, if we commit from the worker, the content still ends up in the placeholder canvas. Now, here's a simple example. Oh, there's something wrong with the video; let's retry that slide. Okay, while we wait, I'll describe what we're supposed to see, so everyone close your eyes and try to imagine this: on the left, there's a canvas with an animation that runs really poorly, happening on the main thread, and on the right there's another animation. Oh, here we go, thanks. All right, so the animation on the right is going nice and smooth, and the one on the left is janky. The one on the left is being rendered on the main thread, and the reason it's so choppy is that we deliberately inserted long tasks on the main thread, and those long tasks are preventing requestAnimationFrame from reaching its target frame rate. The point this makes is that even though requestAnimationFrame isn't running at full speed on the main thread, the worker can still pump out frames at 60 frames per second with no issue, and that's the whole point of isolating it. So that's what that showed; can we switch back to the slides now? Thank you. All right, so by now you're probably thinking: all this seems nice, but there's one little problem. I'm already using a framework, and it just uses canvas; it doesn't use OffscreenCanvas. Well, maybe, but is that really a problem? Something really interesting is that the web platform is highly hackable, right? People do polyfills. Well, with OffscreenCanvas, often we can just drop it into frameworks that usually use an HTML canvas element and hope for the best.
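The worker hand-off described above can be sketched like this. The worker object and canvas element are assumed to exist; the key detail is the transfer list in the second postMessage argument, since OffscreenCanvas is transferable but not cloneable.

```javascript
// Sketch: move an OffscreenCanvas into a worker's realm.
function sendCanvasToWorker(canvasEl, worker) {
  const offscreen = canvasEl.transferControlToOffscreen();
  // The transfer list (second argument) moves the object rather than
  // cloning it; the placeholder connection survives the transfer, so
  // commits from the worker still reach the on-screen canvas.
  worker.postMessage({ canvas: offscreen }, [offscreen]);
}
```

After the transfer, the OffscreenCanvas is no longer usable in the sending context; the worker receives it in its message event and renders from there.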
And maybe things just work, or maybe you have to do a little bit of bootstrapping to get things working, kind of like a reverse polyfill mechanism, if that makes sense. So I'm going to walk you through an example of this using the three.js library. For those of you who don't know it, three.js is a really popular framework for doing 3D rendering on the web, and it's meant to work with canvas elements. What we're going to see is that not only can it easily be set up to work with an OffscreenCanvas, but we can also use it in a worker without too much work. A quick disclaimer: the steps I'm going to show in the following slides are not a complete, full-featured solution; they're just some ideas to get the ball rolling, if any of you would like to try this. First step: getting it to work with an OffscreenCanvas. The way most people use three.js renderers is to let the renderer create its own canvas. We can override that behavior, because the renderer constructor can take a creation parameter called canvas, and we use that parameter to specify our own canvas: okay, don't create your own, I have one, go and use this. But this parameter expects to receive an HTML canvas element, so if we send it an OffscreenCanvas instead, will it just work? Well, it almost just works; there's one little thing we need to do. OffscreenCanvas doesn't have style, right? It's not a DOM element, so why would it have style? And there's a bit of code in three.js that tries to write the width and height style. Here I just created stubs, but you could do something a little more clever: you could create a getter and a setter that forward the style to the placeholder canvas. That might make sense for your application, maybe not, but anyway, the stub is enough to get started and show stuff on the screen. The other problem is how to get three.js to work in a worker.
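A minimal sketch of the style-stub idea, assuming a THREE namespace is in scope; the stub just gives three.js somewhere harmless to write width and height styles, since an OffscreenCanvas has no style object.

```javascript
// Sketch: hand an OffscreenCanvas to a three.js WebGLRenderer.
function makeOffscreenRenderer(THREE, offscreen) {
  // OffscreenCanvas is not a DOM element, so stub out .style; a fancier
  // version could use a getter/setter that forwards to the placeholder
  offscreen.style = { width: '', height: '' };
  // the "canvas" creation parameter tells the renderer not to create
  // its own canvas element
  return new THREE.WebGLRenderer({ canvas: offscreen });
}
```

This is the bootstrapping flavor of fix: nothing in the three.js distribution changes, so you can keep loading it from a CDN.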
three.js has dependencies on some DOM interfaces, the most important one being HTMLImageElement, but we can work around that, using the same texture loading mechanism we saw in the first part of the talk, where we used createImageBitmap. That's what the diagram at the bottom of the slide shows: use XHR or fetch to get a blob, then from the blob use createImageBitmap to get an ImageBitmap, then upload that to a texture. Here's one way of doing it; this is a really simple, quick and dirty way. Of course, we could just tweak the three.js library to make it do things the way we want, but here I'm not touching the three.js distro, and there's a good reason for doing it this way: if you don't hack it, you can keep getting three.js from your favorite CDN. So what we're doing here is hacking the prototype by redefining what ImageLoader.load does. It turns out that three.js already provides a FileLoader interface, and FileLoader can be used in workers because it uses XMLHttpRequest under the hood, so we can just wrap that. But that's not enough; we also need to produce an image, so here we use createImageBitmap, and once we get our ImageBitmap object back, we send it to the onLoad handler. Now, onLoad expects to receive an HTMLImageElement, but we're tricking it: we're giving it an ImageBitmap instead. And this happens to just work, because if we're using the image for a texture upload, the OpenGL texImage2D call doesn't care whether what you're giving it is an HTMLImageElement or an ImageBitmap, so it can just make the substitution, and it works. Now, if you're making a game engine, you're probably thinking: to get everything working in a worker, I also need input events, and there are no input events in workers, so how do I get those mouse clicks and keyboard actions? Well, the solution is to forward events, but this is not a great solution; it has a serious downside.
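The prototype hack described above might look roughly like this. To keep the sketch runnable outside a browser, fetch and createImageBitmap are injected as parameters; in a real worker they would be the globals, and the loader class would be THREE.ImageLoader.

```javascript
// Sketch: redefine ImageLoader.prototype.load so textures are produced
// with fetch + createImageBitmap (both available in workers) instead of
// an HTMLImageElement.
function patchImageLoader(ImageLoader, fetchFn, createImageBitmapFn) {
  ImageLoader.prototype.load = function (url, onLoad, onProgress, onError) {
    fetchFn(url)
      .then((response) => response.blob())
      .then((blob) => createImageBitmapFn(blob))
      // onLoad expects an HTMLImageElement, but an ImageBitmap works for
      // texture uploads because texImage2D accepts either one
      .then((bitmap) => onLoad(bitmap))
      .catch(onError);
  };
}
```

The substitution only works for the texture-upload path; code that reads image-element-specific properties would still need real workarounds.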
If you just have your event listeners on the main thread, and when they receive input events they postMessage them to the worker where they can be used, then if your main thread is busy, you'll be introducing input latency. So this is still not ideal; it's an unsolved problem, but we're determined to make this better in the future. This is a future next step for this class of solution. Now let's look into some other uses of OffscreenCanvas. Background rendering: this is an interesting one. People are already doing background rendering today, for example in situations where a game needs to do some asset preparation on the client side; maybe you're rendering text labels that you're going to use as textures. If you're doing a lot of this work, you're probably doing it with today's technology, using an HTML canvas element that is not attached to the DOM; that way it's all hidden, it doesn't appear on screen, it's just doing its stuff. But we can do a little better: we could move that work over to a worker. That way you can batch all of your background rendering; you don't need to worry about dividing it into small chunks so that your UI can remain interactive, or anything like that. This is how you do it: in your worker, you create an OffscreenCanvas, but this time we're creating it directly, using the constructor, without a placeholder. When we're done with our background rendering, we want to capture the results to send them back to the main thread, and that capture can be done using a method called transferToImageBitmap. As the name implies, this is a transfer, which means we're not making a copy; we're just tearing off the pixel buffer that the canvas rendered to, and because we're just tearing it off, it's basically just copying a pointer. What this means is it's going to be faster, because there's no copy, and leaner in memory, because we don't have duplicate copies of the same image in RAM, and that's really going to help with memory bloat.
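A sketch of that worker-side pattern; the canvas dimensions and label text are placeholders, and the OffscreenCanvas constructor is injected here only so the sketch stands alone (in a worker you would call `new OffscreenCanvas(...)` directly).

```javascript
// Sketch: background rendering in a worker with a standalone
// OffscreenCanvas (constructed directly, no placeholder).
function renderLabel(OffscreenCanvasCtor, text) {
  const canvas = new OffscreenCanvasCtor(256, 64);
  const ctx = canvas.getContext('2d');
  ctx.fillText(text, 8, 40);
  // zero-copy capture: tears off the rendered pixel buffer instead of
  // copying it, ready to be posted back with transfer semantics
  return canvas.transferToImageBitmap();
}
```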
For sending the rendered results back to the main thread, or maybe to another worker if that's where your rendering loop is, we use transfer semantics there too. It's important to note that ImageBitmap is transferable, but it's also cloneable, so if you don't specify that second argument to postMessage, it's still going to work, but it's going to be slower. So remember to do this if you don't mind what the transfer implies, namely that you can no longer use that image object in the current context, and you'll get some good performance gains. Just so you all know, we've been getting feedback from app developers telling us that it's really hard to write applications that do image manipulation and that work on mobile. The second you try to use a high-resolution image, for example a picture from a DSLR camera, it's really easy to start running out of memory, and this is due to all the multiple copies of the same image data that we end up with in memory, just because of how the current APIs are designed. So we're really trying to move towards zero-copy interfaces, and that's what these transfer semantics are about. This should really help with imaging apps, and going forward we're going to continue to provide new zero-copy APIs. Another use case for OffscreenCanvas: multi-view rendering with WebGL. Imagine CAD software, or 3D modeling or animation software, where you have multiple views of the object, like what we see here on screen. When we're rendering multiple WebGL views that use the same resources, the same vertex buffers, the same textures, the same shaders, it's useful to share resources between these views. Now, WebGL does not allow resource sharing between different canvases; it's just designed that way. So the solution most people are using is to have a single canvas in the background that we use for
rendering all the views, and when a view is ready, we just copy it to the presentation canvas. This can already be done today, using an HTML canvas element that's in the background, not attached to the DOM, but we can make it a lot more efficient by using transfer semantics. This is how it works: from the background canvas we can use transferToImageBitmap, which is a zero-copy operation, and to present it we can use a new type of rendering context called bitmaprenderer. The bitmaprenderer context is a very simple canvas rendering context that has only one method in its API: transferFromImageBitmap. That again is a zero-copy method; the alternative would be to use drawImage to a 2D canvas, and then you're paying the cost of compositing and you have an extra buffer in memory. And here's yet another use for OffscreenCanvas: WebVR. This is still a work in progress; we're not sure exactly what this API is going to look like in the end, but it's being discussed right now, and what we do know is that we want to get WebVR to run in a worker, because we're looking for that advantage of having an execution context that's isolated from the rest of the browser, so we can hit 90 frames per second, or 75 frames per second on desktop. And in general, we want to be able to squeeze in more content to get those rich user experiences, and the worker can help with that. Finally, if you want to try OffscreenCanvas, we recommend trying it in Chrome 60. It is there in earlier versions of Chrome, but it's not quite ready for consumption; in Chrome 60 it's starting to look pretty good. You can get it for Android on the Play Store: just search for Chrome Canary. You can also find it on the web to download onto your favorite desktop platform, and you can install it alongside an existing Chrome installation, so it's perfect if you just want to test it in a sandbox.
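Going back to the multi-view pattern for a moment, the capture-and-present step might be sketched like this, assuming a hidden shared render canvas and a visible view canvas; transferToImageBitmap and the bitmaprenderer context's transferFromImageBitmap are the two zero-copy halves.

```javascript
// Sketch: present one WebGL view rendered on a shared background canvas.
function presentView(renderCanvas, viewCanvas) {
  // zero-copy capture of the just-rendered view
  const bitmap = renderCanvas.transferToImageBitmap();
  // bitmaprenderer has a single method, also zero-copy
  viewCanvas.getContext('bitmaprenderer').transferFromImageBitmap(bitmap);
}
```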
And once you have Chrome 60, you have to turn on the experiment: navigate to chrome://flags, find the item called Experimental canvas features, click Enable, and restart the browser, and you're good to go. Now, if you're interested in following the developments with OffscreenCanvas, here's a link to the bug. Just follow that link, and there's a little star in the corner; click that star and you'll get email updates, so you'll know what's going on, whether it's ready to ship, what the outstanding bugs are, that sort of thing. And if you do try it out, please provide feedback. If you find issues, submit bugs on crbug.com, and not just about OffscreenCanvas, by the way: web developers, please help us help you, go submit bugs, help make Chrome better. And this is the last slide; let's summarize what we've learned today. Yes, I see a few people with their cameras out; this is the one slide you want to take home with you. We learned that you can get smooth asynchronous decodes with ImageBitmap, reduce your rendering overhead using OffscreenCanvas, isolate your rendering in a worker to improve your smoothness and your throughput, and reduce your memory bloat by using transfer semantics, and we saw examples of use cases where transfer semantics come in handy, like background rendering and multi-view WebGL. So that's it; thank you for listening, and I'm looking forward to seeing what you're all going to make with these APIs. The only thing that evolves faster than technology is our expectations. We want everything better, easier, now. Suddenly, downloading an app feels like it takes forever, and in many parts of the world data is still at a premium, with one megabyte costing up to five percent of a monthly wage. Let's face it, though: until now, the alternative to native apps hasn't been great. Progressive web apps can now deliver mobile web experiences with a native look and feel, offering features like real-time push notifications, adding a site to
your home screen so you can easily jump back to it with a single tap, even when you're offline, plus the ability to make quick payments on the go. This is the next generation of the mobile web. So what are we waiting for? Let's go and build something great. Android Wear 2.0 is the most significant update to Android Wear since its launch in 2014. I'm Hoi Lam, and I'm here to take you through three highlights of Android Wear 2.0. First, material design has arrived on Android Wear. The new Android Wear interface uses a darker color palette, which helps the watch blend in better in social environments; it also helps save battery on OLED displays. We also suggest that developers adopt vertical layouts and reserve horizontal swipe for activity dismissal; in our research, this makes the user interface easier to understand. To help developers implement these new design patterns, we have introduced new user interface components such as WearableDrawerLayout and WearableRecyclerView; check out the material design for Android Wear site for more details. The second highlight of Android Wear 2.0 is watch face complications. Before Android Wear 2.0, if a watch face wanted to display data from another app, the two needed a one-to-one agreement in both business and technical terms. To alleviate this complexity, with Android Wear 2.0 we have introduced the Complications API. Complications are a traditional watchmaking term for areas of the watch face that display information other than the time, such as the date or the phases of the moon. We have extended this concept to smartwatches, so any app can publish data for watch faces to consume, and watch faces can display this data in a style that fits their unique designs. For app developers, this helps increase user engagement and mindshare; for watch face makers, this adds utility to the watch face; and users can now choose whichever watch face they want, as well as getting the data they need. Check out the watch face and complications sections of the developer documentation for more details. Last but not least:
standalone functionality. Previously, Android Wear 1.0 apps required a phone app to communicate with the cloud. With Android Wear 2.0, Wear apps can access the internet directly, without needing a corresponding Android phone app installed. This means the Wear app can access the internet even if the user has an iPhone. To help with app distribution, we have also put the Google Play Store on the watch: users can now download apps directly on their wrist. Apart from these three areas of improvement, material design, complications, and standalone apps, there are numerous other enhancements in Android Wear 2.0; check out the Android developer site for more details. I'm Hoi Lam, happy coding. Hi everyone, and welcome back to Ask Firebase. I'm James Tamplin, a co-founder and product manager on Firebase. A fun fact about me: I first started programming at the age of 12, using TI-83 BASIC in math class; needless to say, my math teacher wasn't pleased. Our first question comes from Mr. Pan Four: how did you meet Firebase co-founder Andrew Lee? So Andrew and I actually went to high school together; we've known each other now for about 17 years. After we both finished university, Andrew gave me a ring and we had dinner at a small midwestern diner, and nine years later we're still working on making lives easier for developers with Firebase. This next question comes from Harold Weisinger, who asks: is there a way to run my Cloud Functions once every hour? Great question, Harold. Unfortunately, currently there isn't; however, it's our number one most requested feature for Cloud Functions, which we launched last month. Stay tuned for more news on our Twitter, Facebook, and YouTube channels. Thanks for the question, Harold. Next question: TJ asks, can you check the progress of a database request, so you know when it's finished? Well, TJ, we have a couple of ways you can get at this. First of all, when you write to the Firebase Realtime Database, you can attach an onComplete callback with your set request, which will let you know when the write has been committed to disk.
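For reference, that completion callback looks roughly like this in the Realtime Database web SDK's set(value, onComplete) shape; the data here is hypothetical, and the database reference is injected so the sketch stands alone.

```javascript
// Sketch: write to the Realtime Database and learn when the write
// completed, using set()'s onComplete callback.
function saveScore(ref, score, done) {
  ref.set({ score: score }, (error) => {
    // onComplete fires once the write has been processed
    done(error ? 'failed: ' + error.message : 'saved');
  });
}
```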
Alternatively, you can check out the Firebase Realtime Database profiler that we released just a couple of weeks ago. This will give you granularity at a path level into both the latency and the bandwidth usage, so you can get a little more insight into what's going on with the Realtime Database. Thanks for the question, TJ, and check the links in the description for more information. Our next question comes from Seichun Kim: what's the limit on the number of Firebase projects that can be created? Well, Seichun, it depends. We don't actually publish or document the limit of projects that can be created, since that number depends on inputs from our spam and abuse teams. However, a couple of things to note: first, billing accounts, or paid accounts, don't count towards this limit, so if you're having trouble, you can upgrade to either our Flame plan, which is $25 a month, or our Blaze pay-as-you-go plan. Also, if you're still running into trouble, please reach out to our support team and they can help you raise that limit; check out the link in the description below. Next question: Charles asks, where do you see Firebase in five years? Well, Charles, in five years' time we're still going to be creating great tools that make developers' lives easier. Software is playing a bigger and bigger role in society at large, and we want to help the people creating that software. We'll do this by allowing them to focus on solving their customers' problems, by letting them not focus on low-level infrastructure, and by not making them stitch together a myriad of growth tools. While I can't comment on specifics, I can give you a few general areas where we'll be focusing. The first is more deeply integrating with Google Cloud Platform; this lets developers start with Firebase and then move seamlessly into Google's public cloud offering. Second is a focus on machine learning and the integrations there. It's no secret that this is a big focus for Google;
with integrations, we can intelligently provide recommendations to developers about next steps they can take to grow their app, or how they can better improve their app quality. It's going to be a great next five years for Firebase, and we're excited to serve you, our developers, as best we can. Thanks for the question, Charles. Our last question comes from Gabe, who asks: how has the culture of the Firebase team changed since joining Google? Well, Gabe, we were 23 people when we joined Google, and now we're more than 10 times the size, so things have certainly changed a little bit. However, I'd like to think that we've changed Google more than we've changed ourselves. Specifically, we've changed Google's processes for building developer products, trying to bring a focus on developers around the globe rather than just Google engineers. We've done this through things like one-on-one developer feedback sessions, through a lot of meetups, and through introspecting and dogfooding. By preserving this culture, we'll try to make developer offerings across Google better for you, our developers. So thanks for the question. That's all we have time for today, but thanks for all of your questions; do keep them coming on Twitter, YouTube, Stack Overflow, and more, with the hashtag #AskFirebase. I'm James Tamplin, and we'll see you next time. Hi everyone, Timothy Jordan here for one last interview, this one with Françoise, a principal scientist here at Google. Hi Timothy, how are you? Very good, and you? I'm doing so good. The festival has been amazing, and we've had the opportunity to talk to a lot of people about what they do, and that's why I'm excited for this moment here, because you do some really cool stuff. All right, yeah, it is cool, I agree. So why don't we give everybody just an introduction to the work that you do around machine learning and input? Yeah, so I work on speech recognition, and I work on keyboard input. Speech recognition, you'll know what it's about; Sundar has been talking about our progress in
voice, and you know how we've lowered our word error rate quite a bit, Google Home and all that. I've been working on speech for the last 12 years at Google, and then more recently, maybe the last two or three years, I've been working on keyboard input. You know, when people think of their keyboard, they think it's something very simple: they just take the mobile keyboard on their phone and tap the letters of what they want to spell. But in reality, if you want a good keyboard that really understands you, corrects your mistakes, and predicts the next word you're going to type, it's pretty complicated, and there's a lot of machine learning that goes into the keyboard as well. Yeah, that's one of the things from when you were talking about this earlier: there's a common misconception that machine learning wouldn't apply to the keyboard, because you're just hitting buttons, but that's actually not true at all. No, it's not true at all. And the keyboard is very interesting because it's such a complex product in itself; you can do a lot of things with a keyboard. You can tap on the keys, you can swipe what you want to write, and then you can even click on the microphone and speak to your keyboard too. So the two modalities come together there. But doing a good job at correcting your typos, yet letting you type new words that the keyboard maybe doesn't know about, and then learning those words and phrases for you, and then being able to predict the next word or phrase that you're going to type, so that you don't have to type it and can just say, oh yeah, that's what I meant, click that: that takes a lot of machine learning. Yeah. You know, one of the other aspects of your work that I find really interesting is this: I end up talking to a lot of machine learning researchers who are just blazing new ground, and it's really interesting, but it's very theoretical. What you do is, you have
problems, and you've found that machine learning is a good way to solve them. Right. The way I like to think about these problems is: I have a technology to build, and it has many facets, and there are many things we need to optimize. It can be the machine learning, it could be the UI, it could be the infrastructure behind it that nobody sees. So what we're really trying to do is understand the pain points for the user: where are the places where our users are wasting time or getting frustrated, and what are the places they enjoy instead? Once we understand those problems, we look at what the right solutions are. Sometimes the solution is just sweeping a parameter, and sometimes it takes a really complex piece of machine learning. So we're just picking the technology that matches the problem we have to solve, and we improve the technology one step at a time. It's really interesting to have that perspective; it's very pragmatic about machine learning. Right, exactly, and I think it's a good approach, because the hardest part is really figuring out what the problem is that you're trying to solve, and I think a lot of developers are struggling with exactly that same question: what is it that I'm trying to solve, how am I formalizing it, and then what is the best tool to solve that problem? Awesome. So there's another aspect of your work I want to bring up, the last one, I promise, which I find really interesting, and you call it deep internationalization. Can you tell me what that is? Yes. So, as you may guess, I'm not a native US English speaker. I grew up in Belgium; Belgium is a country with French and German and Flemish. I learned Italian when I came here to the United States. So I care deeply about languages, and we have found, developing these keyboards, that a lot of countries in the world and a lot of languages are not well
represented in mobile keyboards. What I mean by that is that there are a lot of people in the world, sometimes tens of millions of people, who speak a language and would like to write to their friends in that language, but that language is not supported on a keyboard. So what do they do? Either they fall back to another language: maybe I cannot type in French, so I go back to English, because it's okay, my friends understand English too. But that's not what I want to do; I want to speak French with my family. And if we happened to speak a much more distant language that is not well represented, we wouldn't have the luxury of being able to talk to each other in our own language at all. So that brings us into the space where we think it's not just about bringing a few tens of languages into our keyboards and into Google technology in general; it's about hundreds of languages, it's about going beyond a thousand languages. There are, according to linguists, one thousand three hundred and forty-two languages in the world with more than a hundred thousand speakers. So you can imagine; and if you go even deeper, it would be six thousand, seven thousand languages. I really have a passion for bringing as many of those languages online as possible, and progressively enabling technologies like speech recognition and keyboard input for more and more languages. And the way you design an app is just a little bit different if you're trying to design for all of them, right? Yeah. The app itself, we try to keep as general as possible, because there's no way you can scale to hundreds of languages if your infrastructure is not general and your modeling methodologies are not as general and scalable as possible. And that's where machine learning is actually very helpful to us, because before we had machine learning to solve our problems, we would really hard-code things, and you would see pieces of C++ code that would say, if language equals German, do this. And that's
not what you want; there's no way to scale that. With machine learning, you essentially have algorithms that are data-independent and language-independent, and the data is the piece that contains the knowledge about the language. So now there's this nice interface between algorithmic work and data work, and once you have the right algorithms, it's just a question of pumping data from different languages into them to create a new capability in those languages. And if you improve your algorithms, all the languages improve at the same time, so it's a really nice synergy, as long as you can establish that boundary, and with machine learning that's very easy to do. Yeah, and it sounds like once you get the right perspective, things start to fall into place. Exactly. You really want to build from scratch with this thinking of: how am I going to internationalize my product, my technology, and not just for five languages, but for hundreds of languages. Okay, yeah. So I think that's about all the time we have, but thank you so much for joining me and talking about all this. Is there anything else you'd say? If you could see some developers on the other end of this camera who would love to be doing the kind of work you're doing now, what's some of the stuff you'd tell them to get started on? You know, I think the best work you do is really when you're passionate about the thing you're doing. So I would think, for each developer, if they are, and I assume they are, passionate about what it is they're trying to develop: dig deep into it. Don't be satisfied with something that's average quality; keep digging and digging to try to understand your product the best you can, and what your users want the best you can, and then keep developing from there, and everything will go well. Awesome, Françoise, thank you so much. You're welcome, thank you. I'm Chiara, and I am a developer advocate at Google. I have been working on the AMP project for the last
year and a half, and today I would like to show you how it's possible to build beautiful and interactive pages in AMP, for e-commerce websites and even beyond. I'm going to share this presentation with two other speakers; please meet Ambarish and William. Thanks, guys. By the end of this talk, you will have a clear idea of what's available to you for building beautiful pages in AMP. First of all, AMP stands for Accelerated Mobile Pages. It's an open source project that aims to deliver fast content on the web, and it's built on three core components; only the combination of the three will give you the super fast experience. First, AMP is a set of HTML tags that you have to use in your page. It comes with some limitations, like the amount of CSS being restricted and third-party JavaScript only being allowed in iframes. Then there's the AMP JavaScript library, which implements all the best performance practices and manages resource loading on the page. And lastly, there's the AMP cache, which caches AMP pages, making them faster, and also validates pages, so you always get valid AMP pages. When AMP started, it was limited in what it could support; in fact, it supported primarily blogs and news publishers. But during the last year, AMP has introduced new components that now make it easy to adapt e-commerce websites, and we already have some samples, like 1800flowers.com, Zalando, and AliExpress; we've actually seen a lot of adoption of AMP by retailers as the AMP format matures. So today I'm going to show you which AMP components are available for building those kinds of pages, and I'm going to use two samples from the AMP by Example website. If you've never seen this website, it contains the most exhaustive set of AMP samples, and all the code I'm going to present today can be found there. So let's jump to those samples I've been promising you. Just one note: while I go through these samples, I'm only going to focus on components
that are relevant for the e-commerce world. Let's start with a product browse page. You may be more familiar with the term product category page, but it's basically a page a user can land on when searching for something on the internet, like fruit. As you can see from the GIF, we have some promotions on top, a list of items, and some recommendations, and the user can swipe through them. Let's start with the promotions. As you can see, the images are playing by themselves, and we implemented this using the amp-carousel component. Let's look at the code: we have the amp-carousel component, we're using type="slides", which means you only see one image at a time, and we're using the autoplay attribute, which means the images play by themselves. Then, what about searching? I've never seen an e-commerce website without a proper search, and we managed to implement search in the product browse page using the amp-form extension. If you look at the GIF, the user can type "apple", click search, and a new page with the results appears. The amp-form extension allows you to use the form and input tags, so you can build such a search form. The second sample I want to show you is the product page. As you can see from the GIF, we have some images on top, selections like color, quantity and size, and some more information at the bottom. Let's start with the image gallery. Again we used an amp-carousel, and this time we wanted the user to be able to swipe from left to right or right to left. The code is very similar to the one we just saw, this time without the autoplay attribute, and again using type="slides". Just below the image we added a refinement section in which the user can select the color, the quantity and the size. You can implement a selector in AMP using the amp-selector component, which allows the user to choose from a list of options, and these options are not limited to text; they can also be
images. What happens is that once the user chooses an option, the selected attribute is added to it. And with the amp-selector component you can even build a tabbed panel. As you can see from the GIF, we built a tabbed panel in which the user can click on About, Specs and Size and the content changes. We implemented this using amp-selector together with a CSS trick. When I say trick, I mean that inside the amp-selector we have three groups of divs; each group contains two divs, one for the button and one for the content. We hide all the content by default, and when the user clicks on a button, the content immediately below that button is shown. You can also implement a shopping cart in AMP, and note that this wasn't really possible when AMP was launched. Let's look at what we achieved in this sample: the user chooses a color and a quantity, clicks "Add to cart", and a new page showing the shopping cart appears. Again we used the amp-form component here, and as I said, the amp-form extension allows you to use the form and input tags. But I would like to draw your attention to the CLIENT_ID variable. This variable lets you create a unique identifier for the user, which you can then use to identify the user when you send the request to the server. As an alternative to a shopping cart, on Chrome you could also implement a Buy Now button using the Payment Request API. You may have heard about the Payment Request API from other talks here at I/O; basically, it allows requesting payment information directly from the browser. In this case you need to use an amp-iframe with the allowpaymentrequest attribute, and then all the logic for the payment goes inside the iframe. And just at the bottom of the page you can add some recommendations for the users. The best way to achieve this is with the amp-list component: amp-list fetches results from a CORS JSON endpoint and then shows them using a
template. AMP supports a set of templates, and for these samples we used amp-mustache. Moreover, I would like to say that amp-list always brings the latest, freshest content to your page. In the samples I just showed, we also created some advanced interactions; these kinds of interactions were not really possible when AMP was launched. Now I would like to show you some of them, and at the end I'll tell you how we implemented them. On desktop we added a thumbnail gallery together with the image gallery, so when you click on an image in the thumbnail gallery, the image on the carousel updates, and at the same time, when you click on an image in the carousel, the matching thumbnail image gets focused. Notice that the page is not reloading at all. We've also been able to dynamically update the carousel image by clicking on different colors, and at the same time the price just below the image changes when clicking on different colors. And if you remember the related products section, we've also been able to add a "show more" button, which loads more items from the server and adds them to the page, again without doing any page reload. At this point you may be wondering how we achieved these kinds of interactions. We used the amp-bind component, which you may have heard about from other talks like the AMP keynote. It's basically a new component, available under experiment in the AMP library, which allows you to create interactive pages. To tell you more about this, I would like to invite William to the stage. Thanks Chiara.
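Putting the components described above together (amp-carousel, amp-form, amp-selector, and amp-list with an amp-mustache template), a product page skeleton might look roughly like this. This is a hedged sketch, not the exact AMP by Example code: the endpoint URLs, image paths, and JSON field names are placeholders, and the required `<script custom-element="…">` includes in the page head are omitted for brevity.

```html
<!-- Auto-playing promotion carousel: type="slides" shows one image at a time -->
<amp-carousel type="slides" autoplay width="400" height="300">
  <amp-img src="/img/promo1.jpg" width="400" height="300"></amp-img>
  <amp-img src="/img/promo2.jpg" width="400" height="300"></amp-img>
</amp-carousel>

<!-- Search via amp-form; /search is a placeholder endpoint -->
<form method="GET" action="/search" target="_top">
  <input type="search" name="query" placeholder="Search, e.g. apple">
  <input type="submit" value="Search">
</form>

<!-- Color refinement via amp-selector; options can be images, not just text -->
<amp-selector name="color">
  <amp-img option="green" src="/img/green.jpg" width="40" height="40" selected></amp-img>
  <amp-img option="red" src="/img/red.jpg" width="40" height="40"></amp-img>
</amp-selector>

<!-- Add-to-cart form; CLIENT_ID substitution gives the server a stable user id -->
<form method="POST" action-xhr="/cart/add" target="_top">
  <input type="hidden" name="clientId" data-amp-replace="CLIENT_ID" value="CLIENT_ID(cart)">
  <input type="submit" value="Add to cart">
</form>

<!-- Recommendations: amp-list fetches CORS JSON, amp-mustache renders each item -->
<amp-list src="https://example.com/related.json" width="400" height="200">
  <template type="amp-mustache">
    <div>{{name}}: {{price}}</div>
  </template>
</amp-list>
```

Note that GET forms submit via `action` while mutating requests like add-to-cart use `action-xhr`, and the `selected` attribute on an amp-selector option is what AMP toggles as the user taps.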
Hi everyone, my name is Will. I'm a software engineer at Google, and I'll be going into a bit more detail about this amp-bind thing we've been working on. First, let's quickly review the motivation behind the feature. As Chiara mentioned, AMP components like amp-carousel can offer rich interactivity within the scope of a single component, but what if you wanted to add custom stateful behaviors, or interactivity between AMP components and regular HTML elements? In the non-AMP world, web developers would typically do this with JavaScript; however, part of AMP's consistently fast speed is the restriction on just that. So what are our options? One option is to embed regular web pages containing custom JS into an AMP page via an iframe, but you'd lose the speed and other benefits of AMP when you do this, so that's not ideal. Or you could work with our open source developer community to build a brand new AMP component on GitHub, and for certain use cases this is actually the right thing to do, for example building a Tweet button for sharing something on Twitter. However, building a brand new AMP component takes a non-trivial amount of work, so it doesn't scale particularly well if you have multiple custom use cases. Or you can use this cool new thing we built specifically for this, called amp-bind. All right, so what is amp-bind? amp-bind is a new AMP component that supports custom interactivity through data binding and expressions. Don't worry if that sounds complex; it's actually fairly straightforward, and I'll explain exactly what it means. You can think of amp-bind as being composed of three main parts: JSON state, data binding, and mutation of HTML elements. If you've dabbled in popular web app frameworks like Angular or React, this may feel familiar to you. The first part is state, which is just JSON data in your document. amp-bind includes a new sub-component called amp-state, which either wraps local JSON in the DOM or fetches it from a remote endpoint. In this example we have an amp-state
element with the id "foo" wrapping an inline JSON object containing a single key-value pair. Data in amp-state elements can be referenced by expressions; for example, the expression foo.bar here would evaluate to the string "hello". The second part is binding, or data binding. A binding is just a link between an HTML element and an expression, where the expression's result sets the value of a property on that element. In this example, the paragraph element's text content is bound to the expression foo.bar + baz. This means that whenever the result of foo.bar + baz changes, the paragraph element's text is set to that result. Again, expressions in amp-bind can refer to the state, and remember that the expression fragment foo.bar evaluates to the string "hello"; baz here is another variable that we'll define soon. The third and final part is mutation. Here you can see a button with a special "on" attribute. It's part of the AMP action system, which allows users to add simple event-driven behaviors, such as "do something on a button tap". amp-bind adds a new action called AMP.setState, which updates the state with an object literal; here we're updating the state with the key, or variable, baz. Put together, here's our hello world example for amp-bind. Quickly, starting from the top: we have our state initialized with the amp-state component, and a paragraph element with a text binding. When we update the state via AMP.setState, the expression's result changes, which mutates the paragraph element's bound text property. The expression is foo.bar, which is "hello", concatenated with baz, which is " world", so tapping the button makes the paragraph element say "hello world". Returning to our apple example: it uses amp-bind to achieve the page reactivity shown here. Again, each color of apple has a different price and size availability, which is updated on the page upon user selection. I'll just very quickly go over how this example works in the context of state, binding and mutation in amp-bind, and don't worry if you
don't catch every piece of code here; it's all available online with documentation at ampbyexample.com. First is state: we initialize the state with data about our apples, including price and size for each color of apple. Next is binding: we bind the text of the price label to an expression that looks up the price of the selected color of apple in our product data. Note the selectedColor variable in particular. And finally, mutation: we change the selectedColor variable via AMP.setState. The color picker uses the amp-selector component we saw a demo of earlier, where each selectable element has an option attribute. We can reference those option values in the select event of amp-selector to set the selectedColor variable. The code snippets shown on these slides were simplified a bit to fit; the full code for this example, again, is on ampbyexample.com, and there's a short link there for this specific example. AMP by Example also has many other samples of cool, practical usages of AMP. One part we haven't talked much about yet is expressions. What are they? How do they work? You can think of them basically as JavaScript syntax with a few caveats to keep things fast and safe, kind of like how AMP is HTML with some caveats for speed and reliability. In AMP's spirit of ensuring fast performance, we've made sure that using amp-bind on your pages won't cause jank. One way we achieve that is by adding concurrency through web workers, so heavyweight tasks like expression parsing and validation are performed on a dedicated worker. This avoids blocking the main thread, ensuring a smooth user experience on pages that use amp-bind. Another cool thing is that it's Turing complete. Turing completeness is an academic measurement of the expressiveness of a programming language. Here's a Turing machine that counts in binary, built using amp-bind: the line of gray bubbles is the tape of the Turing machine, the colored bubble is the current position of the tape head, and the step button is the crank that runs it. We
built this demo in part for fun and in part as an exercise to see how powerful amp-bind actually is. That being said, I don't recommend actually building Turing machine simulators on your AMP pages. But for interest's sake: it works by encoding the state of a Turing machine in an amp-state element, along with instructions for advancing the tape head and writing to the tape at any given state of the machine. Naturally, running the machine is again done by AMP.setState in the step button. So hopefully this shows you that amp-bind is quite robust, in practice and in theory. Here are some more technical details; I won't run through every single one of these. One important thing to note is that we don't evaluate expressions at page load. In fact, amp-bind only executes in response to direct user action. This makes sure that we won't affect the load latency of pages, and it avoids unexpected content jumping. The gist of this slide is that we've built amp-bind with the principles of AMP in mind: it should be fast, safe and easy to use. amp-bind is available today as an AMP experiment, and it's slated to launch soon. We're working on making it more useful and adding more documentation and samples, and it's pretty close to being production ready, but we'd appreciate any feedback you might have. We just want to make sure it fits your use cases, and whether or not you find it intuitive; we really want to ensure that amp-bind is practical and useful for you. We've also written a new codelab for I/O called Advanced Interactivity in AMP. It goes over some more of the advanced uses of amp-bind, focusing on the e-commerce use case in particular; please check it out. Many merchants have already started experimenting with amp-bind in their pages. Here are a couple of implementations, from AliExpress and Wego. AliExpress in particular uses amp-bind in their product detail page, in a similar pattern to the apple example we saw earlier, with dynamic prices and available quantities depending on the
user-selected SKU. Next we'll hear from Aka from Myntra.com on their experience with AMP and amp-bind. Thanks. Well, first of all, congratulations: I think you guys have survived I/O, or almost. I'd say you've survived I/O if you survive the next 10 minutes of my talk, but I think it'll be well worth it, because I'll talk to you about how we did our amp-bind implementation in less than a week. Actually, we did it in six days and had the seventh day for rest as well. Before I go into it, I do want to talk just a little bit about Myntra, so that you know this is actually in production at a really large-scale company in India. So who are we? Myntra is the largest online fashion player in India; it's actually bigger than the largest offline player in India. These numbers are slightly old, I'm showing some press-safe numbers here, but we are larger than this at this point: 18 million monthly active users, 50 million page views a day, five million sessions a day, more than a billion dollars in revenue run rate. So what you're going to see next was done by an at-scale company, in production. Let's talk about our journey with amp-bind. The real problem we had, the reason we went looking for it, was that a lot of our traffic comes in on list pages. I'm sure you have the same thing, where folks click on something, say "I'm looking for Nike shoes", and land on a list page which has a list of products. Now, these pages were loading for us in seven seconds, and that was just a long time, and we had a bounce rate of 30 percent, which as you can see was killing a lot of our business. And as you know, if you reduce page load time, our estimate, based on some of the studies we've looked at, is that if you reduce page load by a second you'll get perhaps 11 percent more page views on average, and if you ask whether people are happy or unhappy, you'll see 16
percent better satisfaction, and you'll probably see seven to eight percent fewer conversions if you have one second of page load delay. And in India, as you know, 95 percent of the internet is mobile; even though we have desktop, 90 percent of our revenues are actually from mobile. And even though data speeds are increasing very fast, a lot of users are still on less than 3G speeds, which means we also have to overcome the bandwidth hurdle through technologies like amp-bind. So let's talk about how we did it. Day zero: this is our product manager Amit and the engineer Vijay, and they're wondering, can AMP be dynamic enough? Because AMP, as you know, was really supposed to be for static pages, for content pages. And the second question, and the reason the product manager has his hand on his head there, is that he's got like 20,000 other priorities and he's trying to figure out how much time this is going to take. But amp-bind came to the rescue, and here's our day-by-day journey of how we did it; this is actually how it played out. On day one we were talking to Anurath and Eric and folks at Google, asking, can we even do it? They said: read the documentation, this can actually be done in less than a week. So on day one we read the documentation and built a very simple proof of concept; everything starts with a hello world kind of thing. We built a quick proof of concept, it worked out, and we said okay, we can actually do this. Day two, we did the amp-list call to load results, but as with any project we hit a hurdle: these pages were slow to load, because Ajax on the client side was not as quick. So we did server-side rendering; the page was less than 14 kilobytes and it downloaded pretty fast, so we crossed that hurdle. Day three, this is where the real interactivity came in, where we had search and sort and some dynamic
elements on the page, and we had those working; we'll show some of that fairly soon. amp-bind at that point was still experimental, but as we tried it out, it actually worked fairly seamlessly for us, so that again boosted our confidence. Day four, this is where we started polishing the UX, and as we'll show in a side-by-side video, you couldn't actually distinguish the AMP version from the non-AMP version once you started looking at it. Day five, and this is unfortunately the reality: as you all know, analytics comes late. We think about it early, but it gets built late; that's what happened for us too. We started looking at analytics and added that in. The other interesting thing, I don't know if you've heard of amp-accordion, but we've got this side navigation bar on our mobile web, and we used amp-accordion to create it; I will show you a little bit of code on that. And the last thing, we added some transitions for filters, and we were done. And like I said, on the seventh day the product manager and the engineer rested. So let me show you a quick demo. It was tough to set up a demo here, so a few days back we shot a video of amp-bind next to our old page. On your left-hand side you'll see the old page loading; on the right-hand side you'll see AMP loading. You'll have to pay attention, because it's going to load before you blink your eye. Ready? Once the video loads... so that right side is where we've got the AMP results loaded, done, and I'm still waiting for my old page to load, and that page is optimized. It really is that fast; when we showed people, they were just surprised how quick it was to load. So I'd recommend you try it out, it's actually pretty cool. Thank you.
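As a rough illustration of the search and sort interactions and the amp-accordion side navigation mentioned above, the pattern could be sketched like this. The state variable names (`showSearch`, `showSort`) and the section labels are my guesses for illustration, not Myntra's actual code, and the amp-bind and amp-accordion script includes are omitted:

```html
<!-- Tapping Search or Sort flips a state variable via AMP.setState... -->
<button on="tap:AMP.setState({ showSearch: true, showSort: false })">Search</button>
<button on="tap:AMP.setState({ showSort: true, showSearch: false })">Sort</button>

<!-- ...and [hidden] bindings reveal the matching div without a page reload -->
<div hidden [hidden]="!showSearch">search form goes here</div>
<div hidden [hidden]="!showSort">sort options go here</div>

<!-- Side navigation built from amp-accordion sections -->
<amp-accordion>
  <section>
    <h4>Categories</h4>
    <div>Shoes, shirts, accessories</div>
  </section>
  <section>
    <h4>Filters</h4>
    <div>Brand, price, size</div>
  </section>
</amp-accordion>
```

The `hidden` attribute gives the collapsed default, and the `[hidden]` binding takes over once the user interacts, which matches amp-bind's behavior of only evaluating expressions after a user action.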
So I'll go through the code quickly; you can also go in and look at the code yourselves. Will talked about AMP.setState and the hello world proof of concept, and this is how it worked in the real world. We set the state so that when somebody hits the search button, we show the search div and the search element inside it. Similarly for sort, down there: we hit sort and it shows the sort div. And then we have the amp-accordion we talked about: we have the side navigation bar, and this part of the code is the amp-accordion, so you can create a side navigation bar in AMP. So that's how you do dynamic pages in AMP. It's pretty cool: AMP was great because it loaded static pages very fast, but now you can do dynamic pages in a cool way, and that worked out very well for us. Now, what's next? Every time you do something, you're looking at what you're going to do next, how you're going to improve. We'll keep pushing the Google folks, and I think they'll keep pushing us, and we'll build more stuff around this. A few things we're thinking about next: how do we modify the amp-list so that we can avoid page loads on sort as well as pagination, so that we have the next set of items sitting there and can point the amp-list source to those items. AMP service workers are interesting: the page starts working like a PWA, where we can launch the next non-AMP page content using AMP service workers. And this last point I wanted to expand on a little bit. A lot of the time we think about whether we want to do PWA or AMP, and when we started off we thought we were going to do PWA, because there was a lot of noise saying you should do PWA, everybody should do PWA. And we talked to Anurath, we talked to
Eric and a bunch of other folks, and they said, what is your real problem? And our real problem was that the list pages weren't loading and there was a big bounce rate on those. I remember the call where we were told: hey, guys, think about it. You want to load this page fast, and AMP works great for that. Once people have loaded that page, then you want to bring in PWA, so that you can create an experience that is more app-like and the next set of pages loads well. So I think it's not AMP or PWA; especially with amp-bind making AMP dynamic, you can do both AMP and PWA together, and that brings in a great set of technologies to build amazing web pages. So that's our story of how we did this. I'd like to welcome Will and Chiara back on stage to close this talk. Thank you. Thanks. Okay, so beyond the e-commerce use cases we've highlighted today, what else can you do with amp-bind? Well, abstractly, amp-bind adds a layer of dynamic ability on top of your existing AMP components, and as we've seen, you can use amp-bind to add interactivity to regular HTML elements. So I think the possibilities really are only bound by your imagination, sorry, but we're excited for all of you to try it out and see what you can come up with. And before you go, we would like to leave you with some useful links about the things we talked about today. There's ampproject.org, where you can find all the documentation for AMP; all the samples we've been using are on ampbyexample.com, and you can also find the documentation for amp-bind there. And since it's the last talk of the day, I can really tell you: come and talk with us in the sandbox area, and if you have any questions you can use the developer support. Lastly, there are some very awesome AMP videos on the AMP YouTube channel. Thank you very much for your attention. Thanks guys. I was in HR, I was working as a recruiter, and I was completely unaware of what I was supposed to do. I used to feel that I just don't want to be a
recruiter, I want to be on the other side, talking technology and talking about gadgets and how technology is changing our day-to-day lives. One of our relatives challenged me: you are a girl, and engineering requires a lot of designs and drawings. I wanted to prove him wrong, and that was when I heard about Udacity. I started taking some online courses in Java and Android, very basic things. It is a little difficult, but I just don't want to feel that sense of regret; I just don't want that. So Udacity has actually been a savior in my life. The quality of the projects is very good; after completing them you feel like you have conquered something. Actually, I finished my degree yesterday, and so today it's just going to be the celebration. Did you know that the average user has 36 apps on their device and doesn't use three quarters of them most of the time? And of those, about one third have only ever been used once. Well, what if that's your app? You've done the research, you've written the code, you've performed the testing, you've perfected the design, you've gotten the installs, and then... nothing. So how do you prevent this? App Indexing helps you re-engage with your users through tight integration with Google Search. As well as appearing in search results, it surfaces your app through autocomplete and Now on Tap. All you have to do is get your app in the index, and when users search for the content that's already in your app, they'll be able to see your app directly in the search results and launch it right from there. It's as easy as that. But how does it work? If your app and site have similar content, you associate them with each other; then your app can receive incoming links from Search. On Android these are achieved using standard Android App Links, and on iOS using standard iOS Universal Links. When a user searches for your content, they can then find your app, and if you have the app installed, it will allow you to link
directly to it. When the app launches, it sees the address of the indexed content and decides which screen to load to show it. It's really as easy as that. You can also use the App Indexing SDK to submit content to the search engine based on how people use your app; when people use your app, your search position can be improved. With App Indexing you get into the index, putting your app into Google Search and allowing you to re-engage your users. This is Ask Firebase, a show where we answer all sorts of Firebase questions on whatever medium you ask questions on. We take those questions, we give you answers; it's a show. Let's get into it. Hey folks, my name is Abe Haskins, I'm a Firebase engineer here in San Francisco, and I'm here to talk to you about Firebase for Unity. If you haven't already seen it, I did a getting started video for Firebase in Unity; it'll be linked below. But in the meantime, I want to talk to you about some of the great questions we've gotten since we released the Firebase SDK into general availability a few weeks ago. Also, unrelated, but if you want to call me Abe, that's great. You want to call me Abe, you want to call me literally anything, I'm okay with it, as long as it starts with an A and has some letters after it. We're all good. Let's dive in. That's not nice. Let's get started with the first question... my laptop battery is dead. All right, first question: on our last YouTube video about Unity, "seven dollar one" asked, can I use this to build out WebGL games that I've developed in Unity? And the answer is no, you really can't. Since the Firebase SDK for Unity is built on the Android SDK and the iOS SDK, we don't have the ability to build out for WebGL, or PlayStation 3, or some of these other platforms that Unity supports. You'll find that some of the features do work, and even in the editor they're stubbed out, so you'll be able to use them and test your code and compile it and everything like that, but you won't get the same functionality you would if you built out the
game for the device. Thanks for the question. Next question, from the Firebase mailing list: Deep Pixel asked, why does my app report "libapp.so cannot be found"? This is actually a great question, and it's related to another issue we've gotten on GitHub from Secora777; he asked why FirebaseApp is not able to be found. These issues are both related to the timing of when Unity imports the dependencies it requires to run the Firebase SDK. In some environments you're just going to luck out and you won't have to deal with this, but it depends on your specific game and your specific operating system. So what you can do is, one, make sure you're on the latest version of the Unity plugin; we've pushed some changes that should help this for a lot of people. And two, you can go into the Assets folder and go to Play Services Resolver > Android Resolver > Settings, where there's an "enable auto-resolution" checkbox. This will do some extra things to help you get those dependencies imported and help control that timing, so everything works well for you. If you do that, your issue should be solved. Thanks a ton, Secora and Deep Pixel. Next question. The next question is one we've been getting a ton: a lot of people want to upload different assets for their game into Firebase Cloud Storage. They want to upload images and text, movies, et cetera, all those things you need for your game. They want to upload those into Cloud Storage and then retrieve them in their game and use them in the easiest possible way. The Firebase Unity SDK is really good when you're dealing with these storage assets, because you have a lot of control: you can download them as streams, or you can download them as a byte array. But you don't have to do all of that if you're not interested in dealing with those more complex flows. The absolute easiest way to get an asset out of Cloud Storage, rendered or consumed in your
game in some way, is to use Unity's WWW class together with the GetDownloadUrlAsync method that we offer. With that method you'll just get a normal URL; it's a public URL that you can consume and pull down just like any other URL. And the WWW class in Unity makes it really simple to take that URL, download it, and turn it into a material, an audio clip, even a movie. So any of those assets you upload to Cloud Storage, or your users upload, say profile icons or anything like that, just pull them down with WWW and you'll be good to go. If you want to find out more about the WWW class or how it works with the GetDownloadUrlAsync method, you can check out our documentation or the Unity documentation; both are great resources for this. Thanks a ton for that question, everyone who asked it, I appreciate it. Next question. You guys have no idea how much I spend on conditioner to get this volume. And the final question, the big one we all want to hear about: Cloud Functions. I'm sure you heard at Google Next we announced this awesome thing, Cloud Functions. How does that work with Unity? What can we do with these two things put together? And the answer is, you can do like anything. Check out our functions samples repo, and this isn't specific to Unity, it's just our general functions samples repo: it has 26 different examples of things that functions can do, and every single one of those you could do with the Firebase SDK in Unity with Cloud Functions. That's because Cloud Functions ties into your Realtime Database, it ties into Analytics and Storage and all these other Firebase services that are supported in Unity, and it lets you execute code and change things way back on our cloud, so you don't even have to think about it. So if you want a game that interacts with an external API, or you want a game that has some custom authentication that you've brewed up in your awesome game development shack, you can do this with Cloud
Functions, and you don't have to worry about scaling or anything like that. So you can absolutely use Cloud Functions with Unity, and it's highly, highly recommended. Go check out that repository; it's got a ton of different samples, and every single one will work great with Unity. Thanks for the question. Hi everyone, thanks for watching. That's going to wrap up this Ask Firebase featuring Unity. If you have any more questions, you can leave them in the YouTube comments below, post them on Stack Overflow, tweet them on Twitter with the hashtag #AskFirebase, or literally reach out to us in any way. Tweet at me, tweet at Firebase, shout it at the top of your lungs near a Google office; we will hear it, we will answer your questions. We'll see you next time. Hi, I'm Timothy Jordan, and I've been your friendly developer advocate here at Shoreline Amphitheatre for Google I/O 2017. For the past three days we've been checking out all the really cool in-person experiences that you might have otherwise missed if you were just watching the session videos. I hope you enjoyed them as much as I did, and if you'd like to see anything again, check out g.co/io/guide. And hey, maybe next year you can join us here in person. I'm Timothy Jordan, and I'll see you at Google I/O 2018.