My name is John Pallett, and I'm a product manager on the Chrome media team. And I am François, and I love working with web APIs. We are here to talk with you about the future of video and audio on the web. We're going to talk a little bit about how we got where we are today, a little bit about what you can do right now, show you a couple of demos, and then take a look at what's coming in the future. That's an awful lot to cover in 40 minutes, so we're going to get right into it.

To give a bit of context, let's start back in 2000, when I bought my first HD television. The Super Bowl had just been broadcast in HD for the very first time. I was pretty excited. There weren't any DVRs for HD or anything like that, but it seemed like a pretty good thing. It looked great, and it played great, but over the next few years there wasn't a ton that happened. DVRs came onto the scene. Skype came out, and we were able to do postage-stamp-sized video conferencing on the desktop, which was pretty cool. YouTube came out in 2005. Amazon and Netflix started streaming around 2007, but for most of us, this really meant watching video on the desktop or sitting on the couch watching television at home. So at the end of the day, over ten years, there wasn't a huge amount of change in mainstream user behavior.

Now let's look at what happened with my daughter. Around 2010 is when she really started consuming media, and that's about when tablets came out, and all of a sudden you could take your video and watch it anywhere. This is also about when video conferencing really went mainstream, with apps like FaceTime. In 2011, Twitch video game streaming came online, and all of a sudden people were sending video all over the place. 2013 is when Snapchat came out, 2014 Musical.ly, and suddenly millions of teenagers were creating little music videos and sending them to all their friends. And then in the last couple of years: 360 video, augmented reality, virtual reality. You can see that in the last few years, the pace of innovation has really accelerated for video.

And really, when you look at it, as mentioned in the State of the Union, 70% of the bytes shipped over the internet today are video, and Cisco projects that to reach 80% in 2020. This is a really big deal. This is the decade of video. Things are happening fast, and it's a really great time to be thinking about video. So if you're wondering what session to be in, this is in fact the right one for this time slot. It's a really, really pivotal point.

But here's the problem. Look back over that time: where was the mobile web? Some of the apps I mentioned actually have URLs as names, but when you go there, it tells you to install an app. The mobile web didn't play a big part in the innovation that's come so far, which is a little weird, because it's frictionless. You want people to get in, get out, and see things quickly. These are all things the web is really, really good at. You'd be forgiven if you thought, well, that's because the mobile web is not very good at video. And it's true: over the past few years, as recently as a year ago, there have been a number of problems with the way the mobile web handled video.

Let's look at the first one: buffering. A lot of video publishers were so focused on Flash that HTML5 was a secondary thought. The mobile web wasn't even a thought.
In a lot of cases, you'd have a giant video file sitting on a server, and when you went to access it over the mobile web, it would take a long time to download, and you'd be stuck waiting for it to play. Or they would think about the mobile web and create a very small video file; now it downloads quickly, but it looks awful. Great, no buffering, but the video quality is terrible. And the publishers who didn't think about the mobile web from a video perspective would often do the layout for the desktop site and never check how it looked on mobile. Now, to be fair, a lot of the APIs to do this correctly didn't even exist a year ago. There has been a big change in the web over the last 12 months in its ability to do video, but this is what you were looking at a year or two ago. And of course, if you went offline, your media experience was pretty terrible. I like the dinosaur game, but it's not going to keep me going on an airplane for four hours.

So now let's take a look at where mobile web media is today. To show you, we have a demonstration application, written by Paul Lewis and the developer relations team. Let's go to the application, please. What I'm going to show you is what you can do today on the mobile web. This is a progressive web app called Biograph. When I went to the website for the first time, it asked me if I wanted to install it on my home screen, which I did. So now when I launch it, you can see it comes up, gives me a nice splash screen, and goes right into the frameless app. If I take a look at the task manager, you can see there's no Chrome frame around this. I'm going to keep re-emphasizing this point: this is a web page. This is what you can do with a progressive web app. It looks great, it doesn't look like it's in Chrome, but under the hood it's actually being delivered by the browser.

I can scroll through, and you can see it's fast, seamless, very responsive. If I hit play on a video, it comes up very quickly; we didn't get a buffering event at the beginning. Now, I wish I could say the Wi-Fi here at the show is so fantastic that you will never experience buffering with video, but that's not what's going on here. In reality, because it's a progressive web app, it's able to pre-cache the first few seconds of video, so when I hit play, it was instantaneous.

Let's take a look at some UI elements. If I rotate, I can go full screen. You can see that I've got custom controls allowing me to go back and forward. I can also drag on the timeline and get some great thumbnails. Not really what you'd expect from mobile web playback, is it? If I go back into portrait mode, the device automatically pops back. Let's take a look at what happens on the lock screen. There are a lot of cases where I might want to hear the audio of a video, but not necessarily want to be watching it. You can see here, it's a little dark, but there's a background image filling the entire frame, and I have media controls. Video in this case, and the same is true for audio, is now a first-class citizen on my mobile device, and it lets me know what's going on in the background. If I unlock and go to the notifications, you can see I have controls there as well.

Now, this is all great, but what if I'm getting on that airplane? Well, in this app, I have the ability to take media offline.
This is using an API that's still in development called Background Fetch. What's neat about Background Fetch is that it's pulling the video down onto the device, but if I switch pages or even exit the browser and then come back, let's go back, you see the download continued and finished even while I was on a different page. And this is great, because now, if I go into airplane mode and go back to the home screen, the app can keep track. Notice that some of these video items are grayed out: it knows they're not available offline. Let's go into this one. I'm going to hit play on this video, which is offline, and you'll notice that after a brief moment, it starts playing. So you really can do a complete, excellent video and audio experience on the mobile web today. One thing I want to mention about this video: it's actually protected content. Before I hit play, it already had a pre-authenticated license, and it played. So we've talked about doing a great media experience. This is something everybody can do, for all use cases, and it's ready right now.

For those of you who are interested in trying this out, the app name is Biograph, and you can access it at the URLs on the screen. It's also open source, and there are a lot of really good lessons in there. Paul has been writing a series of developer diaries explaining a lot of the tips and tricks he used to build the app, including things like how to get those thumbnails to scrub in the player controls. There's a lot of really good information there. We'll come back to these links later in case you miss them right now.

So what did we see? Let's start breaking this down. I think that was a pretty great experience, because there was fast playback, the ability to watch anywhere, great UI, and really high quality video. This is the anatomy of a great video experience. Let's tackle these one at a time.

It's been mentioned before that you'll lose users if your page doesn't load fast enough. Akamai did a study showing that viewers start abandoning if a video takes more than about two seconds to start, and you lose roughly 6% of your viewers for each additional second of delay. So it's not enough to just have a great page load; you've got to deliver the video quickly too. And other studies have found that you need that playback to be sustained, continuous. It doesn't matter if the network goes up and down: any time a video buffers, you're going to lose people.

Well, let's start with the playback. The challenge with playing back video is that you have to pull a lot of data over the network, and the network is not constant. The user might go through a tunnel; they might lose bandwidth. There's a way of dealing with this called adaptive bit rate streaming. What we do is encode the video at multiple bit rates. Here I have low, medium, and high; in practice, you'd encode to six, ten, twelve, or more different bit rates. The next thing I do is break the video into segments, in this case six-second segments, and encode each segment at each of those bit rates. The reason for this is that now, when the user hits play, the player can look at a playlist that knows where all of these different segments, or fragments, are. And as the video plays, it can adapt to the bandwidth, going up in quality and resolution when there are more bits available on the pipe, and going down when the bandwidth goes down.
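In practice you don't write this adaptation logic yourself; a player library handles it. But to make the core idea concrete, here is a toy sketch of the decision a player makes before fetching each segment. The renditions, numbers, and URLs are made up, and real players use far more sophisticated bandwidth estimation and buffer modeling:

```js
// Toy adaptive bit rate selection: before fetching each segment, pick
// the highest rendition whose bit rate fits within the measured
// bandwidth, with some headroom. Renditions and URLs are hypothetical.
const renditions = [
  { bitrate: 400000,  baseUrl: '/video/low/' },
  { bitrate: 1500000, baseUrl: '/video/medium/' },
  { bitrate: 4000000, baseUrl: '/video/high/' }
];

function pickRendition(measuredBitsPerSecond) {
  // Leave ~30% headroom so a small dip in bandwidth doesn't rebuffer.
  const budget = measuredBitsPerSecond * 0.7;
  let choice = renditions[0]; // fall back to the lowest quality
  for (const r of renditions) {
    if (r.bitrate <= budget) choice = r;
  }
  return choice;
}

// For each six-second segment: fetch from the chosen rendition, then
// append the bytes to a Media Source Extensions SourceBuffer.
async function fetchSegment(n, measuredBitsPerSecond) {
  const r = pickRendition(measuredBitsPerSecond);
  const response = await fetch(`${r.baseUrl}segment-${n}.m4s`);
  return response.arrayBuffer(); // → sourceBuffer.appendBuffer(...)
}
```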
In the demo, this was done through the open source Shaka Player. What it does is read a playlist and then pull the appropriate fragments of video and feed them into the video tag using MSE, Media Source Extensions.

Another thing we need to talk about is startup: making sure playback begins quickly. This is a proof of concept done by FieldPlayer. What they're doing, we'll wait for it to come back in a second, what they're doing in the first section here is using service workers to pre-cache the first few seconds of video as the user browses the page. So here we go: the user is browsing the page, and eventually they'll select a video and start playing. But they don't wait for that moment to get ready for video playback. By the time the user is here, they've already pulled down the playlist, or the presentation description. And you'll notice that on the left, where it's been pre-cached, video playback starts right away; on the right, it takes a few seconds. In fact, by pre-caching with service workers, in their example, they were able to get video startup down from over three seconds to about 100 milliseconds. So service workers really can make a big difference. Progressive web apps bring a lot of powerful capabilities to video.

And in fact, you saw this pre-caching in the Biograph demo. Remember when I hit play, it started pretty quickly? That's because, under the hood, Biograph had used a service worker to pull both the presentation description and the first segment of video into the browser cache. When Shaka went to access them, that presentation description and that first segment were served by the service worker from the cache. After that fragment started playing, Shaka went and pulled the next fragments of video over the network. What this means is the user gets fast, fluid, instantaneous playback, which is really what you want from almost every other element in HTML5: something responsive and quick that happens right away. I cannot overemphasize how useful and important service workers are for optimizing the speed of your site.

One recent case study we did with Viacom18 on their Voot site: they optimized their mobile web page, and their page load times improved by 5x, five times faster. And this had a big impact. This is a media site, and it had a big impact on return engagement for existing users as well as engagement of new users: a 77% increase in the conversion from new visitor to actual video viewer, and then a 15% increase across all users in the number of videos watched on average. So you can see that a little bit of optimization and performance goes a long way toward increasing engagement.

So that's fast and fluid playback. Let's move on to offline. Offline is a really important use case for a number of reasons. One is obviously the airplane, which, because I fly around, I of course care about. There are also a lot of cases where you'll have viewers, or potential viewers, who would like to access video in places that do not have internet access. We've actually seen users trickle-load videos using the default HTML5 player, waiting until the whole video is in the cache, and then taking the device somewhere else, saving the video for later. That's really not the best way to do offline. For anybody who's thinking, oh, that's going to be my new offline strategy: please don't do that.
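Before we get to the right way, here is a minimal sketch of the startup pre-caching pattern described a moment ago: a service worker that caches the manifest and first segment at install time and serves them cache-first. The file names and cache name are hypothetical:

```js
// service-worker.js — pre-cache the presentation description and the
// first media segment so playback can start almost instantly. Later
// segments fall through to the network as usual.
const PRECACHE = 'video-precache-v1';
const FIRST_ASSETS = [
  '/video/presentation.mpd',      // hypothetical manifest
  '/video/medium/segment-001.m4s' // hypothetical first segment
];

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(PRECACHE).then(cache => cache.addAll(FIRST_ASSETS))
  );
});

self.addEventListener('fetch', event => {
  // Cache-first: serve the pre-cached manifest and first segment
  // immediately; everything else goes to the network.
  event.respondWith(
    caches.match(event.request)
          .then(cached => cached || fetch(event.request))
  );
});
```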
What you can do instead is use service workers for offline as well. We saw this in Biograph: for offline video, it pulled the presentation description and all of the media segments, at a reasonable bit rate, into the cache so that I could play it back here at the show. Now, you might ask which bit rate you'd want to choose. The highest? The lowest? That's really up to you. You control the logic in the service worker, you control the quality, or you can give users control: let them download an HD version if they want, or a low bit rate one if they want to save space on the device. I keep talking about video, but this is a really great use case for audio as well. It's something you can do with podcasts, with audiobooks, with a variety of other audio material.

Now, I mentioned during the demo that the actual fetching, the pulling into the browser cache, was being done by an API called Background Fetch. This API is still in development, and it's a good example of something we very much would like you to look at now and give us your feedback on. It's available if you turn on the experimental web features flag, but take a look at the spec. This is the time: if you have feedback on how this should work, please do let us know.

I mentioned a second ago that there's a decision, either for you or for the user, about how much video to put on the device. Wouldn't it be great if you could get twice as many videos on the device without sacrificing quality? Of course it would. This is where video compression really comes into play. Video compression is what takes video and turns it into something we can actually send over the network. VP9 is the WebM approach to video compression. It's also known as a codec, which stands for coder-decoder; if you hear me say "the VP9 codec," that's what I'm talking about: a compression technology. What's great about VP9 is that, compared with a lot of the other common video codecs, it can get up to about 50% better compression efficiency, meaning you can cut your file sizes by 30%, 40%, up to 50% while preserving the same quality, or just offer a higher quality level.

The other thing you can do with VP9 is deliver higher quality, which I mentioned as a key pillar of a great mobile web experience. VP9 is supported on over 2 billion devices, so if you want to play high quality video, VP9 is an excellent codec to look at. So good, in fact, that when YouTube adopted it, they saw videos starting 15% to 80% faster with VP9 compared to other codecs, along with 50% less buffering and more HD worldwide. There are some really great gains that come from using VP9: a really great, high quality experience.

The last piece of the anatomy is the user experience, and one of the things you saw was the lock screen. There are a lot of cases where you might not want to watch a video; you might just want to listen to the audio. And if what you deal with is audio, this is absolutely a primary use case for you. The great thing about this API, the Media Session API, is that it lets you put your metadata and images on the lock screen, as well as on wearable devices. It's also great for the user, because they can tell what's going on on their device, and they can control it.
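The two snippets walked through next look roughly like this, reconstructed as a sketch; the metadata, artwork paths, and seek offsets are placeholders, and the fullscreen calls are written unprefixed:

```js
// Media Session: metadata and controls on the lock screen and in
// notifications. Feature-detect first — this is a progressive feature.
if ('mediaSession' in navigator) {
  navigator.mediaSession.metadata = new MediaMetadata({
    title: 'Episode One',     // placeholder metadata
    artist: 'An Artist',
    album: 'An Album',
    artwork: [
      { src: '/artwork-512.png', sizes: '512x512', type: 'image/png' },
      { src: '/artwork-256.png', sizes: '256x256', type: 'image/png' }
    ]
  });

  const video = document.querySelector('video');
  // Wire the lock screen controls to the media element.
  navigator.mediaSession.setActionHandler('play', () => video.play());
  navigator.mediaSession.setActionHandler('pause', () => video.pause());
  navigator.mediaSession.setActionHandler('seekbackward', () => {
    video.currentTime = Math.max(0, video.currentTime - 10);
  });
  navigator.mediaSession.setActionHandler('seekforward', () => {
    video.currentTime = Math.min(video.duration, video.currentTime + 10);
  });
  // With custom controls, keep the session's playback state in sync:
  // navigator.mediaSession.playbackState = 'playing';
}

// Screen Orientation: enter full screen in landscape, leave in portrait.
if ('orientation' in screen) {
  screen.orientation.addEventListener('change', () => {
    const video = document.querySelector('video');
    if (screen.orientation.type.startsWith('landscape')) {
      video.requestFullscreen();
    } else if (document.fullscreenElement) {
      document.exitFullscreen();
    }
  });
}
```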
So let's take a look at what's going on. In order to use the Media Session API, the first thing I'll point out is that this is a great example of a progressive feature: we start with an if statement. If the browser supports media session, great, we'll use it. To make things appear, you simply provide a little bit of metadata: title, artist, album, and then artwork. Typically you might provide a longer list of images at different form factors; for the sake of brevity, I have two here: 512 by 512, the most common Android size for the lock screen, and 256 by 256, which is useful for some older devices. Once you've provided that metadata, you'll also want to respond to the controls: seek back, seek forward, play, pause, next track, previous track. What you do is set up action handlers so that when those events happen, you can handle them. You may also want to set the playback state so that, if you're doing custom controls, the lock screen or notification reflects the state of your media session. In terms of implementing these action handlers, it's not that hard: all you're doing is wiring the controls to the audio or video tag, just like you would for controls on the web page. So it's pretty easy, and it gives a much better user experience.

Here's another key part of the user interface: full screen mode. This is a good experience. I hit play, and if I turn to landscape, it automatically goes full screen. This is also a good example of something you couldn't do a couple of years ago. Let's take a look at how you do it with the Screen Orientation API. Again, a progressive feature: if the browser supports the orientation feature, we listen for the event that fires when the orientation changes. If the orientation has become landscape, we go full screen; otherwise, we exit full screen. That's it. That's what, eight lines of code, and all of a sudden the media user experience on the mobile web is significantly better.

So you really can do great media experiences on the mobile web today: fast playback, the ability to watch anywhere, great user interface, and high quality playback. And this is really great stuff, all available today. Anybody who's sitting here remembering the title of the talk is going to say, wait a minute, I thought you were going to talk about the future of audio and video on the web. This is actually the beginning of the future, and a lot of sites are just now adopting this technology. I hope people in the audience are looking at this and saying, we're going to do that as well. So to some degree this is the short-term future, but let's look a little further out.

Now, because all of that is available today, let's talk about what's coming next, and let's start with color. There is a new set of standards coming out around video that dramatically improves the realism of what can be displayed. And there is a new generation of displays, televisions today, coming soon to a mobile device and desktop near you, that increase realism dramatically. Part of that is color.
For people who don't work with photos and videos all the time like I do, it may be a surprise that your display cannot actually reproduce all of the colors you can see with your eye. This curve shows the full spectrum of colors the eye can see. In fact, it doesn't, because that projector and this screen can't display all of the colors your eye can see, but let's pretend it does. Your standard sRGB monitor today can only represent part of it. New video standards around BT.2020 are dramatically extending that color range. Colors like my shoes: these shoes probably won't be represented properly on that screen, but some of the new televisions are going to get a lot closer.

Another aspect of the new video standards is the ability to show a wider range of brightness, and what I mean by that is brighter brights and darker darks. If you look at a standard monitor today, it's doing what's commonly called standard dynamic range, or SDR, and the range of brightness there, compared to what you see in the real world, isn't really that dramatic. The blacks aren't that black and the brights aren't that bright. As we move into high dynamic range, displays are coming out that cover a much broader range. What this requires under the hood is a change to the electro-optical transfer function, the EOTF, which you may have heard of as the gamma function. There's a whole new set of functions for converting digital brightness values into what actually gets displayed on the screen. What this means for anybody working in video is that you need to know whether the device supports those EOTFs, and make sure you understand the characteristics of the device you're playing on.

So let's take a look at how we can detect that, both for color and for HDR. From a color perspective, this is where CSS Media Queries Level 4 comes into play. The color-gamut query is now supported in Chrome and allows you to query the device to determine its breadth of color coverage. On the bottom, there are new isTypeSupported codec strings coming that will allow you to query for VP9, which, by the way, does do wide color gamut and high dynamic range with VP9 Profile 2, full 10-bit, letting you determine whether HDR is supported as well as which electro-optical transfer functions.

These new advances, as well as some of the most demanding low-end bit rate scenarios, bring me back to video compression, and there's some exciting news going on here. What I want to talk about briefly is the Alliance for Open Media, or AOM, a cooperation between a number of companies to create a new open source, royalty-free compression format. YouTube, Google, Amazon, Twitch, Netflix, Microsoft, Mozilla, Hulu, and the BBC are all part of this cooperative effort to create a new compression format that will tackle not only HDR and wide color gamut but also 4K and 8K, 360 video, and, arguably more importantly, video in the most demanding low bit rate situations imaginable. That's important for billions of people around the world who do not have the same level of connectivity that we do. This work is going on now. The codec they're developing is called AV1, and just a few weeks ago Netflix came out with some of their first analysis of the codec's performance. What they found is that it's not even done yet and it's already getting 20% better compression than VP9. This is not yet available.
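Pulling those detection pieces together, here is a sketch of the color-gamut query plus the codec-string check just mentioned, along with the Media Capabilities query discussed next. The VP9 Profile 2 codec string and the stream parameters are illustrative:

```js
// 1. Color: the CSS Media Queries Level 4 color-gamut query, from JS.
const wideGamut = window.matchMedia('(color-gamut: p3)').matches;

// 2. Codec strings: can the browser decode VP9 Profile 2 (10-bit,
//    the profile used for HDR and wide color gamut)? The exact codec
//    string fields here are illustrative.
const vp9Profile2 =
    MediaSource.isTypeSupported('video/webm; codecs="vp09.02.10.10"');

// 3. Media Capabilities: will this specific stream play smoothly and
//    power-efficiently on this device? (Covered in the next section.)
if ('mediaCapabilities' in navigator) {
  navigator.mediaCapabilities.decodingInfo({
    type: 'media-source',
    video: {
      contentType: 'video/webm; codecs="vp9"',
      width: 1920,
      height: 1080,
      bitrate: 4000000,   // bits per second — hypothetical stream
      framerate: 30
    }
  }).then(info => {
    console.log(info.supported, info.smooth, info.powerEfficient);
  });
}
```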
That's all we'll say about AV1 in this talk, but it's absolutely something to keep an eye on as it rolls out. And when it does roll out, one of the questions you might ask yourself is: great, but can my device play it back? This is something that's interesting and unique to the web: not all devices perform the same way. Ultimately you want to give the user the highest quality video possible, but the one-gigahertz system on the left probably doesn't have the same decoding performance as a high-end device. This will be important when AV1 comes out, and quite frankly it's important right now for different video playback capabilities.

So let's look at VP9. If I want to detect whether a device can play VP9 today, I'll use the canPlayType() function. I'll pass in the WebM VP9 type, ask, can you do that, and it will say "probably," and you'll say, okay, here's some VP9, please go play it. What that doesn't tell you is whether you're going to be dropping frames. So there's a new API coming to address this, the Media Capabilities API, which allows you to fine-tune the video playback experience for the user. On the top, again a progressive feature: if we have the Media Capabilities API, I pass in the resolution, bit rate, and codec of the video I want to play and ask for the decoding information, and it tells me, number one, will the playback be smooth and fluid, and number two, will it be power efficient. You can use this information to decide not only what type of video to send but also what resolution, to optimize the viewer experience for your specific use case.

But we haven't talked yet about what I personally think is the area of greatest innovation and growth in video and audio right now, particularly over the last few years. My daughter's ability to make music videos on her phone, to put things on her face and communicate with other people: it's all social. It's all about sharing. It's all about communication. It's hard to remember that this was not mainstream as recently as five or six years ago, but it has quickly become very mainstream. The whole premise of social is that you want people to get in and get out and do things quickly, which is great for the web.

So let's talk about the most personal social media: video communications. Web real-time communications, WebRTC, is not new, but what is interesting over the last year or two has been its rapid adoption, both by the browser manufacturers and by app platforms. There are sites now using it at the scale of tens of millions, approaching 100 million, users. What is new is that progressive web apps make WebRTC really interesting: on the mobile web, you can give people a website that uses WebRTC on the phone and let them jump in, communicate, jump out, add it to the home screen, and come back. So there's a real next step here for peer-to-peer personal communication.

And that's fine for one-to-one. Let's talk about another type of social event: live streaming. How is this social? Well, if you're sending video to ten people, they probably want to communicate with you. If you're sending video to a million people, they probably want to communicate with each other. It really depends on the event.
What's really neat is that when you look at all of the platforms that support live streaming today, the web has a pretty unique advantage in its feature set and in the way people use it. If you want to share something, you can send somebody a URL. They don't have to download an app in order to watch, and that matters because the event is happening now. Don't make users wait. This is where the web can really come into play. Now, the truth about live streaming is that there's a whole infrastructure challenge under the hood. What I mean is, if you're delivering to a million people and you want low latency, you might take a different approach: using files, even files that you're reading while they're still growing as they're placed out on the CDN, or using WebRTC data channels and peer-to-peer CDNs to deliver it. I'm not going to go into all the details on that, because it would use all our time. But what I will tell you is that the web does support live streaming, both with WebRTC and with the ability to put out the video segments as files and play them back. In fact, Shaka can do this by reading dynamic presentation descriptions and then, as segments become available on the network, tracking the appropriate seek range and the end of the stream so it knows what to play back.

All of that's great, but there's one last piece of social media we want to talk about, and that's creation. If you look at the way creating video and sharing it with your friends works, it's really a beautiful cycle. It starts with somebody finding something, watching it, and getting inspired. Then they say, now I want to make something. They capture, they create, they encode, they upload, and they share it with their friends. And then their friends say, ooh, that's very cool, and they get inspired, and they want to capture. This only works if it's easy to discover, low friction to watch, and also low friction to get into the capture scenario. The cycle can go very, very fast as long as the whole path is easy. The web has a lot to bring here, and to show you an example of what's coming in this space, here's François.

Thank you, John. I'd like to share with you a simple web app I've been working on and use it to showcase some awesome media capabilities coming to the web. May we switch to the Android device, please? This web app is called Mustache, and you will understand why soon. I have previously added Mustache to my home screen and specified in the web manifest that I wanted to run it in standalone mode, so the browser UI does not show up. It is quite hard, almost impossible, to tell that this is a web app. So now you can see me with mustaches and a funny hat. Note that there is no lag here; it is perfectly smooth. May I say creamy? Okay, I said it. Let's press the record button. And that's all. At the bottom right of my screen, I have a preview video of myself that I can share now. Click the share button and let's tweet it, for instance. Boop. Or not, actually. And that's all.

Let me describe what you've just seen and may not have realized, by walking you through all the APIs I've been using. Let's start with the basics. This is how you get access to a camera video stream on the web today: set the video element's srcObject to the asynchronous result of navigator.mediaDevices.getUserMedia().
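As a sketch, the slide code looks roughly like this, including the richer constraints described next:

```js
// Basic camera access: the only media constraint is video: true.
async function initCamera() {
  const video = document.querySelector('video');
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
}

// The app can ask for more — ideal width, height, frame rate, and
// facing mode (values here are hypothetical). The browser does its
// best to accommodate the request, so it doesn't hurt to ask.
async function initCameraWithConstraints() {
  const video = document.querySelector('video');
  video.srcObject = await navigator.mediaDevices.getUserMedia({
    video: {
      width: { ideal: 1280 },
      height: { ideal: 720 },
      frameRate: { ideal: 60 },
      facingMode: 'user'   // front-facing camera
    }
  });
}
```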
And you're good to go as soon as the video plays. Note that the only media constraint there is video: true, but my Mustache app actually asks for more: the ideal video width, height, frame rate, and facing mode. The browser will do its best to accommodate your request, so it doesn't hurt to ask. My custom draw function is going to be called every time the browser asks for a frame to be animated, thanks to the requestAnimationFrame function here. Since I'm looking for the smoothest experience, that means the draw function is going to be called approximately 60 times per second, and each time it's called, I draw a live video frame on my canvas. At that point, we have our face on the screen.

That's not bad, but let's go further now. Using the experimental Shape Detection web API, I was able to draw some mustaches and a hat on my moving face while keeping the app running smoothly at 60 frames per second. As a matter of fact, that wasn't the case last week, so thank you, Miguel, for working hard to get this demo ready for today. As you can see, this is a pretty simple and easy-to-use API. As hardware-accelerated detectors may allocate significant resources, I recommend you reuse the same FaceDetector object when doing several detections. The fastMode option here is a hint for the browser to prioritize speed over accuracy, which is exactly what I'm looking for in this case. Let's look at what is happening in the draw function. Calling the face detector's detect() method on the canvas asynchronously returns an array of detected faces in the canvas. Note that the processing is all done on-device, so there is no internet connection required. When a face is detected, I use its position, x, y, width, and height, to draw some elements on top of the video frame. There is also a simple boolean to avoid flooding the API, since the draw function is called all the time. Pretty cool, right? This API, along with the barcode, QR code, and text detection APIs, will be available to everyone for testing purposes later this summer in Chrome stable. Note that you can already play with it today by enabling the experimental web platform features flag in Chrome. By the way, we love feedback, so if you want more features, such as eye or mouth detection, well, that's a bad example, we have it, but if you want something else, something more, or if you simply stumble on bugs, please let us know.

Now, how about media recording? Is that hard? Not really. This is the full code I've used in my Mustache app. Grab the stream from the canvas by calling canvas.captureStream() and use it to instantiate a new MediaRecorder. The mimeType option tells the browser which codec to use for the recording; it can be H.264 or VP9, for instance. If you leave it unspecified, the browser will choose the best one, which usually boils down to the hardware-accelerated one, if any. You can also pass the bits-per-second option to customize the video quality of the recording. recorder.start() actually starts recording the media, and each time the recorder delivers some media data, we collect it in an array I call chunks. When the user stops recording, we call recorder.stop(), which fires a stop event, allowing us to create a new Blob object containing all the chunks we've recorded so far and upload that video blob to our backend. I could also have chosen to stream these chunks directly over a WebRTC connection, by the way. The MediaStream Recording API has been in Chrome for a year now, so it is pretty stable.
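To tie the pieces together, here is a sketch of the whole pipeline just described: the draw loop, face detection, recording, and sharing. The overlay drawing, MIME type, and share payload are placeholders, and FaceDetector and navigator.share are experimental, behind the experimental web platform features flag:

```js
const video  = document.querySelector('video');   // camera stream attached
const canvas = document.querySelector('canvas');
const ctx    = canvas.getContext('2d');

// Reuse one detector — hardware-accelerated detectors can hold on to
// significant resources. fastMode: prioritize speed over accuracy.
const faceDetector = new FaceDetector({ fastMode: true });
let lastFaces = [];
let detecting = false;   // simple flag so we don't flood the API

function draw() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height); // live frame
  for (const face of lastFaces) {
    const { x, y, width, height } = face.boundingBox;
    ctx.strokeRect(x, y, width, height); // placeholder for mustache/hat art
  }
  if (!detecting) {
    detecting = true;
    faceDetector.detect(canvas).then(faces => {  // all on-device
      lastFaces = faces;
      detecting = false;
    });
  }
  requestAnimationFrame(draw);   // called ~60 times per second
}
requestAnimationFrame(draw);

// Record the composited canvas with MediaRecorder.
const recorder = new MediaRecorder(canvas.captureStream(), {
  mimeType: 'video/webm; codecs=vp9'  // or leave unspecified
});
const chunks = [];
recorder.ondataavailable = event => chunks.push(event.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: 'video/webm' });
  // Upload the blob to a backend, then share the resulting URL with
  // the experimental Web Share API (the URL here is hypothetical).
  navigator.share({
    title: 'Mustache',
    text: 'Look at my mustache!',
    url: 'https://example.com/videos/123'
  });
};
recorder.start();
// Later, when the user taps stop: recorder.stop();
```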
As proof, some popular Chrome extensions use it intensively to record up to one hour of content, such as video tutorials. When I clicked the share button, did you notice that the native Android share UI showed up and included all the native apps on my device that support sharing text? I was able to do that with the experimental Web Share API, which enables sharing data, such as text and, soon, images, from the web to an installed app of the user's choosing. As you can see, this is as simple as calling navigator.share and passing it a title, some text, and the video URL. And that is pretty much it. These four lines of code enable seamless sharing, and I personally can't wait for this API to be available to everyone later this year. Thank you.

Thank you, François. That's pretty awesome, right? There are a couple of things in there. One thing I want to highlight is that this is the web creating media for the web; that is something we see becoming more and more common over the next few years. But what's really great, what I love, is the way that in a plugin-free world you can connect these items together like nodes and pipe them together using media streams: camera to canvas, to media recorder, to upload; microphone to Web Audio. All of these things you can do with these tools. If you want to access the Mustache demo, again, to remind everybody, these are relatively new APIs, still in development, so you do need to turn on the experimental web features flag if you want this to work, and you would use Chrome Canary for that.

So that was a lot. I told you at the beginning we were going to cover a lot. What did we see today? We showed you a couple of demos and talked through a lot of APIs, some of which are available today. In the Biograph demo, which was really about playback, service workers were used heavily, Shaka Player was used for media playback, and then there was a long supporting cast of APIs, including the Media Session API, the Fullscreen API, the Screen Orientation API, and a variety of others. Fortunately, you don't have to remember all of them, because François has written a wonderful article on best practices for mobile web video playback, which is available at the link at the bottom. And if you're looking to build your own progressive web app doing media playback, please do take a look at the sample code provided by Biograph. On the Mustache side: the MediaStream Recording API, as well as Media Capture and Streams. These are both available today, and at the bottom is the link so that you can access the Mustache demo. Again, Chrome Canary on your mobile device, and make sure to enable the experimental web features flag.

Which brings me to arguably the most important point: we want feedback. The title of the talk was the future, and three of the APIs we showed you in particular are still being developed: Background Fetch, which lets you download media even while the user navigates away from the page or closes the browser, resuming when they come back; the Shape Detection API, the ability to look for QR codes, text, faces, and other objects; and the Web Share API, the ability to share items socially. This is your opportunity. These are great things coming in the future, but frankly, they'll be better if somebody in this audience looks at them and says, I would like this, and the developers working on these APIs say, that's a pretty good idea.
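If you want to kick the tires on Background Fetch specifically, the shape at the time looks roughly like this. Treat it as a sketch: the API is still in development, names have shifted during standardization, and the id, URLs, and title here are made up:

```js
// Kick off a media download that survives navigation and even closing
// the browser; the browser resumes it and notifies a service worker
// when it completes.
async function downloadForOffline() {
  const registration = await navigator.serviceWorker.ready;
  if (!('backgroundFetch' in registration)) return;  // progressive feature
  await registration.backgroundFetch.fetch(
    'episode-42',                                    // hypothetical fetch id
    ['/media/ep42/presentation.mpd',
     '/media/ep42/segment-001.m4s'],                 // hypothetical URLs
    { title: 'Episode 42' }                          // shown in system UI
  );
  // In the service worker, listen for the completion event and move
  // the responses into the cache for offline playback.
}
```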
Please do help us make this a reality by going to these sites and providing feedback. Look at the APIs and tell us what you'd like to see. Or come see us in person: we will be in the sandbox behind the stage. So, a lot of great APIs, and we're happy to talk with you about them. Frankly, we are just really excited: if you look at the pace of innovation over the last few years, the web is coming to play, the mobile web is coming to the media game, and we just cannot wait to see what happens next. Thank you, everybody.