Is everyone ready for the closing keynote of the conference? Yes? All right, over to you, Darcy. Awesome. How's everybody doing? Good? Fantastic. Do you have any energy left? Are you going to be able to stay awake and listen to me talk for 45 minutes to an hour? That's great. And today was awesome — you had a good time? Good. And now you're going to watch the worst talk ever, right? So my name's Darcy Clark, and I'm going to be talking about the future of video. Everybody's interested in that — that's why you're here, hopefully. Let me fix this slide. There we go. I'm a developer, designer, and speaker. I'm also a mentor, and I call myself a UX advocate. I gave that title to myself — actually, I gave all these titles to myself, but that one especially. I advocate for better user experiences on the web; I really care about how something feels and how it functions. I'm from Canada — everybody thinks I'm American, but I'm from Canada. If you want to follow me on Twitter, I'm @darcy. I have my first name as my handle, which is pretty cool — I had to pull some strings to get that, and I get a lot of mis-tweets because of it. I used to do a joke where I'd show some of the mis-tweets, but they're pretty bad. You can also fork any of my projects on GitHub, and you can check out my website. It's really old — don't make fun of it. I haven't posted any blog posts in a long time. Some work that I've done: I co-founded a company called Themify. Does anybody use WordPress? Some of you? If you go to Themify.me, that's the company I co-founded. My phone's going off — people are adding me on Twitter; let me turn that off.
I've worked with a bunch of startups in Toronto, I've done some consulting for the likes of Google and Microsoft, and I've worked with open-source projects and contributed to the community. I helped start a project called Front-end Developer Interview Questions, which got really popular on GitHub. It's a great resource for finding the right questions to ask potential candidates for a job position, or for testing yourself and seeing if you know what you should know as a front-end developer. So, over the last 12 months, I've been focused on a number of different areas — if you like any of these subjects and want to talk to me about one of them after my talk, feel free: things like build tools, language abstractions, and responsive images. But the one we're going to focus on today is video, and the future of video. I put an asterisk here because I consider the future of video a moving target — literally a moving target. I started doing this talk about a year ago, I've given it two or three times, and it's changed over the past 12 months for sure. And that's because we keep changing what the future is, right? The future is never here. So when I say "the future of video," that might change in a month or two, and what you thought was the future becomes the present. But hopefully I'm going to show you some stuff that's cool and that you'll enjoy. What we'll cover today: a little bit of history and how we got to where we are; the current landscape of building video experiences on the web; and then the future, which I'll dive into. Essentially this breaks down into experiences, codecs, containers, and encoders — lots of really cool stuff.
If we can get through all of it — buffering, digital rights management, and Encrypted Media Extensions — that'd be amazing. If any of that makes sense to you, awesome, you're already a leg up. If it means nothing to you, there will be some sweet demos, and I'll try to make you laugh — I'll dance up here if you want — so at least you enjoy your time. Awesome. So I want to go back and figure out how we got to where we are today. Let's look at the history of codecs, containers, and playback on the web. In 1986, Sony released the first digital video format, called D1. It was uncompressed — which, as you can imagine, was not a very efficient way of storing video information — and it only stored up to 94 minutes. That's one movie; that's all you could store. In 1988, two years later, Sony worked with a team at Ampex to create the next version of D1, called D2. Kind of like Mighty Ducks — have you seen Mighty Ducks? As a Canadian, hockey is my thing. Do any of you play hockey? Field hockey? Ice hockey? Yeah. The cool things about D2: it supported simultaneous playback, and it stored composite video rather than component, which roughly meant you could overlay graphics on top of other video — imagine putting a news headline on top of another piece of footage. Unfortunately, it was still uncompressed. Then, in that same year, H.261 was released. You'll recognize that name — it was the first release of the H.26x format family, and the big thing was that it actually used compression, which was a first.
So this was the big revolution in video formats: there was now a format out there that used compression to optimize how much video information you could store. And essentially every single video format since H.261 has been built on top of it. If you've never looked into this: the word "codec" comes from coder plus decoder — two words mashed together. It's coder-decoder, or compressor-decompressor. So when people say codec, they're talking about how we encode all this video information and how we decode it. In 1994, a few years later, a company called the Duck Corporation released something called TrueMotion. It didn't require a separate decoder, which was really cool — the encoding and decoding were all in one format. It had lossy video compression, which is like the difference between JPEG and PNG compression: PNG is a lossless format, JPEG is a lossy format, and TrueMotion was lossy. It used the AVI file extension, which I don't think a lot of people use today. The Duck Corporation went on to become On2 Technologies, and they donated the work they'd been doing on a codec called VP3 — the original version of the VP series. They kept iterating on it, and On2 was eventually bought by Google, which led to VP8, VP9, and the WebM format — WebM actually carries the VP codecs. So that's the history of digital codecs and digital video formats. Some history as far as video playback goes: in 1991, we had things like this, right? Has anybody ever seen a video like this, on an old-school Mac?
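The lossy-versus-lossless distinction above (the JPEG/PNG comparison) can be sketched with two toy encoders — run-length encoding round-trips its input exactly, while quantization throws fine detail away in exchange for compressibility. These function names are mine, purely for illustration; real video codecs are vastly more sophisticated.

```javascript
// Lossless: run-length encoding — decode(encode(x)) gives back x exactly.
function rleEncode(str) {
  const runs = [];
  for (const ch of str) {
    const last = runs[runs.length - 1];
    if (last && last.ch === ch) last.n++;
    else runs.push({ ch, n: 1 });
  }
  return runs;
}
function rleDecode(runs) {
  return runs.map(r => r.ch.repeat(r.n)).join('');
}

// Lossy: quantization — snap each sample to the nearest step. The fine
// detail is gone for good, but the data becomes far more compressible.
function quantize(samples, step) {
  return samples.map(s => Math.round(s / step) * step);
}

console.log(rleDecode(rleEncode('aaabbc'))); // identical to the input
console.log(quantize([237, 12, 240], 10));   // 237 and 12 lose precision
```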
Some of you? This was how we'd watch video in a playback format on our computers. In 1995, you got RealPlayer — anybody remember this? I loved it. What was really "cool" about RealPlayer is that it had a web browser built into it. That's a terrible idea: you've got a video player up there, you've got a web browser — how do you get anything done? There's just so much going on. But I loved RealPlayer. Then in 1996, we got our favorite: Flash. Who loves Flash here? Oh, you do? Legit. Flash Player obviously took off, and you could embed these players — RealPlayer, the QuickTime player, the Flash player — right into websites. A year later, in 1997, came one of my favorites: Winamp. I definitely had this. I'd be playing StarCraft on Battle.net with some scripts going — you could create bots that could interface with Winamp. It was amazing. But this is how I remember it — oh wait, I took that slide out. I remember the boxy version. You remember the boxy version? I used to have a slide of it; I don't know where it went. But essentially, the aftermath of all these different video players and video formats was a big fragmentation in codecs. We had FLVs, AVIs, WMAs, WMVs, MPEGs — a ton of stuff. So there was fragmentation in video formats: who supports what, and what will play back on the web? That's no good. There was also a concern about copyright and proprietary video formats — fragmentation in licensing too. That's not great for the web.
And then we had all these plugins. You had to install the player; the browser had to have a plugin. You'd get all these prompts and just click yes — you don't even read the license. Your firstborn child belongs to Adobe now; Adobe owns pretty much all our lives. We just click yes with no clue what the terms of use are. So on February 28th, 2007, Opera — who uses Opera as a browser? Anybody? Some people? You're legit. You're the coolest — came up with a spec for the video element. HTML5 was right around the corner, and they came up with the idea that we really needed a standardized, native video implementation in the browser. So Opera proposed the video element, and they actually implemented it the exact same day, which was awesome. There's a mailing-list thread that goes back and shows them saying, "Hey, W3C, we want to implement this awesome thing called the video element — and we actually just did. Here's our spec." And everybody basically used that to implement it. So in 2015 — eight years later — we all know about the video element. We all include it in our websites; it's ubiquitous. But we still have these plugins and these proprietary video players. Flash is still around. Silverlight is still around and still being used — Netflix, I think, still uses Silverlight. And the big reason why is all of these guys: the big corporations that hold on to old-school Hollywood, that hold on to proprietary licenses. They're concerned about DRM. I consider DRM defective. It's digital rights management — trying to license and lock down content — and it's not the right way to protect content and intellectual property. DRM does a terrible job of trying to protect IP.
There's a cool website you can go check out if you want to read more about at least the way I think about copyright and why DRM is bad. But unfortunately, these companies have now come up with a new spec. Somehow they got into the open web standards game and have been working with the W3C. They want to get rid of Flash and Silverlight, because they know those aren't really supported by Microsoft and Adobe anymore, and they know there are huge bugs in both of those players. So they want to move to the open standard of the video element — but with some caveats. Encrypted Media Extensions is a new spec that essentially explains how we encrypt data being streamed to a video element, or any media we're streaming from a server. The biggest contributors to these specs are companies like Netflix — they have a real vested interest in getting off Silverlight — plus Microsoft and Google. These guys all want to lock down IP and please the big content corporations, and they're the ones driving this. If you look at those three names, they all have commercial interests in getting involved in open standards — in creating the standards for the rest of the web. And personally, I think that's bad business practice. Because what's come along with it is more bad practice: in a lot of cases these standards are being shaped in backdoor meetings; they're not talking with the rest of the web community about how things that will ship in the browser are being implemented. It's just no good. So yes, they want to get rid of Flash and Silverlight and replace them with the video element. And to me, it looks like this: they are getting rid of these plugins, right?
They're taking the plugin architecture for how they serve video and just swapping it for something that's standard. They're using and abusing the standards bodies to come up with a new way of gaining access to the cool stuff that we, as developers on the open web, get to use. So Mozilla, awesome company that they are, fought back against the Encrypted Media Extensions spec. They lost out, because Netflix has such a huge share of traffic — something like 30% of web traffic is Netflix streaming, which is crazy. It's amazing. So Netflix threw their weight around and said, this is the way we're going. And if Mozilla wanted to maintain their relevance as a browser, they had to support it. So they announced last year that they were giving up the fight and would stop fighting this particular battle for open standards, which is kind of sad, but makes some sense. If you work for a company that might be interested in getting off Flash or Silverlight — using the video element, streaming data, encrypting it, and having a way to protect your content — you can check out the HTML5 Rocks EME blog post. It goes into detail on how you create the key that gets passed back and forth between the server and the client, and how you actually stream and encrypt that data. There's also Shaka Player, released by Google — a library that will help you do this. But yeah, that's essentially all I want to say about Encrypted Media Extensions. Again, it's not something I'm super happy about. I don't think it accomplishes anything.
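For the curious, the key exchange that the EME blog posts describe has a recognizable shape in the spec's API. Below is a rough sketch of that handshake — the license-server URL and the browser wiring are assumptions for illustration, not a drop-in implementation; the only part exercised here is the small key-system picker.

```javascript
// Pick the first key system from our preference list that the platform
// reports as available. (Key-system identifier strings vary by browser.)
function selectKeySystem(preferred, available) {
  return preferred.find(ks => available.includes(ks)) || null;
}

// Rough shape of the EME handshake per the spec: the 'encrypted' event
// hands you initData, you open a session, the CDM emits a license
// request, you POST it to YOUR license server (licenseUrl here is
// hypothetical) and feed the response back into the session.
async function setupEme(video, licenseUrl) {
  const access = await navigator.requestMediaKeySystemAccess(
    'org.w3.clearkey',
    [{ initDataTypes: ['cenc'],
       videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }] }]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);
  video.addEventListener('encrypted', async (e) => {
    const session = mediaKeys.createSession();
    session.addEventListener('message', async (msg) => {
      const res = await fetch(licenseUrl, { method: 'POST', body: msg.message });
      session.update(await res.arrayBuffer());
    });
    await session.generateRequest(e.initDataType, e.initData);
  });
}

console.log(selectKeySystem(
  ['com.widevine.alpha', 'org.w3.clearkey'],
  ['org.w3.clearkey'])); // falls back to Clear Key
```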
People are still downloading and still pirating one way or another, and getting involved in web standards this way doesn't help the rest of the web. So, I want to talk about building experiences with video: where we are right now and what people are doing with the video element. We all know how to implement video, I think — or most people do. It's pretty simple; let me zoom in on this. Here's a sweet one — whoa, the explosions, right? This is just a basic video tag with the controls attribute, and I've got fallbacks for different browsers so I have the widest support possible: I've defined MP4 and WebM sources, and if you're going back to IE6 and don't support the video tag at all, there's also a fallback with a link. So that's how we implement it. To go further than just playing video, we want to manipulate video, and what a lot of people do is stream the video into a Canvas object or into WebGL. So let me open this up. A pretty simple implementation here: I've got a div, and inside it there's a video tag — let me bump the font size up a bit, there we go — and right beside it a canvas element. We select the elements we want, get the canvas context, and once the data has loaded we start a setInterval. This isn't the most optimal approach — if you want to stream data from a video, I'd usually say use requestAnimationFrame or something like that instead of setInterval — but essentially, once I start playing the video, it draws each frame into the canvas and keeps updating the data there. Let me refresh this. That's good.
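The setup described above — mirroring a playing video into a canvas, frame by frame, using requestAnimationFrame rather than setInterval — can be sketched like this. The element selectors are made up for the sketch; the browser wiring is guarded so only the small sizing helper runs outside a browser.

```javascript
// Scale a video's dimensions to fit a maximum width, keeping aspect ratio —
// handy when sizing the canvas to match the source video.
function fitWithin(videoW, videoH, maxW) {
  const scale = Math.min(1, maxW / videoW);
  return { w: Math.round(videoW * scale), h: Math.round(videoH * scale) };
}

// Browser-only wiring: copy each video frame into the canvas with
// requestAnimationFrame, so drawing stays in sync with the display's
// refresh instead of firing on an arbitrary timer.
if (typeof document !== 'undefined') {
  const video = document.querySelector('video');
  const canvas = document.querySelector('canvas');
  const ctx = canvas.getContext('2d');
  video.addEventListener('play', () => {
    function draw() {
      if (video.paused || video.ended) return; // stop pumping when idle
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      requestAnimationFrame(draw);
    }
    draw();
  });
}

console.log(fitWithin(1920, 1080, 640)); // { w: 640, h: 360 }
```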
So cool — you can see I play the video, and on the right-hand side the canvas is sitting there, and all the data is just being mapped over into the Canvas object. That's the first step toward manipulating video on the web. I've got a number of cool demos that showcase where you can take this. This one is by Remy Sharp — he made HTML5 Demos and wrote a book about HTML5 with a guy from Opera. Here we're taking a video and changing some of the color data on each frame. It's just playing right now. We can also use getUserMedia, which takes the information from my webcam and streams it in. So if I refresh and switch this — yeah, let's use the FaceTime camera. Let's see how bad the Wi-Fi is right now. Is anybody downloading anything? Are you pirates? After I just said don't pirate things, did you all go download a bunch of stuff? Here we go, okay. Sweet — so now we've got getUserMedia streaming whatever comes in from my webcam. The code is really simple: scroll down here, we wait for the loadedmetadata event, and then we just draw from the getUserMedia source to the canvas. So that's kind of cool — you can do some color manipulation. And this is a really fun demo using three.js and WebGL. What's happening here is that we're taking a video that's playing and using it as a material on these boxes — I think there's audio as well — and it's exploding the video out and giving you this really cool effect.
So this is an interesting glimpse of what we can do with video on the web right now — creating cool experiences. Let me see, we've got some audio now; let me refresh, though it'll probably chug if I do. This next one is another example using WebGL — in this case shaders, not materials. They take a video, draw it in WebGL, and then we can apply effects on the fly: let's do a zoom blur — you're watching me manipulate the video live. Tile — oh yeah, look at this. What else can I do — mosaic? Yeah, nice and pixelated. So this is an interesting demo of what you can do: again, all we're doing is streaming in video and manipulating it on the fly. Cool. And then this last one is a really old demo — I'm not even sure it loaded properly. This is one of the first ever: when the video tag came out, a lot of people were super excited; they thought there'd be a revolution in video, that we were going to create a whole bunch of free and open-source videos. If I click around here, I can explode the video and it comes back together. It's kind of slow right now — it's chugging along; I've got about a million video demos lined up. And then one of the things I love, and have actually worked with quite a bit, is facial recognition and facial detection, based again on getUserMedia mixed with video. So here, I'm going to put my face in and start detecting — let's see if it chugs. In this demo we can choose a face to map on using WebGL — like the Terminator. Wait, what's a good one — Kim Kardashian? Is it going to give me my beautiful lipstick? I'm sure this is going to look great on camera:
me just making faces at my computer. All right, let's try Rihanna. There we go — she looks good. And this is really interesting as an experience: we're taking that stream of information, doing some manipulation on it, and then drawing to a canvas — a whole experience built on video manipulation. What I'm hoping will load now is an experience I built for Lincoln — you know Lincoln, the car company — for the Grammys a couple of months ago. I got together with them and we did this "music selfie" project. So here we go: Music Selfie. I can use my webcam or upload a photo. I'm going to use my webcam, because that shows off how we can create cool experiences with video — and you can see the facial recognition, the dots going up and down. And we can snap a picture. That's going to be really weird. We use a library called clmtrackr to analyze the image data and pick out the points of your face, and from there we can do some really cool analysis. This project took those facial points and created a unique, really nice-sounding song for you. I'm hoping I can get audio in here. It goes off and hits an API, and we had an algorithm that would create a unique song for you based on your face — based on all those points. If it doesn't load, I might just open another one. As it plays back, we show you what's being highlighted on screen, and if I highlight these different areas, they each isolate a channel of the music. You can see the jawline is the bass guitar — let me replay that so you can see — and the lips are tied to another part of the track.
And if we highlight something else — my nose, my big nose — that's the percussion. Cool. They all come together to create this awesome experience, and if you highlight different areas around here, we apply some WebGL filters on top of that video. So that's a more static experience, but by mixing video and image data we get something cool, right? There's a whole bunch of other ones — a lot of people have uploaded pictures of themselves. Let's open one more and listen to what this lady's face sounds like. I hope she doesn't mind — she signed off on something, right? She checked one of those boxes, and her face is now all over the internet. It's a little slow to load. But again, we had, I think, over 4 million potential unique songs that could be created from your unique facial attributes, which was really cool. Come on, little loader. It's just going to start playing at some point. Okay, well, that's fun — you can go play with it yourselves. It's called the Lincoln Music Selfie Project if you want to check it out later. So that's some cool stuff we can do right now. You can build these experiences with tools like three.js, if you know how to work with video and you know JavaScript — or you're starting to learn JavaScript. So I want to talk about what we can do right now with JavaScript and video, because it leads into what I think the future of video on the web is. There's this really cool concept of encoding and decoding video with JavaScript itself. JavaScript can do everything — always bet on it. Brendan Eich says always bet on JavaScript, and I think he's right, although sometimes he goes off on tangents about WebAssembly or whatever he's into at that moment. So why JavaScript?
Why do we think it's a good idea to encode and decode video with JavaScript? Well, it's not proprietary — it's an open standard. JavaScript is open for everybody to contribute to and to have discourse about the future of. That gives us more control: it's not proprietary, and we don't have to worry about licensing. It provides an alternative to the backdoor spec — Encrypted Media Extensions. We could do something really cool like watermarking with JavaScript. We could use Node on the server and stream video content to the browser from our own Node server, and do some really cool stuff there. And of course we want to use JavaScript to build these new video experiences going forward, because JavaScript is awesome. You're still with me? Are you excited? I'm excited. I'm really excited. So: JavaScript video encoders. This is crazy. If you've ever done anything with video and you know about encoding — encoding a video, or changing the codec of a video — you've probably run into FFmpeg. A lot of people have; everybody's used it. Well, I'm going to show you that you can actually run FFmpeg in the browser. What? You're all like, no way. It's kind of slow, but still. About two years ago, a team worked on converting FFmpeg — I think it's a C library — into JavaScript. They used Emscripten, did some fixes after running it through, and basically came out with FFmpeg as JavaScript in the browser, which I think is awesome. The project's called videoconverter.js. They have a fun little demo — it's kind of an ugly website — where you choose a file. Let's say an MP4. And it's reading the video data right now, completely in the browser.
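Before a tool like this can touch the video, it has to get the user's file into memory as raw bytes — in the browser that's the File API plus a FileReader. Here's a hedged sketch of that step (the selector is made up, and the browser wiring is guarded so only the small formatting helper runs elsewhere).

```javascript
// Turn a byte count into the "MB" figure you'd show in a progress UI.
function formatMB(bytes) {
  return (bytes / (1024 * 1024)).toFixed(1) + ' MB';
}

// Browser-only: read a user-chosen video file into a Uint8Array — the
// form that ffmpeg.js-style tools want their input in.
if (typeof document !== 'undefined') {
  document.querySelector('input[type=file]').addEventListener('change', (e) => {
    const file = e.target.files[0];
    const reader = new FileReader();
    reader.onload = () => {
      const bytes = new Uint8Array(reader.result);
      console.log('loaded', formatMB(bytes.length));
      // hand `bytes` to the converter here
    };
    reader.readAsArrayBuffer(file);
  });
}

console.log(formatMB(5 * 1024 * 1024)); // '5.0 MB'
```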
Once it's done, it gives you some of the options you'd typically have with FFmpeg: maybe change the format of the video — convert the MP4 to, say, a GIF, an MPEG, or a MOV file — or apply some crazy transformation to the image data, like a blur. So let's do a vertical flip, and export. We start with an MP4 and convert it to a MOV file. If you click "start processing," you can see this is all running right in the browser — it's basically console-logging all the work being done as it processes. I can hear my fan; it's running like crazy. This is all happening in the browser: JavaScript is encoding a video and applying a manipulation to it, entirely in the browser, which is nuts, right? This is the future. You're like: there's no need for a server anymore, or an operating system. Oh — "click here to download." It's done. Yes! That was way better than I thought. So let's download my output.mov and look. Wicked — now it's flipped upside down. Actually, I should have shown you the original one first; you'll just have to believe me that it wasn't flipped originally. So here — yeah, I can get to it — here's the original video we started with. It's a guy in a subway in Toronto, just looping, with a sliding door. So what we got back is a .mov file: we converted the video file right in the browser and flipped all the image data — just turned it around. Amazing, right? JavaScript can do anything. Cool, let's make this full screen and close that.
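The flip-and-convert demo above is driven by ordinary FFmpeg command-line arguments; `-vf vflip` is the real FFmpeg vertical-flip filter. Below is a small builder for that argument list, plus a commented sketch of how you might hand it to an ffmpeg.js-style web worker — the message shape there is an assumption, so check videoconverter.js's own docs before relying on it.

```javascript
// Build the FFmpeg argument list for "convert to another container and
// flip vertically" — the same operation as the demo. -i names the input,
// -vf applies a video filter, and the last argument is the output file.
function buildFlipArgs(input, output) {
  return ['-i', input, '-vf', 'vflip', output];
}

// Handing it to an ffmpeg.js-style worker might look roughly like this
// (message shape is an assumption, not the library's documented API):
// worker.postMessage({
//   type: 'command',
//   arguments: buildFlipArgs('input.mp4', 'output.mov'),
//   files: [{ name: 'input.mp4', data: inputBytes }],
// });

console.log(buildFlipArgs('input.mp4', 'output.mov'));
```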
The only problem with videoconverter.js — doing the encoding 100% in the browser — is that it takes a long time just to load the script. videoconverter.js is basically ffmpeg.js: a huge library. Imagine a script file that's around 10 megabytes. It's almost the size of jQuery, right? Making fun of your file size. Unfortunately it's slow and kind of memory-intensive, but this is only going to get better and faster — the JavaScript VMs are only going to get faster — and it's potentially something you could use in future projects. So that's encoding. What's probably more viable at this point is decoding with JavaScript. Right now we still have some fragmentation in which video formats are supported in each browser. A lot of people say MP4 is probably the best video format to use because it's the most widely supported, but there are still edge cases — and there are also edge cases in terms of experience. Mobile Safari, for instance, takes over the video experience, which kind of sucks: you're forced into fullscreen video, and you want to control the experiences you're shipping on the web. Using a JavaScript decoder, you can read video files on the fly and push the frames to a canvas, just like we were doing with the video experiences earlier. There's a project out there called jsmpeg that does this, and it's mind-blowing: it reads video files and decodes the frames in real time, pushing them to the browser. And it's really simple — all you do is specify where the canvas is, say you want a new player, and point it at video.mpg. So simple. So cool. Mind-blowing.
Unfortunately, though, MPEG-1 is kind of a crappy file format and not a lot of people use it. It has narrow file requirements, and this implementation is kind of picky as well, so I wouldn't recommend using it — and this particular library doesn't support audio. That's a killer, right? Okay, well, there are more of these libraries you may not even have known about. There's a VP8 one — now this is starting to go somewhere, right? You want to use WebM; it's got great compression. It's amazing. So check this out. Let me do this — there's a little bit more code here, but still, this is not a lot of code for the fact that you're reading video files and pushing them to the browser. You define where the canvas is, define the player — I'll get rid of that — and you do an Ajax request to fetch the file and start reading it, spitting out the data. What we do here is split the data chunk up into frames, and then we start painting. So let's check this out; I think I have to refresh for this one. There you go: that same video is now being read and drawn into a Canvas object. And we don't have to worry about support — whether the browser supports WebM doesn't matter. It's just JavaScript and the Canvas object. Beautiful, it's amazing. You excited? Yes? Unfortunately — unfortunately, right? — performance isn't very good on mobile for this particular implementation, performance on desktop wasn't great either, and there was no audio. That's a killer. So okay, maybe this is the way we go: there was this project called Broadway.js. Has anybody heard of it? Mozilla put some time into it. Let me open it up — I'm really hoping it works for me here; I'll have to refresh this page as well. So it actually does MP4s, right?
So it will read an MP4, again read the video file, and start painting frames to a canvas element, or use WebGL. I clicked here; it's a little slow. But essentially it gives you support for MP4. It has a similar problem in that, again, it doesn't support audio, which is a killer. So that brings me to probably the last decoder I want to show you: ogv.js. Again, you've probably never seen any of these projects; a lot of them are kind of experimental, but this is probably the best one I've ever seen, and I'll show you why. It's got audio, using the Web Audio API. It's got a really nice API for interacting with all this stuff. This is essentially just a canvas element: it's reading the video file information and painting to canvas, which is really sweet. Again, you don't have to worry about browser support anymore. The demo shows how it works and the differences, and it surfaces some playback information here as well. So this is a great demo, and this is a great library: if you think you need to work around browser limitations, proprietary licensing limitations, or anything like that, I'd suggest looking into this guy. So this kind of leads us down a path. We start thinking, okay, we don't need to care. Oh yeah, go ahead. [Audience question about compression.] So the thing is, WebM, for instance, probably has the best compression, right? But you don't care about compression beyond that, because all you're doing is an Ajax request to get the file. So if you're trying to load, say, a 100 to 500 megabyte video, and that's pretty big, it's going to take as long to fetch that video as it would to request JSON or XML from an API that's also 100 to 500 megabytes.
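From memory, ogv.js is meant to mimic the media element API itself (src, play, pause), so driving it looks roughly like the sketch below. The constructor is injected so the wiring can be checked without a browser; treat the names as approximate.

```javascript
// Sketch of driving an ogv.js-style player, which mimics the <video>
// element API. The player renders into its own canvas and uses the
// Web Audio API for sound; names are approximate.
function playOgv(PlayerCtor, container, url) {
  var player = new PlayerCtor();
  player.src = url;              // e.g. 'video.ogv'
  container.appendChild(player); // the player behaves like a media element
  player.play();
  return player;
}
```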
So your compression is only limited by the compression of the file format you're storing the video data in. What this works around is proprietary licensing and format support in browser implementations: you're working around that because you don't care what the browser actually supports. Sorry? Yeah, we're going to get into that, man. Give me some time, I'll get there. We're talking about the future; there's a lot here, I'm trying to cram it all in. And yes, the short answer is yes, we'll get there. So we're essentially leading ourselves down this path where we say: we don't care what browsers implement anymore, let's have a pure JavaScript codec. Imagine encoding and decoding video with just JavaScript. And there was this promise, this blog post made about two years ago by Brendan Eich, who was working with a company called OTOY at the time. He said, I have seen the future of the web, and it's this cool codec called ORBX.js. OTOY is this company that was going to build a product that gives us a JavaScript video codec, which is amazing, right? He said it's next-generation, and we'd be able to use the cloud and all the GPUs in the cloud to get amazing performance. Basically, you could run AutoCAD in the cloud and use it from a Chromebook or a netbook: really poorly performing hardware leveraging cloud computing to do some really cool stuff. You could be running Photoshop in the cloud and interacting with it, just streaming the information back and forth. So that was kind of cool. And the promise was this library called ORBX.js: 25% better compression than H.264, plus a number of other amazing features. This sounds amazing, right? That was two years ago.
And when I started doing this talk, about a year and a half ago, ORBX.js had still not launched. So I feel like I have to have some mean words with Mr. Brendan Eich; he actually sits on the advisory board of that company. So you could call it vaporware; it will probably never come out. And that's okay, we've got other solutions. But this was the revolutionary concept everybody was talking about at the time: hey, let's come up with a JavaScript video codec. Now, if you've been following the news lately, Brendan is back saying, I've got this new idea, there's something cool we can do on the web. It's called WebAssembly, Wasm for short. It takes the best ideas from asm.js, if anybody's used that. It uses bytecode and executes faster than JavaScript. They say it won't compete with JavaScript, because everybody will still want to write JavaScript; this is for high-performance apps like games, or if you're writing software that's going to run in the browser, you'd probably compile it to Wasm. So this is the potential route to a JavaScript-only video codec built on WebAssembly in the future, and I bet you'll hear a lot more about it in the months and years to come. So that's it as far as a JavaScript video codec goes. But going back to encoding and decoding with the video element, using the video element and canvas elements in the DOM: you can actually do streaming and buffering using a spec called Media Source Extensions. Has anybody heard of this? Anybody used it yet? No? Awesome. One of the guys at Mozilla wrote a really good blog post called Streaming Media on Demand with Media Source Extensions. There's a lot of good information there on how you implement these things.
But I'll just run a demo here. It's really simple to set up: the Media Source API is as simple as saying new MediaSource(). Let me zoom back out. Yeah, there we go. Woo! Awesome, so that's the MediaSource API. Again, we specify the video element we want to send the video information to. Then you listen for the source to open, add a source buffer, and tell it what codec is being passed along from our server. And this is how you can actually stream video. Let me refresh this guy; I think it autoplays once it's done. How about I just click refresh? There we go. You can see that it buffers from the server in chunks, so it's basically streaming from the server, which is really cool. You can imagine that Twitch, or any other streaming site, is eventually going to move off its Flash and Silverlight implementations, or even YouTube: YouTube Live will start to use Media Source Extensions to create purely non-proprietary, plugin-free streaming experiences. I think I have another example of this as well. Oh yeah. When I saw this blog post a couple of weeks ago, I reached out to the guy who wrote it, because I want to highlight Vine. Does anybody know what Vine is? Yeah, some people know: sharing short little video clips. Instagram has that feature too, but for Vine it's the whole business model. I know a guy at Vine who told me they're using the Media Source API, which is currently only supported in Chrome. I think Mozilla may have an implementation now, possibly behind a flag. But they're essentially doing some buffering, right?
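The MSE wiring from the demo boils down to a few standard calls: create a MediaSource, point the video at it via an object URL, wait for `sourceopen`, create a SourceBuffer for the stream's codec, and append fetched chunks. In this sketch the constructor and URL helper are passed in so the flow can be checked without a browser, and the codec string is just an example.

```javascript
// Minimal Media Source Extensions flow: attach a MediaSource to a <video>,
// wait for `sourceopen`, create a SourceBuffer, and append a fetched chunk.
function attachStream(video, MediaSourceCtor, createObjectURL, mime, firstChunk) {
  var ms = new MediaSourceCtor();
  video.src = createObjectURL(ms); // in a page: URL.createObjectURL(ms)
  ms.addEventListener('sourceopen', function () {
    var sb = ms.addSourceBuffer(mime); // e.g. 'video/webm; codecs="vp8, vorbis"'
    sb.appendBuffer(firstChunk);       // later chunks append on `updateend`
  });
  return ms;
}
```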
They actually load videos ahead of time. You can imagine how you preload images: you go out and fetch the next image that might appear in a carousel or photo gallery, expecting the user to go backwards or forwards, so you preload those things. What Vine does, in this experience specifically, is reuse a single video element, and create video buffers for the clips that might be loaded before or after the current one. They queue them up, start loading them, and just pass that data straight into the video, so it's an almost seamless experience. Now, unfortunately, the Wi-Fi's not that great, so I think it's taking a long time to queue up the video data. Let me see if I refresh. But essentially they want to create this experience where you see one little clip, then the next, then the next, like an automatically rotating photo gallery, except it's an automatically rotating video gallery. It's really cool. I'll unmute the sound, maybe it'll help a little bit. Come on, Wi-Fi. Who's using Dropbox right now? Get off of it. Somebody's playing video games back there, I know it. They're like, this guy is Canadian and he hasn't said "eh" once, eh? Yeah, the Canadian "eh", do you guys know? We say that all the time. Eh? No? Okay, well, this is taking forever to load. But essentially, like I said, Vine is one of the first products to implement this new API to get real performance improvements in their video experience: they're buffering videos before they're actually needed, which is really cool. So, does everyone know Back to the Future? This is my favorite slide I've ever created. It's the best.
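The Vine-style technique being described, one video element fed from a queue of pre-fetched clips, can be sketched like this. The class and names are mine, not Vine's actual code.

```javascript
// One <video>, one SourceBuffer, and a queue of pre-fetched clip data.
// Clips are fetched ahead of time with XHR/fetch, then appended into the
// same SourceBuffer as each clip finishes, so playback feels seamless.
function ClipQueue(appendBuffer) {
  this.pending = [];                // ArrayBuffers fetched ahead of time
  this.appendBuffer = appendBuffer; // e.g. sourceBuffer.appendBuffer.bind(sourceBuffer)
}
ClipQueue.prototype.preload = function (buf) { this.pending.push(buf); };
ClipQueue.prototype.playNext = function () {
  var buf = this.pending.shift();
  if (buf !== undefined) this.appendBuffer(buf); // push straight into the video
  return buf;
};
```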
There's a little box shadow there, and a text shadow. It's amazing. So if you walk away from this talk today inspired, wanting to go out and learn more about these concepts, I hope you ask yourself: okay, what is the future of video? What did Darcy tell me? What should I be interested in? These are the five things I think about when I think about the future and what's going to happen in the next six to twelve months in the open-source world and in JavaScript. You're going to see more and more creative, immersive experiences: people doing more and more with video on the web using JavaScript. You're going to see Encrypted Media Extensions: if you work with a big company that has video content they want to lock down and secure, you'll probably have to learn more about Encrypted Media Extensions. Media Source Extensions, so the buffering and the streaming: if you work with a company that does any streaming video, or you want to do live streaming, you'll probably work with that. WebAssembly potentially has an impact on your life if you're working on any kind of big application, or at a game development studio that wants to ship experiences to the browser. And then, potentially, a pure JavaScript video codec, which might be around the bend. So, thank you very much for letting me talk and run around on stage here for the last little bit. Does anybody have any questions? No? I've got one more thing, really. I can show you one more thing. Does everybody know Oprah? Yeah? Some people know Oprah? Or Steve Jobs? One more thing? Yeah, of course, I wasn't going to leave you stranded here. So: dvd.js. You guys have some DVDs? Yeah, I've got a couple of DVDs for sure. I love DVDs.
So, a guy at Mozilla built this. Again, Mozilla is very progressive when it comes to working around proprietary software; they make all these cool libraries that steer away from the corporate, commercialized aspects of the web. dvd.js is essentially a library that converts DVD files into web-ready files. So this is what it does: it takes the .IFO files on the DVD disc and parses them to JSON, and it does some other conversion too. He basically created this because he had a huge library of DVDs that he wanted to store digitally somehow and be able to watch when he was abroad. So he figured out how to parse out all the information on a DVD and turn it into web languages, things that could be read, like HTML, CSS, and JavaScript, and use web technologies to watch his DVDs. Things like the DVD menu, obviously: how does that get parsed? He did all this work, and there's a big converter, a one-step process, that converts your DVDs into basically a package of HTML, CSS, and JavaScript, which is awesome. And I'm going to show you, if the internet lets me, a side-by-side of him playing the actual DVD with VLC next to the same content running in a web browser, essentially the exact same information. So here it is: he's clicking around some menus, showing you that he's converted the entire chapters and menus of a DVD into buttons in HTML and CSS. Then he's going to click play and start the video, or skip to chapters, and at some point, come on, play the video. Where is it? Here we go. Oh no, he's going to the credits.
Okay, cool, there are the credits. Yeah, play, there we go. So he starts playing it, and he's basically showing you that he doesn't have to use any proprietary DVD playback software; he can do everything on the web, and it works. Isn't that cool? Isn't that awesome? Does it inspire you to go make some video experiences on the web? It's wicked, right? Awesome, I have no clue what the heck they're going to recommend, so get out of here. So, thank you guys, thank you so much for listening to me, and we'll talk later. [Host: Do you want to come up? Any final words?] Oh yeah, does anybody have any questions about this kind of stuff? Or you can just hang out afterwards and ask me in person. [Audience question about a decoder that does VP8 decoding.] Decoding, yeah, with audio as well. So, I actually tried to make a pull request to that project, because I also thought it would be best to use the WebM format with VP8, and it stayed open for two or three months. I tried emailing the guy that created the project, multiple times, and he never got back to me. So I don't think it's hopeful that he'll add support for audio, unfortunately, and that was probably the best JavaScript decoder I've seen. So that's the best project, and unfortunately it doesn't support audio, so I don't think it's a viable option. [Second question: Chrome announced many months back that they would have H.264 support in WebRTC, and it's not around yet. When is that going to be supported?] I know some people there I could bug for you, but I don't know anything about that. This is weird, seeing myself on screen. I can't say anything about when they'll actually support it.
I think I'd have to talk to them, but if you want to ping Paul Irish on Twitter, he knows the right people to talk to. Cool, all right. Awesome, anybody else? Questions? Yes? No? Sorry? [Question: What about Native Client?] Native Client? Yeah, NaCl, they call it. [Can that be used for encoding and decoding? Writing a JavaScript encoder and decoder takes a lot of time. Has anyone played around with that?] I mean, you could detect support, yeah. You could detect support for WebM and then use that, right? So that's an option too. Modernizr has feature detection tests you can use; you can just pull those out of Modernizr if you want and use them as sort of an if-else. If there's support for WebM, just implement the video tag and point it at the file. And if there isn't support, then instead of falling back to other video files, which you'd have to create, all you do is fall back to reading the WebM and painting to the canvas, using the JavaScript decoding method. So that's an option as well: do feature detection, and if there's native support, just use it. [And videoconverter.js is totally JS, right? For encoding, though?] Yeah. For encoding, that's still, I would say, highly experimental. I don't know how often they're keeping up with the work FFmpeg is doing, because it was a port of FFmpeg to JavaScript. Any other questions? No? Awesome. Thanks again, guys.
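The if-else fallback from that answer could be sketched like this. The canPlayType probe is the standard one (the same check Modernizr's WebM test wraps); the strategy names are just illustrative.

```javascript
// Decide between the native <video> tag and a JS decoder fallback based on
// whether the browser reports WebM support. canPlayType returns '',
// 'maybe', or 'probably'.
function chooseWebmStrategy(canPlayType) {
  var answer = canPlayType('video/webm; codecs="vp8, vorbis"');
  return answer === 'probably' || answer === 'maybe' ? 'native' : 'js-decoder';
}

// In a page:
//   var v = document.createElement('video');
//   chooseWebmStrategy(v.canPlayType.bind(v));
```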