So yeah, thanks for coming, and thanks for having me; I'm having a lot of fun. I was born in France, hence my accent when I speak English. I have a few PhDs, one in France, one in Japan, in video processing and video games, because video games are cool and Japan was the best place to play games. Then I went on doing all kinds of stuff, and eventually I realized that Singapore was the best place in the world if you don't like snow and you prefer beach volleyball and things like that. So I came here, and I've been in Singapore for 10 years now. Ludovic here with me has also been here for 10 years, same background; we used to work for A*STAR. Go to the next slide. So right now I'm in summer mode; sometimes I go into winter mode, so you're lucky right now. And since we left A*STAR, we've been working for many years on a technology called WebRTC, which not a lot of people know, but it's actually the technology that powers the audio and the video in WhatsApp, in Skype, in FaceTime, in almost everything that has audio and video today, especially if it runs on mobile, right? So we're going to speak to you a little bit about that technology today: first because we love it, and second because we're doing a lot of innovation with it. We take a lot of master's students from all over the world, and we feel we don't have enough from Singapore, which is a shame, right? So we're also going to sell a little bit all the fun stuff and all the research we do, mainly with master's-level people we welcome for six-month research internships, so maybe you'll get the will to come and join us and have some fun with us, all right? So let's speak about WebRTC. Ludovic here is also a French PhD, in image processing. Next slide. He worked in Japan for many years doing research on image processing from satellites, joined A*STAR, Biopolis, Fusionopolis and so on, eventually got equally bored, joined the startup vibe, and never left. We're going to speak about only one use case. WebRTC can be used for anything with audio and video; most people know the video conferencing use case or the social use case. Today we're going to speak about something a little bit newer: streaming, replacing Flash, which was fast but you cannot use anymore, and HLS, which is used today but is very slow, with one or two minutes of delay, right? So I'm going to present slides I wrote for another conference about exactly that: I was an invited expert at the number one streaming conference in the US. Yes, that's the right slide, this one here at the bottom. And they asked me the question, with people from Netflix, Apple, Apple TV, Facebook and so on in the room asking questions: is WebRTC really the next thing for audio and video? Because it's super fast, but it's a little bit young. Is it mature enough? Is it scalable enough? And so on and so forth. So let's go. The question is: is WebRTC the future, especially in low latency? And the Apple guy who actually created HLS was at the same conference, and he gave his presentation the day before, so there were some jokes there that you can only understand if you saw it. And his position was: nobody really needs real-time low latency, right? Okay, so first question: why do you need real time?
Well, in Singapore you don't speak a lot about this, but in the US the adult industry is a billion-dollar industry, and they like money, right? So the question: have you ever tried to have a romantic conversation with your wife or your girlfriend or your spouse with a five-second delay? "I love you"... and then you wait, and you wait. "I love you too." Your insecurity builds up in direct proportion to how long it takes your partner to answer. So there are cases of human interaction where the delay is very, very important to the user experience. And then there are some economies, some verticals, some ecosystems where you absolutely need to be fast. AR and VR, right? If you play a game, especially a multiplayer game, and one of the guys has lag, he's dead, okay? He can get out, he can stop playing, that's it, he's dead. If you run an auction, and people are bidding money or gambling, you'd better be as fast as you can, or someone will beat you to it, or by the time you actually place your bid, the thing is gone, right? So there are businesses that cannot compromise on latency. It's not the case for Netflix, right? So we separate the two use cases. You have pre-recorded content: you're going to watch a movie with your friends, and it doesn't matter if it buffers for five minutes; you're preparing your coffee, you're getting the pizza out, it's fine. Pre-recorded content doesn't have a latency problem, right? But if you're in a chat, a conversation with multiple people, and you're really waiting for the answer, and you have a countdown somewhere in a game, then yes, you want the answer right away. So that's one example; there are a lot of them. Next slide. Then: okay, fine, there is a use case for real time, I understand, some people might want it. Yes, Mr. Apple guy, some people might. But what about WebRTC? It's a young technology. What does that mean? Where does it come from? Who is using it, by the way? So, historically, Google said (Sébastien, come on, you're late, take a seat), Google said: I want everybody on the net, because if they're on the net, they'll use my search engine and pay me, right? It's like a motorway: I have the toll on the motorway, so I want everybody on the motorway. So why were people still using the desktop? Everybody should be in Chromium today and using my stuff. Why weren't they? Well, the things people still used the desktop for, because they could not do them on the web platform, were gaming and audio and video. And so you had those Java plugins. You guys may be too young to know that, but we used to use Java plugins for gaming and audio and video on the net, and they were insanely awful to run and crashing all the time. And Google said: I want that in the web, in an HTML5 standard, in the web platform, so all the browsers implement it, everybody comes to do everything on the net, and I take my toll. So they did WebRTC. They bought two companies, they put some 80 million dollars on the table: one called GIPS, one called On2, one for the codec, one for the streaming. They put everything together, and it worked. So how does streaming really work? When you look at streaming, there are actually a lot of steps, right?
First, you need a webcam; people don't think about it, but without a webcam it's more difficult to stream. So you have a hardware device that captures an image for you. You need to encode it so it doesn't take too much space and doesn't use a lot of bandwidth, right? So there is a question of which codec is better than the others; I'm not going there today. And so on and so forth. The WebRTC stack provides all of that. And the difference between the WebRTC stack and the plugins you had before is that there is no plugin: it's directly in the browser. You don't need to install anything else. It doesn't break every time you add a DLL to Windows, it doesn't crash when you use the printer, it works on Mac and Linux. It's absolutely beautiful, right? So, no plugin. It's a standard, everybody has it. And it's based on 20 years of technology that was used in voice over IP. If you have a SIP phone, one of those voice over IP phones, it actually has almost the same stack. They used to be capable of video too, but Cisco never managed to get any customer to pay for video, right? Put it in a browser, though, and everybody wants the video. Sometimes I want just the video of my wife, you know, not the audio. That's fine, right? So it's based on RTP, the real-time protocol that was used for media in voice over IP, really optimized for phone conversations with very, very low latency. They made an extension of it that works for audio and video, with as many audio tracks and as many video tracks as you want. Historically, Firefox allowed only one video stream, because they said nobody will ever want more than one video stream. And then they realized that people want to stream their camera plus their PowerPoint, plus the screen when they're playing Minecraft on YouTube, so people can see how beautiful they are when they play Minecraft. I don't understand it, but my kids love it, and all the YouTubers had the same use case: the gamers want their screen, picture-in-picture with the camera, with an annotation of a funny hat, a moustache and sunglasses. Next slide. So, how does it work? Alice and Bob, A and B; Alice and Bob is much better than A and B. Each downloads a web app from a website, and the browsers take care of sending video to each other. Because they are on the same page, the web app and the signaling server help them find each other: oh, I know you, because I'm on the same page, in the same chat room, right? And the browser inside has the capacity to capture your camera, encode the audio and the video, and send it to the other party. The only thing you have to do is write some JavaScript; I'll show you roughly what that looks like in a moment. Hey, who doesn't know how to code JavaScript nowadays? Next slide. So, about 100 million in total, 68 million for one of them; they announced it in 2011. That's a little bit of the history, so it's been going on for quite a few years now. Go ahead. Today you have WebRTC in all of those browsers, in every version up to Canary, on all the operating systems. You have it on mobile. You have it even on Safari; that took a little bit longer, but it's there now.
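Here is roughly what that JavaScript looks like, as a minimal caller-side sketch. The sendSignal and onSignal helpers are placeholders for whatever signaling transport your app uses (a WebSocket, for example), since WebRTC standardizes the media path but deliberately leaves signaling up to the web app.

```javascript
// Minimal caller-side sketch. sendSignal/onSignal are hypothetical
// helpers standing in for your own signaling channel.
async function call(sendSignal, onSignal, videoElement) {
  const pc = new RTCPeerConnection();

  // Capture camera and microphone, add the tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

  // Ship ICE candidates to the other side as they are discovered.
  pc.onicecandidate = ({ candidate }) => candidate && sendSignal({ candidate });

  // Render whatever the other party sends back.
  pc.ontrack = ({ streams }) => { videoElement.srcObject = streams[0]; };

  // Create the offer and send it through the signaling server.
  await pc.setLocalDescription(await pc.createOffer());
  sendSignal({ sdp: pc.localDescription });

  // Apply the answer and the remote candidates when they arrive.
  onSignal(async msg => {
    if (msg.sdp) await pc.setRemoteDescription(msg.sdp);
    if (msg.candidate) await pc.addIceCandidate(msg.candidate);
  });
}
```

The callee side is symmetric: it applies the offer with setRemoteDescription, calls createAnswer, and sends the answer back over the same channel.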
And if you think about a product you use every day that has it without you knowing, think about the Chromecast. The Chromecast was fantastic, it was $40, and I could connect. You have an Apple TV? Apple TV actually has the same stack; different codec, but the same stack. Google Hangouts, Google Meet, Allo, Duo, YouTube Live, Comcast. People here don't know Comcast, but it's the equivalent of your StarHub or Singtel cable TV box, right? They're using WebRTC internally to send the TV. StarHub and Singtel are not; they're a little bit behind on that. Facebook Messenger, since 2012: when you do an audio-video call in Facebook Messenger, you're using WebRTC. So they took it, they put it inside. That's 7 billion video chats in 2017, and it's the second most popular iOS app on the App Store. So without knowing it, this technology, WebRTC, is actually powering almost all the audio and video you use every day. And it's open source, and it's available to everybody. So with WebRTC, what does streaming look like? We still have the same thing as before: a media engine on the sender side, a media engine on the receiving side; everything in blue is provided to you by the open source code. Now, people who do pre-recorded content cheat. You know that Batman movie from '99? It's been there since '99, so they had a lot of time to encode it one way and another, changing every possible parameter of the encoder to find a way to keep the quality while making the file as small as possible, so it uses the minimum bandwidth. Netflix thinks about it this way. They say: most people in the US have a four-gigabyte data cap on mobile, so my metric is how many hours of video they can watch with those four gigabytes. If you look at 2012, I think it was two or three hours; then they changed the encoding, they did per-title encoding, per-chunk encoding, they improved the capacity, and now they're at 18 hours. You can watch 18 hours of video, same resolution as before, with four gigabytes, because they actually do multiple-pass encoding. You go through the video once: which part is slow, which part is fast, which part is a close-up, which part has a lot of motion. And once they have those parameters, they do a second pass where they do the actual encoding, and so on. So for people who do signal processing and encoding, there's still a lot of work, and there's a lot of money to be made at Netflix in the US. The location sucks, but it's close to Silicon Valley and the salary is great. Another thing you can do once it's encoded: it's not like real-time content. If I call you today and I call you tomorrow, your hairstyle might be different, especially if I call at a different time of day. The Batman from '99 is going to be the same whether I watch it today or tomorrow. So once I finish uploading it to the internet, I can keep it there in a cache. The sender side, when you have pre-recorded content, can be totally bypassed. It's a one-off; it doesn't matter if I take one week to encode my movie. In a conversation, you're not going to wait one week before you get the first word from the other party. So real-time and pre-recorded are very, very different in terms of production and operation.
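As a concrete illustration of that multiple-pass idea, here is a sketch that shells out to ffmpeg from Node.js. The file names and the 1 Mb/s target are made up for the example; this shows the general two-pass technique, not Netflix's actual pipeline.

```javascript
// Two-pass encoding sketch. Pass 1 only analyzes the video and writes
// a stats log; pass 2 uses that log to spend bits where the content
// needs them. Names and bitrate are illustrative.
const { execSync } = require('child_process');

// Pass 1: analysis only, output discarded (/dev/null; use NUL on Windows).
execSync('ffmpeg -y -i input.mp4 -c:v libx264 -b:v 1M -pass 1 -an -f null /dev/null');

// Pass 2: the real encode, guided by the pass-1 statistics.
execSync('ffmpeg -y -i input.mp4 -c:v libx264 -b:v 1M -pass 2 -c:a aac output.mp4');
```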
That being said, all the technologies have the same problem: I start watching my movie, and my little brother in the other room starts watching another movie on the same Wi-Fi, and suddenly my available bandwidth goes down. So with an older technology like Flash, if you don't have enough bandwidth, the movie stops, everything crashes. Newer technologies like HLS and WebRTC try to adapt: uh-oh, the bandwidth is going down, I need to do something. With a normal player, that means rebuffering. We all hate that, right? Why does it happen? Because they use file chunks. They fetch a chunk of the movie at a high resolution, and suddenly they cannot get it fast enough to play it. So they say: okay, I'm going to take the chunk at a lower resolution, but they need to restart from the beginning of the chunk. So if the chunk is 10 minutes, every time they need to adapt, you can rebuffer a few minutes, up to 10 minutes of video. With WebRTC you can also adapt the resolution automatically, but you do it packet per packet, directly on the network; a packet is about 1,400 bytes. Then it's one round trip: if your ping is 200 milliseconds, you can adapt in 200 milliseconds, and you only need to rebuffer a very, very small fraction of the stream. So basically the adaptation is very fast. I see you frowning and scratching your heads; I'll take all the questions during or after, don't be shy. So there are different names for the techniques that allow this adaptation. This one is called simulcast. There is a new generation of codecs called layered codecs, SVC. Maybe some of you have heard of AV1, the new codec made by Cisco, Mozilla, Intel and everybody else inside the Alliance for Open Media. I'm not going to go into it, but there is a lot of ongoing work right now, done and still to be done. If you're interested in signal processing, compression and codec research, it's a really active field right now and we're participating in it. The second problem this solves: suppose you have 10 viewers, but they don't have the same device or the same bandwidth. One is on a super gaming PC, water-cooled, with a two-gigabit internet connection coming in. That one will never have a problem, and he wants the 4K, maximum-resolution, 60 FPS stream; that's why he paid the big bucks, right? But watching the same stream, you have someone on a 2.5G phone that is barely smart, with a very small display. So even if he had the bandwidth to get 4K, he doesn't have the display, so he won't be able to show that resolution. If you send him 4K, you're wasting your bandwidth, because he's not going to be able to handle it. So what you want to do is check the resolution of the screen on the other side and send the maximum resolution it can actually handle. I'm giving the example of the screen, but depending on the bandwidth, the CPU, the battery, the same thing applies, and so on and so forth. So this ability to adapt the resolution is also very important if you want to support people viewing the same stream from different clients with different hardware and different bandwidth, right? So what usually happens in one-way media streaming, as opposed to a video conference? You take the captured initial frames and you encode them three times or more. Verizon Digital Media, for example, encodes with 11 formats, and Netflix, with pre-recorded content, I think has a thousand-something, because they pre-encode every possible screen format and orientation you can have beforehand, so they can give you the one you want.
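For the browser side of that, here is a minimal sketch of how simulcast is requested through the standard API. The rid labels and the bitrate cap are arbitrary values for illustration; a media server then forwards to each viewer only the layer that fits their bandwidth and display.

```javascript
// Simulcast sketch: encode the same camera track at three resolutions
// at once. The rid labels are arbitrary.
async function publishSimulcast() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const pc = new RTCPeerConnection();

  pc.addTransceiver(stream.getVideoTracks()[0], {
    direction: 'sendonly',
    sendEncodings: [
      { rid: 'high' },                                   // full resolution
      { rid: 'mid', scaleResolutionDownBy: 2 },          // half resolution
      { rid: 'low', scaleResolutionDownBy: 4, maxBitrate: 150_000 }, // quarter
    ],
  });
  return pc; // the offer/answer exchange then proceeds as in the earlier sketch
}
```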
So: three different resolutions are sent to the internet, and each viewer receives only the one that corresponds to his capacity at a given time, right? Old school now, for people who still use Flash. Who here knows OBS Studio? Who is a gamer streaming themselves on YouTube and so on? Most people use OBS Studio or something like it to capture the screen, add some effects, compose, you know, the funny hat and the moustache, and then send it to Twitch, Dailymotion, YouTube and all of those guys. They work like that. That's the conceptual model: you have the acquisition on your computer, with OBS Studio for example, and a first encoding, in Flash. And you send Flash to the ingress node, let's say Twitch, which decodes your stuff and then does all the re-encoding and the chunking and the slicing and so on, stores it, and provides it to a CDN for distribution, right? The problem is that at every step where you encode and decode, you lose between 40 and 100 milliseconds. That doesn't sound like a lot, but it accumulates, and on top you have the propagation time, the upload and the download. And you start feeling the difference at around half a second: at 450 to 500 milliseconds and above, you start actually feeling that something is slow. So you really want to keep the entire pipeline below that limit, right? Go ahead. The advantage with WebRTC is that you don't need to encode and decode at the beginning just to go through Flash. You can encode directly at the source, which is your computer. Go back to the previous slide: before, you had an encoding here, a decoding, a re-encoding, then a decoding to be able to render. Encode, decode, encode, decode: two times. With the new technology, you encode, you decode. So by default, without doing anything, the entire pipeline is 50% more efficient. Another problem: I have StarHub, and they're blocking my ports. I cannot play, I cannot chat. They're blocking my UDP ports, blocking some port numbers and so on. And then I have a Wi-Fi router in my house with a NAT, right? So it's blocking me from getting through, and so on. Sébastien could talk to you for two days about stuff like that from setting up CDN hosts. So what do you do? Old school: ask someone to come and open that port and that protocol and so on and so forth, so you can play, so you can do voice over IP, so your video game server can get through. You still have to do that for some games today, right? In WebRTC, there is a technology called ICE that does this automatically for you. It scans the ports on your side of the NAT, it finds your public IP, and it basically punches a hole through your NAT, establishing a way not only to stream out, but also for you to receive the stream back. Super smart, but the smartest part is that you don't have to deal with it. You don't have to touch anything, it's fully automated, and that's super nice.
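In code, all of that reduces to handing the browser a list of STUN/TURN servers; ICE does the rest. A minimal sketch, where the TURN URL and credentials are placeholders:

```javascript
// ICE sketch: the browser performs NAT traversal on its own once it
// knows which STUN/TURN servers to use.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },  // public Google STUN server
    {                                           // hypothetical TURN relay
      urls: 'turn:turn.example.com:3478',
      username: 'user',
      credential: 'secret',
    },
  ],
});

// Candidate gathering starts once a local description is set; you just
// ship each candidate to the other peer over your signaling channel.
pc.onicecandidate = ({ candidate }) => {
  if (candidate) console.log('found candidate:', candidate.candidate);
};
```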
There are other problems. Network quality: what do you do when you have packet loss, when it's really, really bad, when you're far away from the router, or there are too many people, or there are routers competing in the same space on overlapping frequencies, so packets don't get routed in the right direction? What do you do about bandwidth adaptation? What do you do about congestion control? Again, everyone is trying to get bandwidth, so they compete for the same bandwidth; maybe there are five people on the same Netflix account in the same house. All of that, congestion control included, is handled automatically, right? So what does that mean? It means that sometimes your bandwidth is reduced automatically, by itself, even though you did nothing, but it's not crashing. Before, with a technology like Flash where you didn't have any congestion control, if Flash wanted two megabits and you had less than two megabits, it just stopped sending. Now it reduces the resolution but keeps going, which is so much better. Practically, now: everything so far was still theoretical. This is all the browsers and all the codecs. So we work with all the browser vendors, Microsoft, Google, Apple and so on, and we actually provide patches to all of them, to help them get that technology working faster, so we can sell it to customers, okay? So we helped with H.264 in Chrome, we provided the code; for Safari we provided VP8, and VP8 just shipped in a Safari Tech Preview three weeks ago, in the 12.1 or 12.2 beta on iOS. Again, not long ago at all, and so on and so forth. So right now, most of the browsers support all of the options, and that's new; that's from a few weeks back. So now you can really come and use this technology in production at scale to compete with the old technology. Hey, wait, you told me Facebook has been using it since 2012? Yeah, yeah, but they have their own modified version, and they had to lower the capabilities, because not all of the browsers implemented the options for them. So if you run Facebook Messenger in Chrome or in Firefox, it's going to be okay, for example; but if you're on a Mac and you try to run Facebook Messenger in Safari, they're going to tell you: no, no, we don't do that. And the reason is that Safari was the last one to come to the game, and they've only just caught up. So luckily, Facebook Messenger should start supporting audio and video chat in Safari starting this year. So I want to tell you guys that there is hope. If you're a geek, there is work for you out there, right? You can tell by the way these two look that they're computer scientists: the colors of the T-shirt and the top don't match, you see the haircut, you see the eyes, like they've spent three days in a hackathon. Yes, those guys are computer science geeks, and they love it, and they're getting paid for it. So there's hope. The guy on the left is actually the main engineer for WebRTC at Apple; his name is Youenn Fablet. And the guy here is called Sergio Garcia, a 20-year veteran of Telefónica, the telco in Spain, specifically on media servers. And this is the first test, the first successful implementation of that simulcast, that bandwidth adaptation, in Safari.
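On the congestion-control point: the stack adapts on its own, but an application can also impose its own ceiling on a sender. A sketch, assuming pc is a connected RTCPeerConnection as in the earlier sketches:

```javascript
// Cap the outgoing video bitrate through RTCRtpSender.setParameters;
// the browser's congestion control then works under that ceiling.
async function capVideoBitrate(pc, bitsPerSecond) {
  const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
  const params = sender.getParameters();
  if (params.encodings && params.encodings.length > 0) {
    params.encodings[0].maxBitrate = bitsPerSecond; // e.g. 500_000 for ~500 kb/s
    await sender.setParameters(params);
  }
}
```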
So, what is funny: do you see a difference between the two images? It doesn't look very different, right? But that one is the original, the local one, and that one is the one being sent, with the same number of frames per second, 30 frames per second in both cases, but half the resolution. And these are the resolutions, so you can check: 360 is half of 720, and so on and so forth. Right, so that's the first time anyone managed to implement it. Sergio works for us from Spain, and this happened at a hackathon called the Web Engines Hackfest, invitation-only, for people who contribute to open source. So that was in WebKit, not in Safari, but Safari is a wrapper around WebKit. So we're having a lot of fun, and there is hope for everybody who is a hacker at NUS to have a good life and a good job in a big company. They give you hope. They give me hope. Next. So if you extrapolate a little bit, what is also interesting with libwebrtc is the fact that it's an open source stack, and the same source code is used in Chrome, in Safari, in Firefox, and in many others. So you can get that source code and start doing a lot of stuff around it. So we had fun with OBS Studio: we took OBS Studio, which is open source, and we added support for WebRTC to it. So instead of streaming Flash, or at the same time as you stream Flash to Twitch, you can stream WebRTC to another platform that takes WebRTC directly. The difference is a five-second delay, minimum. Cute application: IoT. Do you want your fridge to call you? Maybe not. But do you want the person who rings at your door to be seen by a little camera, so that even when you're not at home you can check who it is and decide what to do? Yes, you might want that. Do you want a drone to call you back? Yes, you do; basically all the drones are using this stack, because it works, right? Electron, React Native, whatever you want. And believe it or not, 50% of the people in enterprises that use audio and video are still using Windows 7. That means they're stuck with Internet Explorer. They cannot move to Edge, because Edge only works on the Windows 10 kernel. And they hate Internet Explorer, but the IT people hate them more than they hate Internet Explorer, so nothing is going to get upgraded there. So they're stuck: 50%. I was working for Citrix, you know, GoToMeeting; I was a principal architect, and the stats were that 48% of our customers were on Windows 7 with IE11. And I'm like, I don't want to touch that. IE11, that's bad. But if you really have to, you can do a plugin. You'll say: hold on, WebRTC is supposed to mean no plugins. I know, I know, but they're stuck, so we need to help them. So you can do a lot of stuff, and you can also go deeper into the stack itself: adding a custom option, adding an additional codec. Hey, there is a new codec, everyone; I want to put it in WebRTC and see if it's fast enough for real time. Or: I want to do end-to-end encryption. I want a video call exactly like Telegram does chat, where the platform has an encryption key but I have my own encryption key on top; there are two. So even if someone goes to Telegram and tells them "I want to see that guy's stream", whether it's the government or someone trying to hack the server, they cannot see anything, because I'm the only one with the end-to-end encryption key. So there's a lot of stuff that can be done there. I'm not even getting into watermarking. Are you stealing my stuff? Well, there's an image in there: after 30 days, if you didn't pay the license for the stack, my logo will be dancing in front of your video application. It's going to be cool. Next.
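To make that double-key idea concrete, here is a sketch using Chromium's encoded insertable streams, an API that arrived well after this talk and is Chromium-specific, so take it purely as an illustration; the XOR transform is a toy stand-in for real cryptography, and videoTrack and stream come from getUserMedia as in the earlier sketches.

```javascript
// End-to-end encryption sketch: transform each encoded frame before it
// leaves the browser, so the platform in the middle only relays ciphertext.
const pc = new RTCPeerConnection({ encodedInsertableStreams: true });

function xorTransform(key) {
  return new TransformStream({
    transform(frame, controller) {
      const bytes = new Uint8Array(frame.data);
      for (let i = 0; i < bytes.length; i++) bytes[i] ^= key; // toy cipher
      frame.data = bytes.buffer;
      controller.enqueue(frame);
    },
  });
}

const sender = pc.addTrack(videoTrack, stream); // track from getUserMedia
const { readable, writable } = sender.createEncodedStreams();
readable.pipeThrough(xorTransform(0x55)).pipeTo(writable);
```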
There are also a lot of companies that do extensions, that do statistics, to check: is your ISP lying to you? Is your CDN vendor lying to you? Is it really that fast? What is your bandwidth? And so on and so forth. WebRTC allows you to probe, so you can actually check everything. That's a company called callstats.io, for example, based in Scandinavia, a spin-off of the University of Helsinki. Very cool people; they're hiring. Testing is difficult. So our main contract is with Google and all the browser vendors, to actually test their WebRTC implementations every day. It's a little bit of a divorced-parents problem: the browser vendors have to work together, but they hate it; they really don't want to speak to each other. So when they have something they really need to do together, like checking that a call from Facebook Messenger in Safari on a Mac works against someone calling me from Chrome Canary on macOS, the two teams are not going to come together and run that test. So we told them: you know what, we can actually automate that with Selenium and so on, and we can run all the tests for you every day; you just have to give us some money for it. So we did, and they did. We also do a bit more research. All of that is done in Singapore, and all of it is done with students. Every project we have is with an industry partner, Google, Apple and smaller ones, and involves students. This one, for example, is a way to evaluate the quality of video: is it crappy, is it good, and so on. The computer can compute some numbers, but they never really correlate with how I feel about the call. Sometimes the metric said it was good and it was absolutely bad: I couldn't hear, I couldn't read the text on the PowerPoint, and so on. Netflix had the same problem; everybody dealing with video does. They created a metric called VMAF. Unfortunately, VMAF is really, really good at evaluating the quality of movies, but not of people talking, right? Not of gamers sharing their screen. It's based on machine learning trained on a subset of their movie catalog, so it's really good at evaluating their movie catalog and really bad at anything else. So we came up with another way to evaluate video quality. It gives approximately the same score on the same images, but the advantage is that it's faster and it can run everywhere. VMAF needs the original to compare with the new frame to tell you how degraded it is; in our case, we don't need the original, so we can actually probe everywhere, on the receiving side, on the server side, and give the video a score. Next; too complicated. There are at least five major open source media servers that implement the WebRTC stack today: Janus, Jitsi, Medooze, Kurento, mediasoup. A lot of them come from Spain; all of them come from Europe. Don't ask me why they're good at that; apparently they have good schools for telecommunications and media, and those guys love math.
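That probing I mentioned is built on the standard getStats API, which every WebRTC endpoint exposes; a minimal sketch of reading loss, jitter and round-trip time from a live connection:

```javascript
// Probe a live RTCPeerConnection; monitoring services build on this.
async function probe(pc) {
  const report = await pc.getStats();
  report.forEach(stat => {
    if (stat.type === 'inbound-rtp' && stat.kind === 'video') {
      console.log('packets lost:', stat.packetsLost, 'jitter:', stat.jitter);
    }
    if (stat.type === 'candidate-pair' && stat.state === 'succeeded') {
      console.log('round-trip time (s):', stat.currentRoundTripTime);
    }
  });
}
```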
So, okay, the question that kept coming up on the mailing lists was: which one is the best? And guess what happens when someone comes to the mailing list where all of those guys are and asks which one is the best? Everyone says "mine", of course. And actually they were all very good, for different use cases. Jitsi, for example, is the best ever for video conferencing, but it cannot do anything else. Many-to-many, everybody trying to speak at the same time: they handle that very well. But for streaming, one-to-many, or for webinars and stuff like that: not that good. Janus was extremely good for IoT. NTT, the Japanese communications giant, is actually using Janus on Raspberry Pis to stream audio and video in and out. It's written in C, and it was made on purpose to run in very, very small, constrained environments. So they were really good for that; for video conferencing, not so good. So everybody had their own little niche, and nobody had a tool to actually say: okay, which one is better, which one is more scalable? Please define better; what does better even mean? In our case, better meant: which one can handle the maximum number of streams on a single server before the thing comes crashing down? And so we developed a load-testing tool that did not depend on any particular server. They all had their own test tools, but then they couldn't compare with the other guys, because the numbers were different and the process was different. So we made something server-agnostic, and then we published the results. Some of those guys are still not speaking to us, even though they recognize the results are true. This guy is still speaking to us, but he said: you know, it doesn't matter, your test was 250 people in a video conference, and that never happens. When I have 20 people in the same room trying to attend a video conference, they're already speaking over each other; it's already a pure mess. So this is not my use case; my use case is conferences of up to 100 users, and for that, I do my job. I don't have a problem with crashing down miserably beyond that, right? But some people had hoped that Jitsi could be used for bigger stuff. We killed that hope. And then different people have different needs. That's our media server. So we realized we were much worse than the others. But the advantage is that we could compare, we found a lot of bugs, we had a lot of fun, and we could actually answer the question people really have, which is: if I need to compare different media servers in the wild, as black boxes, without access to the source code, now I have a tool. And now I can ask a lot of questions about the other products, my collaborators' or my competitors'. Oh, Zoom, Zoom is kicking my ass. They're making so much money; I want to drink their milkshake. I want to take some of their customers, but that means I need a better product than theirs. How do I test? How do I know whether my product is better or worse, and where the gap is, so I know which part of my product to improve to start getting there, winning some of the clients, and bringing money to my side? So, that's the round trip. These are those guys, the same guys that are still not talking to us. Basically here you have one second, 10 seconds, 100 seconds: it means that when you reach 50 people, the latency, the lag, is already 10 seconds. Not super interactive; this is not what you want. Everybody else is more like: okay, it's a little bit harder, I'm going to take just a little more time to answer; but as long as everybody has the same profile, it doesn't matter, because the alternative is no better. I'm good. CPU footprint: more of the same.
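For flavor, a server-agnostic load test can be as simple as spawning many headless browser viewers against whatever page the server under test serves. This sketch uses puppeteer with Chrome's fake-device flags; the URL and viewer count are made up, and it illustrates the idea rather than our actual tool.

```javascript
// Spawn N headless Chrome viewers with synthetic capture devices.
const puppeteer = require('puppeteer');

async function spawnViewer(url) {
  const browser = await puppeteer.launch({
    args: [
      '--use-fake-ui-for-media-stream',     // auto-accept the camera prompt
      '--use-fake-device-for-media-stream', // synthetic camera and microphone
    ],
  });
  const page = await browser.newPage();
  await page.goto(url);
  return browser;
}

(async () => {
  const viewers = [];
  for (let i = 0; i < 100; i++) {
    viewers.push(await spawnViewer('https://example.com/watch?room=test'));
  }
  // ...then watch the server's latency and CPU curves as the count ramps up.
})();
```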
This one is interesting. Now we're not testing video conferencing anymore, we're testing streaming, so of course there are many more people watching. We went up to 250,000 concurrent viewers with our tool. So we asked Google for permission to test against YouTube first. They said yes. YouTube was perfect: it didn't go down, it handled the load, it was beautiful. Then we tested our own stuff, Millicast. Not as good as Google, but it's not doing the same thing. What was nice is that when it started to have some problems, it slowly degraded the quality and adapted nicely to the load, so that everybody was still receiving a video. They were receiving a video of lower quality than the one sent, but they were not failing. And then we tried Wowza. Wowza is the number one commercial Flash server today; they're doing $50 million a year based on that server, and they claim to support WebRTC. We said: okay, challenge accepted. So we ran exactly the same test, and look at this. The blue curve is approximately the same; this is the average. But it's deceiving, because this is what actually happened: when you start to reach the maximum capacity, some people get the stream, and some people totally fail. That's how they "adapt". So 20% adaptation means 20% of the viewers, at random, will not receive anything and fail miserably. Not exactly the same behavior, is it? Nowadays, people come with new questions. Say I have a conference: one guy in Singapore, one in Tokyo, one in Sydney, one in the US. It's a mess, because the round-trip time to go around the world is very long. So people are starting to do geographic distribution of the call. Instead of having one server that everybody connects to, a little bit like in games, the people that belong to the same call, the same conversation, are dispatched across several servers, and the servers synchronize in the middle. That was super difficult; only Cisco and another company called Vidyo, with a y, managed to do it before, and of course it was all commercial and so on. And Jitsi managed to do it for the case of bigger conferences: if you have one conference that has too many people to fit on one server, just split it and put it on two servers. Then you have twice the capacity, and suddenly you can horizontally scale your room. If you do that well, there is no limit to the number of people that can be together in the same call. That's how they do it. I won't go into the details; you can look into it if you really want to try it, because it's all open source too, all published and everything. So what I'm trying to say here is that for people interested in doing stuff, whether as a hobby or for a course or anything else, all of that is out there. You can read about all of it, you can learn by yourself, the code is there: download it, play with it, and so on. Do not be shy. Are there any teachers around? You're going to learn more from doing stuff yourself than from going to courses. Conclusion. A lot of people say WebRTC is not ready, it's not useful, and so on. Well, Facebook and 17 billion calls beg to differ. So don't worry about that. The streaming industry is actually a little bit behind the video conferencing industry, and there's a reason: the people that created WebRTC came from voice over IP, so they effectively were the video conferencing industry, Cisco, Ericsson and all those guys.
And so, of course, they adopted it right away. But the streaming industry was not around the table, so there's even more of a gap there for people that want to try something different and innovative. Right now, the streaming side is fully open. The standards have already been through the standards committees, W3C and IETF, for the web and the internet respectively. For them, WebRTC is done. The charter will be closed in one year; they're just polishing the commas and dots in the specification now, and they're really thinking about the next thing: QUIC. For those who don't know, QUIC is what is going to replace UDP and TCP as the very low-level internet transport. It has encryption, TLS, already inside, and so on. It's beautiful. Plus end-to-end encryption like Telegram, machine learning, automatic funny hats that actually sit on your head and align with your nose, and AV1, the next-generation codec. So there are a lot of new angles. And for most of the people that were involved in WebRTC, WebRTC is done, it's the past. So there is an interesting gap now where the specification is final, the implementation in the browsers is done across all the browsers, and it's up to the web developers and the native app developers to actually go and be creative with it, because it's ready. Ah, switch; close, then. This is a paper that was published only four months ago: putting WebRTC on QUIC. You know how it goes: there are new keywords, AI, big data, smart city, and suddenly people say, I'm going to be different, I'm going to take two keywords and put them together. Well, that's what they tried: they took the WebRTC keyword, they took QUIC, and they asked, can we do that? What would it look like? Do I get any benefit? And so on. So right now at the IETF, this is really the hot topic for people doing media on the internet: are we going to do that? It's supposed to be faster, it's supposed to be encrypted from the get-go, and it's supposed to allow flawless synchronization between the data and the media. Think about augmented reality: if the data is not synchronized with the media, then you move, and the augmented overlay (I take the example of the funny hat, but it can be anything else) doesn't follow right away, because the data is not synchronized. That's a problem we have today. QUIC would allow perfect synchronization at the transport level directly. So, we have a very strong rule in my company: absolutely no necktie and no dress shirt, except on Halloween. This is from back then: those were our first students, and it was Halloween, so they all came wearing neckties, and I hate them. And we were the first to take the new AV1 codec from the open source code, implement it in the Google WebRTC stack, and start playing with it, at a magnificent two frames per second right now. So it's still very slow, but the engineering questions are: how do I put a new codec into a streaming engine? Because I only had the codec; I didn't have a way to push it onto the internet and to receive it on the other side. It was exciting. We had a student work on it. How long did Paul work on that, six months? Six months on the research project to actually finish it. And we brought it to the Alliance for Open Media, and they invited us. They said: oh, that's cool, we wanted to do that, you have it done? Okay, come in, come in. You have something we want, so you can come in for free. Go ahead.
Thank you. So you've seen a little bit of the technology, and a little bit of all the projects we do. We love to do it with students; we do everything with students. We have 23 students with us this semester. It's cool. We would like to have more Singaporean students, right, and to work more with the university. So that was one of our motivations for coming here: to share the awesomeness of WebRTC, and to try to motivate you, or your friends, or anybody that is interested in doing quite deep but interesting state-of-the-art projects with us. So thank you again for having me. Is there anybody that has any questions? All right, thank you, guys.