What if I told you that I'm not going to say anything about JavaScript, and that I fooled you by writing nice words in the schedule so that you would all be lured in here? What would your reaction be? Well, that's not true. HTML5 has evolved a lot, and many of its features cannot be leveraged without JavaScript APIs. So when you're talking about HTML5, you cannot avoid talking about JavaScript, and that is what I'm attempting here. WebRTC is one of the new features in HTML5 that a lot of organizations are working on. Many startups have built products that leverage WebRTC, although browser support for it is still limited. But I think WebRTC is one of those cool HTML5 features that should be looked at right now. I'm also using a character in this slideshow who keeps questioning me with a gun pointed at me; he represents a developer's frustration, the questions that are not solved and need an answer. My next slide is exactly about that: why am I speaking about HTML5 and WebRTC at JSFoo? I have a couple of reasons, and one of them is this quote I found on the internet: any application that can be written in JavaScript will eventually be written in JavaScript. That's Atwood's Law, and that's what we have seen. You can see it on the posters: there is a robot flying over there, and I think that's also going to be attempted at this conference, a drone flying with the help of JavaScript. JavaScript is also at your back end, in image manipulation libraries; it's in WebRTC, it's in HTML5. You have file access utilities, you have packaged applications built with JavaScript. So this law is very, very true.
That's what we have found, and it also applies to HTML5 applications. The next reason is exactly that: HTML5 is meaningless without JavaScript APIs. Most HTML5 features simply cannot be used without JavaScript code. So that pretty much explains why JavaScript, at this conference. Then, this is my favorite reason for speaking about HTML5 at JSFoo. In India we have this practice of idolizing people, and when it comes to HTML5, this is the guy: Christian Heilmann, not Samuel L. Jackson. Christian Heilmann is someone I have been following for a long time. He has done a lot of R&D, and I would say he is the true evangelist of HTML5. I have been following his blog; in fact, one of my demos at the end of this slideshow is inspired by his face-to-GIF blog post. So those are the reasons why I'm speaking about HTML5 at JSFoo. Now, why HTML5? WebRTC is again one of the features of HTML5, but a lot of people question this. You have the internet, you have very good middleware, you have a lot of algorithms that process data and hand the output to the front end. So when you want to access the internet, the data is easily available, because a lot of intelligence has gone into the back end and you are ready to consume the data. But what about the front end? What about making the web an application platform? Most of the time you'll find people asking developers: what is a JavaScript developer, a UI developer, an HTML5 developer? What do you actually do? As HTML5 developers, we enable features that make your front end look awesome, that make your application more consumable and more accessible across devices and platforms, and that is the answer to the question. If you want those features, there is no other way: the back end cannot help you. The platform itself has to be evolved enough to support them.
Right now we have features like an online pen drive, where you can drag and drop folders from your desktop or OS into the browser and they get uploaded to cloud storage services, Dropbox for example. Or you have Canvas, in which you can paint. These things are possible only when the browser is powerful enough, when the user has the ability to do this stuff inside the browser. If you want to know more, I have prepared two slides for this. One is about HTML5 and its power. Think about a very common scenario: you want to communicate. WebRTC is about communication, video sharing, and collaboration. Before HTML5, to do this, or for that matter to do anything cool in the browser, what did we need to do? We needed to install applications on our desktops and mobile phones. If you want to communicate over Skype, then Skype has to be installed on your iPhone, Skype has to be installed on your Android device. Now think about this: Skype is one piece of software; for playing video and music you have other software. Don't you think there is one common piece of software available on every device that connects to the internet and has a user interface? What is that common software? "Internet Explorer." You could say browser. The browser is that common software. So why not give enough power to the browser itself, so that it can serve as a platform where we can do all this stuff without the headache of installing plugins, installing other applications, maintaining and updating them, and all of that? The power that has been given to the browser today is HTML5. It encompasses multiple features, and I have tried to sum them up in one slide. These are the eight core groups of HTML5 features, as grouped by the W3C. A lot of people think that CSS3 is a separate thing. No, it's not.
The CSS3 specifications fall under HTML5. So you have semantics, which gives better code structure with the new tags used in HTML5. Then you have CSS3 for flexible layouts; you have multimedia, the ability to play video and audio natively. Then you have 3D graphics, and WebGL is part of that group. You have device access, like geolocation, gyroscopes, and other sensors. You have performance: here, as you might know, Web Workers have arrived, with which you can spawn a new thread from your browser, hand it all the heavyweight lifting in JavaScript, and communicate with it. Then you have offline and storage, which includes the app manifest and application cache in addition to local storage and session storage. With this, for example, when you play the Angry Birds game in the browser, only the first time is the intro loaded where the pigs are stealing the eggs from the Angry Birds; the next time it is not downloaded from the internet, it is stored with your application. Then connectivity is the hottest part: it enables WebSockets and WebRTC. WebRTC is part of the connectivity suite, and that's what we are going to look at in this talk. So what is WebRTC? Web Real-Time Communication. To me, there is a very plain definition of WebRTC: it is communication inside the browser, without plugins. It's HTML5-enabled communication, but support for it is very limited. Only two browsers, Chrome and Firefox, support WebRTC fully. A lot of browsers claim to support WebRTC, but actually they just support capturing your audio and video feeds, nothing else. So the only browsers truly supporting WebRTC are Google Chrome and Firefox. You also have frameworks, but there are no polyfills for WebRTC. So if you find things on the internet calling themselves WebRTC polyfills, don't get confused: they are just frameworks.
These frameworks implement the WebRTC features in a common way, so that you don't have to care about the browser differences between Chrome and Firefox. They do not enable WebRTC on IE or other browsers. So let's get to the main question: where is JavaScript in all this? There are three main tasks that a WebRTC-enabled browser is supposed to do. First, capture a stream from your local devices. Next, share that stream with your peers, which is enabled using a peer connection. And then, share data through the same channel. The highlighted objects on the slide are the JavaScript objects that take care of each task, and I have sample code for all three of them, plus famous internet demos. This slide shows a simple code sample, the part highlighted in black. With only this much code you can access the user's camera and mic. Once the user grants permission, the stream you get is the live stream from the device; you can create a DOM element and simply attach that stream to it, which is what the slide shows. Then you have a very famous application called Webcam Toy, which is a huge hit now. A lot of people are accessing it, it's getting millions of hits, and people are making faces and animations out of their own recordings. Webcam Toy works on getUserMedia, although they use Flash for applying the effects; other than that, everything is HTML5. Then you have the peer connection. The peer connection lets the stream you captured with getUserMedia be shared peer to peer. Once you connect to the other client through some server, and establish with that client that peer-to-peer is possible on my side, what about yours? If it is possible on your side too, you can share streams with each other without the need for any additional server.
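The getUserMedia code from the slide is not in the transcript, so here is a minimal sketch of the same idea: request camera and mic, then attach the live stream to a `<video>` element. The browser call is guarded so the snippet is inert outside a browser; the vendor-prefixed form of that era is replaced here by the modern promise-based API.

```javascript
// Hedged sketch: capture camera + mic and show the live stream in the page.
var constraints = { video: true, audio: true };

function attachLocalStream(videoElement, stream) {
  // Append the live camera stream straight into a DOM <video> element.
  videoElement.srcObject = stream;
  videoElement.play();
}

if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia(constraints)
    .then(function (stream) {
      var video = document.createElement('video');
      document.body.appendChild(video);
      attachLocalStream(video, stream);
    })
    .catch(function (err) {
      // Fires when the user denies permission or no device exists.
      console.error('Permission denied or no device:', err);
    });
}
```

Until the user clicks the allow button, the promise never resolves, which is exactly the permission step described later in the security discussion.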
This transfer is very fast, and you don't have to rely on other web services for the media transfer. The peer connection takes care of that, and it is also available as a JavaScript API in your code. I have tried to build a demo, Icebreaker, which uses that peer connection API; if the internet is fast enough, we'll see what happens in it. Then you have communicating data. On the same channel on which you transfer the stream peer to peer, you transfer videos peer to peer; and if you can transfer videos, you can also transfer data. It is very, very simple, and you can see the code is also very simple: on the same channel, you just have to enable a flag, RtpDataChannel set to true, and then you can send binary as well as string-based data. The objects representing this are shown here. There is one more famous application on the internet for data sharing, ShareFest, which is also getting millions of hits, almost daily. What happens there is you open ShareFest, drag and drop files into it, and an ID is automatically generated. You share this ID with someone else, and if your friend hits the same ShareFest URL with that same ID, the one generated from your dragging and dropping a file, he will be able to download that file peer to peer, with no other server in between. This is similar to the AirDrop feature you use on MacBooks, which I think is also peer to peer. So now this is enabled in the browser: you don't need anything other than a WebRTC-supporting browser. And these are the support tables for the individual elements. As I said, when a browser claims to support WebRTC, it has to support all three: getUserMedia, RTCPeerConnection, and RTCDataChannel.
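The data-channel code from the slide is not in the transcript either; a sketch of the same flow follows. In early Chrome the channel had to be enabled with the `{ optional: [{ RtpDataChannel: true }] }` constraint mentioned above; the modern API shown here just calls `createDataChannel`. The channel label and the `encodeMessage` helper are illustrative choices, not part of any spec, and the browser call is guarded.

```javascript
// Hedged sketch: string and binary data over the same peer connection.
var channelName = 'chat';                 // any label both peers agree on
var channelOptions = { ordered: true };   // deliver messages in order

function encodeMessage(obj) {
  // Data channels carry strings or binary; JSON-encode structured data.
  return JSON.stringify(obj);
}

if (typeof RTCPeerConnection !== 'undefined') {
  var pc = new RTCPeerConnection();
  var dc = pc.createDataChannel(channelName, channelOptions);
  dc.onopen = function () {
    dc.send(encodeMessage({ hello: 'peer' }));   // string-based data
    dc.send(new Uint8Array([1, 2, 3]).buffer);   // binary data
  };
  dc.onmessage = function (e) { console.log('received:', e.data); };
}
```

A file-sharing site like ShareFest is essentially this channel plus chunking of the dropped file into binary messages.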
The only browser supporting just getUserMedia is Opera version 12, and no other browser is even capable of capturing a stream from your laptop or any other device. So right now the development is very limited, focused on Chrome and Firefox. But if you look at what's out there, some very cool stuff has already been done on the internet. When you see those applications, you will feel the urge to package them and install them as native desktop applications. This could eventually replace the need for a service like Skype or similar software; using RTCPeerConnection you can even share your screen. There are ways to do that, though it's only enabled in Chrome; I think Firefox Nightly also has the feature. We'll see some cool demos at the end of the slideshow. Now, this is the architecture I want to explain. Actually, this is not my slide; it's the common image you get when you Google "WebRTC architecture". Since every other slideshow uses this image, I put it in too, but our guy is angry about that. So I have a simpler image that explains how things work. This is the model of WebRTC. You have one client and another client, and you have to assume that both support WebRTC. What happens here is that there is always the need for a server to enable the first call of WebRTC; it's like making a phone call. If a person using Google Chrome wants to communicate with a person using Firefox, he has to rely on a server on which he will first try to discover the person using Firefox. So he makes a call to a server. That server is a common server: it is the site you are building, the application I have built. That server is also accessible by the other client, and the other client listens to it.
If one client connects to that server, the server broadcasts that somebody has connected on a particular room ID. Once that broadcast is done, any of the clients listening to the server can connect. Once this protocol is complete and you know that two clients are available, the media channel is set up between the two. From then on you don't need the server for anything other than ending the call: the data is transferred over this channel, not through the server. Otherwise, what would be the point of WebRTC? This media channel is peer to peer, which means your data throughput will be very high and the transfer latency very low; you can share HD video without any lag. So this is how it happens. What information do you send during signaling? First, the discovery. Then, what kind of video you want to share. Then you make a call offer, and the other client accepts with an answer. When the answer is accepted, you initiate the media connection. But this is the ideal case of WebRTC, the theoretical diagram, not something that is readily available. What actually happens is that you almost always have a firewall if you're connecting from your company, or from any other place that sits behind one, so you'll have a NAT-enabled router. NAT routers work in such a way that they won't simply allow you to share information peer to peer; otherwise we would be hacking into company computer systems. So how do you communicate in such a scenario? There is a technology called STUN. A STUN server is one that a client can hit to find out how it can be located from the internet. That data is then transferred to the second client, the one you want to communicate with, over the same previous channel, which can be anything, even a Gmail message.
You could literally compose an email message carrying all the JSON data: this is my IP address, this is how you can locate me on the internet, and this is the video stream I want to share. The other person could read that Gmail message and start a connection; that could also work. So in this scenario we need a STUN server, which first gives you your identification, and when it finds that you both can actually connect from behind your firewalls, it enables the same media stream. This media stream is again peer to peer, and we are back to our original situation where we can share media over a P2P stream. But again, this is one of the favorable cases. Sometimes your routers are very restrictive, shall we say, and you are behind symmetric NATs. What happens then? Suppose your STUN server, like the very famous Google STUN server that everybody uses in their WebRTC applications and demos, replies that you cannot actually communicate because you are behind such a firewall. In that case we need a TURN server. TURN stands for Traversal Using Relays around NAT. What it does is allow you to communicate using WebRTC, but it will not be peer to peer; it becomes similar to using Skype or any other plain internet service. This is the last resort, which we adopt because STUN has failed and the ideal case has failed. In this flow, STUN gives a negative reply: it still tells you your IP address and protocol, but you cannot be reached over the internet because your NAT is symmetric. In that case one more call is made, to the TURN server, and the TURN server is also connected to by the other client. The TURN server catches all the data from one client, hands it to the other, and relays the replies back to the original caller.
But this is very, very slow: it depends on an HTTP connection and the plain internet. There is no peer to peer here, so it becomes slow again, and it is the last scenario we would want to use. Now, all these terminologies are very common in any WebRTC talk, so I have tried to explain them in detail on the next slide. These are the acronyms. NAT is Network Address Translation, which is what sits behind a firewall. Suppose you have, say, ten clients who want to communicate with each other, but they are all behind a firewall. The NAT has one public address, which can be reached from other computers, including those that are themselves behind a NAT. Behind this public address, each of the ten clients has its own port number. Using the public address plus the port number, a client can still be identified on the internet. Once this identification is done, it is sent to the other client with which communication is desired, and you can talk to each other. That is NAT. Then ICE, Interactive Connectivity Establishment, is a framework. All the STUN and TURN plumbing that developers don't want to deal with, ICE takes care of: you just have to mention the STUN and TURN servers, and ICE takes it from there. ICE is a mandatory framework. According to the IETF guidelines, on the application layer ICE has to be implemented for a WebRTC connection; it is not an opt-out feature. STUN is a server; TURN is again a server. The signaling channel is a push-enabled communication channel. Why push-enabled? Because even though you could write an email in Gmail and send it to the other client, that message would not be real time, since it goes over the plain internet.
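"You just have to mention the STUN and TURN servers" looks like this in code. The Google STUN address is the well-known public one mentioned in the talk; the TURN entry and its credentials are placeholders, since a real TURN relay needs your own server. The browser call is guarded.

```javascript
// Hedged sketch: hand ICE the list of STUN/TURN servers and let it choose.
var iceConfig = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },     // well-known Google STUN server
    { urls: 'turn:turn.example.com:3478',         // hypothetical TURN relay (last resort)
      username: 'user', credential: 'secret' }
  ]
};

if (typeof RTCPeerConnection !== 'undefined') {
  var pc = new RTCPeerConnection(iceConfig);
  pc.onicecandidate = function (e) {
    // ICE emits each discovered candidate: host, server-reflexive (via STUN),
    // or relay (via TURN). You forward them over the signaling channel.
    if (e.candidate) console.log('send to peer:', e.candidate.candidate);
  };
}
```

The application never decides between STUN and TURN itself; ICE tries the candidates in order of preference, which is the "not an opt-out feature" point above.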
If even your signaling goes over a push-enabled server where broadcasts can be done, then it really becomes real time, which makes more sense. So the signaling channel is always preferred to be a push-enabled channel. You can use WebSockets on the server and in your HTML5 browsers; both Chrome and Firefox support WebRTC, and they both support WebSockets too, so that is easy. SDP is the Session Description Protocol. Whatever media you want to transfer to the other client, there has to be some metadata describing it. Take the video you want to share: if your device has an HD camera but the other client's device does not, you might want to warn that person that you are going to send HD video. So you first set your local description: I have an HD camera, 1920 by 1080 is my resolution, and this is the video I'm going to share with you. If the client accepts it, it sets that as its remote description. If it doesn't accept, it tells you: I'm not able to accept that much. And you can very well have hooks there where you decide: you can agree upon the resolution of the video, the quality or bitrate of the audio, and which path is preferred, STUN or TURN. All that information goes over the signaling channel. The signaling itself runs over the web server, just before ICE takes over, and it doesn't require WebRTC. With SDP you first describe what you want to share; once the other client accepts the request and sets the remote description, WebRTC comes into the picture and you can actually share data. That is SDP. Then I think this next diagram is easier to understand.
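The "I have an HD camera" part of the negotiation is visible at the JavaScript level in the constraints you pass to getUserMedia; the rest of the capability exchange travels inside the SDP blob generated by the peer connection. A hedged sketch, with the 1920 by 1080 figures taken from the example above and the browser call guarded:

```javascript
// Hedged sketch: ask for an HD track; the resulting SDP will describe it.
var hdConstraints = {
  audio: true,
  video: { width: { ideal: 1920 }, height: { ideal: 1080 } }
};

if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia(hdConstraints).then(function (stream) {
    // createOffer() on a peer connection carrying this stream produces the
    // local description; the remote side accepts or rejects it via
    // setRemoteDescription(), which is the hook point described above.
    console.log('captured tracks:', stream.getTracks().length);
  });
}
```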
In case the ideal connection fails, what peer A does is hit the famous Google STUN server, and the STUN server replies with the IP address of the public NAT and the port number at which you are reachable. Assuming both sides are favorable, STUN can reply successfully and you can transfer data peer to peer. But if STUN fails, this is what happens: STUN tells you that you are not reachable, that you are behind a symmetric NAT. In that case you need to relay all the data transfer and communication through a TURN server, which is again slow, which is again the old internet, so that is not favorable. Now, this is the simple exchange flow, and I think the diagram is self-explanatory. Peer A makes a call to peer B and asks for a channel. Then peer A sends the SDP offer: all that metadata, like I want to share video, I want to share audio, I want to share video at such-and-such resolution, the audio bitrate is going to be so much. When peer B accepts that, peer B creates an answer. Assuming that succeeded, the channel needs to be established. Then ICE takes over: peer A is created as an ICE candidate, and peer B is created as an ICE candidate. ICE decides whether to use STUN or TURN. First it tries the ideal case; the ideal case works, for example, when you are presenting on localhost, and because of the slow internet I think I'll have to present one of my demos on localhost. So ICE tries the ideal case, and if that fails, ICE tries STUN. And this is how you connect. That earlier diagram might be more complex, so this is a simpler one that I tried to create.
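The offer/answer exchange in the diagram can be sketched with the promise-based API. `sendViaSignaling` and `describeOffer` are placeholder helpers standing in for whatever push channel you use (WebSockets, or in theory even an email); the browser calls are guarded.

```javascript
// Hedged sketch of peer A's side of the offer/answer flow.
function sendViaSignaling(msg) {
  // Placeholder: in the real demo this is a WebSocket broadcast.
  console.log('signal:', JSON.stringify(msg));
}

function describeOffer(sdpType) {
  // Envelope peer A sends to peer B over the signaling channel.
  return { kind: 'offer', sdpType: sdpType };
}

if (typeof RTCPeerConnection !== 'undefined') {
  var pcA = new RTCPeerConnection();
  pcA.createOffer()
    .then(function (offer) {
      return pcA.setLocalDescription(offer)      // "this is what I can send"
        .then(function () { sendViaSignaling(describeOffer(offer.type)); });
    });
  // Peer B mirrors this: setRemoteDescription(offer), createAnswer(),
  // setLocalDescription(answer), then signals the answer back; ICE candidates
  // are exchanged alongside, and ICE picks the path (direct, STUN, or TURN).
}
```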
I think this one is easier to understand: peer A does all the signaling and such through an HTTP server, and once that succeeds, peer A and peer B communicate using ICE. ICE is the framework to look at. That pretty much covers the theoretical part of my talk, so here is the demo I want to showcase. I have hosted it on the internet, but I don't think we'll be able to access it, because I was trying earlier and it's very slow, so we'll try both ways. First, where is the demo? Just give me a second. Okay. So you go to this URL; let's see if you can also access it. Modulus... I think we don't... okay, we have it. What this application does is let you communicate with one of your peers while sharing your video. So, everyone, try to hit this URL with an ID, like ABC or some unique identifier, or one-two-three, and we'll see whether we can actually communicate with each other. This will only work if you're not using the internet for tweeting and other stuff, so please shut down your services so that my demo succeeds. We can't create a short URL for this, I think; it won't redirect properly, and for creating a tiny URL I again need the internet. So, yes, I think it is accessible. But then, to connect to my local server, you'd need to go through the internet, right? You want me to run it on localhost? Okay, this has loaded for me, so I'll first show you the interface and how it works. I just share my screen and I get my video. Now, if you were to communicate with me, you would hit the same URL with an identifier, say one-two-three-two. When you hit the server with the same ID, and you check the code that gets loaded here, I have a parameter in JavaScript where group equals null. Group is null because I'm the first client here. As soon as you hit the server with the same identification number, which can be any random string...
...the group becomes that number, a channel is established between you two, and you can share video with each other. If you want, I can show it with my own setup. I'll try to open a new console. Let's see if this is accessible. Yes. Come on, load, load, load. Okay, anyway, you can try to access my local IP and see if this works. Just try to hit 192.168.130.244 and then the path, the JSFoo demo. And I'm hitting the same thing using my own localhost, in this browser, which is not incognito: localhost, port 80, the JSFoo demo. Yes. Now let's see if we can actually see each other's video. This one is sharing here; where is the other guy? Okay, I hate Windows. Where is my demo, man? Sorry for taking this time. No, I think this would be fine; the demo is up and running, the only problem is the network. This browser is the advanced one, the dev channel, but even the stable one supports WebRTC and all this stuff, so I don't think the problem is there. So this actually works. What you can do is go home and check it on the internet; I have shared the URL, so you can deploy the application using Node on your own machine, and I have tried to deploy it on Modulus. Modulus is the only cloud host that supports WebSockets, and WebSockets is what I'm using for the push-enabled broadcast that some client has joined. So when you go to that Modulus URL, try to use a unique identifier. Now, WebRTC has limitations: when I tried this with three or four connections, I was not able to see everyone's video, so I restricted it to two persons. I really want to show this demo. I don't know how to... okay, this doesn't work; it needs the internet, and the internet is down. I think that would be my explanation, but okay, that is it. I also have some demos in the slideshow itself. So, on a final word: this demo works, and you can check the URL I have shared from your home internet, which is faster. Let's assume that.
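The room logic just described, where the first client to hit an ID creates the room with `group` as null and the second one joins and triggers the peer connection, can be sketched as pure logic. The `joinRoom` helper and the two-person cap are reconstructions of the demo's behavior, not its actual code; the WebSocket join message is guarded so it only runs in a browser, and the server URL is a placeholder.

```javascript
// Hedged sketch of the demo's room-joining logic (max two peers per room).
var rooms = {};   // roomId -> number of clients currently in the room

function joinRoom(roomId) {
  var existing = rooms[roomId] || 0;
  if (existing >= 2) return { ok: false, reason: 'room full' };
  rooms[roomId] = existing + 1;
  // First client creates the room (group stays null); the second joins it,
  // which is the signal to start the WebRTC offer/answer exchange.
  return { ok: true, group: existing === 0 ? null : roomId };
}

if (typeof window !== 'undefined' && typeof WebSocket !== 'undefined') {
  var ws = new WebSocket('ws://localhost:8080');   // hypothetical signaling server
  ws.onopen = function () {
    ws.send(JSON.stringify({ type: 'join', room: '123' }));
  };
}
```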
And my slideshow is... okay. So this is the URL you would go to on Modulus, the application you have to access from your other, faster internet, and at the end of the string you try to append something. The repository for my Node code is open on my GitHub. What I'm trying to achieve is this: whenever a client calls the application at the root, I redirect to a new path name. That path name is generated dynamically; as the ID you can use a random string, or maybe the current date and time. You then share that path name only with the person you want to communicate with, because the path name serves as the point of access: as long as that person allows the connection, you both will be able to communicate with each other. That is my restriction. I have not written that code yet; for now, every time you go there, you have to manually type a path name of your choosing. But I want to give users the ability to write their own, for example "I want to create a JSFoo demo channel", so that everyone holding the JSFoo demo access token can hit the same URL and communicate with each other using WebRTC. That is the idea of this product. It's still in development; it was an attempt at a Tata Communications collaboration hackathon, and it did not win any prize, because the demos didn't work there either. Anyway, I have some more demos for you on WebRTC. At the end of my slideshow you'll see there is a lot of R&D going on on the internet right now. Ericsson Labs, for example, is doing very cool stuff with WebRTC; they have really stretched it, to 3D communication and whatnot. Then there are these cool extras, which are also famous on the internet. This is the demo I was talking about, which I attempted from Christian Heilmann's blog post. Whether it works or not, again, depends on the internet.
What it does is: you hit the allow button and it captures a screenshot anyway. And then there is one more demo here, a GIF recorder. You can actually record a GIF animation from the playing video: embed the frames into a canvas, then export them into an image tag, and save that image as a proper GIF file. You can go and check the source code of this. It's very laggy. So here is a GIF being created. I think the animation is not great; I'm using only five frames, and there is a reason for that. If you use WebSockets and Web Workers, you can hand the work of encoding all the binary data into ASCII to a Web Worker, get the fully compiled string back from the worker, and embed it as the src of the image object. But I'm doing all that encoding in the browser thread itself. The face-to-GIF demo from Christian Heilmann's blog post was very complex for me to understand, so I tried to keep it simple; his demo is even more awesome, and I tried to mimic it. JavaScript has these btoa and atob functions in your browser; I think they are again only available in Chrome and Firefox. What btoa does is encode your binary data into an ASCII-encoded base64 URI, and an ASCII-encoded base64 URI always represents data; we have been using them for inline CSS icons in style sheets. I'm doing the same thing here: I encode that data and set it on an image tag. You can check the source of this too. These are the files I'm using for GIF encoding, base64, and the LZW transform. The main code is only this much: getting user media, setting the canvas width and height, playing the user's video in the video element, and then the recording-GIF parameters that I'm passing. Then, on the click of the button, when I start recording, I have a timer interval.
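The btoa encoding step described above, turning binary data into a base64 data URI that an image tag can display, can be shown in a couple of lines. The `toDataUri` helper is illustrative; `'GIF89a'` is just the six-byte GIF header, standing in for real encoder output.

```javascript
// Hedged sketch: binary string -> base64 data URI, as used for the GIF frames.
function toDataUri(mimeType, binaryString) {
  // btoa maps a binary string to its base64 (ASCII-safe) representation.
  return 'data:' + mimeType + ';base64,' + btoa(binaryString);
}

// A real GIF would have the full encoded frame data after the header bytes.
var uri = toDataUri('image/gif', 'GIF89a');
console.log(uri);
// In the browser you would then simply set:  img.src = uri;
```

atob is the inverse, decoding base64 back to the binary string, which is why the pair is handy for moving encoder output between a Web Worker and the page.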
That interval uses the GIF encoder that I downloaded, adding one frame on every tick, up to whatever frame limit you want. If you change it to 50, the browser will actually hang, and the size of your GIF will be 5 or 10 MB, I think even more, so I changed it to five. The frame rate is the time interval at which you capture frames; for example, I have an animation at the end of this demo, and I'm using a 300-millisecond interval for it. That is the gap you want between two images being encoded. And this is the library I'm using. The rest of the code is pretty much plain JavaScript. There was an older encoding routine, but then I found that the btoa function is readily available in the browser, so I simply use that and embed the result. So this is how it works, and I think it pretty much sums up my slideshow. What do I do now? The real idea behind speaking about WebRTC at JSFoo was to make developers enthusiastic about HTML5, and to stop thinking of HTML5 as something different from JavaScript. Most HTML5 features are only enabled through JavaScript, and there is a lot of intelligence you can put into an HTML5 application that lets you build some awesome, cool demo stuff. It sets a path where you no longer want to install an application for every single thing you do on the internet; most of what you do on the internet is little more than sharing data, and it can very well be enabled by the same platform you are already using, which is HTML5, and then comes JavaScript, because most of those features can only be used through JavaScript. So I think this completes my demo. Thank you. Any questions? I think this allows our guy to put his gun back. Hi, a couple of questions. Okay. Here, here, first one. Okay, a couple of questions.
So first thing: is there a way to dynamically adjust the frame rate? Because, you know, most streaming solutions adapt to the network connectivity over time, right? At the beginning of the video the connection might be really good, but over time it may deteriorate. Is there a way to adjust for that?

Okay. So, frame rate: since you are going to play that video in the video tag, the frame rate is usually controlled by the browser. I don't think WebRTC has an option to control the frame rate directly. The frame rate you want to deliver to the client can be negotiated using SDP, the Session Description Protocol.

Wait — controlled dynamically. It should not stay the same throughout the whole session.

I don't think you can control it dynamically. There is only a one-time SDP exchange that goes from client A to client B, and once that happens, the session has been established and both sides know what data they are sharing. If you change it dynamically, you are effectively creating a new connection, a new request for a new video-sharing session.

Isn't that a drawback?

Obviously it's not ideal, but I have not seen features like dynamically changing the frame rate even in Skype, so that is something people have not looked at yet. Microsoft does, though. Okay, fine, Microsoft is doing good work. Actually, I was a Google fanboy, but now I see they are stretching the web too much, and I have switched back to loving Mozilla, because Mozilla is really doing some cool work in HTML5. There is something called Smooth Streaming from Microsoft. Smooth Streaming — yeah, okay, fine.

Second question: obviously there will be some kind of best practices to prevent man-in-the-middle kinds of attacks? Yes — and not only man-in-the-middle. Actually, all the security features have been standardized.
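For what it's worth, the Media Capture spec as it later evolved does let you at least *request* a frame rate up front via getUserMedia constraints. This constraint syntax postdates the talk, so treat it as an assumption about the modern API rather than what was available then:

```javascript
// Request ~15 fps, capped at 30; the browser picks what it can grant.
const constraints = {
  video: { frameRate: { ideal: 15, max: 30 } },
  audio: false,
};

// Guarded so the sketch is inert outside a browser.
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia(constraints)
    .then((stream) => {
      // The browser reports what it actually granted:
      console.log(stream.getVideoTracks()[0].getSettings().frameRate);
    })
    .catch((err) => console.error("getUserMedia failed:", err));
}
```

Later spec additions (MediaStreamTrack.applyConstraints) even allow adjusting such settings mid-session without a full renegotiation, which partly addresses the questioner's concern.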
So, on the third layer of the protocol stack, the IETF has made it mandatory that every incoming connection has to be secure. Then on the application layer, it is mandatory that the user has to allow access: unless the user clicks the allow button, you can never capture the audio stream. Then the peer-to-peer data being transferred always has to be encrypted. And then the SDP you want to exchange — that is also supposed to be encrypted. Only if these four requirements are met is it allowed; otherwise it is not standard.

In production? In production I don't know, but in products I do, because a lot of websites have come up that give you this peer-to-peer service. For example, you can look at the vLine suite of products, which a lot of companies are using. Even Amazon is using WebRTC. They were using different channels before, with their own TURN and STUN servers, but now they are switching over to WebRTC. Microsoft is considering a move to WebRTC as well.

Hi, hello. Just really quickly: currently, for cross-browser support, I'm using Flash for certain things. We were recording audio and transmitting things between different clients, so we're exploring WebRTC, and we looked at the demos. A related question, similar to what you mentioned: with recording audio, it appears to still be a challenge to downsample it in the browser itself, so you still end up sending out files — data, I mean — which is fairly large. If you could share some examples of where people are using it live in production at this point, that would be useful.

I can share the talky.io URL with you. I don't know if I'll be able to open it, because the internet is slow. If you can open that URL, you'll find they offer most of the feature set of an application you would normally install on your system. You have people talking; you can actually record their session, whatever has happened.
You can downgrade and upgrade their volumes. You can find out who is speaking: whenever somebody speaks, they detect it using the Web Audio API. So that demo is on the internet; you can very well open talky.io. It has everything — one full-fledged product. Provided you are able to open it.

Here. Yeah. So I have a very simple use case: I want to access the device webcam, capture the video, and upload it to the server. As far as I know, with the latest builds of Chrome and Firefox, you can only access the device; there is no standardization of the content type of the captured video, based on which you could upload the data to the server. So I just want to know if the MediaStream APIs and all the related pieces have evolved enough to support this capability as of today.

So the server side is not really what is being looked at here. The internet is full of getUserMedia demos, but getUserMedia is not the main feature, you could say; the real purpose of WebRTC is communication. There is no such feature enabled in the browser — well, Chrome has a feature where you can set a flag in getUserMedia and capture the screen, so these things exist — but people usually want to avoid the server. The uploading part is not what WebRTC is aimed at; the communication part is. Once a communication channel is established, people want to share large amounts of data with each other, and they don't want to store it. So storage has not arrived yet, but there was a demo from a Google engineer who tried to record an MP4 file from getUserMedia and save it. You would need a server for live streaming; that is more than just a live peer-to-peer call — if at all your STUN is successful.
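As a side note, the questioner's record-and-upload use case was later addressed by the MediaRecorder API, which arrived after this talk. A hedged sketch of that idea (the /upload endpoint is hypothetical):

```javascript
// MediaRecorder makes the captured content type explicit, solving the
// "no standardized content type" complaint from the question.
const options = { mimeType: "video/webm" };

// Guarded so the sketch is inert outside a browser.
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then((stream) => {
      const recorder = new MediaRecorder(stream, options);
      const chunks = [];
      recorder.ondataavailable = (e) => chunks.push(e.data);
      recorder.onstop = () => {
        const blob = new Blob(chunks, { type: options.mimeType });
        const form = new FormData();
        form.append("video", blob, "capture.webm");
        fetch("/upload", { method: "POST", body: form }); // hypothetical endpoint
      };
      recorder.start();
      setTimeout(() => recorder.stop(), 5000); // record five seconds
    })
    .catch((err) => console.error("getUserMedia failed:", err));
}
```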
For example, the last demo you showed — I consider that more of a workaround or a hack, where you just capture the frames, without any audio, and play them back, right? But if I want a real live-streaming application, I need to capture the data and upload it to a server from where I can broadcast it to the different clients.

I got your question. The main reason we are using WebRTC is to avoid exactly what you are describing. That is something we are not looking at. We are looking at real capture of the feed from your application, from your hardware, and transferring it live, peer-to-peer. That is my demo; it actually works, I can show you. It's not capturing individual frames; the camera is actually capturing the video and transferring it right away. When the channel is established, you will be able to see the real-time movements of the person on the other side — provided the demo works.

Without a STUN server is what I'm already doing on localhost. That also doesn't seem to be working. So... Hello. Yeah, you need one server for establishing the communication. For example, I cannot share data from localhost to localhost, but I can share data from localhost to my own IP address — which is again localhost, but the browser needs to see them as different IP addresses. A STUN server will not be involved, but at least a signaling channel will be, through which you make a request to the client: you want to make a call offer. That will always need a server.

One question. Here. When I'm using a STUN server provided by a third party like Google, is the switching from STUN to TURN, based on network restrictions, handled by the STUN server, or do we have to do it ourselves?
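The signaling step the answer describes — the call offer that always needs a server — looks roughly like this. sendToPeer() is a hypothetical stand-in for whatever transport carries the messages (a WebSocket, for instance); the promise-based RTCPeerConnection API shown here is the modern form, newer than the callback style used at the time of the talk.

```javascript
// Create an offer, set it locally, and push it (plus ICE candidates)
// through the signaling channel to the other peer.
function startCall(pc, sendToPeer) {
  pc.onicecandidate = (e) => {
    if (e.candidate) sendToPeer({ type: "candidate", candidate: e.candidate });
  };
  return pc.createOffer()
    .then((offer) => pc.setLocalDescription(offer))
    .then(() => sendToPeer({ type: "offer", sdp: pc.localDescription }));
}

// Guarded so the sketch is inert outside a browser.
if (typeof RTCPeerConnection !== "undefined") {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });
  startCall(pc, (msg) => console.log("would signal:", msg.type));
}
```

The receiving side would answer with setRemoteDescription plus createAnswer, sent back over the same channel.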
The reason we use Google's STUN server is that it is able to identify you on the network, behind your firewall. Most of the time it works, but if your firewall is very restrictive, you can set up your own STUN server as well. So that is the case.

Also, one more thing: when we are sharing the screen, Chrome shows that "stop sharing your screen" window and all, which is outside the browser. So is there a Chrome API to modify that? Chrome provides a lot of APIs, but we shouldn't interfere with the browser's running JavaScript thread. Suppose you did that — stopped a frame, stopped something from being shared — that channel would immediately be interrupted. So even though the JavaScript to end the call was never invoked properly, the other person would just see a blank screen.

Hello. Okay. The way WebRTC is implemented, it looks like there is some kind of service being run by the browser, available on a port. Doesn't that open your system to various security attacks? Has there been anything like that so far, or what are the possibilities?

People have tried to make demos showing that WebRTC security is vulnerable, but those demos are restricted to the front end — only getUserMedia, where they trick the user into clicking on something else and then try to share the video. On the network layer, where the data is being shared, the IETF has proposed standards, and the browsers implement those standards. Obviously a sufficiently intelligent hacker can attack that, but there are security levels built into those standards. It usually runs in a sandbox, so the browser doesn't give any other application access to that sandbox, and inside the sandbox things follow the protocol standardized by the IETF.

We have a question upstairs. No, he asked the same question. Do we have someone there? We have time for just one last question. Okay, last question. Sorry, already called.
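On the STUN-to-TURN question: the fallback is handled by the browser's ICE agent, not by the STUN server. You list both kinds of servers in the peer connection configuration, and ICE tries direct and STUN-derived candidates first, falling back to a TURN relay only when those fail. A sketch — the TURN URL and credentials below are hypothetical placeholders:

```javascript
// List both a STUN and a TURN server; the ICE agent picks the path.
const rtcConfig = {
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" },  // public Google STUN
    {
      urls: "turn:turn.example.com:3478",      // your own relay (hypothetical)
      username: "demo-user",
      credential: "demo-pass",
    },
  ],
};

// Guarded so the sketch is inert outside a browser.
if (typeof RTCPeerConnection !== "undefined") {
  const pc = new RTCPeerConnection(rtcConfig);
  // The candidate type reveals which path was chosen: "host" (direct),
  // "srflx" (via STUN), or "relay" (via TURN).
  pc.onicecandidate = (e) => e.candidate && console.log(e.candidate.type);
}
```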
Feel free to talk to him later. There are plenty of links; I'm sure there's material for everyone.

Hello. I just wanted to know whether WebRTC has any protocol for screen sharing. Can it get a video feed from the screen and potentially enable screen sharing, or is it limited to webcams?

Screen sharing, as of now — I think Mozilla is trying to do it in the Nightly builds, but screen sharing is very much limited to Google Chrome, where, while passing the constraints to getUserMedia, you can pass a flag to use the screen. If it is true, a Google Chrome build that supports screen sharing will try to capture your screen. That's because when you install Google Chrome, it installs other capabilities on your system as well, with which it has a lot of access to your machine, so it can actually share the screen. Before WebRTC came up, Google shipped an extension called Chrome Remote Desktop; that again only worked in Chrome, so I think they are reusing the same features here.

Thanks. Okay. Just a recommendation of — what? VLC. Yes, it is there. I think VLC is around there. Yes. Everybody give a big round of applause for Omshankar. That's a lot of stuff. Brilliant. Well done. The batteries are about to die.
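For reference, the Chrome-only screen-capture flag mentioned in that last answer looked roughly like the non-standard constraint below; it required a special Chrome build or command-line flag at the time. The modern, standardized replacement is getDisplayMedia(), which postdates the talk.

```javascript
// Legacy, Chrome-only constraint for capturing the screen via
// getUserMedia (non-standard; shown here as a historical sketch).
const screenConstraints = {
  video: { mandatory: { chromeMediaSource: "screen" } },
  audio: false,
};

// Guarded so the sketch is inert outside a browser. Today you would
// simply use the standardized getDisplayMedia() instead:
if (typeof navigator !== "undefined" && navigator.mediaDevices &&
    navigator.mediaDevices.getDisplayMedia) {
  navigator.mediaDevices.getDisplayMedia({ video: true })
    .then((stream) => { document.querySelector("video").srcObject = stream; })
    .catch((err) => console.error("screen capture denied:", err));
}
```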