Today's topic is not just IoT — it's about how you do media streaming in IoT over WebRTC. Before that, how many of you know what WebRTC is? Two, three, four, okay, great, five, six, cool. So WebRTC is all about real-time communication on the web. Initially, as we know, web-based VoIP communication meant Skype, Facebook calls, Viber and things like that; then people started to see the browser itself as a platform with real potential, and it became standardized. Now it's a standard technique — supported by Google, Mozilla and Opera — for doing video calls over the internet. Using WebRTC, you can just open a web page and make a call to the other person without any installations, plugins, downloads — no Flash, no Silverlight, nothing. It's simple, it's in the browser, you can use it from your phone, tablet, laptop, anything: just peer-to-peer communication.

A bit about me: I have been in the telecom industry for the past four years; this is the book I've written, this is the blog I maintain, and that's it.

We'll be discussing four things today. First, how to control machines remotely from a web page — how you can switch on the lights, fans and so on. That's the first thing one needs to do when entering IoT. Second, how to transmit media from a remote machine to the web, so that we can access it anywhere. Third, controlling a robot using all of this — viewing where the robot is going and navigating it based on the stream feed we're getting. Fourth, how to take this feed and broadcast it publicly, so that anybody can view it and control the robot.
If we don't stream it properly and just use one server, it will end up under heavy load and simply crash, so we have to use live-streaming mechanisms and CDNs for that. So it covers the whole thing end to end: IoT, robot, streaming and all. Those are the four things.

So this is plain IoT. I do this myself, and we see setups like this all the time. For the first part — basic IoT, controlling a light or fan from a web page — these are the components required. I have a Raspberry Pi Model B, an SD card, and for connectivity I just use my 3G dongle. I could not bring the motor, but the LEDs are there, and I control them from the web page. This is the Raspberry Pi — you know about the Raspberry Pi, right? Yeah, okay. This is a simple switch. Basically we just activate a GPIO pin from a PHP script hosted on a server. And this is a relay — we obviously need a relay to drive real machines. This is what we made. We could not bring the setup here, but I've uploaded a rough demo to YouTube; you can see it here. This is the web page, and this is how we control it: from the web page it goes to the server, from the server it goes to the Pi, which is also connected to the internet through the dongle — you can see the dongle connected below, with the cover off — and the motor and the lights are connected to it.

Okay. Now for the media streaming: how can you stream from a remote machine to any web page over the internet? This is what we are going to use — WebRTC. No plugins, no installations, no downloads; simple communication. This slide shows what WebRTC contains, although we are not going into the details. WebRTC has all these components: audio and video management, peer-connection management, and basic image enhancement and controls for audio/video stabilization.
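The web page → server → Pi control path above can be sketched in a few lines. The talk uses a PHP script on the Pi; this is an illustrative Python equivalent with a stubbed GPIO layer, and the pin numbers and appliance names are made up for the example, not taken from the talk.

```python
# Minimal sketch of the web-page -> server -> Pi control path.
# The talk uses a PHP script; this Python version stubs out the GPIO
# layer so the logic runs anywhere. Pin numbers are illustrative.

class GpioStub:
    """Stands in for a real GPIO library (e.g. RPi.GPIO) on the Pi."""
    def __init__(self):
        self.pins = {}
    def output(self, pin, level):
        # On real hardware this would drive the relay input pin.
        self.pins[pin] = level

# Hypothetical mapping of appliances to the GPIO pins feeding the relay.
APPLIANCE_PINS = {"light": 17, "fan": 18}

def handle_command(gpio, appliance, state):
    """Translate a web command like ('light', 'on') into a GPIO write."""
    if appliance not in APPLIANCE_PINS:
        raise ValueError("unknown appliance: %s" % appliance)
    gpio.output(APPLIANCE_PINS[appliance], 1 if state == "on" else 0)

gpio = GpioStub()
handle_command(gpio, "light", "on")   # relay on pin 17 closes
handle_command(gpio, "fan", "off")    # relay on pin 18 stays open
print(gpio.pins)  # -> {17: 1, 18: 0}
```

In the real setup the `handle_command` call would sit behind the PHP script that receives the Ajax request from the web page.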
It has syncing — all of that is already there. But here is the problem: the codecs supported by WebRTC are not the codecs supported by the majority of machines. Most of the machines out there — the webcams, the streaming servers — support H.264 and the generic older protocols, but WebRTC came out with VP8 and Opus. So what do we do? Before that, say we have a WebRTC endpoint on the machine and a WebRTC endpoint with us. The Pi has an operating system, so install Iceweasel on it — it already comes with WebRTC support. That is connected to the web, and so is my machine, so I am able to see what is being transmitted from the Raspberry Pi through a simple webcam — a normal Logitech webcam. It turns out to be something like this: this is the web page I am seeing, this is the Raspberry Pi running the Iceweasel browser, and it's a simple WebRTC peer-to-peer call. I am able to see it because I am also on WebRTC. Let's not consider the transcoding right now; say it is just WebRTC to WebRTC.

Then what can we do in such a situation? We could have all these things here. For example, this is movement detection: these buttons you see light up where the movement is. How does it detect? There is one level of abstraction — it works on a differential pattern between frames. So with just a media stream coming from a Raspberry Pi, you can do multiple things. I am just taking video here, but you can also take audio; you can do face detection and movement detection. You could have surveillance cameras that detect motion and trigger your web page from any part of the world. I have not implemented all of them — I have only done motion detection — but for face recognition there are a number of APIs, and for face tracking you have headtrackr.js.
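The "differential pattern" motion detection described above can be sketched as comparing consecutive frames block by block and flagging blocks whose average pixel change crosses a threshold. Frames are plain lists of grayscale values here for illustration; a real implementation would pull canvas pixel data from the WebRTC video element, and the block size and threshold are assumptions.

```python
# Sketch of differential-pattern motion detection: compare consecutive
# grayscale frames and report the blocks where the average pixel
# difference exceeds a threshold. Block size and threshold are
# illustrative values, not from the talk.

def motion_regions(prev_frame, curr_frame, block=4, threshold=30):
    """Return indices of blocks where the mean pixel change > threshold."""
    moved = []
    for start in range(0, len(curr_frame), block):
        diffs = [abs(c - p) for c, p in
                 zip(curr_frame[start:start + block],
                     prev_frame[start:start + block])]
        if sum(diffs) / len(diffs) > threshold:
            moved.append(start // block)
    return moved

prev = [10, 10, 10, 10, 200, 200, 200, 200]
curr = [12, 11, 10, 10, 40, 45, 50, 60]   # large change in second block
print(motion_regions(prev, curr))          # -> [1]
```

Each flagged block index would map to one of the on-screen buttons that light up where the movement is.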
For audio recognition you have multiple options, such as the Web Audio API, which is being standardized across browsers. Then you have Three.js and WebGL — these are mostly about motion tracking and augmented reality.

Okay, this is how a WebRTC deployment looks. Those who know WebRTC know that it is not as simple as it looks; it requires certain pieces, as you can see — TURN and STUN. STUN is a subset of TURN, so let's just consider TURN. TURN is all about firewall traversal: the video packets by themselves cannot traverse your firewalls and enterprise policies, so you need a TURN server for that purpose. There is an open-source TURN server called coturn, implementing RFC 5766. These are all open-source products, by the way — nothing proprietary, nothing closed-source; all the source code is out there.

Okay, now the robot-navigation part — we are not at the codec conversion yet. This is about WebRTC-based robot control. Suppose we are getting the media stream and we can switch the light bulbs and fans; how do we combine all of that to make a robot? The essential thing here is the ECU — the electronic control unit. Almost all robots have one, in various forms. I just use an Arduino because I'm not really that deep into electronics. So you use an Arduino chip to control the DC motors, and those drive the rear wheels; the front wheels just go wherever the floor takes them. The Raspberry Pi has the webcam connected, and the battery is an external power bank — it's a lightweight robot, so it doesn't need much. And the drive protocol is simple: from the web page I send two pin values, and 1 and 1 means go forward.
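The two-pin drive protocol just described can be sketched as a small lookup from a web-page button to the pair of motor pin values sent to the Arduino. Only the (1, 1) = forward case comes from the talk; the other combinations are illustrative guesses at a typical differential drive.

```python
# The drive protocol: the web page sends two pin values to the Arduino,
# one per rear motor. (1, 1) -> forward is from the talk; the other
# entries are assumed, typical differential-drive combinations.

DRIVE_COMMANDS = {
    "forward": (1, 1),   # both motors on -> straight ahead (from the talk)
    "left":    (0, 1),   # right motor only -> turn left (assumed)
    "right":   (1, 0),   # left motor only -> turn right (assumed)
    "stop":    (0, 0),   # both motors off (assumed)
}

def pins_for(direction):
    """Map a web-page button press to the two motor pin values."""
    return DRIVE_COMMANDS[direction]

print(pins_for("forward"))  # -> (1, 1)
```

In the real robot these two values travel the same web page → server → Pi path as the light/fan commands, and the Pi forwards them to the Arduino driving the DC motors.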
I can just show you the next page to give a better idea. Yeah, this is how it works: it navigates, I am controlling the machine remotely, and I am also able to see the live WebRTC stream. Okay, so that part is there.

Now, the process I just described is feasible, but it has a very, very serious limitation, which is WebRTC itself. WebRTC is a budding technology, and it is not supported on all browsers; on many browsers it has no native support at all. So what we do is broadcast it — we transcode it. Suppose the load we're expecting on the media stream is ten viewers for every one publisher: for every webcam that is publishing, there are ten viewers. Then a single WebRTC stream, given decent bandwidth of say 3–4 Mbps, can hold around ten simultaneous peers to broadcast to. So you could just put the signaling on Amazon and let it broadcast. By the way, WebRTC is peer-to-peer, so your own upload bandwidth has to be high enough; the centralized Amazon piece I just mentioned is only for signaling — the control — and doesn't participate in the media streaming at all. Yeah, that will come at the end.

By broadcasting, I just mean multiple viewers for a single publisher. It could be a channel, it could be my own algorithm, it could be through relays, as you can see here. I tried all four alternatives, and right now I go with the third one shown here. Now suppose we have more than ten — say for every one publisher there are 20 or 30 viewers. Then one machine cannot stream to so many peers; it cannot hold that many simultaneous peer connections — it will just hang and crash. So what we do is this: we have our publisher, whose stream comes in from the source.
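The relay scheme described next can be sketched with some simple arithmetic: the publisher feeds a few relays, each relay re-publishes to a few more, and viewers attach at the leaves. The fan-out of three matches the talk's description; the level counts and viewers-per-leaf here are illustrative.

```python
# Capacity sketch of the relay fan-out: fanout^levels leaf relays,
# each streaming to viewers_per_leaf simultaneous WebRTC peers.
# Fan-out of 3 follows the talk; other numbers are illustrative.

def reachable_viewers(fanout, levels, viewers_per_leaf):
    """Total viewers a relay tree of the given depth can serve."""
    return (fanout ** levels) * viewers_per_leaf

# One publisher -> 3 relays -> 9 relays, each serving 10 viewers:
print(reachable_viewers(3, 2, 10))  # -> 90
```

This also shows the limitation the talk points out: each extra level multiplies reach only geometrically, and every relay is still capped by its own upload bandwidth, so very large audiences need a CDN instead.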
Then there are the relay points: one relay point takes the stream and puts it on three different channels; those three channels go to three different relays, which put it on three more channels each. Like a chain reaction, it just spreads out, so we can reach many more people. But even this has its limits — it cannot stream to arbitrarily many people; per relay you might get 20 or 30 peers at most, even with really good bandwidth. That's it. If you are expecting a load of, say, 100,000 or 200,000, then this is the way to go — this is what I've been telling you about from the start.

So this is what we do now. We have a WebRTC stream coming in from the webcam, through the Raspberry Pi, to my own browser, which is Firefox. Now that I have the stream from the remote machine, how I handle it from here is the rest of the story. What I have done is use JS scripts to grab the frames, put them into the WebP image format, compress them, slice them and convert them to MP4. This happens chunk by chunk, and those chunks I send to the Wowza streaming server. Wowza, if you know it, has a real-time transcoding engine that transcodes to multiple formats — FLV for Flash, M3U8 (HLS) for iOS, RTMP, RTSP — and handles it from there. The Wowza streams we hand to a CDN network, and the CDN, as you know, handles the load: however big the load, the CDN makes sure every end user gets the stream on time without lag — it creates multiple server instances, keeps caches and all that. And that is how we can serve non-WebRTC browsers, smart TVs — we could put a display unit on a fridge and transmit there; we can hand the stream coming from the remote machine to any display device.
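The browser-side pipeline above — grab frames, batch them into chunks, ship each chunk to the streaming server — can be sketched as follows. The WebP/MP4 encoding is abstracted away, and the function names and chunk format are illustrative, not from the talk's actual JS code.

```python
# Sketch of the chunk-by-chunk upload pipeline: captured frames are
# grouped into fixed-size chunks and sent to the streaming server in
# order. Encoding (WebP slices -> MP4) is abstracted; names are made up.

def chunk_frames(frames, chunk_size):
    """Group captured frames into ordered chunks for upload."""
    return [frames[i:i + chunk_size]
            for i in range(0, len(frames), chunk_size)]

def upload(chunks, send):
    """Send each chunk with a sequence number (send = e.g. an HTTP POST)."""
    for seq, chunk in enumerate(chunks):
        send({"seq": seq, "frames": chunk})

sent = []
upload(chunk_frames(["f1", "f2", "f3", "f4", "f5"], 2), sent.append)
print([m["seq"] for m in sent])  # -> [0, 1, 2]
```

The sequence numbers matter because the server must reassemble chunks in capture order before transcoding; a lost or reordered chunk shows up as a stutter in the output stream.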
This is a small POC — it's not very visible, but it shows how the Whammy recorder takes the stream and creates the frames. The key challenge is achieving a frame rate decent enough while keeping the quality: with everything I have tried, across multiple approaches, I cannot reach a decent frame rate, and when the frame rate is there, the quality isn't. That is the challenge right now, but there is a solution: WebRTC, in a few months I guess, will have H.264 — that is what the draft documents declare — and when that comes, WebRTC will be able to do great things. And this is pretty much the end.

So, as you can see, it is very cost-effective. I have used basic products; there is no special technology, software or hardware that I required to do everything I have shown. It can be expanded into multiple domains — machine learning, augmented reality, sensor-to-server, anything you want to do with IoT and media streams. And that is pretty much it.

[Q&A] Yes, VLC does that too. I am not using VLC because I wanted to explore the potential of WebRTC. On the Raspberry Pi we also have products like Motion and GStreamer — software meant for the Raspberry Pi — and FFmpeg is a very strong library that captures and sends, but these have their limitations. I specifically wanted WebRTC's capabilities: WebRTC is peer-to-peer, it has data channels, and the audio/video streams are separate. If we used GStreamer or FFmpeg, we could do everything else, but they will not do WebRTC.

For controlling lights and fans, we just need to send text from the web page to the PHP script running on the Pi — WebRTC is only for the media stream. So right now I am not controlling the lights and fans using WebRTC's data channel, although I could do that for the control of the machines, the GPIO pins.
It is just a simple PHP script with Ajax — that is how the command transfers — and there is a very nice GPIO library, WiringPi (from drogon.net), which I use from the PHP program: I write to the pins, the pins drive the relay, and the relay switches the load.

The Wowza media server, yeah — Wowza, W-O-W-Z-A. Wowza is paid; I think there is a free trial, but I use the paid one, so it takes some amount of investment to live stream. For that part, if you are not particular about WebRTC and you do not have a WebRTC-specific use case, you could also use other libraries — VLC, GStreamer, yeah. So Wowza does multi-format transcoding in real time, and it also does adaptive streaming — for example, it will stream at 144p, 240p, 360p, three or four renditions. So all kinds of players can connect to Wowza, and it has RTSP, RTMP, HTTP. Suppose there is a browser — suppose I make a browser called Altenai Mini. Of the transcoded streams coming from Wowza, at least one ought to play in my browser even if I do not have WebRTC support. Only the browsers that are really on the edge — for example Firefox, Opera — have WebRTC support as of now. Internet Explorer does not have it, Safari does not have it, and multiple little-known mini browsers do not have it. That is where this process comes into the picture. And suppose I want the stream to play on my mobile without using a browser, just the way an RTMP stream plays — for that purpose, the transcoding of the audio/video codecs is really, really required.
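The adaptive-streaming idea mentioned above can be sketched as a bitrate ladder: the server keeps several renditions, and the player picks the best one its measured bandwidth can sustain. The rendition names follow the talk's 144p/240p/360p example, but the bitrate numbers here are illustrative, not Wowza's actual ladder.

```python
# Sketch of adaptive-bitrate rendition selection: choose the highest
# rendition whose required bitrate fits the measured bandwidth.
# The kbps figures are illustrative assumptions.

RENDITIONS = [  # (name, required kbps), lowest first
    ("144p", 200),
    ("240p", 400),
    ("360p", 800),
]

def pick_rendition(measured_kbps):
    """Highest rendition the connection can sustain, lowest as fallback."""
    best = RENDITIONS[0][0]          # always fall back to the lowest
    for name, kbps in RENDITIONS:
        if measured_kbps >= kbps:
            best = name
    return best

print(pick_rendition(500))   # -> "240p"
print(pick_rendition(100))   # -> "144p"
```

Real players (HLS, DASH) re-run this decision continuously as the measured bandwidth changes, switching renditions at chunk boundaries so playback never stalls.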