Thank you for having me. It's great to be here. I'm Hermann Goldenstein, a software developer at WebRTC Ventures, and I'm going to give you a high-level introduction to WebRTC: what we can do with it and how to use this technology. First, I want to know how many of you have been working with WebRTC recently. Can you raise your hands, please? Okay, thank you.

I want to show you WebRTC in action first, so please join me at MeetGTC-GrankeyGeek. I think you have access to the Wi-Fi; you can join from your Android devices too. Okay, I can see some faces, and that sound is exactly the echo. This is what WebRTC is, basically: the ability to share video and audio from browsers and mobile devices.

WebRTC stands for Web Real-Time Communication. It is basically a set of JavaScript APIs that allow us to have media communications between peers, like browsers or mobile devices. And did you notice that you didn't install any plugins? The thing is that WebRTC works natively in the browser, which is very cool.

WebRTC has three main tasks. The first one is getting the video and the audio from input devices such as your webcam or your microphone. The second is establishing a peer-to-peer connection between peers and sending video and audio over it. And the third is the ability to also send arbitrary data. For each one of these tasks we have one object, one API: the first is MediaStream, the second is RTCPeerConnection, and the third is RTCDataChannel. I'm going to talk a little bit about each of these APIs individually.

Let's start with the first one, MediaStream, also known as getUserMedia. This API allows us to access the camera and the microphone. There is one parameter that we need to pass to it: the constraints. That is basically the way to say that I want access to the video, to the audio, or both.
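In code, a constraints object might look something like this; this is just a sketch using the standard getUserMedia shapes:

```javascript
// Ask for both the camera and the microphone.
const constraints = { video: true, audio: true };

// Or only the camera, with no audio.
const videoOnly = { video: true, audio: false };
```
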
For instance, you can set the frame rate or the video resolution. And because the WebRTC specification is not yet complete, not all constraints work on all browsers. The recommended workaround is adapter.js, a shim that helps us handle these differences between browsers.

MediaStream also handles the user permissions. When you call the getUserMedia API, the browser first asks the user for permission: the user has to manually click a button ("Allow", in the case of Chrome) to grant access to the devices. This is important, because the browser is protecting the user from being spied on.

Let me show you a little bit of the code. As you can see on the last line, we are calling getUserMedia and passing, as a parameter, the constraints that are defined at the top of the snippet. In this sample I'm saying that I want video but not audio. And as you can see, the call has the structure of a promise, so we have the then and the catch; in the success function I receive the MediaStream object.

So let's take a look at the MediaStream object. This picture illustrates how it looks. It has basically two media tracks: the first one for the video and the second one for the audio. In the case of the audio track, we have two channels, the left and the right, which together compose the stereo.

And what if you have more than one camera or more than one microphone? Is there a way to choose which device I want to use? The answer is yes, by using the enumerateDevices API. This API returns an array of the devices that are connected to your computer, so we can choose one by specifying its device ID in the constraints.
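A minimal sketch of what I just described might look like this; the 'local' element id is hypothetical, and the browser calls only run inside a page:

```javascript
// The constraints: video but not audio, as in the snippet above.
const constraints = { video: true, audio: false };

// Request the camera; the browser will prompt the user for permission.
function startCamera() {
  return navigator.mediaDevices.getUserMedia(constraints)
    .then((stream) => {
      // Attach the MediaStream to a <video> element to play it locally.
      document.getElementById('local').srcObject = stream;
    })
    .catch((err) => {
      // The user denied permission, or no device was available.
      console.error('getUserMedia failed:', err);
    });
}

// List the connected cameras, then pin one via its deviceId.
function useCamera(deviceId) {
  return navigator.mediaDevices.enumerateDevices()
    .then((devices) => {
      const cams = devices.filter((d) => d.kind === 'videoinput');
      console.log('cameras found:', cams.length);
      // Choose a specific device by putting its ID in the constraints.
      return navigator.mediaDevices.getUserMedia({
        video: { deviceId: { exact: deviceId } },
        audio: false,
      });
    });
}
```
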
You can set the audio source or the video source. Okay, so I want to show you a quick demo of MediaStream. As you can see, Chrome is displaying a dialog box asking me for permission, and once I grant access, I can see my video locally in the video element. This sample app also demonstrates some other things we can do with the video element. For instance, I can apply a CSS filter, such as sepia or invert (this is my favorite). And I can take a snapshot, for instance, and let the user upload the photo to, say, his user profile.

So let's talk about RTCPeerConnection. This is the core of WebRTC. Once you get the MediaStream object, you want to send it to the other peer, and RTCPeerConnection takes care of all the transport details. For instance, the SDP negotiation: SDP stands for Session Description Protocol, and it is basically a standard protocol to describe information about the peer, the media codecs, and the media streams. It also handles the data channels, and the media encoding and decoding, that is, compressing and decompressing the video and the audio. It also handles network issues such as packet loss, echo cancellation, noise reduction, and NAT traversal, which includes the ICE processing and the STUN and TURN servers. I'm not going to go deeply into this topic because we have another session for it, but just so you know, this is a technique to find a way to reach the other peer through NATs and firewalls.

Okay, so what you're seeing is basically the call flow of a basic video call in WebRTC. As you can see, we have two peers, peer A and peer B, and in the middle, the signaling server. The signaling process is not part of WebRTC.
This part is up to you: you have to decide how to implement it. But you need to understand that you need something to exchange information between the peers in order to get the peer connection established.

So, peer A is going to start the communication. It calls getUserMedia, then it puts the media stream into the RTCPeerConnection using the addStream API. Then it calls createOffer. This API returns the SDP; let's call it the offer. This peer has to send the offer to the other peer via the signaling server. It is basically saying: hello, I want to communicate with you, and this is my information. Peer B receives the offer and follows the same steps: getUserMedia, addStream. In this case it calls createAnswer, which is similar to createOffer: it also returns the SDP, of peer B in this case. But this SDP is for answering, so peer B has to send this information, the answer, back to peer A. After an exchange of ICE candidates, they are able to enjoy a video call through the peer-to-peer connection.

Okay, the RTCDataChannel. This API enables us to send and receive data, and it works very similarly to WebSockets. So the question is: why do we need another communication channel to send and receive data? We already have WebSockets, AJAX, a lot of options. And the answer is: because this one is peer-to-peer. That brings more benefits: for example, lower latency, more security, and more privacy, because there is no server in the middle, no server reading my messages.

This connection can be reliable or unreliable; you can choose the mode, and it works like TCP or UDP respectively. The first one guarantees that all the packets are going to reach the other peer, and in the same order.
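The caller's side of all of this might be sketched like this. Here `signaling` is a hypothetical stand-in for whatever transport you pick for the signaling server, the channel labels 'chat' and 'game' are made up, and I use addTrack, the newer form of the addStream API:

```javascript
// Caller (peer A) side of the offer/answer flow. `signaling` is a
// hypothetical object with a send() method, standing in for your
// signaling-server transport; it is not part of WebRTC itself.
function startCall(signaling) {
  const pc = new RTCPeerConnection();

  // Forward our ICE candidates to peer B as they are discovered.
  pc.onicecandidate = (e) => {
    if (e.candidate) signaling.send({ candidate: e.candidate });
  };

  // A reliable, ordered data channel (TCP-like); this is the default.
  const reliable = pc.createDataChannel('chat');

  // An unreliable, unordered channel (UDP-like): no retransmits, no ordering.
  const unreliable = pc.createDataChannel('game', {
    ordered: false,
    maxRetransmits: 0,
  });

  return navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then((stream) => {
      // addTrack is the modern replacement for addStream.
      stream.getTracks().forEach((t) => pc.addTrack(t, stream));
      return pc.createOffer();
    })
    .then((offer) => pc.setLocalDescription(offer))
    // Send the offer (our SDP) to peer B via the signaling server.
    .then(() => signaling.send({ sdp: pc.localDescription }));
}
```

Peer B's side is symmetric, with createAnswer in place of createOffer.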
For instance, in reliable mode, if you send A, B, C, the other peer is going to receive A, B, C in the same order. The other one, the unreliable mode, doesn't guarantee retransmission or ordering, so the connection can be faster. You choose the option depending on your use case. If, for instance, your use case is a web torrent, you need to ensure that all the packets arrive on the other side, so you choose the reliable mode. But in gaming, maybe you need more speed, and if you lose some packets along the way it doesn't matter, so you can choose the unreliable mode.

Okay, so there are more APIs. So far I've talked about MediaStream, RTCPeerConnection, and RTCDataChannel, the main APIs, but we have more. For instance, getStats gives us access to statistical data about the peer connection. This is very useful, because sometimes you need to know in which cases your application is working slowly, with a poor connection or a good one. We also have the object APIs. WebRTC has been adopting these from ORTC, another peer-to-peer communication API that provides more control and more flexibility than RTCPeerConnection. Don't worry, you don't have to use it, but just so you know, these objects are coming: they are already implemented in Microsoft Edge and are slowly being merged into Chrome and Firefox.

So I want to talk a little bit about WebRTC media codecs. The question is: why should I care about media codecs? I just want to have a video call, and that's it. The thing is that the encoding and decoding impacts performance, especially on mobile devices. You have to take into account whether the codec has hardware acceleration, which reduces CPU usage; this is very important, and it avoids overheating and draining the battery. So there are codecs that are mandatory to implement.
For video, we have H.264, which is the de facto standard, and VP8, which is free but has less support in terms of hardware acceleration. For audio we have G.711, and Opus, which is free and provides excellent quality.

Okay, so WebRTC is supported in the main browsers. On desktop, it is supported in Chrome, Firefox, Opera, and Microsoft Edge without a plugin. If you use Internet Explorer or Safari, you have to use a plugin; it doesn't work natively. On Android, it is compatible with the mobile versions of Chrome, Firefox, and Opera. On iOS, WebRTC is not supported by the main browser, but you have an option: you can build a native WebRTC application, for iOS and for Android as well. What you need is basically a WebRTC SDK, and there are several ways to get one. The first is to download the source code from Google and compile it yourself. The second is to use a pre-built SDK from a third party; unless you have to change something in the source code, I would recommend this one, because compiling the SDK yourself can be tricky and takes a lot of time. But you can do it, of course. The third is to use a WebRTC PaaS, a platform as a service. Many companies are doing that, because these services provide support for the server side, the signaling process, the media servers, et cetera. So it is a valid option.

To finish my talk, I want to mention some deployment considerations, things you have to take into account when you decide to put your application into production. The first one seems obvious, but it's not: WebRTC works over secure connections, so you have to implement HTTPS. The second one is the signaling process. As I told you, this is up to you; you have to decide how to implement this part. You have many options, like WebSockets or XMPP, so you have to think about what you need. This is not part of WebRTC. You will also probably need STUN and TURN servers.
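As a sketch, those servers are handed to the peer connection in its configuration; the URLs and credentials below are placeholders, not real servers:

```javascript
// Placeholder STUN/TURN configuration; replace with your own servers
// or with those of a commercial provider.
const iceConfig = {
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    {
      urls: 'turn:turn.example.com:3478',
      username: 'placeholder-user',      // hypothetical credentials
      credential: 'placeholder-secret',
    },
  ],
};

// The configuration is passed when creating the peer connection.
function createPeer() {
  return new RTCPeerConnection(iceConfig);
}
```
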
You have to decide whether you are going to host and maintain them yourself or pay for a commercial solution. Also, if your application supports multi-party calls, for instance a call with five, ten, or fifteen people, you have to keep in mind that WebRTC doesn't scale well by itself; you have to put something in place to help with the scaling. There are several options, like an MCU or an SFU. You also have to think about things like media recording, video composition, and connection quality. These are all things you have to take into account for your deployment. So, well, thank you very much for listening. Thank you.