Hello, everyone. My name is Marcio Afonso, and I'm an engineer at TokBox. For those of you who don't know TokBox, we are a communications company based in California. We offer a cloud platform called OpenTok that helps you a lot if you want to develop a WebRTC application. If you want to know more about us, please check our website, tokbox.com. But today I'm not talking about OpenTok; I'm talking about how you can make sure that every call works.

To explain what I mean, I'd like to start with some real user stories. These are very common problems we hear from our customers who have applications based on WebRTC. Let's take a look at some examples.

The first one: the user says, "Hey, your app is great and everything works fine, but sometimes the video quality is not that good. Why does that happen?" For this case we also have a screenshot, and as you can see, the first screen doesn't have good video quality.

The second example: the user says, "Hey, when I joined the conference I could see my friend, but he only saw a black screen. Why did that happen?" In the example we have four streams, and two of them are black.

And the third one: the user says, "When I joined the conference, I was able to see the other person, but I could not hear them. Why did that happen?" As you can see here, there is no audio for user one.

Based on these problems, which, again, we hear very often from our customers, I have three questions for you. First, how can we answer our customers in these situations? Second, how can we identify the source of the problems? And third, how can we prevent problems like this from happening in the future?

To help with these questions, I have some information to share. The first thing I want to show you is a very cool application called test.webrtc.org. It's a WebRTC troubleshooter that lets you identify problems with a specific user. Let me show you how it works. In the top part there is a start button, and when I click it, the app runs five checks.

First, it checks my microphone, to guarantee that the microphone is capturing audio; it also checks additional things like clipping, and whether the volume of the microphone is too low or too high. The second check is the camera: the app verifies that the user's camera supports some defined resolutions, and that the video frames being generated are not black or frozen. The third check is the network, where it verifies that the firewall is not blocking specific protocols like TCP and UDP. The fourth check is connectivity, where it tests the connectivity to your STUN and TURN servers. And lastly, there is a bandwidth test, which measures your bandwidth and your packet loss.

The other thing I want to show you is that in the top part there is also a menu where you can specify some settings for the test. If you have more than one microphone, you can choose which microphone and which camera to use, and you can also specify your own STUN and TURN servers, so when the connectivity test runs, it will run against those servers.
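To make the camera check more concrete, here is a minimal sketch of how an app could verify that a camera supports specific resolutions, in the spirit of what test.webrtc.org does; the list of resolutions to probe is an assumption for this example, not the tool's actual code.

```javascript
// Sketch: probe whether the camera supports specific resolutions.
// The list of resolutions to test is an assumption for this example.
const RESOLUTIONS = [[320, 240], [640, 480], [1280, 720]];

async function checkCameraResolutions() {
  const results = {};
  for (const [width, height] of RESOLUTIONS) {
    try {
      // "exact" makes getUserMedia fail instead of silently falling back
      // to whatever resolution the camera can actually produce.
      const stream = await navigator.mediaDevices.getUserMedia({
        video: { width: { exact: width }, height: { exact: height } }
      });
      results[`${width}x${height}`] = true;
      stream.getTracks().forEach((track) => track.stop()); // release the camera
    } catch (err) {
      // An OverconstrainedError here means this resolution is unsupported.
      results[`${width}x${height}`] = false;
    }
  }
  return results;
}
```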
So test.webrtc.org is really great. But the point is: how can I catch problems like this before users join the conference? What I have here are three really quick checks that you can implement inside your app. The idea is that inside your application, users can test these three things, and if something goes wrong, you can display a message to the user before they connect to the session.

The first part is the hardware setup, in which we test the microphone and the camera. The second is the connectivity test, in which we test the connectivity to our ICE servers, our STUN and TURN servers. And the last part is the network test, in which we check the user's bandwidth and packet loss.

First, the hardware setup. The idea here is that inside your application you can implement a preview screen just like this one, so before users join the conference, they can test their camera and microphone and make sure everything works. How can I implement something like this? The first part is creating the combo boxes: one for selecting the camera and another for selecting the microphone. For that, I need to get the list of devices from the client, that is, the list of microphones and the list of cameras. How can I do this?

In the MediaDevices API, there is a method called enumerateDevices that returns an array of media devices; I'll show a quick sketch of this after this part. Each media device carries four pieces of information. It has a kind, which is the type of the device: videoinput if it's a camera, audioinput if it's a microphone, or audiooutput if it's a speaker. Then we have the deviceId, which is important because it's the value we pass in the getUserMedia call to say which microphone or camera we want to use; I'll show you later how we do this. Then we have the label, which is the name of the device, "FaceTime HD Camera," for example. And finally we have the groupId: two devices have the same groupId if they are physically part of the same piece of hardware. For example, a monitor can have a built-in camera and microphone; both will have the same groupId because they live in the same physical device.

Okay, once I have the devices, the next step is passing them to the getUserMedia call, so I can specify the microphone and the camera. The first thing we need to know is the concept of constraints. When you call getUserMedia, you can pass an object containing some parameters, which we call constraints. There are basically two media stream constraints, audio and video, for which you specify true or false. But you can also specify additional constraints. In the example at the bottom, I set audio to true, and for video I specify a resolution: I say, hey, my resolution is 1280 by 720. There are other constraints you can use; here we have a list of the main ones. These are not the only ones, though; the complete list has about 15 constraints.
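As promised, here is a rough sketch of the enumerateDevices flow that fills the two combo boxes; the element IDs micSelect and camSelect are assumptions for this example.

```javascript
// Sketch: fill the microphone and camera combo boxes from enumerateDevices.
// The <select> element IDs (micSelect, camSelect) are assumptions.
async function populateDeviceLists() {
  const micSelect = document.querySelector('#micSelect');
  const camSelect = document.querySelector('#camSelect');
  const devices = await navigator.mediaDevices.enumerateDevices();
  for (const device of devices) {
    const option = document.createElement('option');
    option.value = device.deviceId; // passed to getUserMedia later
    // Labels are empty until the user has granted media permissions.
    option.text = device.label || `Unnamed ${device.kind}`;
    if (device.kind === 'audioinput') {
      micSelect.appendChild(option);
    } else if (device.kind === 'videoinput') {
      camSelect.appendChild(option);
    }
    // 'audiooutput' entries (speakers) are ignored in this preview screen.
  }
}
```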
In the MediaDevices API there is also a method called getSupportedConstraints, which returns the complete list of constraints with true or false for each one, so you can check whether a given constraint is supported or not.

Once we have the devices and we know how to pass the constraints, we can specify the microphone and camera. I'm creating a variable called constraints, and I'm saying that for audio, this is the deviceId I want to use, the ID of my microphone; and for video, this is the deviceId of my camera. Then I call getUserMedia, passing the constraints as the parameter. In this slide I also left a reference for how to create the audio level bar: in the sample I showed you, there was an audio level bar in the bottom part, so as the user speaks, you can see the bar animating to indicate the microphone is working.

The second part is checking the connectivity to our ICE servers. Here I have a fact for you: because of restrictive networks, such as corporate firewalls, sometimes users cannot establish a call using our ICE servers, our STUN and TURN servers. So the question is: how can users test the connectivity to our STUN and TURN servers before they join the conference?

What I have here is a sample that shows how you can do this; you can see a sketch of it right after this part. I'm creating a peer connection, and I'm specifying the ICE servers in my peer connection configuration. Then, in my onicecandidate listener, I check the type of each candidate I get, and I count how many reflexive candidates and how many relay candidates arrive: if a candidate is reflexive, I increment the reflexive counter; if it's relay, I increment the relay counter. In the end, I have the number of relay candidates and the number of reflexive candidates.

Once I have those numbers, I can define my test. When I create my offer, I see how many reflexive and relay candidates I get, and then I can say: if I get relay candidates, my connectivity to the TURN server is working fine; if I don't, it's not working. The same goes for reflexive candidates: if I get reflexive candidates, my connectivity to the STUN server is working fine; otherwise, it's not.

One caveat: this sample does not check protocols. Your firewall can block, for example, UDP traffic, and in that case you might still get relay candidates if you have a TURN server deployed over TCP. I'm not covering that here, but just so you know, you can do something more elaborate to verify things like this as well.
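Here is the sketch of that connectivity check; this is my reconstruction of the approach described above, not the slide's exact code, and the server URLs and credentials in the usage note are placeholders.

```javascript
// Sketch: count reflexive (STUN) and relay (TURN) candidates to test
// connectivity to the ICE servers before joining a session.
function testIceConnectivity(iceServers) {
  return new Promise((resolve) => {
    const pc = new RTCPeerConnection({ iceServers });
    let reflexive = 0;
    let relay = 0;
    pc.onicecandidate = (event) => {
      if (!event.candidate) {
        // A null candidate means gathering finished: judge the results.
        pc.close();
        resolve({ stunOk: reflexive > 0, turnOk: relay > 0 });
      } else if (event.candidate.candidate.includes('typ srflx')) {
        reflexive++; // server-reflexive candidate: STUN is reachable
      } else if (event.candidate.candidate.includes('typ relay')) {
        relay++; // relayed candidate: TURN is reachable
      }
    };
    pc.createDataChannel('probe'); // gives ICE something to negotiate
    pc.createOffer().then((offer) => pc.setLocalDescription(offer));
  });
}

// Usage, with placeholder servers and credentials:
// testIceConnectivity([
//   { urls: 'stun:stun.example.com' },
//   { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'pass' }
// ]).then((result) => console.log(result)); // { stunOk: true, turnOk: true }
```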
And the third part is the network test. I have another fact for you: a high number of problems related to video quality happen because of two reasons. One, users don't have the minimum bitrate required; and two, there is high audio and video packet loss in the session. Of course, there are many other possible causes of video quality problems, but a high number of them come down to these two. So my question is: is it possible to check the user's bitrate and average packet loss before they connect to the conference?

What I have here is a sample that you run for a few seconds, and you get statistics about the network, or better yet, statistics about my audio track and my video track: the video bitrate, the video packet loss, the audio bitrate, and the audio packet loss. Once I have this information, I can work with it and display a message to the user; in this case, "your network looks fine, you are okay."

So how do we implement a test like this? In this slide I'll give you a spoiler of the next presentation, which is about WebRTC statistics. I just want you to know that there is an API called getStats that lets you get statistics about your peer connection and about all the objects inside it: your streams, your audio tracks, your media tracks, your ICE candidates, and more.

What I'm doing here is calling getStats, a method on my peer connection object, and filtering the results, because I'm only interested in one specific object: the received audio track. So I take my received streams and get the audio track of the stream. Once I have it, I just console.log the result, and I get an object with information about the audio track: things like the audio level, the bytes received, the ID of the object, the number of packets lost, and more. But I want you to focus on the information in red, bytesReceived and packetsLost, because that's the information we want. Look how cool this is: once I call getStats, I can read the bytes received and the packets lost.

Once I have that, I can create a basic network test, which is sketched below. What I'm doing here is setting a timeout of, let's say, 15 seconds and letting my app run for that long. After the timeout, I call peerConnection.getStats, and I get statistics about my audio track and my video track. My processStats function then computes the bitrate: I take the total bytes received and multiply by 8, because I want bits and not bytes, and then divide by the number of seconds, so what I get is bits per second. And in the following line I compute the packet loss per second: the total packets lost divided by the number of seconds.

At this point I have four pieces of information: the video bitrate, the video packet loss, the audio bitrate, and the audio packet loss. There is one observation at the bottom: this sample takes only one snapshot, since I call getStats once. You can do something more sophisticated if you want and call getStats, say, once per second, so you can see things like bitrate variation or packet loss variation over time, instead of a single snapshot at the end.
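Here is the sketch of that snapshot test. One caveat: the talk's original sample used an older, browser-specific getStats result shape, while this version uses the current spec API and its "inbound-rtp" stats, so treat the field names as an adaptation rather than the slide's exact code.

```javascript
// Sketch: single-snapshot network test. Run the call for TEST_SECONDS,
// then read bitrate and packet loss from the inbound-rtp stats.
const TEST_SECONDS = 15;

function runNetworkTest(peerConnection) {
  return new Promise((resolve) => {
    setTimeout(async () => {
      const report = await peerConnection.getStats();
      const results = {};
      report.forEach((stats) => {
        if (stats.type === 'inbound-rtp') {
          // stats.kind is 'audio' or 'video'
          results[stats.kind] = {
            // bytes * 8 = bits; divide by elapsed time for bits per second
            bitsPerSecond: (stats.bytesReceived * 8) / TEST_SECONDS,
            packetsLostPerSecond: stats.packetsLost / TEST_SECONDS
          };
        }
      });
      resolve(results); // e.g. { audio: { ... }, video: { ... } }
    }, TEST_SECONDS * 1000);
  });
}
```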
Once I have this information, I can define some thresholds and display the results. What I'm saying, basically, is that the user is okay for audio and video if the video bitrate is higher than 300 kilobits per second and the video packet loss is less than 3%. If the user doesn't satisfy this condition, I can still try audio only: if the audio packet loss is less than 5%, the user is okay for audio only. So I have three possibilities: audio and video, audio only, or neither of them, in which case I say, "Oh, sorry, your bandwidth is just too low." The last part is just displaying the results; there is a small sketch of this threshold logic at the end of this transcript.

Just to recap, we have three parts. First, the hardware setup, in which we test the camera and microphone; then the connectivity test against our STUN and TURN servers; and last, the network test, which checks the bitrate and packet loss. Once you test everything, check, check, check, you can be sure, or almost sure, that every call will work.

Thanks, Marcio.
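As referenced above, here is a minimal sketch of the threshold logic, using the numbers from the talk; the function name and the shape of its input are assumptions for this example.

```javascript
// Sketch: classify the user's network using the talk's thresholds.
// Input shape is an assumption: bitrate in bits/second, losses in percent
// (i.e. packetsLost / (packetsLost + packetsReceived) * 100).
function classifyNetwork({ videoBitsPerSecond, videoLossPercent, audioLossPercent }) {
  if (videoBitsPerSecond > 300000 && videoLossPercent < 3) {
    return 'audio-and-video'; // good enough for a full call
  }
  if (audioLossPercent < 5) {
    return 'audio-only'; // fall back to audio only
  }
  return 'unsupported'; // "sorry, your bandwidth is just too low"
}
```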