You have to be fast now. OK, my microphone is working. So yes, hi everyone. Today I will be speaking about my experiment on using the WebRTC native API to build a gateway. I will start by saying a little more about where I'm coming from and why I think this may be really valuable.

For the past year I've been working in Seattle for Flowroute, mainly focusing on scaling SIP signaling. The four years before that, I was working at Libon, and my main role was to ensure audio quality: that we were monitoring it well, and that if there was any degradation, we would be able to identify why. There were millions of calls, so it was a really good place to verify what is going on on Android devices, because Libon was a softphone running on Android. It was using Belledonne Communications' Linphone under the hood, and I was mainly working on the native part of it: mediastreamer2, a great piece of code. I became expert with it and learned many things while using it. I worked with other people like Dragos. Libon was a very rich environment, because it was backed by Orange, who brought in real audio specialists. It was a really good experience.

So, what is a WebRTC gateway? There has already been a lot of discussion about it, and different WebRTC gateways do different things. One of the big differences is: are you transcoding, or are you relaying packets? Because you can gateway without dealing with media at all. What I have in mind here is to benefit from the best-effort QoS that WebRTC can provide, with its jitter buffer management and its bandwidth estimator. The other details have already been discussed briefly by others, so let me explain why I believe WebRTC is so great, based on my experience.
In their last update, the WebRTC team said that from now on there is a shift: more and more people are using WebRTC to build native applications outside the browser, and they embrace this idea. We have seen a great example of that in the previous demonstration; amazing stuff. They have also accepted to expand the API to let you inject your own codecs, including non-free codecs. They don't want to deal with licensing, but the API will support the fact that you can use any codec you want now; I haven't looked at this part of the API yet. Their mission also seems to be aligned with more than just browser use cases: they are real experts on Android mobile devices, most of the code is optimized, and we greatly benefit from Opus, which is also optimized for ARM and other CPUs. So in terms of what they are doing, I think they may agree with this kind of usage, and they may be able to support it in some ways.

Another thing that is really great in WebRTC is that they know a lot about signal processing. I don't know the team or the engineers personally, but what I know is that they are doing great things, and the proof is that some of their code has been stripped out of WebRTC and used in other free software, which is great. For example, PJMEDIA integrates both the mobile echo canceller (AECM) and the delay-agnostic echo canceller they released a couple of years ago, which was the valuable one. Now they have come up with something new anyway, an even better algorithm, because the delay-agnostic one sometimes had a big downside: it could alter the signal. They explain it better than I can, so I won't go into the details, but they do explain why the new algorithm is better. My point is that there is a lot of knowledge and a lot of skill in the WebRTC project, and everybody probably knows that. Here I'm just showing an integration of their echo canceller in mediastreamer2.
But here is what I want to highlight: it's kind of hard to strip out the code and maintain it in another project, because when you want to benefit from the latest patches, your code can break. You need to update WebRTC and maintain compatibility with it, and sometimes that's challenging. That's what I saw in mediastreamer2: when I went to update the AECM, I suddenly needed to update other things, and it was more work. This is just a side note to show that we are already using WebRTC code in other softphones, but there may be a way to use it differently, in a fully compatible way.

Then there is NetEQ, which we cannot see from the outside: their jitter buffer, working in the decoded domain. When I was working with Orange, I was lucky enough to work with some of their signal-processing engineers specializing in audio, and they did have a look at NetEQ. They had a great opinion of it. The way it does fast playout (accelerate) was more evolved than the other jitter buffers made at Orange, though maybe not the one in EVS, their real flagship; I don't know if EVS is as good or better. But they had a great opinion of it, and this is really challenging to do, and especially very difficult to test.

This is where WebRTC also brings something great: they now have billions of calls per month, and they gather statistics from these calls. I almost certainly cannot do the kind of testing that they are able to do, and are already doing. They ran an A/B testing campaign with their echo canceller last year; they select the candidates somehow, I don't think it's random. And here I'm just saying that we can control this really easily from the API: when you use their C++ API, you can just set the maximum jitter buffer size. It's exposed, so you don't need to change their code.
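As a hedged sketch of that point: in the libwebrtc C++ API of roughly that era, the NetEQ depth is exposed as a plain field on the peer connection configuration, so no fork is needed. Field names may differ between revisions.

```cpp
// Sketch, not a definitive implementation: header path and field names
// follow ~2017 libwebrtc revisions and may have moved since.
#include "webrtc/api/peerconnectioninterface.h"

webrtc::PeerConnectionInterface::RTCConfiguration MakeGatewayConfig() {
  webrtc::PeerConnectionInterface::RTCConfiguration config;
  // NetEQ's maximum depth, in packets; with 20 ms packets, 50 packets
  // allows up to about one second of buffered audio.
  config.audio_jitter_buffer_max_packets = 50;
  // The faster time-stretching mode mentioned above can also be toggled.
  config.audio_jitter_buffer_fast_accelerate = true;
  return config;
}
```

The configuration is then passed to `CreatePeerConnection()` as usual; nothing inside NetEQ itself has to be patched.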
Another thing that is a bit hidden in this PDF: they also support Microsoft Edge's ORTC. I'm just showing a commit from the past month where they did that. So they will deal with the complexity of staying fully compatible, and probably Microsoft will have to align with them, because they have the biggest market share now. All of this complexity is really taken care of by them. It's an assumption, of course, but I think it's a safe one.

Then there is their bandwidth estimator. Of course it was aimed at video, but they did say it was supposed to be activated for audio as well. It doesn't seem like it is right now; I did not get far enough to test it and see how it works, but I can see it's there in the code. They are on top of these things: for example, Opus was changed recently, in Opus 1.2, to support 120-millisecond frames natively. It's not faked: you don't generate two 60-millisecond frames and then repacketize. So they know what's going on. I'm also sure they have a good collaboration with Jean-Marc Valin, since he works at Mozilla; but anyone can do that anyway, Opus is free and well integrated in many other software, I'm not disputing that.

So I'm working on verifying what can be done, and this diagram shows the state of my work in progress. I needed a real web browser, so I used JsSIP, a great SIP stack. Why did I choose SIP? Because if this WebRTC gateway is supposed to gateway to SIP, maybe it's less complicated to use SIP on both sides; I don't know for sure yet. But I like Kamailio anyway, so I just wrote a little Kamailio module to manage the SIP, extract the body, and send the request, the offer, to a peer-connection listener. I will show you where this code is. So Kamailio can forward the SDP from WebRTC.
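The routing side of that could look roughly like the fragment below. This is a hypothetical sketch of a kamailio.cfg block: `webrtc_gw.so` and `rtc_handle_offer()` stand in for my module and its exported function, whose real names may differ.

```cfg
# Hedged sketch of a kamailio.cfg routing block.
# "webrtc_gw" and "rtc_handle_offer" are illustrative names.
loadmodule "webrtc_gw.so"

request_route {
    if (is_method("INVITE") && has_body("application/sdp")) {
        # Hand the WebRTC SDP offer to the peer-connection listener;
        # the module replies with the SDP answer generated by libwebrtc.
        rtc_handle_offer();
        exit;
    }
}
```

The point is only that Kamailio does the SIP work and the module's job reduces to shuttling SDP bodies back and forth.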
Then WebRTC will just create the peer connection, generate the answer, and do everything else. What I am highlighting here is that there is a standard interface, the audio device module, that can be used to access the media in real time, on both the transmit and receive sides. It's like a sound card; in fact, for them it is a sound card interface, and behind it you have OpenSL ES and all of those things. So they have an interface where you can take control of the media, but the decoded media only; there may be a limitation here.

What's in red is a potential candidate for bridging the call. I decided to focus on connecting the browser; bridging a call was too much work and I was unable to get to it. On the second leg, it may even be possible to use WebRTC itself, in fact, because the API lets you disable RTP/RTCP multiplexing, you can also disable encryption, and you can control ICE. I'm not sure if there is anything else that would not be compatible with legacy SIP, so there may even be a way. If not, I'm a big fan of mediastreamer2, of course, and I may experiment with it; I will explain why a little later. I have an example where I already did transcoding with mediastreamer2.

So where did I take the source code from? I cheated a little: I used source code that was already there, took the pieces that were working, and stitched them together. I should be even clearer in my files that this is not exactly the code that was provided; I did add copyright notices to make sure people are not confused about which code is theirs. Most of it is taken from their sample application: the peer connection client, the peer connection server, and the conductor.
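The second-leg knobs mentioned above (plain RTP, no forced rtcp-mux) are exposed through the factory options and the peer connection configuration. A hedged sketch, assuming ~2017 libwebrtc names:

```cpp
// Sketch: exact header paths and enum names vary between revisions.
#include "webrtc/api/peerconnectioninterface.h"

void ConfigureLegacyLeg(
    rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface> factory,
    webrtc::PeerConnectionInterface::RTCConfiguration* config) {
  // Plain RTP towards the SIP side: no SRTP/DTLS.
  webrtc::PeerConnectionFactoryInterface::Options options;
  options.disable_encryption = true;
  factory->SetOptions(options);

  // Negotiate rtcp-mux instead of requiring it, so separate RTP and
  // RTCP ports remain possible for legacy endpoints.
  config->rtcp_mux_policy =
      webrtc::PeerConnectionInterface::kRtcpMuxPolicyNegotiate;
}
```

ICE would still have to be driven explicitly on that leg, which the API also allows.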
In fact, the peer connection server is only a signaling server; the peer connection client and the conductor together drive the peer connection, and everything you need to do is implemented in those two files. Then I took their audio device file from the modules directory, and I needed to modify it, because it was not an audio device module but an audio device generic, and you need an audio device implementation to create it. The implementation takes care of adding extra things, like the audio device buffer.

But there are five minutes left. I was hoping to do a demo here, because some of the problems I faced with Trickle ICE, I see many people facing them, and they are easy to work around. You don't have to know everything about ICE and all of those RFCs; WebRTC usually does it well, and you only need to know more in order to troubleshoot.

All my code is shared, and it can be built without forking WebRTC. This is how you create an audio device module by using their abstract base class. You then need to add your audio buffer yourself, because you can't use their implementation; you have to provide the audio device. Here you can see, in this constructor, that we can specify the audio device. Your audio device has to be fully operational at this point; that's why you need to create the audio device buffer beforehand. I'm also sharing the code of the Kamailio module that can be used to drive this, even if it's probably not useful for many people.

Then again, I'm coming back to where I started from: their expertise in audio quality is very high. They have 150 metrics. When I was working at Libon, the first thing we did was to add metrics to monitor everything, reported call by call, to know what was going on, and then cross-check the statistics with the user ratings, or the average call length, to verify what was really impacting the users.
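The audio-device-module construction described above could look roughly like this. It is a sketch under assumptions: it follows the ~2017 `AudioDeviceModule` abstract base class, elides the many other pure-virtual methods (and the module's ref-counting), and `GatewayAudioDevice` is an illustrative name.

```cpp
// Sketch, not the talk's exact code: a custom ADM on the abstract base
// class, owning the AudioDeviceBuffer that the implementation would
// normally create for you.
#include <memory>
#include "webrtc/modules/audio_device/include/audio_device.h"
#include "webrtc/modules/audio_device/audio_device_buffer.h"

class GatewayAudioDevice : public webrtc::AudioDeviceModule {
 public:
  GatewayAudioDevice() {
    // Must be fully operational before the peer connection starts,
    // which is why it is created here, up front.
    audio_buffer_.reset(new webrtc::AudioDeviceBuffer());
  }

  int32_t RegisterAudioCallback(webrtc::AudioTransport* callback) override {
    // WebRTC pulls playout audio and pushes recorded audio through this
    // transport; this is where the gateway taps the decoded media.
    return audio_buffer_->RegisterAudioCallback(callback);
  }

  // ... the remaining AudioDeviceModule methods are elided ...

 private:
  std::unique_ptr<webrtc::AudioDeviceBuffer> audio_buffer_;
};
```

The key design point is unchanged from the talk: the decoded media, and only the decoded media, flows through this sound-card-shaped interface.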
So WebRTC is already rich with all of these metrics, and there is little work to do on this part of things as well. I already spoke about this. So maybe I can do a demo, if I have five minutes, just to prove it, although I'm sure you're already convinced that this can work. If I go back to the little demo here, you can see that Kamailio is returning the SDP generated by WebRTC. I can call, disconnect, reconnect; the call signaling is already working. Now I'm reconnecting. You don't see it, but this is hosted at Packet, in New Jersey, so it's running on a server, and you can play and record a file. So of course, it's working server-side now.

Two minutes left. OK, well, I think I was able to cover most of the arguments that made me think this is worth doing. To conclude, sorry, I have to say there are many alternatives around that are working very well. This is an example of how to use mediastreamer2 to do bridging and transcoding, and you can see how high-level it can be. Even though it's a C application, it can be used as a high-level API, in my opinion, because you only need a few functions to do everything; you have factories for everything. I should also say that there are already software solutions that people have been working hard on for years, so my experiment is nowhere near anything like production ready; I'm just highlighting their work again. And Claude Lamblin was the engineer who was providing feedback on NetEQ for us; she also studied the performance of various resamplers, and many other things. So, thank you.

[Q&A] Yeah, no, you compile your application standalone; there is no need to fork. You build the WebRTC libraries, and you link to them.
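The mediastreamer2 bridging example mentioned above can be sketched roughly as below. This is a hedged sketch using the older filter API (before the `MSFactory` refactor); filter IDs and signatures may differ in recent releases, and the RTP session setup is elided.

```c
/* Sketch of a mediastreamer2 transcoding chain: decode one leg,
 * re-encode for the other. Pre-MSFactory API (~2016-era releases). */
#include <mediastreamer2/mscommon.h>
#include <mediastreamer2/msfilter.h>
#include <mediastreamer2/msticker.h>

int main(void) {
  ms_init();

  /* Factories for everything: one filter per processing step. */
  MSFilter *rtp_recv = ms_filter_new(MS_RTP_RECV_ID);
  MSFilter *decoder  = ms_filter_new(MS_ALAW_DEC_ID);
  MSFilter *encoder  = ms_filter_new(MS_ULAW_ENC_ID);
  MSFilter *rtp_send = ms_filter_new(MS_RTP_SEND_ID);

  /* Link the graph: RTP in -> decode -> encode -> RTP out. */
  ms_filter_link(rtp_recv, 0, decoder, 0);
  ms_filter_link(decoder, 0, encoder, 0);
  ms_filter_link(encoder, 0, rtp_send, 0);

  /* Attaching the source to a ticker runs the whole chain. */
  MSTicker *ticker = ms_ticker_new();
  ms_ticker_attach(ticker, rtp_recv);

  /* ... configure the RTP sessions, run, then detach/unlink/destroy ... */
  return 0;
}
```

This is the sense in which a C library can feel high-level: a handful of `ms_filter_new` / `ms_filter_link` calls describe the whole bridge.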