So, thanks a lot for the introduction. OK, let's talk about MediaSoup. In short, MediaSoup is a multiparty videoconferencing server for Node.js, designed to work with WebRTC endpoints, be they browsers or mobile applications, and designed in such a way that it is not a common server and not a full application, which is something I will explain later in more detail. First I am going to talk about the different multiparty conferencing topologies. There are a few of them; I think there are three. The first one is full mesh, and this one is very easy, very interesting for multiparty conferencing because, as you can see, there is no media server. Of course, there is a signaling server somewhere, because there must be a way to make all the participants agree on having a room, but that's just signaling. When it comes to the media plane, I mean processing, receiving and sending the media packets, in this topology there is no media server. That's nice because you don't need a media server, but it doesn't scale very well. As you can see in the picture, each participant must send all its streams, audio and video, to all the other participants, which means a high uplink bandwidth requirement, which is not good for clients. Then we have a better solution, which is a bit legacy right now, but it works very well: the MCU. In this case there is a central media server which does a lot of work, in the sense that it receives all the media streams from the participants, be it audio from the microphone or video from the webcam. The media server, the MCU, mixes all of them into a single new stream and sends it back to all the participants, so each participant sends one stream and receives one stream.
This is okay from the client point of view, in the sense that there is a low bandwidth requirement, and it's very easy to deal with just a single remote stream. But it doesn't let the application build interesting or cool layouts, because you just receive one video, and if you know how to draw a video element in a web application, you know that a video is just a container and you cannot split it into multiple pieces. So, well, it's simple, it works, but we can do better. The SFU; well, Jitsi is also an SFU. In this topology there is a central server that receives all the streams from all the participants, but instead of processing them, instead of decoding, mixing and re-encoding them into a single new stream, it just behaves like a packet router for all the RTP packets coming into the server from the participants. So, at the end, each participant sends one stream and receives many. From the uplink point of view this is okay, because each participant must upload just a single stream. As for the downlink, the browser in this case must receive a lot of streams, but usually the downlink is better than the uplink, so that shouldn't be a problem. And from the layout perspective, I mean from how the UI of the application will look, this is nice because the client receives separate streams, so it can display them in any fashion. Well, as expected, MediaSoup is an SFU, so let's see how it works. Okay, first demo. Please connect to the Wi-Fi network; it's IPv6 only. The server is running on my laptop, in the IPv6 network. In my case, for example, I had to disable IPv4 in order for it to connect to the internet. So just make sure that you have internet access to Google or whatever, and then please join. Well, not yet, because I am going to run the server first. Well, I'm running out of time already, so that's enough.
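To make the bandwidth trade-off between the three topologies concrete, here is a small illustrative sketch (not part of MediaSoup) counting how many streams each participant must send and receive in an n-party conference:

```javascript
// Streams sent/received per participant in an n-party conference.
// Mesh: everyone sends to and receives from everyone else.
// MCU:  one mixed stream up, one mixed stream down.
// SFU:  one stream up, one routed stream down per other participant.
function streamsPerParticipant(topology, n) {
  switch (topology) {
    case 'mesh': return { sent: n - 1, received: n - 1 };
    case 'mcu':  return { sent: 1, received: 1 };
    case 'sfu':  return { sent: 1, received: n - 1 };
    default: throw new Error(`unknown topology: ${topology}`);
  }
}

// With 6 participants the uplink cost of full mesh becomes clear:
console.log(streamsPerParticipant('mesh', 6)); // { sent: 5, received: 5 }
console.log(streamsPerParticipant('sfu', 6));  // { sent: 1, received: 5 }
```

This is why the SFU wins in practice: uplink stays constant as the room grows, and only the (usually more plentiful) downlink scales with the number of participants.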
You can see... So you can see how each remote participant is displayed in a different video container, so you can place them in any fashion within your web application. Okay, coming back to what MediaSoup is: one of my goals when I decided to build this media server was to make it minimalist. What does that mean? Well, it means that MediaSoup is not a common server. It's not a full application. It's not something that provides you with an init script or even a configuration file. Instead, MediaSoup is just an API driven by JavaScript. It just focuses on the media plane, which means it does not provide you with signaling protocols such as SIP, XMPP, any kind of custom protocol over a socket, or whatever. It's up to the application, up to the developer integrating MediaSoup into their application, to decide how to signal messages from clients, browsers, to others. So, in the end, MediaSoup is just a Node module. And you all know what a Node module is: it's something that you add to your bigger Node application, just a dependency like many others. You may want to build a real-time application: you add Express, so you have an HTTP server; you add socket.io, so you have chat; and then you can add MediaSoup, so you can provide your application with multiparty videoconferencing. In order to install it, it's just like any other Node library: npm install mediasoup, and that's all. Well, it's a Node library, or Node module, so it must provide us with a JavaScript API. This is how it looks. Basically, here we are loading the mediasoup module within our bigger Node application, running on the server, of course. We are creating a server. When we create a server, we instruct MediaSoup to launch some subprocesses; they are written in C++ and they are separate processes that manage the media layer, okay? Then we are going to create a media room with some specific codecs.
In this case we use Opus for audio and VP8 for video. Then we create the room and the party can start. So, now it's time to add participants to the room. For this task, MediaSoup provides the application, the developer, with two different kinds of APIs. A low-level one, which is based on ORTC. ORTC is a specification that tries to accomplish the same goal as WebRTC, but with a very low-level API. This is how MediaSoup works internally. This very low-level API lets you create all the components of the communication yourself, by creating them separately: you can create a transport, you can create a receiver to receive audio from the browser, whatever. But it happens that browsers speak WebRTC, not ORTC. Browsers deal with SDP blobs: they generate SDP blobs and they consume SDP blobs. So we must deal with that. MediaSoup therefore provides a high-level API that, in fact, uses the low-level API inside. And this API is exactly the same as the WebRTC API: peer connections, createOffer, local descriptions, all of that. MediaSoup just forwards the media. This is nothing new; I mean, it does not provide you with a signaling protocol, nothing like that. And when it comes to forwarding media, this is how the MediaSoup worker, which is the subprocess handling the media, looks. You can see that here we have a room with three browsers. For every browser joining this room, there is a peer instance within the MediaSoup room. Each peer instance manages its own transports, receivers and senders in order to receive media streams from the browsers and send audio and video to them. And, okay, this is how the WebRTC-like JavaScript API that MediaSoup provides looks. If you are familiar with the WebRTC API in browsers, you will know what this is about, because the API is exactly the same.
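To illustrate the "packet router" idea behind the worker (this is a toy model for the talk, not MediaSoup's actual C++ implementation): a room holds one peer instance per browser, and every packet arriving from one peer is forwarded, untouched, to all the others:

```javascript
// Toy model of an SFU room: no decoding or mixing, just forwarding.
class Peer {
  constructor(name) {
    this.name = name;
    this.inbox = []; // packets routed to this peer
  }
}

class Room {
  constructor() { this.peers = new Map(); }

  addPeer(name) {
    const peer = new Peer(name);
    this.peers.set(name, peer);
    return peer;
  }

  // Forward a packet from `from` to every other peer, unchanged.
  route(from, packet) {
    for (const [name, peer] of this.peers) {
      if (name !== from) peer.inbox.push(packet);
    }
  }
}

const room = new Room();
const alice = room.addPeer('alice');
const bob = room.addPeer('bob');
const carol = room.addPeer('carol');

room.route('alice', { seq: 1, payload: 'vp8-frame' });
console.log(bob.inbox.length, carol.inbox.length, alice.inbox.length); // 1 1 0
```

The real worker of course also manages ICE/DTLS transports and RTCP per peer, but the core data path is this: route, don't transcode.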
So, the party starts when a participant wishes to join a room. The application on the server side must create an offer for him, get an SDP offer, and send that offer to the remote client, maybe a browser or whatever. How to do that is up to the application. You can use the SIP protocol over WebSockets by using JsSIP, for example. You can use XMPP, you can use whichever mechanism you wish to communicate with your browsers. Eventually you will receive an answer, an SDP answer, from the client. And, eventually, new participants will join the room. When that happens, all the others will get an event, a negotiationneeded event. The application, which is listening for that event, will call createOffer again and set the new remote description on the client. So the browser receives a new description, a new SDP, telling it about all the remote streams that it must manage. Then addstream events will fire in the browser, and the browser will be able to display the video of the new participants. Well, this is like before. In this case there is a room with six participants. And, well, you can see that the CPU usage is not very... Well, it's okay, no? It's running on my computer, so I don't know how it would be on a server, but it looks okay. We are speaking about multi... Okay, thank you, it's not mine, but... We are speaking about multi-streaming stuff. So, how do browsers behave when it comes to multi-streaming? It happens that, unfortunately, Chrome still implements its own specification on how to deal with multiple streams (Plan B), while Firefox implements the new standard specification (Unified Plan). So this is a mix of different things that is very difficult to manage. The good news is that MediaSoup supports both of them, so you don't have to worry about this. Well, something about the roadmap of MediaSoup. The first version is really close.
My friend José Luis Millán is working on implementing the Google Congestion Control algorithm in MediaSoup, so the video can be smoother than it is right now. Also some stuff about the WebRTC-like provided API. And for the second version, funny things will be done, such as implementing simulcast and SVC codecs, which Saúl already explained before, so I'm not going to go into details. The point is that, for example, Saúl has shown us the Jitsi application. Well, that's a full application, in the sense that it provides you with a complete suite: a server, a web interface, the client side, the server side. It's an application for meetings or something like that. The point of MediaSoup is that it's not a full application. It is just a library, a server-side library for Node. So you must integrate it into your own application and you must decide what to do with it. I mean, you may wish to build yet another multiconference application for enterprise, you may wish to do something social for people to meet other people, I don't know. So, I have done yet another application. Well, okay. I have five minutes. Okay, I just need two. In this case, well, I will show it. So please, just reload the browser. No, not yet. I must change... Okay. People are joining. More people, please. We need to meet each other. Okay. Sorry? Okay. Can you see it? Does it work? Sometimes on mobile, well. Just a demo. So, for example, this is an application. Saúl sent me a like. We can see each other, but we cannot talk to or listen to each other yet. But... Hey. Oh, I don't like your voice. Sorry. So, well, that's all. Thanks a lot. If you have any questions... We've got three minutes for questions. Okay. Well, Saúl. Well, in fact... Okay. He's asking me about the capability of MediaSoup to record the streams on the server side, for example. Okay.
In fact, MediaSoup already provides the application with an API to receive the raw RTP packets in JavaScript land, so you can manage them as you wish; you can save them into a database, I don't know. So that's the API provided for now. RTP, what do you mean by RTP? RTP. And the second question: why do you create the first offer from the server and not from the client? Yes, okay. Okay, the first one: he's asking me about what MediaSoup does regarding RTP processing. Currently, that's something José Luis Millán is working on; he's the RTP guy on this, okay. We now generate the receiver reports, we forward the sender reports, we handle NACK. We have an RTP buffer, so if we receive a NACK from a receiver, we resend the packet. And the most important thing is the congestion control algorithm, which is not yet implemented; he's working on that right now. And the second question was... Okay. For me it's a very interesting question, because I spent six months doing the opposite until I realized my error and changed it two weeks ago. Okay. So the point is that if you create the offer on the server, you decide which payload types and everything else the others will need to use, instead of letting them choose. So you don't have to keep a mapping of payload types or anything like that. You create the offer, you own it, and you own the session. It's really fucking easier that way. All right. Thanks, Iñaki. Okay. Thank you.
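The NACK handling mentioned in the answer can be sketched like this. It is a toy buffer keyed by RTP sequence number (MediaSoup's real retransmission logic lives in the C++ worker and is more involved, handling sequence-number wrap-around among other things):

```javascript
// Toy retransmission buffer: keep recently sent packets by sequence
// number so a NACKed packet can be resent instead of being lost.
class RtpRetransmissionBuffer {
  constructor(capacity = 512) {
    this.capacity = capacity;
    this.packets = new Map(); // seq -> packet
  }

  store(packet) {
    this.packets.set(packet.seq, packet);
    // Evict the oldest entry once over capacity (Map keeps insertion order).
    if (this.packets.size > this.capacity) {
      const oldest = this.packets.keys().next().value;
      this.packets.delete(oldest);
    }
  }

  // On a NACK for `seq`, return the packet to resend, or null if it
  // has already been evicted and cannot be recovered.
  onNack(seq) {
    return this.packets.get(seq) || null;
  }
}

const buf = new RtpRetransmissionBuffer(3);
[1, 2, 3, 4].forEach(seq => buf.store({ seq, payload: `pkt-${seq}` }));
console.log(buf.onNack(4).payload); // 'pkt-4'
console.log(buf.onNack(1));         // null (evicted)
```

The capacity bound matters: retransmission is only useful within roughly one round trip plus jitter-buffer depth, so old packets can be dropped freely.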