Yes, hi everyone. My name is José Luis Millán, and I'm here today to talk to you about JsSIP, the JavaScript SIP library. I'm currently based in Berlin, working for a company called Frafos, where we build a SIP and media server — it's actually an SBC — which also has WebRTC capabilities. JsSIP is a SIP stack, distributed as an npm package, and you can find Debian packages and Bower packages as well. I saw the Debian package maintainer, Daniel Pocock, here earlier. And one huge feature for me is that it's fully documented: you can find the documentation on the website.

[Audience: Can you speak a little bit louder?] Yes, sure, okay.

So what's the motivation behind JsSIP? There was a trigger in time, which was the WebSocket transport. When WebSocket came to the playground, we saw a huge opportunity to make browser SIP devices — devices that would not only be softphones, with no hardware needed, but as simple to use and to upgrade as reloading a website. By making a JavaScript library, not only can you have a SIP stack and a SIP endpoint in your browser, you can upgrade to a new version just by updating the website: you download the JavaScript file again and you've got the new version of JsSIP. It was also a nice excuse to learn SIP and JavaScript more deeply, and I really think we were in the right place at the right moment, when WebSocket was starting to become a standard.

Looking back, we started in 2011 — that's when development began — and that same year we were already able to make a SIP call from browser to browser, or from a browser to a legacy UDP or TCP SIP device. Of course, without media, because WebRTC wasn't available yet. We also started writing a draft to make WebSocket a standard transport for SIP. That same year, WebSocket itself became a standard, so everything was clear: once we had WebSocket as a standard transport for SIP, the picture was complete.
We could see perfectly well that, once WebSocket was a standard protocol, existing voice-over-IP servers could implement a new transport in order to connect these two worlds, the web world and the SIP world. Then, once WebRTC arrived, came the first SIP call with media, and in 2014 SIP over WebSocket became a standard. So there was no excuse anymore for SIP server vendors not to implement the transport. There's a list of implemented RFCs — there are many, many more in the SIP landscape, but these are neither more nor less than the ones we needed at the time, and indeed still need now.

The API. Well, several points were quite clear. We wanted to provide an API that's easy to use and that abstracts the user from SIP internals. We were really aware that users building SIP applications with JsSIP shouldn't need to be SIP experts, so we wanted to shield them from the SIP protocol internals. That's why it was a must that the API be as simple as possible, and it's quite expressive. As you can see, we can create a SIP user agent by passing a basic configuration whose only mandatory options are the SIP URI and a socket — a server socket to connect to. Then we can start the user agent, register and unregister. How do we call? Just call `call()`. How do we send a message? Just call `sendMessage()`. It is also a callback-driven API, meaning that communication between the JsSIP objects and the user happens through callbacks. You define your callback functions to hold your logic: when a new RTC session comes in, your `newRTCSession` callback fires. In the callback you can inspect the event object — `originator` is a common attribute of JsSIP callback objects which tells you who initiated the action. Is this new RTC session locally generated, an outgoing call? Or remotely generated, an incoming call? Then answer it, reject it, react however you need.
So, at a glance, this is the JsSIP architecture. I think it represents the core quite well. Everything stands on the user agent, which holds a transport, which in turn holds many sockets; and dialogs, on top of which RTC sessions are created. You can also do instant messaging with SIP MESSAGE. We have a registrator attached to the user agent, so the user agent can register and unregister. And that's basically it.

As you can see, it's a modular design. That was also a must from the beginning: we wanted each element to take care of its own things and abstract the others from its internals. As an example, take the transport. In the previous slide we see a transport and multiple sockets. Okay, then why don't we see WebSockets? Because, actually, you don't need to use WebSockets to use JsSIP. We provide a socket interface, which is really simple: it needs to implement three methods — connect, disconnect and send — which will be called by JsSIP when necessary, plus three callbacks — onconnect, ondisconnect, ondata — so JsSIP knows whether it is able to send data over this transport or not. It's also got some mandatory attributes, which have default values — `via_transport`, `url` and `sip_uri` — which help when creating the SIP messages and setting certain non-standard values that some SIP server vendors require. This way, you are not tied to WebSockets. Of course, we provide a built-in WebSocket implementation, and we also have a WebSocket Node module, so you can use JsSIP in Node — we're talking about signaling in Node, not media. So you could send your SIP messages over HTTP, or over some application on top of HTTP, or over any other kind of transport that respects and implements this interface.

Another example: this is the internal communication between the objects. We can see here an onion architecture, where we abstract objects from each other's implementation.
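The socket interface just described — three methods, three callbacks, three attributes — can be sketched as a custom socket. Here the "transport" is an in-memory loopback, just to show the contract; a real implementation would wrap, say, a TCP or HTTP channel. The URL and URI values are placeholders.

```javascript
// A minimal custom socket implementing the interface the talk describes.
// JsSIP calls connect()/disconnect()/send() and expects the socket to fire
// onconnect()/ondisconnect()/ondata() back; via_transport, url and sip_uri
// feed the construction of Via headers and the like.
class LoopbackSocket {
  constructor(url) {
    this.url = url;                             // e.g. 'wss://sip.example.com'
    this.via_transport = 'WS';                  // transport token for Via headers
    this.sip_uri = 'sip:example.com;transport=ws';
    this.connected = false;
    // The library assigns these callbacks before calling connect():
    this.onconnect = () => {};
    this.ondisconnect = () => {};
    this.ondata = () => {};
  }
  connect() {
    this.connected = true;
    this.onconnect();
  }
  disconnect() {
    this.connected = false;
    this.ondisconnect();
  }
  send(message) {
    if (!this.connected) return false;
    // Loopback: echo the raw SIP message back as incoming data.
    this.ondata(message);
    return true;
  }
}
```

Anything that honors this contract — connect, disconnect, send, plus the three callbacks — can carry the SIP signaling.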
So, a requester could be a message sender, a dialog or the registrator that wants to send, in this case, a SIP request. It delegates to the request sender, which creates a SIP transaction and delegates the sending of the packet to the transport, which uses the socket — and everyone does its own thing. For example, in case there is a need to perform digest authentication, the request sender doesn't pass it back to the original requester; we absorb and handle it inside the request sender, making this more modular and even cleaner.

Another important part is the RTCSession. Why is this important? When we are talking about media, we need to look at this class. This is the one that deals with the WebRTC API: the one that adds and removes streams from the WebRTC engine, that requests the SDP so we can later send it in a SIP message, and that, once an SDP is received, feeds the WebRTC engine with this information so that, magically, thanks to WebRTC, the media connection is established. Apart from those actions, there are the typical SIP actions you would expect from a session: hold and unhold, mute and unmute, transfers, DTMF, SIP INFO. We also offer some callbacks so you can modify the SDP — when you receive it from the network, before you feed it to the WebRTC core, or before you send it to your SIP server — so you can adapt it to certain circumstances, or add, for example, data channel information, whatever you need.

In order to communicate with the WebRTC engine, we use the WebRTC adapter, which solves incompatibilities — mostly in the naming of the APIs offered by different browser vendors. This way we just use the standard naming and we don't care which browser we are talking to; that library solves the issue for us.

Let's go for a demo. This is a double demo: the first part shows that we can build a Node application using JsSIP for signaling, and the other one will be a media demonstration. Okay.
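The SDP-modification hook mentioned above amounts to plain string processing on the session description. As a sketch — the filter function is illustrative, not part of JsSIP — here is a mangler that drops the whole `m=video` section from an SDP blob, for instance to force an audio-only call; the wiring comment at the bottom shows where such a filter would plug into an RTC session's SDP callback.

```javascript
// Remove the m=video section (the m= line and everything under it, up to the
// next m= line) from an SDP string. SDP lines are CRLF-separated.
function stripVideo(sdp) {
  const lines = sdp.split('\r\n');
  const out = [];
  let skipping = false;
  for (const line of lines) {
    // Each m= line starts a new media section; decide whether to skip it.
    if (line.startsWith('m=')) skipping = line.startsWith('m=video');
    if (!skipping) out.push(line);
  }
  return out.join('\r\n');
}

// Hypothetical wiring into a session's SDP callback:
// session.on('sdp', (e) => { e.sdp = stripVideo(e.sdp); });
```

The same shape works for the other direction — rewriting an incoming SDP before it reaches the WebRTC engine.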
Saul, please, can we use this screen? [A few minutes of wrestling with the display-mirroring options follow.] Okay, sorry for the inconvenience. I wanted to have this split, so we could see the execution of the app on one side and the code on the other — it would have made things a lot easier — but I'll have to do it on a single screen, so please pay attention, because they won't happen at the same time.

You can see my cursor. We are requiring JsSIP as well as the JsSIP Node WebSocket module, because we are creating JsSIP user agents using the Node WebSocket transport. To show how easily it can be used: this is the main function, which just creates a Node WebSocket, creates a user agent with this WebSocket and starts the user agent. There are two predefined callbacks that just print some information. The application lets us stay inside this script, so we can control the user agent by executing commands.

First of all, I have enabled the debug logging, because I thought this would be interesting the first time. We can see here that JsSIP is starting, we can see the configuration, and it's auto-registering — by default it auto-registers. We are registering against a testing service that we have at tryit.jssip.net. Okay, now I will disable the debug output; I just wanted to show you the SIP traffic here.
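The Node entry point just described can be sketched as follows. It assumes the `jssip` and `jssip-node-websocket` npm packages (both real modules), so the requires are kept inside the function and the URI and server URL are placeholders for whatever testing service you use.

```javascript
// Sketch of the demo's Node entry point: build a UA over a Node WebSocket,
// attach a couple of logging callbacks, and start it (which auto-registers
// by default). Requires the `jssip` and `jssip-node-websocket` packages.
function main() {
  const JsSIP = require('jssip');
  const NodeWebSocket = require('jssip-node-websocket');

  const socket = new NodeWebSocket('wss://sip.example.com'); // placeholder URL
  const ua = new JsSIP.UA({
    uri: 'sip:alice@example.com',  // placeholder SIP URI
    sockets: [socket]
  });

  // Two predefined callbacks that just print some information, as in the demo.
  ua.on('registered', () => console.log('registered'));
  ua.on('registrationFailed', (e) => console.log('registration failed:', e.cause));

  ua.start();
  return ua; // the CLI would keep this around to drive call/message commands
}
```

Calling `main()` with a reachable server is all it takes to get a registered SIP endpoint in Node — signaling only, since there is no media engine there.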
Okay, now we are in the application. I made a very simple CLI where you can control this user agent instance, which we have already seen. We can see the status: we are using this version, we are connected, registered. And I will set a remote peer: I will connect to my colleague, Iñaki, and try to establish a chat here. "Hi there." "Hi, how are you doing?" Okay, Iñaki is just there — thanks.

So let me tell you what just happened. We did a kind of trick here. The chat is based on SIP MESSAGE requests from one peer to the other, and there was a cookie phrase — "invite" plus something — with which I sent Iñaki a link, a URL, to the JsSIP demo web application, which runs on top of the browser. So we sent this invitation through a SIP MESSAGE; he received the message and opened his browser at the given link, and at the same time I opened the browser as well, so we could establish the media communication. That is it. I'm sorry for the inconvenience with the demo; I hope everyone understood my aim, and I'm willing to answer any questions you may have.

No questions? Yes? [Audience: Do you have suggestions for backends, especially concerning audio codecs between different browsers and different clients — are they transcoded on the backend, or what's the normal way to do that?] Well, since in WebRTC there are mandatory codecs, if you are talking with WebRTC endpoints you don't need to bother about transcoding, because they should share at least one common codec. Otherwise, if you are trying to communicate with a legacy voice-over-IP network, then you should take care of that and transcode accordingly. That was it. Thanks.