Okay, so I think we're ready. I'm Saul; you may have seen me on the other side, but now I'm here. I'm going to tell you all about going mobile in your WebRTC application with the help of React Native WebRTC. I'm here representing the Jitsi project, which is the case study: we're going to look at how we migrated it. Who here is familiar with Jitsi? Okay, a few hands, so I'll go over this quickly. Jitsi is a set of open source projects that allow you to deploy video conferencing applications. They are fully featured and run out of the box, so it's not a framework you build on top of; you get a finished product you can deploy. It's also a set of APIs and SDKs, so you can use it in another web application or another mobile application; we'll look at that part in detail. And it's a pretty large community of people. We gather at many conferences, we get many contributions, and we gather pretty much online. For example, we have a few of those people here, like Gasky, who sends patches every once in a while. Good stuff.

So that's who we are. And what does this Jitsi thing look like? Pretty much like this. This is the typical screenshot of a solo WebRTC developer when you have nobody to test with: you test with Big Buck Bunny. And as I said, we're in the browser and on all platforms, so of course we're on mobile, which is what we're going to talk about today. We have a new version of the app coming, and one of the highlights is that what was white is now black, so it looks a lot better on your OLED display. Amongst other cool stuff. We have lots of features, but today I'm not going to talk about the features themselves; I'm going to talk about how you can take these features and run them on mobile.

So, as I said, what we wanted was to go mobile. Now, what did we have at hand? Our architecture is pretty much like this. We have jitsi-meet, our web application, where you see the user interface you saw before. That block is around 40,000 lines of code. UIs are simple, right? Then we have lib-jitsi-meet, our low-level library. It takes care of all the signaling, stream management, letting you know if a participant joined or left, statistics, audio levels, analytics; everything is integrated there. So the application doesn't use the WebRTC APIs directly, because they are abstracted by that layer, lib-jitsi-meet, which is around 30,000 lines of code at the moment and is where the WebRTC APIs are actually used. (I'll show a quick sketch of what that API looks like below.) We also use other libraries, like Strophe.js, because the entire Jitsi Meet application is built on XMPP. That is not very relevant to what we're talking about today, but I thought I'd mention it because it's also part of our DNA.

Now, when we wanted to go native, you start with what everybody else is doing, I guess: let's build a native Android app and a native iOS app. Well, some experiments were made, and at around 20,000 lines of code we were about 10% of the features in, which is not great. It also means you need a lot more resources, because you need a team that understands Android well, another team that understands iOS well, and then there are those 30,000 lines of code that live only in lib-jitsi-meet, the library taking care of all the signaling and all the stream management, which you would need to reimplement.
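To give a flavor of what lib-jitsi-meet abstracts away (and what a native rewrite would have to replicate), here is a rough sketch of connecting and joining a room. The service URL and room name are placeholders, the option objects are stripped down, and exact option names vary across versions, so treat it as illustrative rather than copy-paste ready:

```javascript
// Rough sketch of the lib-jitsi-meet API (assumes the library is loaded
// and exposes the JitsiMeetJS global). Options are stripped down; the
// service URL and room name are placeholders.
JitsiMeetJS.init({});

const connection = new JitsiMeetJS.JitsiConnection(null, null, {
    serviceUrl: 'wss://meet.example.com/xmpp-websocket'  // placeholder
});

connection.addEventListener(
    JitsiMeetJS.events.connection.CONNECTION_ESTABLISHED,
    () => {
        const conference = connection.initJitsiConference('myroom', {});

        // Presence and media arrive as events; the app never touches
        // RTCPeerConnection directly.
        conference.on(JitsiMeetJS.events.conference.USER_JOINED,
            id => console.log('participant joined:', id));
        conference.on(JitsiMeetJS.events.conference.TRACK_ADDED,
            track => console.log('new track:', track.getType()));

        conference.join();
    });

connection.connect();
```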
So, what if we could somehow reuse these things, and at least make use of all the man-hours that were spent there, all the knowledge that was poured into that library, and all the problems that are already solved? At the same time, Jitsi Meet is an application that was built around 2014 or so. So, you know, it used jQuery; some of that is probably still lurking in a little dark corner. It's now a React application, but back then, roughly at the same time, we were thinking: okay, we need to migrate to something better, something that allows us to architect our application better and make it more future-proof. We thought React was the answer, and roughly at the same time React Native was coming up, which was the new hotness. It made some promises that sounded great. So we started building a prototype of this application in 2016. For those who know React Native, that was React Native 0.27. So that was a while ago.

Now, what is React Native, and what does it give us? It is not a WebView. Of course, one way to go mobile is to say: we'll use something like Cordova, we put our code in a WebView, it runs JavaScript, so we can do that. No. React Native uses JavaScript, but just to run code that ends up translated into native via a bridge. This bridge turns your JavaScript code into native views, so you end up with a full, honest-to-god native application instead of something wrapped in a WebView. I'm not going to go into detail about why you might want to go one direction or the other. This is what worked for us, and it may work for you, which is the point of why I'm here. It's got a thriving ecosystem, so there are lots of plugins to connect anything to everything. And also, it's just JavaScript, right? The good thing about React and React Native together is the idea that you write your logic in JavaScript, and then on the web you emit DOM so it's rendered in the browser, while on React Native you emit React Native components, and thanks to the black magic of React Native they end up becoming native views and everything works. That was the promise. And it works for us.

Now, of course, if you're not in a WebView, you're not in a browser, so you don't have WebRTC, for example. So what do we do? Well, we build it. Somebody, Henry, started a plugin called React Native WebRTC; very original name, of course. I released version 1.69 last week with a number of good features. So what does this module offer us? It offers WebRTC APIs that you can access from your React Native application. They are usable. They are not the latest APIs you can see in the WebRTC spec, because things take some time to build, and in the end it works today and you say "I'll do it tomorrow", and here we are. But it works, of course. It's just the API surface; the inner workings are the same. It's battle tested, as in: we have been using it for years, and other people use it as well. And it's usually pretty up to date. Right now there's a little gap, you know, the Christmas period, we got acquired, yada yada. So it's at Chrome M69 right now. A funny number, but we're going to move off it really soon, hopefully to M72 or later. We try to keep up with Chrome's releases very early. So those 1 million lines of code that you heard about in Jeremy and Leonard's talk, we embed them all. That's what we use in the React Native WebRTC plugin.
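And because it's the same API surface, existing WebRTC code mostly carries over. A minimal sketch, assuming the react-native-webrtc package; the empty ICE server list is just to keep the example short:

```javascript
import { mediaDevices, RTCPeerConnection } from 'react-native-webrtc';

// The same getUserMedia / RTCPeerConnection calls you would write for the
// browser, running inside a React Native app. This is M69-era, so the
// older stream-based API is shown.
const pc = new RTCPeerConnection({ iceServers: [] });

mediaDevices.getUserMedia({ audio: true, video: { facingMode: 'user' } })
    .then(stream => {
        pc.addStream(stream);
        // Rendering is a component rather than a <video> element; the
        // same plugin exports RTCView for that:
        //   <RTCView streamURL={stream.toURL()} style={{ flex: 1 }} />
    });
```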
Part of the work we did in the last two releases, 1.67 and 1.69, has been on performance. On iOS we now have a Metal-based renderer, and we offloaded lots of work off the UI thread so everything runs a lot snappier; we have seen significant improvements in our own application, Jitsi Meet. So I think everybody would benefit from using the latest and greatest, because it is the greatest.

Now, in the React Native ecosystem all that glitters is not gold, because if you want to build an RTC application there are lots of things you still need to do. First of all, remember when I said it's just JavaScript, so your web developers can chime in? Well, they can, but you are going to need to write native code. You will not be able to escape this. The cake was a lie. You'll have to do it. I wasn't a mobile developer before, but I somehow became one, I guess. So it can be done, and you don't need super deep expertise to start working on it.

You need to take care of audio routing. The WebRTC library will just use whatever the default is, but you may want to offer the user a choice: use the speaker, use Bluetooth, use whatever. On iOS this is pretty simple, because there's MPVolumeView, which you can just show; it's built in, the user chooses, dismisses it, and that's it. On Android it doesn't exist and you have to build it yourself. So that sucks.

Then, of course, users expect your application to integrate with the system calling services, so that you don't get a phone call at the same time and things break. That means that on iOS you will want to integrate CallKit, and on Android the weird, reasonably recent thing called ConnectionService. There is a React Native CallKit plugin you can use, but you need to wire it in yourself; I'll show a rough sketch of that after this section.

Then there is an interesting story about codecs, because mobile WebRTC doesn't use a software encoder for H.264: it uses only the hardware ones, and on some craptacular Android devices they are simply disabled because they work so badly. At the same time, some Android devices are capable of hardware-assisted VP8 encoding, which means simulcast doesn't work even if you enable it, because they use the hardware encoder. So there's a bit of a mismatch, and you may need to test, fine-tune, and pick what works for you. Think about what use case you want to cater for, what resolution you want to set, and what codec you want to use. This is important for you to know.

Timers don't work in the background on the JavaScript side. There is a plugin that allows them to run in the background. It won't magically make your app run in the background, but if the app is already allowed to run in the background, the timers will keep firing. So you may need that too. And there are all sorts of platform shenanigans, like OpenGL layer limits on Android; yeah, those exist.

And all of that just to get something like this. So what we did, actually: we needed to deliver an app, but also deliver an SDK, so that other people could get all of this without doing it themselves. That is the Jitsi Meet SDK. If you look at our application, it's really just a carcass, because it just renders a JitsiMeetView, which internally renders the React Native root view. We take care of all of it: the connections, the audio routing, CallKit, the codec stuff, all of it is built in. All you have to do is integrate that single view and you get our entire experience: multiple participants, mute and unmute, all of it.
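Here's that call-integration sketch. It assumes the react-native-callkeep plugin; the option fields and event names are from memory, so double-check them against whichever plugin and version you actually use, and the app name and UUID here are made up:

```javascript
import RNCallKeep from 'react-native-callkeep';

// Registers the app with CallKit (iOS) / ConnectionService (Android).
// Option fields and event names are from memory; verify against your
// plugin version.
RNCallKeep.setup({
    ios: { appName: 'MyConferenceApp' },  // placeholder app name
    android: {
        alertTitle: 'Permissions required',
        alertDescription: 'Needed to integrate calls with the phone app',
        cancelButton: 'Cancel',
        okButton: 'OK'
    }
});

// Tell the OS an outgoing call started, so regular phone calls and your
// conference interact sanely.
const callUUID = '11111111-2222-3333-4444-555555555555';  // made up
RNCallKeep.startCall(callUUID, 'room@example.com');

// React when the user ends the call from the system call UI.
RNCallKeep.addEventListener('endCall', () => {
    // ...leave the conference in your own code here...
});
```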
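For the codec mismatch, one generic technique, not Jitsi-specific and only a sketch, is to munge the SDP before applying it, moving your preferred codec to the front of the video m-line so the stack picks it first:

```javascript
// Generic SDP munging sketch (not Jitsi-specific): move a codec, e.g.
// VP8, to the front of the video m-line so the stack prefers it.
function preferVideoCodec(sdp, codecName) {
    const lines = sdp.split('\r\n');
    const mIndex = lines.findIndex(l => l.startsWith('m=video'));
    if (mIndex === -1) {
        return sdp;
    }

    // Payload types that rtpmap assigns to the requested codec.
    const pts = lines
        .map(l => l.match(new RegExp(`^a=rtpmap:(\\d+) ${codecName}/`)))
        .filter(Boolean)
        .map(m => m[1]);

    // m=video <port> <proto> <payload types...>: keep the first three
    // fields, then list the preferred payload types first.
    const parts = lines[mIndex].split(' ');
    lines[mIndex] = parts.slice(0, 3)
        .concat(pts, parts.slice(3).filter(pt => !pts.includes(pt)))
        .join(' ');

    return lines.join('\r\n');
}

// Usage: munge the local description before setting it.
// pc.createOffer().then(offer =>
//     pc.setLocalDescription({ type: offer.type,
//                              sdp: preferVideoCodec(offer.sdp, 'VP8') }));
```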
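And for the background timer issue, a plugin like react-native-background-timer keeps the same setInterval shape. Assuming your app is already allowed to run in the background, it would look roughly like this:

```javascript
import BackgroundTimer from 'react-native-background-timer';

// Same shape as setInterval, but keeps firing while the app is in the
// background (provided the app is allowed to run there at all). The 10s
// tick is a made-up example of a signaling keep-alive.
const intervalId = BackgroundTimer.setInterval(() => {
    console.log('keep-alive tick');
}, 10000);

// Later, when the call ends:
// BackgroundTimer.clearInterval(intervalId);
```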
And this is very different from all the other APIs you get on mobile for these kinds of applications. They're always very low-level: you need to do the entire UI and all that stuff. We give it to you for free, under the Apache 2 license. And there are already applications using it, like Riot.im from the Matrix guys, which is awesome.

Here's a quick example of how it looks. On iOS, you just get hold of the view, call join on it, pass a URL, and Bob's your uncle. Then you get some delegate methods telling you: hey, the conference started; hey, the conference ended; and then you may want to dismiss your view controller. But that's it. You get events to know what's going on. And on Android, and this shows something new in the SDK, we added support for fragments. So you just take a fragment, get the view, call join on it, and suddenly you have a view with a Jitsi Meet URL; then you dismiss it and move on with whatever else your application was doing.

So, if we look back, how did we do? We started with this, which is what we wanted to do. I'm happy to say that we are reusing 100% of our lib-jitsi-meet source code, and I guesstimate that we're actually sharing 85% of the UI code between web and mobile. To give you a quick example of how this materializes in the UI: this component down here with the controls, and that component over there with the controls for the call, are both the exact same thing, called Toolbox. Only at the very end is the rendering done with React Native views on mobile or with the DOM on the web. Everything else, all the logic to keep the state, all of that is shared. (There's a simplified sketch of this pattern at the end.)

So, in our humble opinion, React Native is ready if you want to choose it for your next adventure going mobile. And if you don't want to do it all yourself, because your application is about some other kind of business and you just want to add video to it, use the Jitsi Meet SDK. I'm happy to help you out if you have any problems with it. And if you have questions: I've run out of time, so meet me in the hallway. Thank you.
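For reference, here is the Toolbox sharing pattern as a simplified sketch. This is not the actual Jitsi Meet source; the class and file names just mirror the idea. The state logic lives in a shared component, and each platform ships only a render method, with the bundler picking Toolbox.native.js on mobile (and the web build configured to prefer Toolbox.web.js):

```javascript
// AbstractToolbox.js -- shared between web and mobile; all the state
// logic lives here. Simplified sketch, not the real Jitsi Meet source.
import React, { Component } from 'react';

export class AbstractToolbox extends Component {
    state = { audioMuted: false };

    _toggleAudio = () => {
        this.setState(({ audioMuted }) => ({ audioMuted: !audioMuted }));
        // ...and ask the conference layer to mute the real track.
    };
}

// Toolbox.native.js -- mobile: only the rendering differs, emitting
// React Native views. The bundler resolves the .native.js file on mobile.
import { Text, TouchableOpacity } from 'react-native';

export class Toolbox extends AbstractToolbox {
    render() {
        return (
            <TouchableOpacity onPress={this._toggleAudio}>
                <Text>{this.state.audioMuted ? 'Unmute' : 'Mute'}</Text>
            </TouchableOpacity>
        );
    }
}

// Toolbox.web.js would render the same state with DOM instead:
//     render() {
//         return <button onClick={this._toggleAudio}>
//             {this.state.audioMuted ? 'Unmute' : 'Mute'}
//         </button>;
//     }
```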