Hi, everyone. So indeed, if you've seen us in previous talks at the RTC conference, then you've likely heard us praise the merits of video routing against media mixing, or how the benefits of SFUs are so great compared to MCUs. One thing, strangely enough, that we don't talk enough about is the other side of all this. It's another component of our ecosystem that is just as important, and that's our video client, Jitsi Meet. So Jitsi Meet is a mature video conferencing client that connects to and uses the Jitsi Videobridge. And people use it for a bunch of things: meetings, one-on-ones, global all-hands and all-staffs. It's, again, a very rich application. It was missing one thing, though, or rather it was until recently. As Chad pointed out at the beginning of the session, StatCounter (you've probably all seen this) had this stat about how, in October, for the first time ever, more people connected to the internet through mobile devices than through desktops. And that kind of puts things in perspective; it makes you wonder, do I have the right priorities? Well, we certainly did have mobile as a priority, and we were thinking about it in a way that many of you probably are. Let's get a little bit into that.

So Jitsi Meet, if you look a little bit deeply into it, is two main components. One is the user interface that you use and see. The other is everything that happens under the hood: XMPP session management, handling peer connections, munging SDP so that you get simulcast, participants joining and leaving. All of that is what we call lib-jitsi-meet. In our community, these two projects have stabilized at around 20,000 lines of code. Obviously, your mileage may vary, but just keep those numbers in mind as a reference.

So when it comes to mobile, again, we were thinking of just doing everything the way that everyone else does it. We thought, well, we'll just take the WebRTC stack and wrap it in some Java, and we'll have an Android application. Then we'll do the same with Objective-C, and we'll have an iOS application. And everything will be just peachy, or so we thought. So we started with a Java prototype application for Jitsi Meet. We were at about 10% of feature parity (we hadn't even gotten into all of the battery savings and native encoders, none of that), and we were already at 9,000 lines of code, which is half the code that we already had for desktop. And then Comcast did a similar exercise for iOS. They were a little bit more mature, at about 15% parity, and already had almost 20,000 lines of code at that early stage. So that was disheartening, because basically it meant that either we had to split our team in three, completely destroy our development pace, and do almost nothing new for a number of years, or we had to go to Atlassian management and beg for triplicating our team and hope that we didn't get laughed out of the room.

So we started looking at other alternatives, and one thing that very quickly started looking promising was the whole React ecosystem. So how many people here are familiar with React? About half. For those of you who know React, you probably already know that one of its most characteristic traits is the fact that you're manipulating a virtual DOM. You're not actually touching the HTML that goes into the browser; you manipulate a tree that's only used by React itself, and React is then responsible for translating that tree into something that the browser can render.
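To make that idea concrete, here's a tiny, generic React sketch (illustrative, not actual Jitsi Meet code). The component only ever creates virtual elements; React diffs successive virtual trees and decides what actually needs to change in the real DOM.

```javascript
import React from 'react';

// Rendering produces virtual DOM elements, never real browser nodes.
// React compares the new virtual tree against the previous one and
// applies only the minimal changes to the actual DOM.
function Participant({ name, isSpeaking }) {
  return React.createElement(
    'div',
    { className: isSpeaking ? 'participant speaking' : 'participant' },
    React.createElement('span', null, name)
  );
}

export default Participant;
```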
Now, when you look at that diagram, obviously everyone's thinking: well, if I have such a separation, I could probably just replace this part with something else, like a mobile interface. And it's not such an original idea; the React community actually went along and did exactly that. This is how React Native was born. The concept of React Native is that you write your user interface in JavaScript, in a virtual DOM, and that is then rendered into native components, native views, on Android and on iOS.

Now we get to the interesting part. This whole thing about writing the user interface in JavaScript is obviously great, but the really great thing about React Native is that you actually get to use a full-blown JavaScript engine with great native integration. This is really where React Native stands out. And keep in mind that this is not a Cordova WebView-style browser; this is just a JavaScript engine. So you have JavaScript, and that's about it by default. You don't have things like WebRTC. So you're thinking, well, what good is that? Well, this is where React Native modules come in. React Native modules are a way that React Native gives you to use native functionality on different devices, things like: I'd like to make my phone vibrate, I'd like to show an alert, or I'd like to get some geolocation information. All of this happens through React Native modules. Most importantly, while most of them are provided by Facebook themselves, there's a bunch out there provided by the community. And specifically, react-native-webrtc, a project by Harold Young, is something that gives you all the favorite objects and functions that you need in order to build WebRTC applications: peer connections, media streams, media stream tracks. All of this is available with the react-native-webrtc module. So this is pretty big.

We took react-native-webrtc and plugged it in. It took a little bit of pushing and shoving, and a lot of PRing back to react-native-webrtc, but eventually we got it to work with lib-jitsi-meet. We added a thin UI layer on top of all this, and we had it: an application that was using exactly the same code to run on desktop, Android, and iOS. And that was pretty big for us. So we were quickly feeling very, very optimistic. And this great start wasn't surprising at all, because we were harnessing the power of multiple potent technologies. But truth be told, those are young technologies, so you kind of expect bumps and dips in the road. And that's what we hit when we actually ran our application on mobile. There were problems. So let us just give you fair warning now, on a few different topics, about what you can expect.

One thing we do in our desktop app is take great advantage of peer connection WebRTC statistics. We use them in multiple ways, and you probably do as well. You get the classic one, where you just submit them to callstats.io, or you process them locally and do things like we do with audio levels. So in our UI, we examine the statistics that WebRTC gives us. This is information about what happens over the network; WebRTC collects it, and from time to time we can go and ask for it. That's what we do with audio levels: we poll peer connection stats.
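To give a feel for what this looks like in practice, here's a minimal sketch. The STUN server is a placeholder, and the exact module surface (addStream versus addTrack, the shape that getStats returns, the stat field names) has varied across react-native-webrtc versions, so treat this as illustrative rather than as our production code.

```javascript
import { RTCPeerConnection, mediaDevices } from 'react-native-webrtc';

// The same WebRTC objects we know from the web, provided by a React
// Native module instead of the browser.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.example.org' }]  // placeholder server
});

async function start() {
  const stream = await mediaDevices.getUserMedia({ audio: true, video: true });
  pc.addStream(stream);  // signaling and offer/answer omitted for brevity

  // Poll the peer connection statistics periodically, the way our UI
  // does for audio levels.
  setInterval(async () => {
    const stats = await pc.getStats();
    stats.forEach(report => {
      if (report.type === 'inbound-rtp' && report.audioLevel !== undefined) {
        console.log('audio level:', report.audioLevel);
      }
    });
  }, 1000);
}
```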
Now, we've introduced you to this world of React Native, and a very big piece of it is the part that connects the JavaScript world with the native world. That's called the native-to-JavaScript bridge. So when, from our JavaScript source code, we go and call the methods that we know, like getStats, and we want that to end up in the native WebRTC implementation, it's going to go through this bridge. And we saw that these audio levels were pretty slow, so we did some profiling. And the profiling revealed that it was the native-to-JavaScript bridge that is inherent to React Native that was the bottleneck.

A few details. WebRTC stats are a big object, especially for us, because we have multiple participants; we're supporting conferences, so we can have 10, 20 people, really an unlimited number. So this object gets pretty big. And we have to walk it, because it's like a tree of objects, and convert it into structures that React Native knows. Then React Native drives this known structure to the JavaScript side and gives us a JavaScript object. And we discovered that, in the case of such big native objects, it's actually better to just walk them and build one very long string, a JSON string, have this single object pass through the bridge, and then go and do JSON.parse in JavaScript, and you still get better results, like five times better. So if you decide to go this way, keep this idea in mind.

While we were looking at what we thought was a React Native problem, we were kind of surprised to identify an inconsistency in WebRTC itself. It's briefly mentioned in the source code as a legacy thing. In the web specification for WebRTC, some of the statistics are numbers, which is natural: bytes sent or received, you wouldn't think about them as strings. But in the Java and Objective-C APIs, what you get is a key-value thing with strings on both sides. We're kind of safe, because JavaScript is going to step in and do the conversion from strings to numbers when we consume this data. But people, at least some in the JavaScript community, frown upon that, because it's an implicit coercion of types. So keep it in mind in case there are some unexplained problems.

A big disadvantage, I would say, of React Native is that this bridge that I described is serializing but, most importantly, asynchronous. So your communication between JavaScript and native is asynchronous, and naturally, you can't just return a simple value from your native code to the JavaScript. The way it was implemented seems like kind of a step back, because the JavaScript community made a lot of strides to solve their callback hell; they introduced promises and such. React Native brings those callbacks back a bit, but it contains them in the native modules, so in your JavaScript source code you can still go and do the modern thing with promises.
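To tie the last few points together, here's a minimal sketch of the pattern we ended up with. WebRTCModule, peerConnectionGetStats, and the report shape are illustrative assumptions for this example, not the actual react-native-webrtc internals: the native side walks the stats tree once and sends a single JSON string across the bridge, and the JavaScript side wraps the callback in a promise, parses the string, and converts values to numbers explicitly.

```javascript
import { NativeModules } from 'react-native';

// Hypothetical callback-style native module method; the real module
// and method names may differ.
const { WebRTCModule } = NativeModules;

function getStats(peerConnectionId) {
  // Wrap the callback-based bridge call in a promise, so the rest of
  // the JavaScript code stays in the modern promise-based world.
  return new Promise((resolve, reject) => {
    WebRTCModule.peerConnectionGetStats(
      peerConnectionId,
      json => {
        // The native side sent one big JSON string; a single string
        // crosses the bridge far faster than a deep tree of objects.
        const reports = JSON.parse(json);

        for (const report of reports) {
          for (const key of Object.keys(report.values)) {
            // Native stats come as string key-value pairs; convert to
            // numbers explicitly instead of relying on implicit coercion.
            const value = Number(report.values[key]);
            if (!Number.isNaN(value)) {
              report.values[key] = value;
            }
          }
        }
        resolve(reports);
      },
      error => reject(new Error(error))
    );
  });
}
```

Passing one flat string instead of a deep object tree is what gave us the roughly five-times improvement mentioned a moment ago.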
This one was mentioned, but I would like to stress it a bit. In React Native, the React world, which about half of the people here said they didn't know, you describe a tree, like an HTML tree. It's not really HTML, but let's say that. And in React Native, when you transfer this idea, it becomes a tree of views. So you're kind of free to build a broad or a deep tree of views, and you can place the videos that you're rendering of the people in your video call, if, like us, you have multiple ones, at different nodes in this tree. For example, the one speaking could be high in the tree, and the remote participants could be lower in the tree, so that they appear in front of the person who's talking, as small thumbnails. And when we're trying to support this in a third-party module such as react-native-webrtc, where we're contributing, we try to be generic there. But we have to remember that, even though we're free to put our views however we want, on Android there are generally just three layers in which you can place video. One is behind the application, which cuts a hole in the application: you just have an OpenGL surface. Then Android knows about a second layer, which you can put above this. And then you get a top layer on which you can put stuff. But your videos are going to be either behind your UI or in front of it, so if you want to provide some decorations, it's going to be difficult.

And again, because we use React Native and we have a bridge between JavaScript and native, we have to do some interesting and advanced stuff with collections. That's where I discovered another weak point of WebRTC: WebRTC goes and creates multiple Objective-C instances for one and the same underlying media stream or media stream track instance. And because they don't have value equality, like an equals or a hash, you kind of can't simply use them in collections and expect the right behavior, and it's nowhere in the public documentation. The biggest problem for us, of course, was that we just lost Apple's popular associated objects support. We feel that this is a problem that's more easily solved in WebRTC itself, because third-party developers like us have to add much more complicated logic in order to deal with it.

And we'd like to alert you that when you go and use the C++ API, the Java API, or the Objective-C API, you're kind of choosing between three different things. There are inconsistencies between them, which may force you, as cross-platform developers, to have one problem but go and solve it in two ways. That's what happened to us, because the data channel was missing an ID in the Java API, while we all know it's available in the C++ API, and it's in the Objective-C one. Thankfully, at the beginning of this month it was added to the Java API, so everything's cool again. But there may be other inconsistencies. So before you decide, OK, I'm going to do this thing on the two platforms in one single way, maybe go and check whether it's supported in the two different APIs.

And as I mentioned, we want to be cross-platform not only on mobile itself, across the two platforms, but we also want to have the same JavaScript code. There's a little detail that needs to be mentioned here. These objects that we have on the native side in WebRTC are going to be managed automatically, by reference counting, garbage collection, whatever. But since we're representing them as JavaScript objects, we kind of have to force our developers to actually go and explicitly release some of them, like the media streams and the media stream tracks, especially those that come from getUserMedia.

And just one last piece of inconsistency between the native and the web APIs. On the web side, we're used to having a set of constraints, where you just pass strings to the WebRTC API, and one of them is going to be the optional video constraint facingMode. Strangely enough, on the native side you get support for some of the elements of this set, but for others, like facingMode, you have to go and find specific classes that are dedicated to the purpose, and call specific methods. So it's not one and the same thing everywhere.
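Putting those last two points together, here's a short sketch of how this looks from the JavaScript side. The facingMode constraint follows the web-style API that react-native-webrtc exposes, and the explicit release() call stands for the manual disposal we just described; check the module version you're using for the exact method names.

```javascript
import { mediaDevices } from 'react-native-webrtc';

let localStream;

async function startCamera() {
  // On the web, facingMode is just another constraint; from JavaScript
  // the same constraint shape works here, even though natively it maps
  // to dedicated classes and method calls.
  localStream = await mediaDevices.getUserMedia({
    audio: true,
    video: { facingMode: 'user' }  // front camera
  });
}

function stopCamera() {
  if (localStream) {
    // Unlike on the web, the native objects behind these JavaScript
    // wrappers must be released explicitly, or they leak.
    localStream.getTracks().forEach(track => track.stop());
    localStream.release();  // explicit disposal of the native stream
    localStream = null;
  }
}
```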
So all of this may sound like a lot, but you really shouldn't worry about it, for two main reasons. The first one is that we've already solved it in lib-jitsi-meet, so if you wanted to use the same thing, you could. Jitsi is open source; I didn't mention that, but I expect people to know it. And the second one is that we have to put in perspective what we're trying to achieve here. What we got out of this, very quickly, is an application that, for the most part, runs the same code on all the target operating systems we want to run on. Let's go back to where we started. For the lower layers, lib-jitsi-meet, we've been able to achieve 100% reusability across our platforms. There's still a little bit of work to do on the UI layer; we're currently running with a mobile-specific user interface. But still, it's 90% shared code between iOS and Android, which is already quite an achievement. And we have an objective to make the same ratio apply to desktop as well. We hope that we'll achieve it. We're not there yet, but we're certainly at a place where we can already highly recommend React Native. If you have a WebRTC project that currently only runs on the web, or you're planning to start a new one, then you should definitely consider React Native as one of the first options for mobile development. Thanks very much.