Today I'm going to talk about Android development with WebRTC. The summary of the talk, basically, is that we are going to look at the options we have for integrating WebRTC into an Android application, and then we are going to have a live demo of how to use WebRTC through the Java APIs and bindings that it provides.

So let's start with the options. The options here are from the point of view of using WebRTC directly. If you are using another language or another framework, in the end it is going to use one of these three options underneath; maybe you are not touching them yourself, but the framework will. We have the Android WebView, the native Java API for WebRTC, and the C++ API.

Let's start with the Android WebView. The Chrome-based WebView was added to Android some time ago; it replaced the old WebView that was based on WebKit, and it has WebRTC support. It was introduced in Android 4.4, but with a very old version of Chrome that didn't support WebRTC well, so in practice it's only usable on Lollipop and higher. The good thing is that on new devices it's updated very frequently, with every version of Chrome. So it's one of the options, but it has some limitations. The problem with working only on Lollipop is that you are leaving out a lot of devices: Lollipop and newer versions are only around 50% of the Android market share. The other thing is that the WebView is an external component on Android, updated outside your application. Maybe that's a good thing, but it can also break your app: you don't control when the WebView is updated, so people may run your application on a WebView version you have not tested, or they may never update the WebView, and you cannot force them to. And finally, all the video views live inside the WebView; everything is contained in it. So if you want to mix the video views with other native Android UI components, that is not easy to do.

As an alternative to the system WebView, the most popular option is Crosswalk, an Intel open source project that compiles Chromium and provides the build for you to use as your WebView. The pros are that it supports Android 4.0, so it works on more devices; you embed the binary inside your application, so you decide when to update, which can be a good thing depending on your use case; and you always have the latest version of Chrome. The cons are that, since you embed the binary inside your application, the binary size increases a lot, because Chromium is a very big project. And controlling updates manually is both a good and a bad thing: if there is a security issue or something like that, Google can force you to update your application at some point, or your application can be removed from Google Play.
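Before moving on from the WebView family of options, here is a minimal sketch of what hosting a WebRTC page in a Chrome-based WebView looks like on Lollipop and newer. This is not part of the sample application; the layout id and URL are placeholders, and a real application should check the request origin instead of granting permissions blindly:

```java
// Inside an Activity, after setContentView(). Requires API 21+.
WebView webView = (WebView) findViewById(R.id.web_view); // placeholder id
webView.getSettings().setJavaScriptEnabled(true);
webView.getSettings().setMediaPlaybackRequiresUserGesture(false);
webView.setWebChromeClient(new WebChromeClient() {
    @Override
    public void onPermissionRequest(final PermissionRequest request) {
        // getUserMedia() calls in the page surface here; granting everything
        // is only acceptable for pages you fully control.
        request.grant(request.getResources());
    }
});
webView.loadUrl("https://example.com/call"); // any WebRTC-capable page
```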
So let's go to the second option, the Java APIs. WebRTC already provides bindings for Java, so it's something you can use today. The good thing is that all the video is rendered using native views, so you can integrate the video views with the rest of the Android UI; you can mix them and build your application however you want. And it's manually updated: you decide when to move to a new version.

The problem here is that WebRTC is a big project. There are prebuilt packages available, and pristine.io was one of the most popular, but it's outdated now; they are not maintaining it anymore. The other option is to compile it from source, which is not as easy as it sounds (I think other people are going to talk about that today). For example, for Android it doesn't compile on Mac or Windows, so you need a dedicated box running Ubuntu to build it. It's not an easy thing to maintain. The other issue with the native Java API is that it's more complex than the JavaScript one, mainly because it's Java and the language is different, but there are also things specific to Android that make it a bit more complicated. Also, with this approach the binary size is bigger, much like embedding a WebView: for example, when you build something with pristine.io, you end up with around 20 megabytes of APK. Within the Java option you also have alternatives like TokBox and other platform providers: they offer a Java API that abstracts the whole PeerConnection API away, so you don't have to care about it, and they also do all the signaling work for you.

And the last option is the C++ API. WebRTC is written in C++, so you can access all of its APIs from C++. That makes sense if your code base is already in C++; that's maybe not a common situation, but for portability it's a good option. You still need Java access for capturing and rendering: on Android most of those APIs are in Java, so you need JNI to reach the camera. There are newer native APIs, but their availability depends on the Android version, so in the end it's complicated. The good thing is that you get maximum portability: the same C++ code can run on iOS, Android, desktop, and other platforms. But it's very complex to maintain: the C++ API is not as stable as the others, it keeps changing, and if you want to upgrade to a new version of WebRTC you may have to modify your code. So it's hard to maintain.

Now let's start using the Java API to create an application. This is the fun part, I think, at least for me. For the setup, we create a single-activity application in Android Studio. To start using WebRTC we need to decide on a signaling mechanism. You can use WebSockets, PubNub, whatever you want; you could even use SMS. It doesn't matter, but in this example I'm going to use Socket.IO, because it's very easy to set up a server with it. To get WebRTC itself, the easiest way for the example is to use a prebuilt package like pristine.io: you add one line to your Gradle file and you already have all the APIs needed to use WebRTC. The problem, as I said, is that it's not updated; the last version is from December last year, so that may be an issue. And don't forget to add the permissions needed to access the camera, the internet, and the microphone.
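For reference, the Gradle line and the manifest permissions look like this; the revision number is the one commonly cited as pristine.io's last release, so double-check it before relying on it:

```groovy
// app/build.gradle: the single line that pulls in the prebuilt WebRTC library.
dependencies {
    compile 'io.pristine:libjingle:11139@aar'
}
```

```xml
<!-- AndroidManifest.xml -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
```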
And with that, we can start with the WebRTC initialization. WebRTC is C++ code, so, as I was saying before, it needs access to the Java APIs. There is this initialization method where you pass the context, so the native code can reach the hardware APIs, and where you decide whether you want to use audio and video and whether you want hardware acceleration. This is something the C++ side needs in order to access the Java APIs. With that done, you create the PeerConnectionFactory, which is the object used to create peer connections.

The next step is the video capture. The good thing is that WebRTC already provides everything needed to start capturing, so you don't have to learn the camera APIs or anything like that; it's as easy as two lines. You get the name of the front-facing device, and you create the video capturer using that name. They already provide the implementations on top of the camera API, so you don't have to deal with it. Also, a few weeks ago they added the possibility of creating a video capturer that does screen sharing, so you can share a view of the application itself; there are some interesting use cases for that. But it's not available in the pristine.io build yet, because, as I said, they are not maintaining it.

The next step is to attach this video capturer to something. In WebRTC we have the concept of a media stream, which is what we use to send media to the other peer. We need to add two tracks to the stream: an audio track and a video track. When we create the video track, we set up its video source using the video capturer we created before. After that, we create the audio source and the audio track, and we have our local media stream.

With this local media stream we can start showing a preview of the local video in the application. For rendering, both the local preview and the other peer's video, there are several options available in WebRTC. They already provide these two: GLSurfaceView and SurfaceViewRenderer. The difference between them is that the GLSurfaceView is a single common view shared by all the renderers in the same conference: you can overlap videos there, but you have to add the renderers in the order you want them stacked on screen, and all of them share the same view. Maybe that's OK for some applications. The SurfaceViewRenderer, on the other hand, uses a different view for every video, so you can place the views in the layout any way you want. This last one is more flexible, but SurfaceViews on Android have layout issues: because of the way SurfaceView is implemented, they are not really views, they are more like windows layered over the real window, so you can run into layout problems. That's why WebRTC also provides all the APIs needed to create your own renderer: if you have issues with either of these, you can build your own renderer using a TextureView, or, if you have a game for example, you can integrate the video frames directly into it. You have the possibility to do that.

In this sample we are going to use the GLSurfaceView. We get the view from the layout, and then we create the renderers: the other peer's renderer first, covering the whole GLSurfaceView, and then our preview renderer, covering only a small square of the view.
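Condensed, the pipeline from initialization to preview looks roughly like this with the libjingle-era Java API. Signatures moved around between revisions (initializeAndroidGlobals in particular changed arity several times), so take this as a sketch of that era rather than exact code; the track and stream labels are arbitrary strings, and every class comes from the org.webrtc package:

```java
// 1. Global init: hands the C++ core a Context so it can reach the Android
//    hardware APIs, and enables audio, video and hardware acceleration.
PeerConnectionFactory.initializeAndroidGlobals(
        this /* Context */, true /* audio */, true /* video */,
        true /* hardware acceleration */);
PeerConnectionFactory factory = new PeerConnectionFactory();

// 2. Video capture: WebRTC ships the camera implementation, two lines suffice.
String cameraName = VideoCapturerAndroid.getNameOfFrontFacingDevice();
VideoCapturerAndroid capturer = VideoCapturerAndroid.create(cameraName);

// 3. Local media stream: one video track and one audio track.
VideoSource videoSource =
        factory.createVideoSource(capturer, new MediaConstraints());
VideoTrack localVideoTrack = factory.createVideoTrack("VIDEO", videoSource);
AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
AudioTrack localAudioTrack = factory.createAudioTrack("AUDIO", audioSource);
MediaStream localStream = factory.createLocalMediaStream("STREAM");
localStream.addTrack(localVideoTrack);
localStream.addTrack(localAudioTrack);

// 4. Rendering on the shared GLSurfaceView: the remote renderer covers the
//    whole view, the preview a small square on top (x, y, w, h in percent).
GLSurfaceView videoView = (GLSurfaceView) findViewById(R.id.gl_surface_view);
VideoRendererGui.setView(videoView, null /* EGL-context-ready callback */);
VideoRenderer.Callbacks remoteRender = VideoRendererGui.create(
        0, 0, 100, 100, RendererCommon.ScalingType.SCALE_ASPECT_FILL, false);
VideoRenderer.Callbacks localRender = VideoRendererGui.create(
        70, 5, 25, 25, RendererCommon.ScalingType.SCALE_ASPECT_FILL, true);
localVideoTrack.addRenderer(new VideoRenderer(localRender));
```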
And once we add the renderer to the local video track, we start seeing our preview. So with that, we have half of the work done, I think.

The next thing is to create the peer connection. It's not tricky, but there is one thing to watch out for: in the examples you usually see, when people create the peer connection they add some STUN server from Google. In a real deployment, though, you need to provide your own TURN and STUN servers. If you don't, the peer connection may work on the local network, but it is very likely not to work outside it, because firewalls and NATs often have issues connecting one peer to the other. This is very important, and it's where third parties matter: for example, at TokBox we provide all of this so you don't have to deploy your own. So we create the peer connection using the PeerConnectionFactory we created before, and we add our local stream to it. That way, when the peer connection connects to the other peer, they will see our video.

And here the SDP negotiation starts. The SDP negotiation is the same as in JavaScript; it's just a bit more verbose because of the way you have to do it in Java: you have to implement the PeerConnection observer and the SDP observer, the listeners that notify you when something happens in the peer connection. In our sample code we send the SDP over Socket.IO, which is our signaling channel. And this is the last step to have media flowing. Remember also, when you get the stream from the other peer, to add a renderer to it so you can see the remote video.

This is the diagram of how the SDP negotiation works. It seems very complicated, but it's not so much. Both clients connect to the server using Socket.IO, and the server sends the "create offer" message, the start message, to one of them. That client, using the PeerConnection API, creates the offer and sets the local description on its peer connection object. That generates the offer SDP, which is sent to the other peer; the server just relays the messages. Client two, when it receives the message, sets the remote description and creates the answer, which travels the other way. After that, both sides have the information about the codecs of the other side. During this process there are also other things called candidates: whenever the SDP negotiation starts, the peer connection automatically starts working out which IPs you have. The candidates are the options for connecting to your host, so if you have several TURN servers you will see more candidates here, because there are more ways to connect to you. And at the end of this process, we have media established.
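On the client side, the peer connection setup plus the offer leg of that diagram might look roughly like this. SimplePeerObserver and SimpleSdpObserver are hypothetical helper base classes that stub out the PeerConnection.Observer and SdpObserver callbacks we don't use, sendOffer() and sendCandidate() stand in for the Socket.IO plumbing, remoteRender is the full-view renderer from the previous sketch, and the TURN entry is a placeholder for your own deployment:

```java
// ICE servers: a public STUN server plus your own TURN server (placeholder).
List<PeerConnection.IceServer> iceServers = new LinkedList<>();
iceServers.add(new PeerConnection.IceServer("stun:stun.l.google.com:19302"));
iceServers.add(new PeerConnection.IceServer(
        "turn:turn.example.com:3478", "user", "secret"));

final PeerConnection pc = factory.createPeerConnection(
        iceServers, new MediaConstraints(), new SimplePeerObserver() {
            @Override
            public void onIceCandidate(IceCandidate candidate) {
                sendCandidate(candidate); // relay to the other peer
            }

            @Override
            public void onAddStream(MediaStream stream) {
                // Remote media arrived: attach the full-screen renderer.
                stream.videoTracks.getFirst()
                        .addRenderer(new VideoRenderer(remoteRender));
            }
        });
pc.addStream(localStream); // so the other peer will see our audio and video

// The offer leg: create the offer, set it locally, ship it over signaling.
pc.createOffer(new SimpleSdpObserver() {
    @Override
    public void onCreateSuccess(SessionDescription sdp) {
        pc.setLocalDescription(this, sdp);
        sendOffer(sdp); // the server relays it to the other client
    }
}, new MediaConstraints());
```

The answering side is symmetric: on an incoming offer you set it as the remote description, call createAnswer, set the result as the local description, and send it back.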
The server is something that, if you are doing Android development, you are maybe not used to writing, but it's as easy as this: a Socket.IO server, only 30 lines of code, without any kind of logic. When there is an offer message, it relays it to the other peer; the same for the answer and for the candidates. And to the first client, it sends the "create offer" message to start the process. So the server is very easy, and the application is also very easy and very small: 260 lines of code for the Android application, which is the minimum needed to have video working. There is no error handling or anything else, but it's something that is easy to understand. And the server is very small, as you have seen. You can find all the source code, which I uploaded to GitHub, so feel free to use it and to test it.

And some Android tips to finish. First, the binary size of an application embedding WebRTC is very big, so I recommend using the APK split mechanism, creating a different APK for every architecture; size is important for the final application if you're building something commercial.

Remember to stop the camera and the microphone. This is something WebRTC does not do for you: it doesn't have access to the application lifecycle events, which are usually handled by the activity. So you have to take care of stopping the camera when you go to the background, or stopping the microphone when you receive a phone call. These are important things to remember.

Audio routing is something that seems easy but isn't: handling a wired or Bluetooth headset being connected is not trivial. If you look at the implementation in the AppRTC application, the sample code that ships with WebRTC, you'll see there are a lot of edge cases; it's better to look at an implementation like that and do something similar.

And finally, if you want to try newer codecs like VP9 or H.264: by default WebRTC uses VP8, and there is no easy way to select another codec, so you have to modify the SDP and reorder the codecs there to use one or the other. Maybe in the future this will be easier, but right now it's a bit complicated.
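A few of those tips are concrete enough to sketch. Assuming the same sample application as above, the APK split configuration, the camera lifecycle handling, and the codec reordering could look roughly like this; the codec helper in particular is just an illustration, it's not in the sample code:

```groovy
// app/build.gradle: one APK per ABI, so each device downloads only the
// native WebRTC libraries it actually needs.
android {
    splits {
        abi {
            enable true
            reset()
            include 'armeabi-v7a', 'x86'
            universalApk false
        }
    }
}
```

The old libjingle VideoSource exposed stop() and restart() for exactly the background case:

```java
// In the activity: release the camera when backgrounded, resume on return.
// videoSource is the VideoSource created when setting up the local stream.
@Override
protected void onPause() {
    super.onPause();
    if (videoSource != null) {
        videoSource.stop(); // frees the camera for other applications
    }
}

@Override
protected void onResume() {
    super.onResume();
    if (videoSource != null) {
        videoSource.restart(); // re-opens the camera and resumes capture
    }
}
```

And the SDP-reordering trick: find the payload type that the rtpmap lines assign to the codec, then move it to the front of the m=video line before setting the description:

```java
static String preferVideoCodec(String sdp, String codec) {
    String[] lines = sdp.split("\r\n");
    String payload = null;
    for (String line : lines) {
        // e.g. "a=rtpmap:98 VP9/90000" -> payload type "98"
        if (line.startsWith("a=rtpmap:") && line.contains(" " + codec + "/")) {
            payload = line.substring("a=rtpmap:".length(), line.indexOf(' '));
            break;
        }
    }
    if (payload == null) {
        return sdp; // codec not offered; leave the SDP untouched
    }
    for (int i = 0; i < lines.length; i++) {
        // e.g. "m=video 9 UDP/TLS/RTP/SAVPF 100 98 96" -> payload goes first
        if (lines[i].startsWith("m=video")) {
            String[] parts = lines[i].split(" ");
            StringBuilder m = new StringBuilder(
                    parts[0] + " " + parts[1] + " " + parts[2] + " " + payload);
            for (int j = 3; j < parts.length; j++) {
                if (!parts[j].equals(payload)) {
                    m.append(' ').append(parts[j]);
                }
            }
            lines[i] = m.toString();
        }
    }
    return android.text.TextUtils.join("\r\n", lines) + "\r\n";
}
```

And that's all for today. Thank you.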