Hello, I'm Michael Pring, and I want to look at how to build WebRTC apps on iOS. First we will look at how to build an iOS WebRTC app using the AppRTC pod, then we'll talk about building your own pod and compiling your own WebRTC code, and then we will talk about PushKit and CallKit, which are frameworks from Apple that are meant to help in building VoIP applications.

What we will see is how to build an application similar to the example code in WebRTC. It connects to an AppRTC server. AppRTC is an open source project from Google that you can find on GitHub, and our application will connect to that server and be able to make a video chat with another client that is also connected to the server.

About the AppRTC pod: it's listed on CocoaPods, at cocoapods.org under pods/AppRTC, and it also has a GitHub repository, isbx/apprtc-ios. It contains the podspec, header files, and a demo application, and the WebRTC code in it is from January 2016. It's a pristine.io compilation of WebRTC.

So how do we use the pod? We create a project, for example we call it KrankyGeekDemo, and in the project directory, from the console, we run pod install, and it will install the AppRTC pod for us. What we get is a workspace where, in the upper part, you can see the files for the Kranky Geek demo, and you also have a Pods project containing the code from the AppRTC pod, which actually has two dependencies: one is libjingle_peerconnection from pristine.io, and the second one is the SocketRocket library. This is the pod itself. What's important to look at here is, first, the GitHub path for the AppRTC pod, and you can also see the frameworks that are needed in order to build an application using that pod, the iOS frameworks and the native libraries you will compile with, and the two dependencies, libjingle_peerconnection and SocketRocket.

Now to the code. These are the key ingredients in the application. In our app delegate, we have to initialize the SSL library that comes with WebRTC. The SSL library is used so that we can talk to our server in a secure way. So on initialization we call initializeSSL, and upon termination we call deinitializeSSL.

When we start the application, what we will see is the main view, where we input the room number to which we want to connect. When we press Start Call, we start connecting, and in our code what we do is create an ARDAppClient. This is an object that knows how to talk to an AppRTC server; it's part of WebRTC's example code, and we tell it to connect to our room. The client knows that we are its delegate, so it will call us back with methods for its events and notifications.

A little bit on the anatomy of our application. We have a call view controller, and that call view controller has a video view with two subviews, one for the remote video and the other one for the local video. And this is the process that will happen: our call view controller asks the ARDAppClient to connect to the AppRTC server. It sends a connect request, and when it does that, it also starts the camera, and we get a callback from it that a local video track exists. After it is connected, video will start streaming from the network, and it will give us another callback about the remote video track.
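In code, the pieces just described look roughly like the sketch below. This is not the actual demo code: it assumes the pod's Objective-C classes (RTCPeerConnectionFactory, ARDAppClient, RTCVideoTrack, RTCEAGLVideoView) are exposed to Swift through a bridging header, and the exact bridged method names can differ between pod versions.

```swift
import UIKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        RTCPeerConnectionFactory.initializeSSL()   // set up WebRTC's SSL layer once at startup
        return true
    }

    func applicationWillTerminate(_ application: UIApplication) {
        RTCPeerConnectionFactory.deinitializeSSL() // clean up on termination
    }
}

class CallViewController: UIViewController, ARDAppClientDelegate {
    var client: ARDAppClient?
    let localVideoView = RTCEAGLVideoView()
    let remoteVideoView = RTCEAGLVideoView()

    func startCall(roomId: String) {
        // We are the delegate, so the client calls us back with events and tracks.
        client = ARDAppClient(delegate: self)
        client?.connectToRoom(withId: roomId, options: nil)
    }

    // The camera is opened by WebRTC; we only attach the resulting track to our local view.
    func appClient(_ client: ARDAppClient, didReceiveLocalVideoTrack localVideoTrack: RTCVideoTrack) {
        localVideoTrack.add(localVideoView)   // addRenderer: in the pod's Objective-C header
    }

    // Once remote media starts flowing, attach the remote track to the remote view.
    func appClient(_ client: ARDAppClient, didReceiveRemoteVideoTrack remoteVideoTrack: RTCVideoTrack) {
        remoteVideoTrack.add(remoteVideoView)
    }

    // (The other ARDAppClientDelegate callbacks, for state changes and errors, are omitted here.)

    func hangUp() {
        client?.disconnect()
    }
}
```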
For the camera, WebRTC will open the camera and create an RTCVideoSource for it, and then it will use that to create a local video track and give us the callback, so we can store it and point it at our local video view for rendering. This way, the camera image shows on our local view. For the remote side it's similar: an RTCVideoSource is created for the remote video, a remote video track is also created by WebRTC, we get the callback from the ARDAppClient, and we attach the track to our view and show the remote side's video. So this is how it's done in code: didReceiveLocalVideoTrack is where we attach the local view to the camera's video track, and didReceiveRemoteVideoTrack is where we attach the view to the remote video track. There is more on video in Chris Sigelton's session right after me. For hanging up, we just call the client's disconnect function and clean up after ourselves.

So this is how to use the AppRTC pod. But what if you want to make your own pod? There are several reasons you may want to do that. First, you want to use the latest code from WebRTC. Second, you want to make changes to WebRTC and customize some things; for example, later I will talk about CallKit, and in order to work with CallKit you can't use WebRTC as is, you have to make some changes that are not yet in the code, so you will want to customize WebRTC. And maybe you want to use a different server: you don't want to use the AppRTC server, you have your own server, your own protocol, you want to work in a different way. So you want to make your own pod.

What I did was take AppRTC from GitHub and clone it, under RRKalper/apprtc-ios, and then I changed it to use WebRTC from last month; it was the latest code when I prepared the pod. It's not listed on CocoaPods, so in my Podfile I put the GitHub path instead of just putting the name of my pod. And then I built the WebRTC code.

So how did I do that? There is a link, on webrtc.org under native code / iOS, where you can find an explanation of how to build WebRTC for iOS. First you get the prerequisites, which is to install depot_tools and get the latest Xcode. Then you fetch the code using the fetch command, which is part of depot_tools, specifying that you want webrtc_ios, you run gclient sync, and that's it: you have the WebRTC code on your computer. It's important to disable Spotlight indexing on the directory where you put it, because building takes a long time, and if you disable Spotlight it will be a little faster.

I built for each architecture, ARM 32-bit and ARM 64-bit. I didn't bother with the simulator; the main reason was that the simulator does not have a camera, so it was not interesting. So for each architecture I built WebRTC, and what happens when you build WebRTC? WebRTC is made of many modules, each module has its own static library, and it's a nightmare to link all of them into an application. So I took them and put them all into one big file, using libtool, and for the release version I also stripped all the symbols, which reduced the file size by a factor of 10. Then, using lipo, I took all the architectures and made one file that I can use with Xcode. Finally I copied all the relevant header files and the compiled library into my pod. The scripts that show how this is built are in the scripts directory of my GitHub repository.
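As a rough illustration of the Podfile change just mentioned, pointing at a fork on GitHub instead of the published pod could look something like this (the pod name and URL are placeholders based on the repository mentioned above):

```ruby
# Hypothetical Podfile: pull the AppRTC pod from a GitHub fork instead of CocoaPods.
platform :ios, '9.0'

target 'KrankyGeekDemo' do
  pod 'AppRTC', :git => 'https://github.com/rrkalper/apprtc-ios.git'
end
```

And the fetch-and-merge steps described above look roughly like this. This is a simplified sketch: the actual build invocations and output paths depend on the WebRTC revision, and the real scripts are in the repository's scripts directory.

```sh
# Get the code with depot_tools (fetch and gclient are part of depot_tools).
fetch --nohooks webrtc_ios
gclient sync

# After building each architecture, merge the many per-module static libraries
# into one big library per architecture (output paths here are placeholders).
libtool -static -o libWebRTC-arm64.a out/Release-arm64/*.a
libtool -static -o libWebRTC-arm.a   out/Release-arm/*.a

# For the release build, strip symbols (roughly a 10x size reduction).
strip -S libWebRTC-arm64.a
strip -S libWebRTC-arm.a

# Combine the architectures into a single fat library that Xcode can link.
lipo -create libWebRTC-arm64.a libWebRTC-arm.a -output libWebRTC.a
```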
So if you want to look and see how this magic is done, you can find it in that repository.

What did we see until now? We looked at how to build an iOS WebRTC application using the AppRTC pod, and then we talked about how to build your own pod and compile WebRTC from scratch. Next we're going to talk about PushKit.

So why do we need PushKit, and what is it? In order to get an incoming call in a VoIP app, you have to be connected to your server. But the problem with being connected to your server is that you have to listen all the time for things that come from your server, and the main issue with this is that it drains your battery. You lose battery all the time, even if you do nothing. The second problem, which you can't overcome, is that on iOS this was done using a thing called the VoIP socket. It was deprecated in iOS 9, and in iOS 10 it no longer works at all. And Apple introduced VoIP push in iOS 8.

So how does VoIP push work? We have two clients here, and when they start working, both clients get a token from the operating system which identifies them in APNs, Apple's Push Notification service. They take this token and tell the server: listen, I'm client X and this is my token. So from now on, when the server wants to talk to client X, it knows which token to use with APNs. Now client 1 wants to send a message to client 2 and wants it to answer a call, so it sends a message to the server: call client 2. The server looks up client 2, finds its token, and tells APNs: I want you to send a push with an incoming call to the client identified by this token. APNs looks at the token, says OK, I know this client, it's located at this IP and this port, and sends the push to the client. The client gets the push, wakes up, and can answer the call.

So if you want to use PushKit, you have to prepare your application for that. In Xcode, first of all, you need to add the Voice over IP background mode, and, as usual with Apple, everything is a bit of a headache: you have to create an iOS VoIP Services certificate and build your application with that certificate.

Now a little code in your app delegate. You import PushKit, and when your application finishes launching you do the VoIP registration. This is done by creating a PKPushRegistry object and telling it that the desired push type we are going to use is the VoIP push type. The main advantage of the VoIP push type is that it has a very high priority in APNs, so you will get it with minimal latency, as fast as possible.

Then, handling a credentials update: a credentials update is where iOS updates your token. When your application starts, or whenever iOS changes the token, you will get a callback in your app delegate, didUpdatePushCredentials, and when you get that callback you can take the token out of the credentials and update your server: tell the server, now my token is this. And when you get an incoming push notification, you will get didReceiveIncomingPushWithPayload, where you get a PKPushPayload. The payload has a special field, a UUID, which identifies the push transaction in the system, and we'll soon see, when I talk about CallKit, how to use that.
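A minimal sketch of that PushKit registration and the two delegate callbacks might look like this (Swift 3 era; the exact Swift spellings of the delegate methods vary slightly between SDK versions, and updateServer, reportIncomingCall, and the "uuid" payload key are hypothetical placeholders, not part of PushKit):

```swift
import UIKit
import PushKit

class AppDelegate: UIResponder, UIApplicationDelegate, PKPushRegistryDelegate {
    var window: UIWindow?
    var voipRegistry: PKPushRegistry?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        // Register for VoIP pushes; APNs delivers them with high priority.
        let registry = PKPushRegistry(queue: DispatchQueue.main)
        registry.delegate = self
        registry.desiredPushTypes = [.voIP]
        voipRegistry = registry
        return true
    }

    // Called on launch and whenever iOS changes the token: send it to our server.
    func pushRegistry(_ registry: PKPushRegistry,
                      didUpdate credentials: PKPushCredentials,
                      forType type: PKPushType) {
        let token = credentials.token.map { String(format: "%02x", $0) }.joined()
        updateServer(token: token) // hypothetical: tell the server "my token is this"
    }

    // Called when a VoIP push arrives: wake up and report the incoming call.
    func pushRegistry(_ registry: PKPushRegistry,
                      didReceiveIncomingPushWith payload: PKPushPayload,
                      forType type: PKPushType) {
        if let uuidString = payload.dictionaryPayload["uuid"] as? String,
           let uuid = UUID(uuidString: uuidString) {
            reportIncomingCall(uuid: uuid) // hypothetical: hand off to CallKit (next section)
        }
    }

    func updateServer(token: String) { /* ... send the token to our VoIP server ... */ }
    func reportIncomingCall(uuid: UUID) { /* ... see the CallKit sketch below ... */ }
}
```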
So, CallKit. What is CallKit? Apple says CallKit is a framework that's going to elevate your third-party VoIP applications to a first-party experience. And what does that mean?

First, it means that receiving or making a call on your VoIP service appears like any other native call. You can start your VoIP calls from Contacts, Recents, or any other way native calls are started. The incoming call screen now looks like a native call: anyone of you who has a VoIP application that was updated to CallKit probably noticed that when you get a call, you no longer get the usual screen, but the native call screen. I was very surprised when Skype did that for me. The in-call screen also looks like a native call, and here you have an example: you can see the incoming call, and the difference between this incoming call screen and the usual one is that here my service name is on the screen, instead of saying, for example, "mobile" or whatever. The in-call screen also has a small difference: there is an icon for my application with my service name that I can press to switch to my application's UI.

So CallKit is built from two main classes: one is CXProvider and the other is CXCallController. What are they used for? Let's look at them one versus the other. CXProvider is used to receive out-of-band notifications, which are not user actions, for example an incoming call. CXCallController is used for requests from your application, which are local user actions, internal events like starting a call. It also interplays with other providers in the system; for example, if I'm in a call on regular mobile telephony and I want to start a VoIP call, I can ask the CXCallController to start my call, and it will hold the current telephony call and allow my call to take place. Example uses: we use CXProvider to report an incoming call, an outgoing call that connected, or a call that ended on the remote side; and we use CXCallController to request starting an outgoing call, answering a call, or ending a call. CXProvider sends messages to the system via an object called CXCallUpdate and receives notifications from the system with objects called CXActions; CXCallController sends requests to the system via CXTransactions.

Let's look at some use cases. We get an incoming call via push, and our incoming-call handler is called. We give the CXProvider a CXCallUpdate describing the incoming call, it notifies the system, and the system shows the native incoming call screen. When the user answers, the system notifies the CXProvider with a CXAnswerCallAction, which notifies our handler, and we then notify our VoIP server that the user answered the call. Ending a call works similarly, but this time it's a CXEndCallAction.

So how do we set up CallKit in our application? In didFinishLaunchingWithOptions, the first thing we do is create a configuration object for our VoIP provider, and we create a CXProvider with that configuration. The configuration contains our name and several other parameters. We tell the provider that our app delegate is the delegate for callbacks from this provider, and we also create a CXCallController.

When we receive a call from a push, the first thing we do is extract the UUID from the push, and we use that UUID to save the call in our database; we identify the call by its UUID. Then we create a CXCallUpdate object and report to the system: on the CXProvider we call reportNewIncomingCall, giving it the UUID and telling it the call is incoming. We may get an error; for example, if the user has set the device to Do Not Disturb, the call will not go forward and CallKit will return an error to us.
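In code, the setup and the incoming-call path described above could look roughly like this. It's a sketch only: the "KrankyGeek" display name, the calls dictionary, and the reportIncomingCall helper are placeholder names of mine, not the actual demo code.

```swift
import UIKit
import CallKit

class AppDelegate: UIResponder, UIApplicationDelegate, CXProviderDelegate {
    var window: UIWindow?
    var provider: CXProvider!
    var callController: CXCallController!
    var calls = [UUID: String]()   // placeholder "database" of known calls

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        // The configuration carries our service name and other parameters.
        let config = CXProviderConfiguration(localizedName: "KrankyGeek")
        config.supportsVideo = true
        provider = CXProvider(configuration: config)
        provider.setDelegate(self, queue: nil)     // we receive the CXAction callbacks
        callController = CXCallController()        // used later for local user actions
        return true
    }

    // Called from the PushKit handler when a VoIP push arrives.
    func reportIncomingCall(uuid: UUID, caller: String) {
        calls[uuid] = caller
        let update = CXCallUpdate()
        update.remoteHandle = CXHandle(type: .generic, value: caller)
        update.hasVideo = true
        provider.reportNewIncomingCall(with: uuid, update: update) { error in
            if let error = error {
                // e.g. Do Not Disturb: the call is not shown, so clean up instead.
                print("Incoming call rejected by the system: \(error)")
            }
        }
    }

    func providerDidReset(_ provider: CXProvider) {
        // Required delegate method: drop all ongoing calls here.
        calls.removeAll()
    }
}
```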
But if everything works well, according to Apple, we now need to allocate an audio controller object for the call. This is something you do not do when you use WebRTC: as Chris will explain later, WebRTC handles its own audio session, and doing this will create unexpected results. So, contrary to what Apple says, don't mess with the audio at all when you use CallKit with WebRTC.

When the user answers the call, we first say the action is fulfilled, and then we notify our server that the user answered the call. And when CallKit starts the audio, according to Apple we need to start the audio on the device, but again, since we are using WebRTC, don't do that.

Ending the call: the user presses the hang-up button, and we need to tell our server that the call has ended and fulfill the action.

Starting a call: I won't cover the case where it starts from Recents or other places, but let's talk about when it's done from our UI. We call the CXCallController and give it a CXTransaction which means "start a call". The system accepts that transaction if it's possible to start a call; or maybe there was a telephony call going on and the system needed to hold that call, and after it holds the call, it gives us a CXStartCallAction, and in our start-call handler we notify our server that we are starting a call. So this is how it's done in code: we create a CXStartCallAction and then request a transaction from the call controller, and when the system authorizes starting the call, we get the perform-CXStartCallAction callback, where we tell our server that the call started and fulfill the action for CallKit.

If you want to read more on CallKit, you can find it on Apple's developer site; there's a very nice presentation there from WWDC 2016.
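To tie these CallKit pieces together, here is a hedged sketch of the answer, end, and start-call handlers, continuing the AppDelegate from the earlier CallKit sketch; notifyServer and startOutgoingCall are hypothetical placeholders, not Apple API or the speaker's actual code.

```swift
import CallKit

extension AppDelegate {

    // User tapped Answer on the native screen: fulfill, then tell our server.
    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        // Note: do not touch the audio session here; WebRTC manages its own audio.
        action.fulfill()
        notifyServer(event: "answered", callId: action.callUUID)
    }

    // User tapped Hang Up: tell our server the call ended and fulfill the action.
    func provider(_ provider: CXProvider, perform action: CXEndCallAction) {
        notifyServer(event: "ended", callId: action.callUUID)
        action.fulfill()
    }

    // Authorized by the system after the request below, possibly after it held
    // an ongoing telephony call: tell our server we are starting the call.
    func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
        notifyServer(event: "started", callId: action.callUUID)
        action.fulfill()
    }

    // Called from our own UI to place an outgoing call.
    func startOutgoingCall(to callee: String) {
        let uuid = UUID()
        let handle = CXHandle(type: .generic, value: callee)
        let startAction = CXStartCallAction(call: uuid, handle: handle)
        let transaction = CXTransaction(action: startAction)
        callController.request(transaction) { error in
            if let error = error {
                print("Start call request failed: \(error)")
            }
        }
    }

    func notifyServer(event: String, callId: UUID) { /* ... our own signaling ... */ }
}
```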