The next presentation is also from a colleague at Facebook of the previous presenters. It's about the, I always have to make sure I say it right: it's not the cellular community manager, it's the Community Cellular Manager, which of course complements the OpenCellular hardware that was just described. I hand over to Omar.

Thank you. So I'll be talking to you about Community Cellular Manager and what it does. It's a tool that empowers individuals and communities to set up and maintain their own network infrastructure. It ties together a stack like Osmocom and gives you voice services, user management, billing, network monitoring, and a simple way to interconnect with the PSTN. So what does it do? It's a solution that reduces the technical requirements necessary to deploy a mobile network. This is a fairly technical audience here, and even so, looking at all the configs, it's hard to keep track; imagine putting this out in a community. We want to abstract away what it takes to actually operate a network day to day. So when receiving this box in the field, all a community member has to do is take the hardware serial ID and put it into a dashboard. There's a cloud component and a client-side component in CCM. They put the ID into the dashboard, and the device will automatically start pulling configurations and start providing voice, SMS, and data services. The client-side application has local breakout: it runs on OsmoNITB and can provide local services while disconnected, and it uses the cloud for managing your billing and looking at network KPIs, notifying you of downtime, and looking at usage trends. It's also a multi-tenant system, so it's meant to provide a core-as-a-service for network operators, and the operator of the cloud infrastructure can charge an interconnect fee. So this model has implications in that it reduces the operating costs of actually running and maintaining a network.
There are built-in auto-reset systems that detect outages, and it has been deployed in rural communities and has been running for a few years now. The operating costs it enables allow us to deploy networks that are economically sustainable where we wouldn't be able to do that with traditional centralized networks. So CCM was designed with a set of criteria and use cases in mind. We wanted to be able to operate our networks on standard commercial satellite links, because enterprise ones are so expensive. The problem with that is that the backhaul goes down all the time, so we wanted to still provide services and allow these network operators to continue running their businesses even if the backhaul is unreliable. So all the functionality of the network actually sits on the edge, and we have things like local SMS applications that continue to work, so retailers can still sell credit even if the backhaul is down. And when you're sending outbound messages, those are queued for delivery as soon as we reconnect. So the basic day-to-day operations are not severely impacted by network outages. The other thing is that we wanted to make it really easy to interconnect with carriers. In the cloud, we have the ability to interconnect with VoIP providers like Nexmo, but we've also done carrier interconnects. So the cloud provides the interconnection and BSS/OSS-like functionality. Whenever one of these clients is connected to the cloud infrastructure, it can send outbound traffic, and it also synchronizes with the cloud to replicate subscriber information for business purposes and upload device metrics so you can tell if the network is healthy. And it has a very small footprint: it synchronizes every minute with ping messages of around 500 bytes of traffic. And the last use case we wanted to support was cross-site connectivity.
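The store-and-forward behavior just described — outbound messages are queued while the backhaul is down and delivered on reconnect — can be sketched roughly like this. This is an illustrative sketch, not CCM's actual code; `OutboundQueue` and its `send_fn` callback are hypothetical names.

```python
from collections import deque

class OutboundQueue:
    """Store-and-forward queue: outbound messages are held while the
    backhaul is down and flushed once connectivity returns
    (hypothetical sketch of the behavior described in the talk)."""

    def __init__(self, send_fn):
        self._send = send_fn      # callable that delivers one message upstream
        self._pending = deque()

    def submit(self, msg):
        # Enqueue first so nothing is lost if delivery fails mid-flush.
        self._pending.append(msg)
        self.flush()

    def flush(self):
        # Attempt delivery in FIFO order; stop at the first failure so
        # ordering is preserved for the next reconnect attempt.
        while self._pending:
            try:
                self._send(self._pending[0])
            except ConnectionError:
                return False      # backhaul still down; retry later
            self._pending.popleft()
        return True
```

The point of the design is that `submit` always succeeds locally, so retailers can keep selling credit during an outage, and `flush` can simply be re-run on every reconnect or periodic check-in.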
So if I'm a network operator running a business and I have a number of communities that I'm covering, I can give out one set of SIM cards, build my credit-transfer infrastructure around all these communities, and move from one community to the next seamlessly. The HLRs are synchronized across all sites. It can even operate without an internet connection, as long as these different sites are on the same network. Things like subscriber balances can get out of sync when you're using one network and then moving to another with no backhaul, but it's designed with that in mind: when it connects back to the cloud, it will eventually reach consensus. And there's no roaming, so this is different from real roaming; there's no 3GPP roaming, and other carriers' subscribers can't use this network.

So there are two major components in CCM. The first is the client, which handles all this local breakout, caching of messages, and outage detection and automatic remediation during downtime. And there's the cloud. The client can run on a variety of different radio gear: it can run embedded in OpenCellular gear, the new RAN 1.5, and we even have it running on Fairwaves gear. So this is how it's broken down. All traffic into the network is carried over a VPN, to mitigate some of the network address translation issues that you run into with VoIP and also to offer some level of security. We have a service that manages subscriber presence. It uses the new GSUP implementation for the HLR, which allows us to extract location updates and all signaling from the network, and we can take this information and synchronize it upstream. It also provides a generic interface: it uses protobufs and provides an RPC for other applications on the system to access subscriber data, regardless of what language they're implemented in. There are a few services that manage the client.
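The "eventually reach consensus" property mentioned above is explained later in the Q&A as being based on CRDTs. A positive-negative counter is the textbook CRDT for exactly this kind of per-site balance ledger; the sketch below is illustrative and is not CCM's actual ledger code.

```python
class PNCounter:
    """Positive-negative counter CRDT: each site keeps its own tallies of
    credit added and credit spent, so replicas can diverge while offline
    and still merge to the same balance (illustrative sketch)."""

    def __init__(self):
        self.incr = {}   # site_id -> total credit added at that site
        self.decr = {}   # site_id -> total credit spent at that site

    def add(self, site, amount):
        self.incr[site] = self.incr.get(site, 0) + amount

    def spend(self, site, amount):
        self.decr[site] = self.decr.get(site, 0) + amount

    def value(self):
        return sum(self.incr.values()) - sum(self.decr.values())

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # so replicas converge no matter when or how often they sync.
        for mine, theirs in ((self.incr, other.incr), (self.decr, other.decr)):
            for site, n in theirs.items():
                mine[site] = max(mine.get(site, 0), n)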
We have a softswitch for directing all VoIP and SMS traffic. It uses osmo-sip-connector and a FreeSWITCH SMPP extension, and through some Python scripts it talks to the subscriber DB. It queries things like balance: can we make calls, should we send it out of the network, which base station should we send it to? Then endagad manages device registration and lifecycle. It does these periodic check-ins every minute, and it will also do health checks on the base station every minute. If it detects that we haven't been providing service for the past four minutes — the load drops to zero — it will perform an auto-restart of all services to restore connectivity. And the last service is Federer. It's a server that handles all inbound push notifications from the cloud: things like messages that need to be delivered immediately, and management actions we need to perform, like restarting a service and so forth. So those are the outbound interfaces for the client. It all runs on top of Python and the Endaga core, which abstracts away portions of the GSM network, giving you access to things like subscribers, BTS configuration, and SMS composition from plain text, handling all the transcoding for you. It abstracts these into modules that can be implemented on top of something like Osmocom. We have a Python library that can speak VTY — which really should be using the control interface instead — so it manages the OsmoNITB stack that way and pulls load information and other usage information out of the NITB, which gets synchronized back to the dashboard. So it's a collection of packages that you can download; we're actually hosting them for Debian and Ubuntu. Just install this set of packages and you can run it in a VM, and every device has its own unique identifier.

I don't know where my second screen is. Oh, there it is. Okay, so this is the device unique ID, and I can connect it to the dashboard, bottom left. Oh, there we go. Okay, there we go. So here's my tower. This is all in a VM.
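The once-a-minute health check with a four-minute outage window, as described above, could be sketched like this. This is a minimal sketch of the behavior, not the actual daemon; `ServiceWatchdog`, `check_fn`, and `restart_fn` are hypothetical names.

```python
import time

class ServiceWatchdog:
    """Hypothetical sketch of the client's health-check cycle: the base
    station is probed once a minute, and if no successful probe has been
    seen for `outage_window` seconds, the whole service stack is restarted."""

    def __init__(self, check_fn, restart_fn, outage_window=240,
                 clock=time.monotonic):
        self._check = check_fn        # returns True when the BTS is serving
        self._restart = restart_fn    # restarts all services
        self._window = outage_window  # 4 minutes, per the talk
        self._clock = clock           # injectable clock, for testing
        self._last_ok = clock()

    def tick(self):
        """Run one health check; call this from a once-a-minute timer."""
        now = self._clock()
        if self._check():
            self._last_ok = now
        elif now - self._last_ok >= self._window:
            self._restart()
            self._last_ok = now       # give services time to come back up
```

Resetting `_last_ok` after a restart avoids restart loops while the stack is still coming back up.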
And I see it's active. I can see the subscribers on the network: their presence, which base station they were last seen on, their balance, usage on the network, and activity. On the cloud side, there are a few tiers of service that handle traffic and management. The first is endagaweb, which exposes the GUI that I just showed you as well as a bunch of APIs that the clients in the field check in with; that's written in Django. There's also a self-organizing-network module: when you actually deploy a new site, it records it in a database, and you can request a set of channels that haven't been registered in that location, so you can do your own network planning there. There are tiers for voice and interconnect with operators; it's just FreeSWITCH, with some logic to determine which route, or which base station, to send traffic or an incoming call to, based on the subscriber number. We use Kannel for SMS; you can also use Nexmo. Then we have an async tier. Another big problem with communicating with clients in the field is that they go down all the time, so we queue messages for delivery using asynchronous jobs, which retry delivering messages until the clients come back online. Now, this entire infrastructure is deployable in a VM, where you can test out everything, or you can deploy it on Amazon Web Services, and there's tooling in the repo to do that automatically for you. As I mentioned, a few industry partners have started implementing this, and we are rolling it out with Globe in the Philippines; we have a trial deployment over there, so yeah, it's a very active project. If you want to use portions of it, there's a GitHub repo, and for development you don't need any hardware. The only thing I'd recommend is 16 gigabytes of RAM or more, because there are a lot of VMs, but you can virtualize the radio component and test everything else out. It also works great with the sysmoBTS.
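The cloud routing step just described — deciding where to send an incoming call or SMS based on the subscriber number — amounts to a lookup against the replicated subscriber table. The sketch below is a hypothetical stand-in; CCM's real logic lives in its FreeSWITCH configuration, and the function, registry shape, and phone numbers here are made up for illustration.

```python
def route_inbound(number, registry):
    """Pick a delivery target for an inbound call/SMS to `number`.

    `registry` maps subscriber numbers to the site (base station client)
    where they were last seen, plus whether that site is reachable.
    Returns the site id, "queue" to hold for async retry, or None for
    an unknown subscriber. Hypothetical sketch, not CCM's actual code."""
    entry = registry.get(number)
    if entry is None:
        return None                    # unknown subscriber: reject or bounce
    if not entry.get("online", False):
        return "queue"                 # site unreachable: hold for async retry
    return entry["site"]               # deliver over the site's VPN tunnel
```

Returning "queue" for an unreachable site mirrors the async tier described above, which retries delivery until the client checks back in.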
It's configured to work out of the box with that. I'll try to set up a demo later. As for future work, there's a lot of work we're doing on building out role-based access on the cloud, allowing more of a business organization to manage a network — having a dedicated person doing network configuration who doesn't have access to subscriber data, for instance. We're also building out data billing: we do have data support, but we want to be able to enable and disable traffic selectively for individual subscribers and bill them for it. And enabling ciphering of calls. So there's ongoing work, plenty of ways to contribute back, and I'm happy to help anyone get set up on the stack. I'll open it up for questions if there are any.

Okay, thanks Omar. Do we have questions? And if so, from where?

Yes, I have a question regarding HLR synchronization. How do you synchronize HLRs in the case where you have several sites in one network?

Good question. We use a data structure called a CRDT, which allows you to maintain multiple ledgers across base stations. So you're recording traffic per base station you've been on, and every time you reconnect to the network these balances get reconciled — they get consolidated and reach a terminal state. The data structure is mathematically designed in a way that it will reach consensus regardless of when you sync them together. And the source code is out, so... Yeah.

Okay, more questions? I think there was one more question in the back. No questions? Well, okay, yeah. Peter first.

I just want to know what license you've released this under.

It's BSD.

Straight BSD, or with a patent grant from Facebook, like in other projects?

There is a patent clause. Shoot — I don't even know exactly what it says.

I guess you're not the lawyer?

No. Yeah, an additional grant of patent rights, yeah. Yes. It looks like that.
And there is a contributor license agreement: if you want to upstream changes, it's like, yeah, we waive liability and things of that nature, but you're free to use it and do what you want with it.

I would like to know if you do anything to optimize the bandwidth on the uplink, the satellite uplink, or do you just pass RTP through the VPN?

We do two sets of optimizations. We transcode traffic at the edge — I'll have to look up the exact codec — send it back to the cloud, and re-encode to whatever our interconnect needs. The other thing we do is with our check-ins: we only send incremental updates. So if there's a configuration coming down from the cloud, we only send the values that have changed, and we only synchronize subscriber information that has changed. Protobufs have field masks that you can use to send selective fields, but the protobufs are not used for things like configuration yet. Eventually everything will move to protobuf — this is a new RPC that we're using, but everything was designed over HTTP in the beginning. So we have our own delta-compression algorithm, and then we compress all the traffic.

Yeah, so just in general — why does Google build self-driving cars? Can you say something about why Facebook builds GSM infrastructure?

We believe in building tools for individuals to communicate with each other. We want to make the world a more open and connected place, and this infrastructure has gotten cheap to the point that it's possible for communities to offer themselves services that they have not been offered by incumbent carriers. So by all means it falls under what we're trying to do: get more people communicating.

Maybe a comment to that: as far as I understand, there's no intention for Facebook to be operating any of this, right? Certainly, yeah — just to be clear on that. Okay, more questions? Okay, well, thank you then, Omar.