heard of what I had expected for today's call, but I think we should probably go ahead and get started. So welcome, everyone. We have a fairly brief agenda today and had planned to leave most of the time open for Q&A. I wanted to introduce a few key people on the bridge with us today. First off, myself; I'm the Magma community liaison for the project. We also have Jonathan Bryce and Kendall Waters from the Open Infrastructure Foundation, formerly the OpenStack Foundation, and then a couple of the key technical experts from the Magma project group today. They are maintainers on the project and have been key developers through the life of the project to date: Amar, whose name I can't always pronounce correctly, apologies for that, and Ulas. They're both very key individuals in the development of the project. We may have a couple of others join us; they may be stuck in our bridge problems this morning. So I want to thank you for joining. The purpose of the call today is to give another overview of the Magma project, to talk about what we're doing and why, to hopefully build some community among the teams, and to provide a Q&A session for the project going forward. I'm just going to walk through a couple of slides to give a project overview and status. Amar is going to go through a couple of topics on the development priorities for the next release, which will be coming up in January, and then we'll open it up for a bit of Q&A time. So we'll just get right into it. First off, I wanted to speak to the mission of Magma, why we're building this product. The mission of Magma is to enable bringing more people online by equipping service providers with open, flexible, and extensible carrier-grade network tools. And we're really committed to the openness of this project.
We're really committed to reducing barriers to bringing connectivity into markets that are currently unserved or poorly served throughout the world, and to driving cost and friction out of the equation. Magma itself provides the core of a wireless network. It's not intended to deliver radios; it cooperates with OpenRAN and the other initiatives that are working to bring radio technology to market in the same way. But Magma itself was built to be hyperscalable and highly distributed, ready to be deployed all the way out at the edge of the network. We're building a converged core that's intended to support ultimately LTE, Wi-Fi, 5G, private LTE and private 5G services, fixed wireless access, etc. And we wanted the core not to be tied to radio vendors or transport vendors, so we're completely open and agnostic to the radios that we use. It works better with radios that can be provisioned using a standard provisioning method, but we are working with other closed radio networks in a few cases very successfully. And we're focusing on local breakout for internet traffic, to get the traffic off of the proprietary wireless network as early in the process as possible. It's built to be highly available, moving towards a microservices and containerized model for deployment, with remote configuration and lifecycle management using REST APIs. And I wanted to talk through quickly the three key components of Magma. Magma starts with an access gateway, which brings together the key components of the wireless core: in the LTE world, the serving gateway, packet gateway, and the MME that build the core of what's required to operate the radios; for 5G, under development, the same services with the UPF, AMF, and SMF that provide those same functions; and for Wi-Fi, an integrated access gateway with integrated AAA services. Magma also delivers an orchestration function. This is a domain orchestrator.
It's an orchestrator for the Magma functions themselves. It is not the system orchestrator that wireless networks talk about. It's not a competitor for those larger orchestration functions, but rather a capability that allows you to deliver standardized REST API interfaces to that outside, larger orchestration function.

If you'll excuse me, the other bridge actually just got opened, and I'm going to take just a moment to let people know that we've moved bridges. I'll be right back, sorry. Hi, this is Phil. I'm back. Apologies for that. Can I confirm that the audio is okay for me here since I switched back? Yep, sounds good. Yep. Okay. Again, apologies for the bridge confusion this morning. There were about four people camped on that bridge. I don't know how many had abandoned previously, but here we are.

So again, the orchestrator for Magma is intended to give a way to operate, manage, and monitor a collection of access gateways and provide REST APIs to a broader-scale orchestration system such as ONAP or ONAP-related services that may operate upstream. And then finally, Magma also delivers a federation gateway. The purpose of the federation gateway is to allow interoperability with 3GPP standards-based implementations of the core and to express the various interfaces that are required for that, such as Gx and Gy for policy and online charging, extending S8, SGs, Sh, etc., to a broader MNO network or more traditionally deployed wireless network.

Magma operates today as an open source project. You can find us at magmacore.org, and it has a number of very active current contributors. Facebook Connectivity, who started the project and contributed it to the open source community, still provides most of the contributions. Also the OpenAirInterface Software Alliance, who has been a partner with Facebook Connectivity through the incubation period. The Open Infrastructure Foundation has joined us to help manage this as an open community project.
And we also have significant contributions from Radisys, ACL Digital, who was formerly Alten Calsoft Labs, FreedomFi, WaveLabs, and a small number of individual contributors coming into the project. And we are very actively seeking other contributors throughout the various communities that may want to join us. It's a very active project. We're seeing on the order of 250 to 300 commits a month, 70 to 80 commits a week, and those are coming from 40 to 50 individual committers on a regular basis. So it's a fairly broad community of committers coming into the project. And we're also seeing quite a lot of activity with people cloning the project and pulling it down. We've seen 749 unique cloners over the last 12 months, which is actually a very interesting statistic, because it shows that there's interest in people trying the project and playing with it, and perhaps deploying it even in ways that we don't know about. So we're very excited to see that continuing interest in the project.

Back in the first quarter of 2020, the Magma project started a collaboration with the Telecom Infra Project. TIP had started an activity known as the Open Core Network initiative to build a set of specifications and requirements for an open, converged core, similar to what Magma was already working on. The Magma team has been cooperating with TIP to be effectively the software project of the TIP Open Core Network project, delivering an implementation of the requirements from the TIP project. And as part of that, we've taken on an effort to bring 5G services in a minimum viable core configuration into Magma. We're here in mid-to-late November, and we're planning to do a demo of 5G services targeted to the end of November, early December timeline, and then fully integrate those services into Magma as part of a 5G, 4G, carrier Wi-Fi converged core.
Finally, there's a bit more of an internal architecture diagram shown here. Oh, sorry about that. It shows the components of Magma and how it is decomposed. The key thing to note is that internally, all of the interfaces of Magma are delivered using gRPC, basically REST-like API calls delivered over gRPC. Those protocols are open and available in the Git repo, and they're extensible. Other services could use them to communicate with these functions of the project, to help make it more and more extensible. And we will be doing other talks about the architecture itself and technical deep dives on how it's put together, and the declarative desired-state model that we've used internally to help make it reliable, extensible, and scalable.

So lastly, before we get on to a couple of other topics, I just want to give everyone a reminder on how to join the project and how to interact with us. You can visit our website at magmacore.org. The Git repo is available at github.com/magma. There is a very active Slack; we have about 330 registered members of the Slack channel, with a number of subchannels regarding development work in the various components of Magma. And then finally, I wanted to announce here that we will be holding a Magma developers conference in the first quarter of '21. It is tentatively scheduled for February 3rd of '21, and we'll be getting a more formal announcement about that out on the Slack, on the mailing list, and a few other channels. That will be a very nice opportunity to get people engaged.

I want to introduce Amar, who's going to walk us through a look at what our development priorities are for the next release of Magma, which we've titled Release 1.4 and are targeting for early January. So Amar, just give me guidance on driving the slides forward. Yeah. Excuse me. Thanks, Phil. Maybe the next slide. Thank you.
So I think we're looking at core features, operational features, and something more like bug fixes. So just listing out some of the high-level core features. First, high availability. This is a feature that has been asked for by a few of our partners. This is more of a cloud DR sort of model, where the orchestrator is running either in a private cloud or a public cloud and the access gateway is running at the edge. And if the access gateway fails for whatever reason, then we leverage the eNodeB's S1 Flex capability and MME pooling to fail over the capacity onto a central sort of access gateway. So this is more like a disaster recovery scenario, like you see from traditional database vendors and such. This requires the eNodeBs to support S1 Flex. Most eNodeBs do, and some of the partners that we sort of bundle with, like Baicells and Airspan, support this feature out of the box. I'll pause here if anyone has a question on this. Awesome.

So the next big feature is VoLTE and IMS. It's been in the code base for a bit, but we're now testing a lot with our lab setup. This supports a v6 UE IP address as well as a v4 one. And yeah, it's pretty standard IMS integration. I'll update this doc to also link to the GitHub project so folks know exactly what is going into the release.

The third big feature, which is more of a usability feature, is call tracing. This is, you know, in case there are customers who have support calls saying, okay, the UE is not attaching or something, the ability to better debug this through the NMS. There's again a design doc that's published on GitHub for folks who are interested in taking a deeper look. And if you have feedback, now is a good time, because we have only a few weeks, and if there's something low-hanging, we're happy to accommodate that.

Header enrichment: this is a basic header enrichment where we don't do encryption on the URLs.
So the MSISDN can be injected for specific URLs. And again, the design doc sort of captures the scope a lot better. This is for certain regulatory institutions and, you know, captive portals and a few other integration cases where the web server requires the MSISDN, the phone number of the user that is trying to attach onto the network.

The next feature is mobility restriction. This is mostly to prevent theft of CPEs. Not necessarily theft, but if people are trying to move the CPE from one location to another, it would not allow for that. There are some partners who require this feature, mostly because they have different pricing for their CPEs based on which market they're offering services in. Again, the scope document, the GitHub issue, describes the scope better. I'll pause here if any questions. Thanks. Yeah. Phil, next slide, please. Thank you.

Yeah. So this is a big feature where, you know, we're getting a lot of help from the OAI folks. They're migrating the Release 15 and NSA support that they introduced in their MME onto the converged MME. This requires an upgrade of S1AP, and they're starting to work on some of the NSA support. This enables feature parity with the OAI MME capabilities that are available today with Release 15 and solidifies our convergence strategy between the Magma and the OAI MME code bases.

IPv6 support: the scope is described there. Just from last week, the changes are that we'd likely not land the underlay side; there are probably two to three weeks' worth of work more. So at least for the 1.4 release, it's mostly going to be UE support, and the underlay is going to be a P50 sort of stretch goal. The same thing with the NAT v4-to-v6. This is in case the UEs support a v6 address but the underlay only supports an IPv4 fabric; then we will NAT that IP address at the gateway. So this is again at a P50 likelihood of landing.
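As an aside, the v4-to-v6 NAT idea Amar mentions can be illustrated with the classic NAT64 address mapping from RFC 6052, where an IPv4 address is embedded in the low 32 bits of a well-known IPv6 /96 prefix. The call doesn't specify Magma's actual translation scheme, so this is a sketch of the general technique, not Magma's implementation:

```python
import ipaddress

# RFC 6052 well-known prefix used by classic NAT64 deployments.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def embed_v4_in_v6(v4: str) -> ipaddress.IPv6Address:
    """Represent an IPv4 address inside the NAT64 /96 prefix."""
    base = int(NAT64_PREFIX.network_address)
    return ipaddress.IPv6Address(base | int(ipaddress.IPv4Address(v4)))

def extract_v4(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 address from the low 32 bits."""
    return ipaddress.IPv4Address(int(v6) & 0xFFFF_FFFF)

mapped = embed_v4_in_v6("192.0.2.1")
print(mapped)              # 64:ff9b::c000:201
print(extract_v4(mapped))  # 192.0.2.1
```

A gateway doing this kind of translation rewrites addresses in both directions at the v4/v6 boundary, which is why it can live at the access gateway as described.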
I'll pause here if folks have any questions. Thanks.

So, operational improvements. With 1.3, I hope folks noticed that we pretty much revamped the NMS UX for most of the stuff. And I think there's a copy-paste error here with the upgrade of S1AP from the previous slide. But the main rewrite here is to improve the UX for the federation side, to make it on par with the rest of the experience and consistent across both the LTE-in-a-box model and the federated model, where you're connecting into a third-party HSS, OCS, or PCRF. And then the last one is more of a support feature, which is VPN enablement. The idea is you can go to the NMS and, you know, click a checkbox and provision a VPN tunnel all the way to the access gateway. It's a short-lived certificate, so it's mostly for secure connectivity from the orchestrator. There's a jump host that also gets deployed as part of this workflow. This allows support engineers to actually debug the access gateway over SSH. Any questions on this? Cool. Thanks. I think that's it, right, Phil? Oh, okay. Sorry.

Platform improvements. So stateless MME is a pretty big feature where, you know, if the MME restarts, it's not going to take out the UE traffic. This is going to be enabled by default in 1.4. The other features are access gateway containerization and a Python 3.8 upgrade. The access gateway containerization is, again, looking like a P50 effort, but we will land it in master; whether we claim production-grade support for it in 1.4 or not is to be determined.

Thanks, Amar. Appreciate you taking the time to share that. So the only other item on our agenda this morning was an open Q&A or open discussion time. Really, this is open to any questions regarding the Magma project itself, regarding the organization or the platform, technical questions (Amar is still here to help us with that), or anything else. So the floor is open if anyone has anything they would like to discuss.
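The stateless MME point above (an MME restart not taking out UE traffic) rests on keeping session state outside the process. A toy sketch of that pattern, using a JSON file as a stand-in for whatever external store a real deployment would use (the store and the field names here are assumptions for illustration, not Magma's actual schema):

```python
import json
import os
import tempfile

class SessionStore:
    """Toy external session store; a JSON file stands in for a real KV store."""
    def __init__(self, path):
        self.path = path

    def _load_all(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

    def save(self, imsi, state):
        data = self._load_all()
        data[imsi] = state
        with open(self.path, "w") as f:
            json.dump(data, f)

    def load(self, imsi):
        return self._load_all().get(imsi)

class StatelessMME:
    """Holds no session state of its own; every attach is written through."""
    def __init__(self, store):
        self.store = store

    def attach(self, imsi, bearer_id):
        self.store.save(imsi, {"bearer_id": bearer_id, "state": "ATTACHED"})

    def lookup(self, imsi):
        return self.store.load(imsi)

path = os.path.join(tempfile.mkdtemp(), "sessions.json")
mme = StatelessMME(SessionStore(path))
mme.attach("001010000000001", bearer_id=5)

# Simulate an MME restart: a brand-new instance sees the same session.
restarted = StatelessMME(SessionStore(path))
print(restarted.lookup("001010000000001"))  # {'bearer_id': 5, 'state': 'ATTACHED'}
```

Because the second instance reconstructs every session from the store, a crash or upgrade of the MME process doesn't force UEs to re-attach, which is the behavior being enabled by default in 1.4.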
Oh, excuse me. I have a question. What's the relationship between the Magma project and TIP's OCN project? Thank you.

Could you share the pronunciation of your name so I can address you properly? I don't want to get it wrong. Okay, yeah. I'm Zhu Qiang from China Mobile, and I was in the OpenStack community for four years. Yeah, I know a lot of people from the OpenStack community. Thank you, Qiang. My apologies, my Chinese is poor.

So the Magma project operates as an independent software project developing the converged wireless core that we've just described. And TIP's OCN, the Open Core Network project, is primarily describing a set of requirements and specifications for what an open wireless core network would be, against a specific set of use cases. As in most other TIP projects, TIP is not itself delivering product, but delivering specifications and coordinating with developers of products that would be compliant with their specifications. It becomes a cooperative arrangement, and this is the idea shown on the bottom of the chart: TIP delivers a set of requirements for what the Open Core Network would need to deliver to be a consistent product compliant with TIP's goals, and then developers deliver implementations of that product back. We do have a very close working relationship with TIP. There's a lot of cross-pollination of people who are working on both products. And TIP recognizes Magma as essentially the reference implementation, or the reference software project, for TIP, but they're not one and the same thing, if that makes sense. And TIP is interested in, and will probably see, other implementations, both commercial and otherwise, for aspects of the Open Core Network. So it is a collaborative agreement.
As we formalize the Magma core organization, TIP will probably continue to have some leadership on the technical committee or the steering committees of the Magma project, but not a dominant position among them. Does that clarify the relationship for you? Okay, thank you. Can I add? Sorry, go ahead. I just want to have these slides after the meeting. Where will you share them? We will be posting a recording of this meeting and the slides, and we'll get an announcement out on the Magma Slack for where to find them. Good, good. Thank you very much.

Yeah, this is Mark Collier. I had a follow-up question, maybe to make sure I'm understanding the difference between OCN and Magma as well. Is it fair to say that the primary output of OCN is basically written documentation, requirements and specifications and things of that nature, as opposed to lines of code and software, whereas Magma, in addition to of course having documentation, is primarily about producing lines of code, producing software? Is that the difference? Is that one way to understand it better?

Mark, I think that's a completely reasonable way to describe it. You know, TIP is not a product organization. They are an industry consortium that's driving the specification of product. If you look at other things TIP has done, look at what they've done around OpenRAN and the specification of how the radio would be decomposed into components. TIP itself is not delivering any radio components; there are probably 10 or 12 significant entities that are delivering radios that are then certified by TIP as compliant in one way or another. And TIP also sponsors plugfests where people can prove their compliance and their interoperability, if that helps clarify what TIP does. Yeah, thank you. That definitely helped me understand it. I appreciate that. Okay.
Anyone else? Excuse me, can I have a second question? Yeah, of course. I'm from China Mobile. I'm the open source program manager in China Mobile Research, and actually, in the middle of this year, we had some talks with the TIP OCN project, because we are from China Mobile and we have an orchestrator project named ONAP, under Linux Foundation Networking. Around June or July, I don't remember exactly, the OCN project talked with us about ONAP, the orchestrator project. So my question here is, why does Magma do an orchestrator and also a core network component, rather than a separate orchestrator and an NFV core network component?

So, if you think of ONAP as the orchestrator that is looking at the telecom deployment overall, basically looking at all aspects of an operator's network, it has inside it this concept of domain orchestration, or, in previous descriptions of the NFV standard, what might have been called a VNF manager, which is a component that is managing a subset of the telecom network. If you put it in that context, the Magma orchestrator operates more like the domain orchestration function for Magma, or the VNF manager for Magma, as another way of thinking about it. And it provides a set of APIs that a more global orchestration tool like ONAP can call upon to execute changes or updates, or to collect performance data or statistical data from Magma itself. So the term orchestrator becomes a little bit confusing; this is really the local domain orchestration for Magma itself. And Magma is working to update its APIs and make them available in a way that is more compliant with standard programs like ONAP, so that we could fit into an ONAP framework. I hope that makes sense. Okay, thank you.

So, will the Magma orchestrator manage the OpenRAN component? As implemented, the Magma orchestrator does provide interfaces to manage the radio network.
If Magma is deployed in a standalone configuration, it can manage the radio network for the operator. That would typically be done in a private LTE or fixed wireless scenario, where you're not operating with a larger orchestration complex. But we're also able to operate where the radios are managed by the radio manufacturers' OSS systems, if we were deploying traditional radios from the large equipment providers that we all know and love who manage their radios separately, or if a larger orchestrator was managing the radios. Those are all reasonable deployment scenarios for Magma. Okay, thank you very much. I hope Magma will be a successful project and become the next-generation core network. We appreciate that vote of confidence. Thank you. Anyone else?

Yep. Hi, everyone. Hello. Hi, Dimitri. Hello. This is Dimitri from the Nicaragua Company. Thank you, Phil, for your presentation, I appreciate it a lot. Very interesting. And I have a very technical question, I think. Here, we are starting a very specific case with thousands of customers, like potential customers. Unfortunately, I cannot describe it in a few words, but the situation is the following: I found out that the architecture of Magma, how to say, the design of Magma, is not considering such kinds of clients. What I mean is, for example, the gRPC calls start to be very heavy and the databases start to be very big when we have a lot of customers. And I would like to ask a question. What do you think? Is Magma able to serve, sorry, I have a technical problem, is Magma able to serve a few thousands of customers, like 20,000 customers, by design? Or is there any problem with that?

Yeah, so I can take this question, Phil, if you don't mind. Very good. Yeah, so, hey, thanks, Dimitri. So actually, we have two issues that we're working through on scaling. At some point after 10,000 subs, the message payload gets too large, and then we're getting truncated syncs.
So we need to, you know, we're trying to figure out what's the right way to do pagination, so you don't have to send all of the subscribers in one request. And then the other thing that we're sort of running into, which is metrics-related but of a similar nature, is that if you have too many subs on a single access gateway, then again the payload becomes too large for the gRPC metadata and we start truncating the metrics message. There is a design doc on the metrics that is, I think, somewhere in GitHub, but I can look it up and post it on the Slack channel. But yeah, what we've tested is 10,000; that's the limit that we support today without issues. If it's greater than that, then, you know, we're still trying to figure out what are the things that break beyond 10,000 subs.

Okay, thank you for the answer. Just from my point of view, when I did the test, the first issue I found was with the subscriberdb service. And my question was, why not just increase the gRPC payload size? It's a very straightforward workaround; why not just do that? Yeah, yeah, I think that's a good point. I think the subscriber issue is probably easily solvable. The bigger issue is the metrics one, because if you have too many subs on an access gateway, the challenge is larger. So that's the one that we've been focused on at this point. But if the gRPC one is the one that's blocking you, is the issue for you more just that you have too many subscribers overall, and not that many subscribers that are active on each gateway? That's correct. Okay, got it. So then you won't run into the metrics issue. Okay, so yeah, we can look at the gRPC issue. And then the other thing, I don't know if this is the right forum, but if you have a decent-size deployment ahead of you, then there is probably a way for you to get in touch with Facebook itself.
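The pagination approach discussed here, sending subscribers a page at a time instead of one giant payload, can be sketched as a simple chunked iterator. The page size and the IMSI format below are illustrative assumptions, not values from Magma; the alternative Dimitri raises would be tuning the gRPC channel's maximum message size instead (gRPC exposes this as the `grpc.max_receive_message_length` channel option), which treats the symptom rather than bounding each message:

```python
from typing import Iterator, List

PAGE_SIZE = 1000  # assumed page size; a real deployment would tune this

def stream_subscribers(all_subs: List[dict],
                       page_size: int = PAGE_SIZE) -> Iterator[List[dict]]:
    """Yield subscribers a page at a time so no single response grows unbounded."""
    for start in range(0, len(all_subs), page_size):
        yield all_subs[start:start + page_size]

# 25,000 synthetic subscribers, well past the 10,000-sub limit mentioned above.
subs = [{"imsi": f"00101{i:010d}"} for i in range(25_000)]
pages = list(stream_subscribers(subs))
print(len(pages))      # 25
print(len(pages[-1]))  # 1000
```

Each page stays a fixed size regardless of the total subscriber count, which is why pagination scales where a single bulk response eventually hits the payload ceiling.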
So we can offer more dedicated support for certain issues, to get them prioritized. Thank you very much for that. Yeah, so I think we can probably, maybe Phil can introduce him to Carlos or someone, and then take it from there. But from the open source side, we're looking at the metrics issue first, because that seems to be the bigger, more complicated engineering challenge at this point.

Okay, and if you don't mind, if you have a few minutes more, I would like to ask you, Amar, about VoLTE in a non-federated deployment. We already talked on Slack about that. What do you think? Is it technically possible at this stage of Magma development to have VoLTE service in a non-federated deployment, or by design is it quite difficult right now? Actually, unfortunately, Ulas had to leave early because he had a doctor's appointment, but I think that this might be a good question for Ulas. Yeah. Okay. Yeah, maybe he can answer on Slack as well. Sure. Okay, thank you very much again. And I'm muted. Thank you.

Thank you, Dimitri. Dimitri, if you want to drop me some better contact information via Slack, I will try to get you in touch with someone, as Amar noted, who might be able to get you some better support or help give direction on the scaling question that you're facing. Okay, thank you very much. I definitely will contact you on Slack. Okay. Okay, thank you.

Amar, there is one question that came in on the chat from NTT: are there plans to apply acceleration, DPDK or XDP, to Magma's OVS, the UPF and S/P-Gateway? Yeah, definitely. We're looking at DPDK at this point. There are some patches that Pravin is upstreaming; I don't know if Pravin is on the call, but yeah, we're looking at DPDK at this point. The thing with XDP is it's good, but given that our control plane is, at this point at least, OpenFlow-related, that's a bigger lift.
At the end of the day, we want the switching fabric or the routing fabric to be more of a plugin model. So I think, in the fullness of time, we'll try and decouple some of this from OVS. Again, I think Ulas had a design doc that sort of briefly covered that, but we're probably at least six to eight months away from executing on anything that's not OVS or OVS plus DPDK. Did that answer your question, Yohei? Okay.

We're actually coming up very close to the hour. We could open it up for maybe one or two more questions if anyone else has anything. If nothing else, I want to apologize again on behalf of the project for our technical difficulties getting the original bridge opened up, and thank you to everyone who made it over to this bridge. Thank you for your time this morning and the very good questions and dialogue. I look forward to more, and I want to remind everyone once again to watch for an announcement for the Magma developers conference, as I think that will give us an opportunity to dive much more deeply into how Magma is built, how we can build community around it, and how we can develop it further into a dominant product for wireless data core networks. Thanks everyone, and I'm going to bring this morning's call to a close. Have a good day. Thank you.