Welcome back to the Magma Developers Conference. We trust that your break was productive. Next up, we have Nebu Phillips and Alokik Abhishek from ARM to talk to us about porting Magma to the ARM ecosystem. Nebu is a segment marketing director responsible for edge compute strategy and go-to-market for ARM's infrastructure line of business. He is a keen observer of the decision-making process and a proponent of disruptive open source projects. Joining Nebu is Alokik Abhishek, a principal engineer responsible for solutions in the carrier and networking domain in ARM's infrastructure line of business. Thanks, everyone, for joining.

Thanks for the introduction. In this session, we will quickly go through an overview of how ARM is part of the Magma project. We are extremely excited and honored to be part of the co-founding team for the Magma project under the Linux Foundation. At the start, we will talk a bit about how ARM is involved. ARM is a slightly different kind of company: as you may know, ARM is essentially a technology provider and ecosystem company, so we'll talk through what our engagements look like. From there, we will move on to our current efforts enabling Magma on the ARM architecture — the status and some of the planned next steps. In conclusion, we'll share some recommendations: as we see the ecosystem picking up more and more ARM-based devices, there are things we recommend customers look out for when designing these systems. As ARM, we have a pretty broad view of the edge computing space. ARM has technology that spans sensors and actuators, all the way from IoT endpoints through the various aggregation points along the compute spectrum into the core data center.
Now, if we look at some of the traditional areas where the ARM footprint has already been, all the way from 2G into 5G: on the RAN side of the infrastructure, multiple generations have been supported by compute footprints based on the ARM architecture. As core network technologies become more distributed, open, and edge-ready, that's where best practices spanning these different edge compute zones come into the picture. And we are pretty confident that the broad ecosystem out there has the right-sized compute — in terms of silicon, software, accelerators, power, performance, and cost — that the ARM ecosystem can provide. So the question becomes: as more and more cloud native workloads come to the edge, how can we make the experience of working with all this heterogeneous hardware as seamless as possible? If you look at the various aggregation points, instead of adding our own definition of the edge, we try to understand our partners' views of the edge as best we can. Whether it's a hyperscale cloud view of the edge or a telco view of the edge — which increasingly have a lot in common — we look at enterprise networking players' views of the edge and what some companies are doing through initiatives like intent-based networking, and we try to understand what's going on in industrial and Industry 4.0 platforms, where we see workload-optimized, somewhat hardware-aware designs increasingly being asked to host richer workloads. Through this evolution, we see the need for diverse and optimized hardware at the edge, but that doesn't lend itself very well to the cookie-cutter, at-scale approach that has made cloud computing so successful.
So how do you merge these two languages — cloud native experiences and diverse hardware — and quickly get to where you need to be, which is the application-layer software? That's one of the key priorities for the ARM ecosystem at the infrastructure and IoT edge. If you look at this landscape, broadly speaking, we can come up with a generic architecture of a cloud native edge device. These devices are, of course, network connected, they are remotely managed, and they need to be upgradable all the way from the firmware on up. Increasingly, whether far edge or near edge, they're running general-purpose operating systems, and they are virtualized or capable of being virtualized in some manner, whether VM-based or container-based. Earlier, features like hardware-based roots of trust were nice to have, but now — with requirements like multi-tenancy and identity provisioning, and the need to host diverse workloads on the same platforms at the edge — having a hardware-based root of trust in all these devices, in Magma's case on the different classes of access gateways, is becoming essential, almost table stakes. Increasingly, the functions these devices host are becoming software defined. That's why we created an initiative called Project Cassini, a completely open initiative backed by open source reference code, to drive standards-based design approaches and experiences for all manner of platforms at the edge. Project Cassini operates across three vectors, promoting a standards-based approach to platform design.
Similarly for security, we have initiatives and certification programs across these vectors. When we talk to ODMs or OEMs who build their own boxes, or to system integrators responding to RFIs and RFQs, these are things to look out for that can save a significant amount of design time, support time, and custom engineering burden if these best practices are followed. We work very closely with all of our silicon partners — we remain completely neutral across our partner base — and with our ODMs and OEMs to ensure that the platforms they roll out for the edge are built on best practices. We won't have time to go into the details of all of these vectors in this session, but in this one slide, here is how we envision these kinds of behaviors affecting end stacks like Magma — cloud native stacks at the edge. For instance, imagine you're a system integrator responding to an RFI and you have a host of hardware to pick and choose from at the edge, and one of the key elements is the security requirements for these devices. You will see different kinds of security implementations come to a head at the edge. You might be responding to an IT department's RFQ that mandates TPM-based approaches; you might be looking at somebody more from the OT side with a TrustZone or secure-element-based requirement. So you see all of these different security implementations converging, and that's where initiatives like PSA Certified come into the picture. If you have a PSA certification on these platforms, it builds assurance downstream into the ecosystem that the platform you are picking — for your access gateway, your orchestrator, or your RAN processing needs — has hardware-level implementations of these different roots of trust that conform to various widely accepted industry standards.
Then there is the operating system or stack that needs to be hosted on these classes of devices. Typically there are corporate engagements or agreements to use a certain enterprise-grade operating system or hypervisor, or in other cases commercial or even do-it-yourself versions of, say, Yocto-based embedded Linux distributions. A lot of these requirements come into play across all these diverse classes of systems. That's where initiatives like the Arm SystemReady certification program come in handy. The idea is that if the SoCs hosting these stacks, and the boards hosting these SoCs, are SystemReady certified, there's a very high guarantee that the OS stack just boots securely with minimal to no custom engineering effort, so you can pick and choose and easily support whichever stack is needed on these different classes of gateways at the edge. Now imagine you have securely booted your operating system on these gateways. From then on, you quickly move to hosting a cloud-native-type stack — here we highlight Magma, the access gateway piece of course. If you look at the various ARM-based implementations, many different SoCs bundle network packet processing, crypto processing, and storage accelerators together on the same SoC, providing significant value in terms of overall performance per watt and, in many cases, performance per watt per dollar. You want applications to maintain their write-once-run-anywhere nature, but at the same time these applications need to be able to access the best-in-class features — best-in-class security features, for instance — that a particular platform has to offer, without having to instrument your application. That's where initiatives and open source projects like Parsec come into the picture.
Parsec is now part of the Cloud Native Computing Foundation and is increasingly finding adoption in some of the key standard OS distros. The idea behind Parsec is that your applications can just link against the Parsec user-space library, and it will abstract out access to whatever is downstream on that platform — a TPM, a secure element, or some other kind of hardware-based root of trust — so you don't have to worry about the nitty-gritty hardware implementation details on a per-platform basis. Overall, you can think of Project Cassini as a toolkit that we actively recommend the ecosystem look for, and we encourage you to reach out to ARM to understand how to enable these kinds of platforms. Once you design Project Cassini-based platforms, you can easily migrate your software stacks between different grades of systems deployed at different aggregation points along the edge — from a two-core or four-core fanless, low-cost system sitting at a cell site to multi-rack-unit edge server designs, that workload can migrate if your platforms are underpinned by these recommended design principles. So we wanted to mention that. As ARM collaborates closely with the Magma team during the course of this year, you will see a lot more of these Project Cassini-type platforms broadly available for the ecosystem to choose from and design in, based on your needs and Magma performance requirements. With that, I will hand it over to Alokik, who will talk a bit about our current efforts on the Magma code base on the ARM architecture.

Thanks, Nebu. My name is Alokik; I am an engineer here at ARM.
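[Editor's note] The Parsec idea described above — applications code against one interface, and the library routes requests to whichever root of trust the platform actually has — can be sketched as follows. This is purely illustrative: the class and method names here are hypothetical and are not the real Parsec API, which applications reach through its client libraries (Rust, Go, and others).

```python
# Illustrative sketch of the abstraction pattern Parsec implements.
# All names are hypothetical; this is NOT the real Parsec API.
import hashlib
from abc import ABC, abstractmethod

class KeyProvider(ABC):
    """One provider per root-of-trust implementation (TPM, secure element, ...)."""
    @abstractmethod
    def sign(self, key_name: str, data: bytes) -> bytes: ...

class SoftwareProvider(KeyProvider):
    """Software fallback; a real deployment would register TPM/SE providers here."""
    def __init__(self):
        self._keys = {}
    def generate_key(self, key_name: str) -> None:
        # Stand-in key material; hardware providers would keep keys off-host.
        self._keys[key_name] = hashlib.sha256(key_name.encode()).digest()
    def sign(self, key_name: str, data: bytes) -> bytes:
        # Stand-in for a real signature: a hash over key material plus data.
        return hashlib.sha256(self._keys[key_name] + data).digest()

class SecurityClient:
    """What the application links against; it never sees hardware details."""
    def __init__(self, providers: dict, platform: str):
        # The provider is chosen per platform, not per application.
        self._provider = providers[platform]
    def sign(self, key_name: str, data: bytes) -> bytes:
        return self._provider.sign(key_name, data)

# Application code is identical whether the platform is backed by a TPM,
# a secure element, or (as here) a software fallback.
sw = SoftwareProvider()
sw.generate_key("gateway-identity")
client = SecurityClient({"generic-sw": sw}, platform="generic-sw")
signature = client.sign("gateway-identity", b"attestation-nonce")
```

The point of the pattern is that an access gateway's identity and attestation code stays unchanged when it is redeployed onto hardware with a different root of trust; only the provider registration differs.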
The short, rather uninteresting summary would be that all the Magma services are coming up on ARM — keep an eye on the mainline while we work with the gatekeepers to upstream it, so that you can play with the code on the ARM platforms of your choice. The longer and more interesting version is a little bit of history on why this area is of interest to ARM and why it is important to be doing work here. This effort of ours started as part of a research project where we were looking at technologies where ARM's USP — cost efficiency, performance per watt — can be realized to its maximum potential, and private LTE, small cell deployments, and edge are all pretty conducive to ARM's USP. So we started looking at this space, and our idea was to string together all the different pieces in this area and demonstrate what can be done, and done efficiently, on ARM. Initially we took an OAI-based implementation of the EPC and their eNodeB infrastructure, and we strung together an end-to-end edge system to run a machine learning object detection workload — fully functional and demonstrable, including the management and orchestration of the application pieces and everything. So we started off with the OAI EPC. There are a couple of cool pieces in OAI that are of interest — CUPS, the disaggregation of the user and control planes — although we would have loved to have multi-SPGW-U support for really demonstrating breakouts at different places; I think the community was working towards that. We simulated an edge scenario with a breakout SPGW-U separated from the EPC. In this case, the EPC was running on SolidRun ARM-based LX2160 hardware, and we had another identical platform running the SPGW-U; the whole system was connected to a USRP-based SDR.
We had a ZTE dongle with a SIM card capturing video from a UE perspective, making a full connection on the ARM platform and sending the video all the way up to the edge. We had implemented two kinds of edges: one is a far edge, or data center edge, based on AWS Graviton ARM instances; we also had — not shown in the picture — a near edge, a smaller platform where we again reused the LX2160 device. The idea was: can we run this ML workload using the OAI EPC running on ARM and the edge gateways running on ARM, and demonstrate the full ML flow? And we were able to demo it; we have a working PoC. While doing so, we started talking to the Magma team about graduating to a more production-like environment, and we already know from today's announcement that OAI and multiple other efforts are getting subsumed under Magma — so that is the future and the next step. In this process, we started working with the Magma team to bring the whole Magma AGW up on the same device, and that is where our effort with Magma started. Nebu, next slide. Thanks. So, where we are at the moment — the developers will appreciate this. There was initially a lot of complexity in getting this to work on ARM, not really because of the architecture support, but because the initial versions of the AGW were Debian-based, and we had issues with Debian support on some of our platforms of choice; we were implementing on Ubuntu-based platforms, so we migrated everything to Ubuntu and tried it out. Praveen from the Facebook team — I've given him a lot of grief during this process, but he has been extremely patient and helpful. So we delved into that with the Facebook and Magma teams' help — Shaddi's and Amar's as well.
We have been able to get the Ubuntu version, and lately even the Debian version, working on the LX2160 — apologies, that name is a mouthful — on the HoneyComb. We also have the services up and running on an AWS Graviton instance; the services are coming up, and we are in the process of doing integration testing. As you can understand, the basic Magma integration tests are somewhat tied to a Vagrant environment, and we are still working through that support. We are in the process of actually testing and interoperating with the OAI EPC and the USRP-based SDR for real connectivity, so hopefully we'll be sharing some good news shortly after our testing. That is the AWS end of the spectrum; we've also got the services up and running on the HoneyComb devices, so we'll be going through the process of validating that, and we'll be working with the Magma and Facebook teams to see how we can upstream it and add it to the CI/CD pipeline. It may take a little bit of surgery there, but let's hope we can get it to the community as soon as possible. The next step, as I mentioned, is mainline support, and you will also see an effort Nebu and the Magma team are working on to enable different classes of platforms as reference boards for the community, so that you have more choice with respect to your scale and needs — keep an eye out for that. A few pitfalls I would like to share with the community: one is that we should think about how to bring all the modules and packages to their latest versions. Some of them are a bit frozen in time, and some have been specifically built and hosted in separate repos, which causes a little bit of grief when you're trying to do multi-arch support.
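[Editor's note] The multi-arch pitfall mentioned above often comes down to hard-coded architecture strings in package names and repo paths. A minimal sketch of the usual fix — normalize the architecture name once, then derive artifact paths from it — is below. The repo layout and package name are invented for illustration and are not Magma's real packaging.

```python
# Hypothetical sketch of multi-arch package resolution; the repo layout and
# package names are made up, not Magma's real packaging.
import platform
from typing import Optional

# Different tools disagree on architecture naming; normalize first.
ARCH_ALIASES = {
    "x86_64": "amd64",   # uname -m / Python naming -> Debian naming
    "amd64": "amd64",
    "aarch64": "arm64",
    "arm64": "arm64",
}

def debian_arch(machine: Optional[str] = None) -> str:
    """Map a uname-style machine string to the Debian architecture name."""
    machine = machine or platform.machine()
    try:
        return ARCH_ALIASES[machine]
    except KeyError:
        raise ValueError(f"unsupported architecture: {machine}")

def package_url(name: str, version: str, machine: Optional[str] = None) -> str:
    """Build the artifact path for the current (or given) architecture."""
    arch = debian_arch(machine)
    return f"pool/{arch}/{name}_{version}_{arch}.deb"

# The same call works unmodified on an x86 build host and an ARM gateway:
print(package_url("magma-agw", "1.4.0", machine="aarch64"))
# pool/arm64/magma-agw_1.4.0_arm64.deb
```

Packages that bake `amd64` into a single hosted artifact fail at the `debian_arch` step; packages published per-architecture through a community repo resolve cleanly for any entry in the alias table.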
We would also have to take another look at the test harness, and maybe work with the team there to see how we could enable testing where there is no Vagrant base, or where that is delayed. And of course, using more open source, community-supported packages, as opposed to specifically built ones, would really help accelerate any new architecture or multi-arch support in Magma. As a developer, I felt there's definitely a need to revisit that area, or at least think about it — my two cents. Again, I really want to thank Praveen, Shaddi, and Amar for their help and their patience in answering my dumb questions — specifically Praveen, who has been working on the Ubuntu support and gave a lot of pointers. So, in summary: look out for the mainline release for ARM support, and hopefully it'll be sooner rather than later. Nebu, want to conclude? Actually — first, some screenshots just to demonstrate that things are working and the services are coming up on the ARM instances. I believe we'll be sharing these slides, so you can have a look, and if you have more questions, reach out to me and we can work through them on the community forum. Thanks.

Thanks a lot, Alokik. So, our final slide. We want to raise awareness of some of the initiatives ARM has been driving in the edge computing space. We broadly call it Project Cassini, but as you heard, it's essentially a toolkit that promotes best practices, all the way from having the latest standards-based firmware — whether embedded firmware or BIOS-grade firmware — as more enterprise-grade and cloud-native workloads migrate to the edge. You have this broad ecosystem and range of platforms with the right-sized compute for you, and if you use these tools, you can easily work with these systems and migrate your workloads across the various deployment zones at the edge.
For ARM, Project Cassini is that tool. It's an open initiative — please reach out to us with any questions. We are in the Magma Slack channels; reach out to us directly, and we're happy to help. We are always actively seeking multi-party PoCs to demonstrate joint value in these use cases, and we are super excited to be part of the Linux Foundation effort. We are all about ecosystems, and you can count on us to help as you decide on the right kinds of platforms to get the best out of Magma for your deployments. With that, we'll end this session. Thank you, everyone, for your time. And back to you, Phil.

Thank you. Thank you so very much for sharing. It's really great that we're getting work done on making sure we have an excellent implementation on the ARM platform, and there are a lot of embedded applications and co-applications with the OpenRAN project that this opens a very broad door to. So thank you. We are back on time, and we probably have time for one, maybe two questions from the bridge. If anyone has a burning question, please feel free to unmute. And, Vincent?

I can read through some of the questions on the chat. First: we are master-based — the latest from the master branch is what we have been developing on. OAI RAN support — good question. Six months ago I looked into that briefly, but for strategic reasons we did not implement it; we focused on the EPC rather than the RAN. But Lewis, we can discuss that — as I said, it was an interesting area to delve into. We would need some support; I tried it briefly and then left it. So yes, do reach out to me — if we can collaborate on that, it'll be great.

A question here — Prakash here. My question to you: OAI — you have a fork, which is what Magma uses? So which one are we talking about here, OAI or Magma? No — so, our initial work was OAI EPC based, right?
But for the purposes of this talk, we are on Magma's master branch for our porting work on ARM. We had already made OAI work on ARM a couple of months ago; we are making the Magma master branch work on ARM now. Thank you.

Thank you, gentlemen. Unfortunately, that's all the time we have. As with the other presentations, we'll take the remaining questions off the bridge and get back to you with answers from our speakers.