Hey folks, good morning. My name is Madhura Maskasky. I'm a co-founder and head of product at Platform9 Systems. And I'm joined here today by Naveen Bhairati. Naveen is VP of Platform Engineering at Mavenir. Together we're here today to talk about simplifying edge cloud computing operations using SaaS management. So let's get started. By way of introduction, I head product management at Platform9 Systems. Platform9, for those of you who may not be familiar, is an enterprise Kubernetes-as-a-service vendor. We specialize in providing an enterprise-grade Kubernetes platform that's deployed using a unique and powerful SaaS-based management model, one that enables you to run your private, hybrid, or edge Kubernetes cloud deployments through a SaaS-managed infrastructure model. Naveen, over to you. Sure. Naveen Bhairati, by way of introduction. I'm VP of Platform Engineering at Mavenir. Mavenir is a company that's trying to change the game for 5G radio, specifically by disaggregating the radio, as they're known for, and also producing other products like 5G Core, IMS, et cetera. I'm here to help them along that journey from a platform and Kubernetes perspective. Back to you, Madhura. Great. Thanks, Naveen. Here's a quick look at the agenda for today's session. We'll start by taking a brief look at the edge market opportunity. We'll then talk about the challenges of managing server edge micro data centers, and we'll contrast that with the management of a traditional data center. We'll then look at SaaS-based management for server edge infrastructure as one possible solution to address some of the challenges around edge cloud infrastructure management. We'll follow that with an introduction to Mavenir, and we'll do a fairly thorough deep dive into Mavenir's webscale platform in the context of 5G rollout.
And then we'll talk about how Mavenir and Platform9 are partnering to utilize SaaS-based management to simplify the rollout of 5G vRAN and packet core software. So let's get started with a brief look at the edge cloud market opportunity. This is one of the latest research reports from Gartner, and as Gartner sees it, while edge cloud workloads currently represent a very tiny subsection of all the workloads run across the world, that percentage is expected to grow in a pretty significant way over the next five years or so. According to Gartner, edge workloads today represent only about 1% of all the world's workloads, but that share is expected to grow massively to about 30%, where 30% of all the workloads run across the world will be run in edge locations. This massive gain is driven in part by the requirement to support a lot of intelligent devices, such as smartphones, smart cars, and other smart devices, which require computing resources at the edge, close to where those devices are spread across the globe. Through this and many other digitization trends, as the scale of edge workloads increases dramatically, Gartner also expects spend on edge cloud computing to grow significantly: by around 2024, edge cloud computing spend is expected to reach somewhere around $80 to $100 billion. To put it in simple terms, this is a massive market opportunity, and one that's expected to grow in an almost exponential way within the next five years. So let's look at what operating an effective edge cloud involves. And to be clear, when we talk about edge computing, there are typically two commonly referenced flavors: one is server edge, and the second is device edge, or IoT edge. For the purpose of this session, we're focusing purely on server edge.
Some examples of server edge include retailers, such as, say, Macy's or Starbucks or McDonald's, where each of their store locations spread across the globe is an example of an edge data center location. Or telco, which we'll talk about in this session in a pretty detailed way, where an individual cell tower location for a telco provider represents an edge site location. These edge locations are typically referred to as micro data centers, because they are a fairly compact version of a data center. They'll typically include one to about three physical servers, and they're spread across the globe. At times, they're at locations with fairly limited access for a human being, and at times, these locations might sit behind a fairly high-latency, low-bandwidth network, et cetera. Now, because these edge micro data centers are spread across the globe, it's not always possible to standardize on the same type of hardware across them. So you will find diversity of hardware as one of the characteristics of these edge locations. You'll also find a mix of legacy workloads as well as modern apps. Legacy workloads, because a lot of the apps that run at these edge locations for, say, a retailer or even a telco edge, tend to be built in a way that cannot really be modernized; they might require a Windows operating system to run them. But some portion of the apps, for example the 5G vRAN software that we'll talk about in a bit, are being modernized and are getting built using containers. So depending on the nature of the workloads, you might need to run a mix of legacy as well as modern apps at these edge locations. And then finally, the biggest characteristic of edge micro data centers is the scale, the massive scale at which these data centers need to be operated.
As an example, a traditional enterprise customer, even a large enterprise running at scale, will typically have to manage tens, at most hundreds, of data centers across the globe. In contrast, a typical internet service provider or telco provider will have to manage tens of thousands, anywhere between 20,000 to 30,000 or so, of just radio cell towers in order to support all of their subscribers or customers located in a particular geo. So we're talking about a significant order-of-magnitude difference in scale for these edge data centers compared to traditional data centers. And that scale in part adds a number of complexities to the management of edge data centers, where traditional data center management paradigms just don't work well. To understand that better, let's take a microscopic look at what traditional data center management entails. In a typical data center, you will find one or more of all of these: a large inventory of servers provided by any of the popular server vendors like Dell, HP, Supermicro, Huawei, or many others. These servers are typically configured with a Windows or Linux operating system, and they're coupled with top-of-rack network switches or gateways and block storage devices to together provide the resourcing needed to run the apps. A portion of the server capacity will be dedicated to management software; the most common management software is VMware vSphere, for example, which is used to deploy virtualization in a lot of traditional data centers. There will be a pool of databases, because a lot of management software relies on external databases to store persistent state. And then on top, you will have a massive pool of applications, including internal apps built by the ops team, as well as dev/test workloads and production apps that are consumed by that vendor's customers.
And depending on how these data centers are managed, there tend to be a lot of manual, or extremely time-sensitive and non-automated, operations involved in the upkeep and caretaking of these data centers. For example, installation of the operating system across the physical servers tends to be automated for a lot of enterprises. However, configuration of these operating systems, security patching, and some other operations tend at times to be handled manually, or might require someone to be sent to the data center to do some configuration, et cetera. The install and config of management software is almost always manual, and it's extremely intensive and time-consuming; in particular, troubleshooting issues with the management software, scaling it out, or performing upgrades to newer versions of vSphere, for example, tend to be very intensive operations and almost always require someone to be there in person to babysit them. Database operations are similar, where dedicated database administrators need to be available for maintenance and upgrades. And finally, the app layer will be owned by the ops team or the end users or developers building it. So in a nutshell, the management of a traditional data center almost assumes that you will have an army of resources available, at times able to go physically to those data center locations to triage and address issues. But this paradigm finds it very difficult to scale when you're talking about a massive inventory of micro data centers that might be located in absolutely remote locations across the globe. If you take the example of a retailer, each retail store, as we said earlier, represents an edge location. And it's almost impossible for a retailer to physically dispatch people to each of these locations to deploy complex software frameworks like virtualization or container orchestration and then to babysit them or handle the upkeep on an ongoing basis.
So as a result, if you look at a retail edge, or possibly a telco edge, what you will find in a traditional environment is that they are kept relatively simple. There's an operating system installed on top of one to three physical servers, and then there will likely be some Windows-based apps directly installed on that operating system. At a high level, the installation of apps and OS tends to be very manual, and hence they're infrequently updated, because there isn't automation available to update them regularly. Troubleshooting requires a human and hence is time-consuming. And finally, you don't have the choice of deploying complex technologies such as virtualization or container orchestration. This creates a number of challenges for operating edge cloud environments, especially at scale, because it will take multiple days to troubleshoot problems, and likely more than a month to update an application or apply a security patch to the operating system. You'll also find that there's a proliferation of control planes, because there isn't a single vSphere, for example, that could manage all of your edge site locations. So there will be a large pool of control planes, which creates another problem of how to manage these different control planes; there's a lack of centralization. And all of this adds up to extremely high costs, but also, equally importantly, it adds a ton of delay in terms of time to market for that vendor when they're trying to roll out any new or critical initiative. As one example of this that we just referenced, retail edge: in terms of the topological configuration, you'll typically find that there are one or more corporate locations for that retail vendor, which are then connected to a few centrally located data centers, probably tens to at most hundreds of data centers.
And then these data centers are in turn connected to a large inventory of stores, tens of thousands of them spread across the world. These stores will typically need to run anywhere between one and three to five applications locally, so those apps cannot really be outsourced to the public cloud. These are your point-of-sale applications, your security surveillance and video monitoring apps, media apps, et cetera. So that's the nature of a server edge deployment, which incorporates some of these complexities in managing this infrastructure. Another example is telco/5G edge, which also includes a lot of very interesting nuances in managing different parts of the 4G or 5G infrastructure. Naveen, would you like to quickly talk about the deployment and constraints of these telco/5G edge locations? Yeah, thanks, Madhura. I think one of the things that struck me when I came to Mavenir is just how different the amount of latency that is tolerable is from enterprise workloads, typically speaking. On the access side, the requirement is 200 microseconds from the time the UE, user equipment, which is a cell phone, or a car, or an IoT device sending data using a 5G signal, from the time the signal comes in to the time it gets processed and the response gets back to the actual device. So it's very different from a latency perspective. As you go into the core, it's a little more tolerable. At the edge, it's more like two to 20 milliseconds, depending on the application and what's going on, whether the customer is running a video application or AR/VR or any other newer stuff. And then at the core, it's more like 3 to 200 milliseconds. So very different in the telco world. Back to you, Madhura. Got it. Thanks, Naveen. So those are, at a high level, some of the challenges of having to manage this server edge deployment at scale.
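To make those latency figures concrete, here is a minimal sketch of a per-tier budget check. The tier names and exact budget values are illustrative, drawn loosely from the numbers quoted above, not Mavenir's actual engineering budgets:

```python
# Illustrative latency budgets per network tier, in microseconds,
# loosely based on the figures quoted in the talk:
#   fronthaul: 200 us round trip (UE signal to vDU and back)
#   edge apps: roughly 2-20 ms (video, AR/VR, etc.)
#   core:      up to a couple hundred ms (national data center)
LATENCY_BUDGET_US = {
    "fronthaul": 200,
    "edge_app": 20_000,
    "core": 200_000,
}

def fits_budget(tier: str, measured_rtt_us: int) -> bool:
    """Return True if a measured round-trip time fits the tier's budget."""
    return measured_rtt_us <= LATENCY_BUDGET_US[tier]

# A 150 us round trip fits the fronthaul budget; 5 ms does not.
assert fits_budget("fronthaul", 150)
assert not fits_budget("fronthaul", 5_000)
```

This is why the vDU has to sit at or under the tower: a fronthaul budget of 200 microseconds leaves no room for a round trip to a regional or national data center.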
And there isn't really good centralized management software designed to manage these fleets of micro data centers at scale. As a result, Gartner predicts that about 50% of edge deployments will actually fail over the next few years. To be specific, they predict that through 2022, about 50% of edge cloud computing solutions that worked as proofs of concept will fail to scale in production deployments, and we can see why, for a lot of these reasons. So one possible solution that we would like to present in this session is using SaaS-managed infrastructure as a way to fundamentally address some of the complexity and challenges involved in managing edge data center locations, and doing that at scale. When we talk about SaaS management, it typically entails, from our perspective, a cloud-hosted management plane, or sometimes you call it the control plane. The unique characteristic of this management plane is that it is detached from the underlying infrastructure that it is designed to manage. So while the underlying infrastructure might live in a customer's central data center or at one or more of these edge locations, the management plane itself is outsourced and runs at a different location. This kind of architecture has several benefits. One of the key benefits is that it can offer centralized management of your entire inventory, not just the core data centers but also the edge locations together, because of this architectural paradigm, and because the SaaS-based management plane can scale out horizontally, independent of the data plane that it's managing. So as you add more sites to your edge deployment, the management plane can scale out dynamically to accommodate these additional site locations, without those sites ever being impacted or even being aware that the management plane is going through a scale-out.
The management plane can handle a lot of the heavy lifting of operations within itself. So a lot of the work around, say, installation, monitoring, and management of your edge locations can be offloaded to that cloud-hosted management plane, which in turn enables you to deploy some pretty complicated software at these edge site locations and still keep up with the SLA and the high availability and uptime requirements that you have. The SaaS management plane, in other words, can help guarantee that you're able to deliver your end apps with a very high SLA to your end users. For example, your radio cell towers, as Naveen will talk about in a minute, can run with a very high uptime SLA while still running complex technologies such as Kubernetes at each of those cell tower locations. That's one of the beauties of this model. Another of its characteristics is that it can let you provision all of your software resources completely automatically. So, again in the case of a radio cell tower or a retail store location, once your physical servers are racked and stacked and have network connectivity at that edge location, the remotely hosted management plane can take over: it can boot up the servers and install the operating system from the ground up, in other words, remotely provision the bare metal, then deploy complex technologies like virtualization or Kubernetes, again all remotely, and then allow you to deploy your legacy apps as VMs or straight on the bare metal, and your containerized apps on Kubernetes. This kind of model lets you, as an end user, build out CI/CD automation that will deploy one or many edge site locations through a single-click operation. It's complete software-based automation using APIs exposed by this remotely hosted management plane.
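The single-click, API-driven provisioning flow described here could be scripted roughly as follows. The `EdgeSiteClient` class, endpoint path, and payload fields are hypothetical, for illustration only; they are not the vendor's actual API:

```python
import json
from urllib import request

class EdgeSiteClient:
    """Hypothetical client for a SaaS-hosted management plane API."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def build_site_payload(self, site_name: str, server_macs: list[str]) -> dict:
        # Describe the whole site declaratively: the management plane
        # boots each server over the network, installs the OS, and joins
        # it to a Kubernetes cluster without anyone on site.
        return {
            "name": site_name,
            "servers": [{"mac": mac, "role": "worker"} for mac in server_macs],
            "os_image": "ubuntu-22.04",
            "kubernetes": {"version": "1.28", "cni": "calico"},
        }

    def provision_site(self, payload: dict) -> request.Request:
        # Prepare the HTTP request; sending it is left to the caller,
        # e.g. as one step in a CI/CD pipeline.
        return request.Request(
            f"{self.base_url}/v1/edge-sites",
            data=json.dumps(payload).encode(),
            headers={"Authorization": f"Bearer {self.token}",
                     "Content-Type": "application/json"},
            method="POST",
        )

client = EdgeSiteClient("https://mgmt.example.com", token="secret")
payload = client.build_site_payload("cell-tower-0042", ["aa:bb:cc:dd:ee:01"])
```

The point is that the whole site is described as data, so deploying a thousand sites is just a loop over a thousand payloads.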
So in summary, the SaaS-based management model can fundamentally attack some of the core challenges that edge server infrastructure management faces. It can provide a very high SLA. It can remotely auto-provision bare metal, which can be a huge differentiator, especially when you're talking about managing sites that don't have easy access, so you cannot send personnel to these site locations. And finally, it gives you the ability to deploy fairly complex technologies: virtualization, container orchestration, et cetera. With that, let's take a deep dive into Mavenir and their use case to understand this better. Naveen, over to you. Thanks, Madhura. So I think, to set the stage, what Madhura has said is right: the edge is coming, and the edge is going to be big. The question is, how big? I want to say there are going to be at least a million pods at the edge for each network. And the reason I say that is because when you think of the number of towers that are out there, and when you think of how routing is done from a traffic perspective today, traffic is still going all the way into the data centers and then going back out from the core. So they're trying to move that as close to the tower as possible. There are added benefits: the cost of the infrastructure is going to go down, and everything is containerized, which means the RAN, the packet core, and the IMS are all capable of coming up on the same infrastructure, which telcos never had the chance to do before. If you look at 5G, the aspect of 5G that's talked about a lot is the radio, and the reason is that carriers have long felt that the incumbent vendors have been more guarded about the radio hardware and software. It was shipped as a black box; it was not actually meant to be opened up and looked inside.
With the 5G spec, it's basically been disaggregated: you have the tower side, and then there's everything else, which runs on x86. So let's go to the next slide and I'll jump into some of the details. We at Mavenir looked at this and said, how do we give telcos a common infrastructure platform to run not only their radio, but also the core, IMS, and any other workloads, and also be able to move this to the public cloud? When we say this, it's the ability to move infrastructure using the smallest components. And like Madhura said, how do we automate this? How are you able to scale up and scale down? Back in the day, people would ship software after they shipped hardware, because you had to ship the hardware: hey, this region is at 90%, we need to ship more hardware, get that racked and stacked, and then somebody had to go install all of it. The whole idea now is that it's plug and play. When we say plug and play, you could move some of this extra workload to the public cloud, and it should be seamless from an operator perspective. The way the stack is structured, the cloud itself becomes a component that the CaaS and PaaS platform utilizes. So that becomes the fundamental component that we rely on. And this partnership with Platform9 has been very beneficial, because we're able to lean on them for a SaaS model of the CaaS and help scale our services faster. Now, on top of it, we also have the application layer. Because of the nature of the applications, there are some common services provided as well. And then on top of that is the management layer, which asks: where do you want this? How do you want to orchestrate these network functions? Again, the entire idea is to provide a platform that can run all of the services the network needs, and also, if needed, some other services like IT functions, and be able to move from public cloud into private cloud and from private cloud back into public cloud as needed.
So it's load-driven and not constrained by hardware. I think that's really the goal we're trying to accomplish here. It's just in the beginning stages; it's got a long way to go. Let's go to the next slide. This is a bit of a deeper dive into some of the frameworks. What does that mean? Today, when we talk about private cloud, traditionally telcos have run on some version of physical hardware like Dell or Supermicro, with a switching layer, and then the concept of: let me build you an operating system, then put something on top like OpenStack, and then on top of that we can provision some services. What we're talking about is that it could either be that, or it could completely be AWS or Google Cloud. And then on top, we bring in a CaaS platform, which means Kubernetes fundamentally goes on bare metal, and Kubernetes is going to take care of everything, whether it's scale up or scale down. Yes, there is some orchestrator that's orchestrating these resources from an application viewpoint, but the application is seamlessly able to scale up and down as needed. And it can be deployed as far out as you want, which is really under the cell tower if needed, on a single server, still part of a Kubernetes cluster, because maybe that runs as a stretch cluster. The idea of a stretch cluster is: if you have multiple availability zones, you consider the cell tower an availability zone, and you load balance it against some other place, so if you have an outage, you can deal with some of these things. Now, on the PaaS side, we have a pretty standard infrastructure setup: how do we set somebody up with the traditional stuff, which is an ingress controller, a service mesh, and things like Prometheus, Grafana, and EFK to look at all the logging coming in, look at all the KPIs coming in, and also look at the bigger question, which is: can you really make this an observability-driven platform?
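The stretch-cluster idea, treating each cell tower as its own availability zone and spreading replicas across zones so one outage doesn't take the workload down, maps naturally onto Kubernetes topology spread constraints. A minimal sketch; the app name, image registry, and replica count are illustrative:

```python
def spread_deployment(app: str, replicas: int) -> dict:
    """Build a Deployment manifest that spreads pods evenly across
    availability zones, so losing one tower/zone leaves replicas alive."""
    labels = {"app": app}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "topologySpreadConstraints": [{
                        # Standard well-known zone label; each cell tower
                        # (or local data center) is modeled as its own zone.
                        "topologyKey": "topology.kubernetes.io/zone",
                        "maxSkew": 1,
                        "whenUnsatisfiable": "DoNotSchedule",
                        "labelSelector": {"matchLabels": labels},
                    }],
                    "containers": [{
                        "name": app,
                        "image": f"registry.example.com/{app}:latest",
                    }],
                },
            },
        },
    }

manifest = spread_deployment("vdu", replicas=4)
```

With `maxSkew: 1`, the scheduler keeps the replica count between any two zones within one of each other, which is exactly the "load balance it against some other place" behavior described above.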
So there's an analytics engine on top that would power this to make some real-time decisions. And on top of that, we have the application, which could be any of the network functions, starting with the radio, which is split into a DU, a CU-CP, and a CU-UP, or it could be the 5G core and its whole suite of services, along with the RIC, which is one of the newer things coming to 5G: a radio controller driven in an interactive manner based on the KPIs. This paradigm is completely different from the previous world, where everything was a black box and the radio and everything on the infrastructure was very manual, very human-centric is what I would say. Now this is going toward automation, and it's all going to be driven through Kubernetes, because fundamentally, and I believe this is really true, Kubernetes was built for the enterprise but will help telcos stretch to the limits of the edge in an easy and, hopefully, automated manner. So let's go to the next slide. Some of the detail that we showed earlier, this is a slightly different view. When we talk about a service being offered to some of our applications internally, what does that mean? We talk about how we make the application developer's life easy, so they can say, hey, I want to deploy a service in a location, and they really should not be aware of the constraints of building infrastructure; it should come up as fast as possible. Primarily, what we provide on the bottom is Kubernetes, heavily driving the application deployment, driving the application placement, and exposing the applications so they can be scaled up and down, so their life cycle can be managed, and so you can, for example, upgrade the application.
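The observability-driven, KPI-based decision making described here can be sketched as a simple scaling rule. The PRB-utilization metric and the thresholds below are invented for illustration; a real analytics engine or RIC would weigh many KPIs, not one:

```python
def scaling_decision(prb_utilization: float, current_replicas: int,
                     high: float = 0.8, low: float = 0.3) -> int:
    """Return a new replica count for a RAN network function based on a
    single KPI, e.g. physical-resource-block (PRB) utilization scraped
    from Prometheus. Thresholds are illustrative, not tuned values."""
    if prb_utilization > high:
        return current_replicas + 1        # scale out under load
    if prb_utilization < low and current_replicas > 1:
        return current_replicas - 1        # scale in when idle
    return current_replicas               # hold steady in between

assert scaling_decision(0.9, 3) == 4
assert scaling_decision(0.1, 3) == 2
assert scaling_decision(0.5, 3) == 3
```

In practice this loop would run continuously, closing the gap between "look at the KPIs coming in" and "make some real-time decisions."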
So this package of how we do this has become super critical in our opinion, because everything else comes later: whether you want to scale services up and down, run a service mesh, or do some of the fancier upgrades with CI/CD. All of that is fundamentally dependent on this CaaS layer, which we truly believe should be managed from a central location, because some of these aspects are not something you want to manage in hundreds of thousands of locations with millions of pods. What you want to do is bring that view back in, aggregate it, and be able to show the operator what it is you want to do next, or, if you're even a little more aggressive, give that to an analytics-driven machine learning model that suggests what to push next. I think that's really where the industry is going. On top of that, in the MTCI layer, which we're working with the Linux Foundation and the XGVela group to open source, there's a set of application services that are very telco-centric. Think of them as PaaS, but telco PaaS. These are services being open sourced by Mavenir, and they let anybody come and make it a plug-and-play environment. You bring your network functions, which are applications, and you're able to run them on a standard Kubernetes interface to your heart's content. And I truly believe there are going to be a million pods in each network at the edge. So it's almost like running many EKSes, and that's where the real scale kicks in. I think that's where Platform9 offers a pretty good solution, where for us it's literally plug and play. Let's go to the next slide, Madhura. This is more of a physical representation, just to illustrate what the setup looks like. You have hundreds of thousands of towers on the left side.
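Bringing the view of hundreds of thousands of sites "back in" and aggregating it for the operator amounts to a fleet-wide rollup. A toy sketch of such an aggregation; the cluster records, fields, and version cutoff are hypothetical:

```python
from collections import Counter

def fleet_summary(clusters: list[dict]) -> dict:
    """Roll up per-cluster health into one operator-facing view, e.g.
    to decide which sites to upgrade next. Simple string comparison
    suffices for the illustrative version numbers used here."""
    status_counts = Counter(c["status"] for c in clusters)
    stale = [c["name"] for c in clusters if c["k8s_version"] < "1.28"]
    return {
        "total": len(clusters),
        "healthy": status_counts.get("healthy", 0),
        "degraded": status_counts.get("degraded", 0),
        "upgrade_candidates": stale,
    }

clusters = [
    {"name": "tower-001", "status": "healthy", "k8s_version": "1.28"},
    {"name": "tower-002", "status": "degraded", "k8s_version": "1.27"},
    {"name": "tower-003", "status": "healthy", "k8s_version": "1.27"},
]
summary = fleet_summary(clusters)
```

The same rollup could just as well feed a model that proposes the next batch of upgrades instead of a human operator.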
You may have about four to five towers, or up to 10 or 20 towers; these get aggregated, and there's a 5G term for this: the fronthaul. It's the data flowing from the tower back into the vDU. The vDU is the first component that runs on an x86 server. Now, the nice thing about this is it could run in an LDC, if there is a local data center concept, or it could run under the tower itself on an x86 server. All of this gets aggregated, and it also talks to the CU-CP and CU-UP; there are a couple of associated components a little further upstream, more on the regional data center side. And you'll notice that there's a management system oriented more toward the application and application function orchestration, but it drives the infrastructure underneath as well. On the right side, you have the core, which is more the national data center. All of this traffic goes back through the CU-CP and into the core, and that's where some of the key decision making happens. On the left side, you have the radio, the actual physical radio, with the digital signal being processed by the vDU. And then some of the brains, some of the components on the right side, take care of questions like: is it an SMS, where does it go, is it a voice call, where does it get routed? So those other components are on the right side. It's a very small picture, just to show the complexity of the network in a network operator's environment. And truly putting Kubernetes everywhere, stretching to the edge, which was very much a monolith running in a black box, and replacing that with x86 hardware running at scale at the edge, I think, truly is the next step in the evolution of what I call the edge. So let's go to the next slide, Madhura.
Naveen, when you speak about stretching Kubernetes, you're actually thinking of putting Kubernetes nodes in those radio cell towers too, right? Absolutely. Where the radio cell towers need a box, the x86 server sitting underneath is really a worker node running on bare metal with Kubernetes on top of it. It doesn't get any closer to the edge, I would say. That's right. Great. Naveen, thank you so much for walking us through that. So in summary, folks, what we believe is that SaaS management can significantly simplify your operations at scale across the various layers of management in your edge deployment. As Naveen said, in a telco example, it's your inventory of radio cell towers, but also all of the underlying regional data centers, local data centers, and the core software, all of which needs to be managed at scale and across the globe to truly run an effective 5G network. Incorporating SaaS management can really be a game changer in this kind of scenario, in the level of simplicity it can bring to your management and hence the kind of SLAs you're able to commit to your end customers. Those are some of the key takeaways that we hope you take from this session. Again, thank you so much for joining; Naveen and I would be happy to take any questions you might have at this time. It looks like there are no questions, so this concludes our session. Thanks, everybody.