All right, I guess we'll get started on time. We're here to talk about edge, Kubernetes, OpenStack, and OPNFV, so I think that wins the buzzword bingo for the day. I'm Randy Levensalor, an architect at CableLabs, working on SDN and NFV.

Hi, I'm Shamik. I'm from Aricent, where I lead our edge computing strategy as well as our solutions based on open source and virtualization. I'm based out of India.

Yep, and we work closely together on some open source projects, including some installers, and we'll talk more about that in the presentation.

I want to start by setting some context for where we're at today and where we see the service provider industry going. If you're not familiar with CableLabs, we're a non-profit R&D organization that does research on behalf of the worldwide cable industry.

Some of our members are fairly far along in the lift-and-shift phase, which is where you think about OpenStack. A lot of the first-generation network functions look a lot like the physical devices they replace. That's one of the sales pitches, right? My VNF does everything my physical device does, which is great if you don't want to take advantage of the cloud. We're starting to see infrastructure as a service, platform as a service, and network as a service as this evolves, along with some level of automation, and we're getting a lot of learning through this phase.

The next phase is where we'll be in the next one to three years, and when I say that timeframe, I mean deployments of the network architecture with technologies available today. We are seeing proofs of concept for cloud native and even some of the more advanced features, but not wide adoption yet. So the next phase is containers, cloud native, and smaller VNFs: not quite microservices, but moving in that direction, so that we can actually take advantage of the cloud infrastructure or Kubernetes. We'll talk a lot more about where we're at with Kubernetes.

Then, going further into the future: once we have everything automated, with microservices and moving on to nanoservices, to throw out another buzzword, we really want to see autonomous, self-healing networks. We'll be able to collect a lot more data, learn from that data, and then do self-optimization and self-healing. You can do smaller portions of that today where you actually have end-to-end automation. But once everything across your network supports programmable switches, OpenFlow or P4, with some kind of SDN controller managing the switches, that's really where we're excited about things going in the long term.

And I promised to talk edge. Of course, what does edge mean? Everyone has a different definition. For us, edge means everything from the customer premise, which could be your cable modem for a residential customer or a stack of servers if you're more of an industrial user; to access nodes, the little pedestals or boxes on telephone poles where you can put compute; to what we'd call the near edge, a headend or hub in cable terms, a central office in telecom terms. That last tier is where you have some air conditioning and can actually use traditional racks.
So you get a little bigger, a little more oomph there, but space is still more limited than in core and regional data centers and hyperscale facilities.

Moving to the next slide: why do you want to look at these different areas? The more centralized you are, the less expensive it is to run, but the more latency you have. If you're trying to render for VR goggles and the host is 50 milliseconds away, you're going to get queasy. So for these low-latency or high-bandwidth applications you want something within roughly 40 kilometers or less, very close to the house, and you really don't want a bottleneck anywhere on the network. That's what we're looking at with these edge applications: move the workload as close as that application needs. HA comes into this as well; you may run more on the customer premise just to keep everything within one availability zone. So those are some rough guidelines for why you'd place an application in each of these tiers.

Some use cases, and I won't go through this too much: basically, we can do more data plane work closer to the edge. On the access side, 5G is the thing everyone loves to talk about. We won't cover it much today, but it's something you can run at the edge when we talk about the infrastructure, and a lot of the next-generation cable access infrastructure can also run in a virtual environment at the edge. More central things include your mobile core and your voice, and lots of things like Netflix can be streamed centrally, with your CDN closer to the customer site. That's just to give a little more context.

As we move along, we have more and more requirements for network virtualization: low latency, high throughput. We care about how many packets per second we can push and how close we are to line rate. If you move a function from physical hardware to virtual hardware, you don't want that to be a slowdown, right? If you're gaming at home, you don't want half a millisecond added to your ping because, hey, we virtualized. You want the same speed, if not better, from a smarter, more intelligent network. We really do focus on a lot of this, and some of it may mean less oversubscription of compute nodes, so you don't necessarily get the same VM density at the edge for these applications as you would in a traditional cloud.

I probably won't spend too long here, because everyone in this room knows KVM. Hypervisors and OpenStack-based virtualization are really good for general-purpose compute, especially in a large data center: lots of flexibility, security, and isolation, and it's a lot more mature than something like Kubernetes. But there's a lot of overhead that comes with that. A VM takes much longer to spin up than a container, and there's a lot of extra management. And if you're managing thousands of sites, and when we talk about the edge we mean thousands of sites at the access edge and then millions of devices, or millions of individual sites, at the customer premise, you can't do anything that needs active management. It all has to be hands-off: you roll it out and it manages itself as much as possible. Again, OpenStack.
We're at an OpenStack event, so I really don't need to cover this slide. Sorry, I left it in from another presentation. Let me hand over to Shamik now.

Yeah, hi. I'll speak a bit about edge computing in general, how this market is picking up, and the kinds of problems we are trying to solve, particularly with respect to the open source project that we have recently started.

As you can see, there are many domains and verticals where edge computing makes sense, not just in communications but in consumer, rail infrastructure, and energy as well. One of the fundamental problems in edge computing today is finding the right use case. If we can save 10 or 20 milliseconds by moving from the public cloud or a central office to the edge, is that really the right way forward? Because edge is expensive: there is a lot of hardware to run there, it is highly distributed, and you will have thousands of edges. Is it prudent to run everything at the edge? What exactly we should be running at the edge is perhaps the key problem we are trying to solve.

It's also important to look at edge computing from a developer's perspective. The application developers who will build these applications, AR applications or cloud gaming, which essentially need to run at the edge for latency reasons, come from a completely different world than the telecom or communications providers. So making the edge more developer friendly and more accessible, so that developers can transition seamlessly from public cloud to edge, or stand up their own edges to try a few things out, is a critical foundation of this project. The developer is a first-class citizen in our edge computing. Let's move on.

When we started this journey, we also wanted to understand which applications actually make sense for the edge. As I mentioned, not everything will require an edge; some things can run in the public cloud, or far away, if their latency tolerance is high. So what exactly should run at the edge? Based on a few observations, we came up with this kind of list. Latency is important, of course; otherwise you most probably don't need an edge. The other important aspect is power. Today's AR glasses are large and bulky, they heat up quickly, and the prices are pretty high, $3,000 or $4,000. If you have to make AR glasses more accessible to users, you have to bring down the price of the device, and if you have to bring down the price of the device, you might have to offload the computing from the device to the edge. So there must be some innovative ways of offloading compute. What battery consumption an application requires on the device today, and how you can actually move that consumption away from the device, are some of the key challenges we have to solve.

It also means you would be doing a lot of computing at the edge. You would be running neural networks; you would have to run a lot of data-related algorithms. So the edge also needs an architecture that can support data designs as well. It may not be big data,
that's a misnomer here, but you do need to support streaming analytics and a lot of capabilities that today run on a device. Once you move them to the edge, you can do much more than one device can. Today a device renders one instance of an application; when the edge runs the same application in tens or thousands of instances, you need a different architecture to handle that.

So this was an exercise we did for some of these applications, and not all applications really require edge. I won't go through the whole list, but what it actually shows is why we perhaps need a microservice-based approach to edge compute. The simple reason is that today's application developer doesn't build everything from scratch. They use a lot of open source; they go to a marketplace to get microservices to build their applications. Those microservices could be built by somebody else. For example, if I'm doing a face recognition application, the specific algorithm could be built by somebody who has nothing to do with edge as such. As an edge application developer, I might want to use it. Now, can I use it in the same structure and format running in a telco edge? Most likely no, unless there is a way to consume its APIs and a way to launch those applications and microservices together. That's the critical factor. Nobody develops everything from scratch today. How do we get microservices that were never meant for the edge to work on the edge? Can there be a standard method for the infrastructure? Can we do serverless? Can we do container-based architectures so that these microservices can be consumed by applications? That is critical for our journey as well.

Apart from this, there are other fundamental problems with edge compute purely from the developer's perspective. A developer today might run 10, 20, or 50 virtual machines in a public cloud and be able to serve hundreds of thousands of consumers. How did this happen? Primarily because of two things: you host the server side of the application in the public cloud, and you disseminate the client side of the application through app stores, so you can reach essentially all consumers. Now, if you expect this application to simply be lifted and executed at the edge, it most probably won't be the right economic model, because those 10 virtual machines become 10,000 once you run them across all 100 or 1,000 edges. The developer's cost per application increases drastically if they have to move to the edge this way, because they cannot possibly lift and shift it. So there must be ways of re-architecting the application so that the overall cost of moving from cloud to edge comes down. That means applications that can be started on demand and switched off on demand. It could be a microservice running at the edge corresponding to one device, with thousands of such microservices scaling down as devices switch off or go away (the sketch below illustrates the idea). It cannot be a simple lift and shift, because that is simply not economically viable. You cannot ask a developer who runs 10 virtual machines today to run 10,000 for the edge; it simply doesn't work that way.
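To make that concrete, here is a minimal sketch, assuming a Kubernetes-based edge and the official Python client, of the kind of on-demand scaling being described: one lightweight microservice deployment per device class, scaled up as devices attach and back to zero when they go away. The deployment name, namespace, and devices-per-replica ratio are hypothetical, not part of SNAPS.

    # Illustrative sketch: scale a per-device-class microservice with demand,
    # so an edge only spends cores while devices are actually attached.
    # Assumes the official `kubernetes` Python client; the names
    # ("face-recog", "edge-apps") are hypothetical.
    from kubernetes import client, config

    config.load_incluster_config()  # or config.load_kube_config() off-cluster
    apps = client.AppsV1Api()

    def set_replicas(deployment: str, namespace: str, replicas: int) -> None:
        """Scale a deployment; replicas=0 frees the edge's cores entirely."""
        apps.patch_namespaced_deployment_scale(
            name=deployment,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    def on_device_count_changed(active_devices: int) -> None:
        # One microservice instance per N attached devices, zero when idle.
        devices_per_replica = 20
        replicas = -(-active_devices // devices_per_replica)  # ceiling division
        set_replicas("face-recog", "edge-apps", replicas)

The point of the sketch is the economics, not the mechanism: capacity tracks attached devices instead of being provisioned as 10,000 always-on VMs.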
The other big challenge for a developer is: how do they actually access the edge? They can't, right? It's somewhere deep inside the operator's network, on a telephone pole, in a basement somewhere, on a tower somewhere. Application developers have no way of accessing that today. In the public cloud they can just log into the virtual machine, check what happened there, pull monetization data, see how much money they made from the application. They cannot do that when it runs deep inside the service provider's network. You cannot expect an application developer sitting at home to connect to an edge running somewhere out in the countryside or somewhere in the city and be able to monetize or monitor the application's behavior. So there must be a method to bring all that data together, aggregate it, and show the developer a view of how the application is performing. Remember, it's an application on steroids: it was running 10 instances in a public cloud, and now it might be 20,000 microservices across different edges. The aggregation required per application is simply very high. So this is another problem that we'll have to solve down the line.

Right, so what is our approach? We are not trying to solve everything in the first go; we are going step by step. This particular project is trying to solve one thing: creating an environment for the developer to develop such applications. The model is that we have the edge provisioning platform, which is what we call edge provisioning automation, and then there is a runtime stack based on OpenStack and Kubernetes. It's an automation platform that helps developers build their own edge and try out applications without having to worry about the access. The access could be anything, and the access location could be anywhere, but the developer still gets an easy way to run an application on the runtime.

What we are trying to do now is find ecosystem partners and drive use case development, because each use case will bring certain changes to the stack itself. If you run an AR application, there will be a specific need to run certain components in the stack; if you run a robotics application, there will be something like a point cloud service running somewhere. So you need use-case-specific customization of the stack, and that's another aspect where we are looking to the community: to build different use cases and figure out the right stack for each of them. And finally, we are looking to the operator community to try this out and give us feedback on what capabilities are required and what proofs of concept we can do with this stack, using the community's applications. That's how we are trying to build the way forward.

So these are our building blocks, in three tiers. We haven't built everything; we have started to build it. On the customer premises, at the edge locations, we have edge hardware. It also runs an open source platform based on OpenStack and Kubernetes; this is what we call the SNAPS
runtime platform, which runs on the edge hardware itself. At the second layer, the management layer, we are trying to build two things. One is operator device management and provisioning, basically an automation platform where you can spin up a customized edge stack, whether that's Kubernetes plus OpenStack, or Kubernetes with certain layers specific to a use case, for example TensorFlow, GPUs, and a bunch of other capabilities. You define the stack, and that stack can be provisioned onto edge hardware. That's the work we are currently doing; it's called SNAPS-X on the management side.

What we are trying to build on top of that is an orchestration system that is totally distributed. This is on the roadmap, and we'll start working on it very soon. The orchestration within an edge is self-contained, and then there is a central component that monitors only a very few things, for example collecting monitoring and diagnostics data from the different edges. Why are we doing this? Primarily because the orchestration has to be decentralized. You cannot possibly run everything centrally anymore, because there will be thousands of edges; you cannot run an orchestration platform in the central office and expect it to manage all of them. So we'll have to split the orchestration capabilities into an edge-specific agent and a central manager that talk to each other minimally, and that ensure the authentication of the application, the user, and the consumer. Remember, there are many stakeholders: there is a developer whose application must be authenticated, there is a user who must be authenticated, and the edge itself needs to be authenticated. So a decentralized way of doing authentication is very important, and that will be our next big challenge to solve. Maybe we'll pick up some existing open source projects and try to use them for this purpose, but that's in the roadmap; that's the next domain.

At the top, you see something called customer-facing services. These are deliberately not what we plan to build. They are specific to an operator and to its users: people might want to use this as a science project, or as a live production environment; we don't know, and we also don't know how each individual user would like to monetize. So we have kept that open, but any suggestions are welcome. We have a list of email addresses where you can send us your suggestions on how we can take this ecosystem further.

OK. So let me talk a little about these specific open source projects. SNAPS originally stood for "SDN and NFV Application Platform and Stack," and as you can see, I got tired of saying that at conferences, so we started saying SNAPS. Many say this is very similar to OPNFV, and it is; that's kind of where we started. But we really wanted a platform that was a little more stable and focused on what we can do with the VNF. Something that just runs, that comes pre-configured, with fewer decisions to make when you stand it up. If you look at a lot of the NFV demos, it's "we used the standard install, plus we did CPU pinning, large memory pages, DPDK," and things like that. We've actually integrated all of those into the base install (for instance, the kind of pinned, huge-page-backed flavor sketched below), and done a lot of the network configuration and those other fun things up front, so you don't have to. It's an open platform.
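As an example of what "integrated into the base install" means in practice, this is roughly how an NFV-style flavor with dedicated CPUs and huge pages is expressed in OpenStack. A minimal sketch using python-novaclient; the endpoint, credentials, flavor name, and sizes are placeholders, not SNAPS defaults.

    # Sketch: an NFV-friendly Nova flavor with CPU pinning and huge pages,
    # the kind of tuning SNAPS-OpenStack pre-configures so you don't have to.
    # Endpoint, credentials, and sizes below are placeholders.
    from keystoneauth1 import identity, session
    from novaclient import client as nova_client

    auth = identity.Password(
        auth_url="http://controller:5000/v3",  # placeholder endpoint
        username="admin", password="secret", project_name="admin",
        user_domain_id="default", project_domain_id="default")
    nova = nova_client.Client("2", session=session.Session(auth=auth))

    flavor = nova.flavors.create(name="vnf.pinned", ram=8192, vcpus=8, disk=40)
    flavor.set_keys({
        "hw:cpu_policy": "dedicated",   # pin vCPUs to dedicated host cores
        "hw:mem_page_size": "large",    # back guest RAM with huge pages
    })

Instances booted from such a flavor land on hosts configured for pinning and huge pages, which is exactly the host-side preparation the installer does up front.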
We've had issues in the past when we tried to collaborate on one of the for-fee versions of OpenStack. Even though we were working with the vendor and the partner, and we had all the licenses, it took a month and a half to get the right license key to run on our site rather than the customer's site. That's why it's nice to run pure open source. We haven't modified the upstream packages much; we really base this on Kolla and Kolla-Ansible today. We're just trying to make it easier to install and make more of the configuration decisions for you, so it's a little more consistent for doing proofs of concept and testing.

You can see our main projects. SNAPS-Boot is where we install Linux, set up the large memory pages, do the CPU pinning, and set static IPs, all the things that most of the installers, at least the open source ones, don't do today for Kubernetes. For SNAPS-OpenStack, we have Ceph and we format the disks, just to make it a little easier and more usable. We have a couple of members, cable operators, who have run this in their environments, and we're seeing a lot of vendors run it so they can actually show interoperability. At our summer conference we had a virtual CCAP, the data plane for the cable network, running on SNAPS-OpenStack and on a commercial version of OpenStack, running the same set of VNFs to demonstrate interoperability, with one orchestrator managing both. That's where we're trying to go with this. It's a lab; it's not necessarily meant to replace what you do in production, though it could be hardened for production.

Kubernetes is where we've been spending most of our time these days. Like everyone, right? It's new, it's shiny. We're really looking at Kubernetes on bare metal, because in a lot of the edge OpenStack cases you see Kubernetes or containers underneath OpenStack, and then, oh by the way, you're going to run containers and Kubernetes inside the VMs too. So you end up with Kubernetes on OpenStack on Kubernetes, just to have a very flexible, secure environment. At the edge, especially the closer you get to the customer premise, you really don't want that extra overhead. That's why we're looking at Kubernetes on bare metal.

There's some data plane acceleration we need here: packet processing with SmartNICs and DPDK; transcoding, because your TV feed has to come in and you don't want to transcode it on your TV, so we use GPUs; and for cryptography there are dedicated NICs, standalone cards, and some SmartNICs where you can actually offload the crypto (the pod sketch below shows how such resources get requested on bare-metal Kubernetes).

And this is some of the roadmap we have planned, which is also the call to the community. Of course, the distributed, decentralized orchestration we talked about is perhaps the most important thing to build for the edge, so we will be working on that. We also need to solve the problem of monitoring and diagnostics in a heavily distributed edge: there are thousands of edges, and an application developer needs a single view of the monitoring and diagnostics for their application. That's the next problem we want to solve with this particular project.
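Going back to the data-plane and GPU point for a moment: on bare-metal Kubernetes those accelerated resources are requested directly in the pod spec through the scheduler. A minimal sketch with the Python client, assuming huge pages are enabled on the node and a GPU device plugin (NVIDIA's, for example) is installed; the image name and quantities are placeholders.

    # Sketch: a pod that asks the scheduler for huge pages and a GPU, as a
    # transcoding or packet-processing workload on bare-metal Kubernetes might.
    # Assumes HugePages are enabled on the node and a GPU device plugin is
    # installed; the image and quantities are placeholders.
    from kubernetes import client, config

    config.load_kube_config()
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="transcoder"),
        spec=client.V1PodSpec(containers=[
            client.V1Container(
                name="transcoder",
                image="example.org/transcoder:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    # hugepages requests must equal limits, and need a
                    # memory request alongside them
                    requests={"hugepages-2Mi": "1Gi", "memory": "2Gi",
                              "cpu": "4", "nvidia.com/gpu": "1"},
                    limits={"hugepages-2Mi": "1Gi", "memory": "2Gi",
                            "cpu": "4", "nvidia.com/gpu": "1"},
                ),
            )
        ]),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)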
We also think we need a service mesh that goes beyond a single Kubernetes cluster and talks east-west. At the edge, there are use cases where we feel it's important for edges to talk to each other, particularly from an application standpoint, and we'll perhaps have to build that kind of architecture next, to ensure that applications are resilient and able to move between edges. Even a stateless application still needs to move across edges.

Security is very critical, and these are some of the capabilities we would like to add. Many of them are existing open source projects that we are evaluating against our requirements: image scanning, policy engines, authorization, identity-based networking. There is plenty of security work that still needs to be done specifically for the edge. And then, of course, serverless. That's another keyword, but it's important from the edge perspective because we want to run completely event-based architectures and offload computing from device to edge seamlessly.

Let me talk a little more about built-in tooling and monitoring. You can't take all the data from all the headends and centralize it; we've seen what happens doing that with traditional hardware environments, which probably have even fewer metrics, where it often takes half an hour to consolidate the data across multiple regions and geographies. So we really want a lot of that work done at the edge, and only the interesting stuff sent back to be consolidated globally (the toy sketch below illustrates the pattern). A lot of this is unique to the edge: you have thousands of data centers, and you can't necessarily run a large, heavyweight monitoring solution at every data center, but you still need to do something.

And then, of course, we have a few things to do on the cloud native side. The developer ecosystem is still at a fairly nascent stage, and we need to build it; that's one of the things we are trying to do. We also want to standardize SDKs so that the application developer doesn't have to worry about how the system or the access network functions. That's not an application developer's strength; you don't expect an AR developer to understand NFV. How we abstract that complexity away from the application developer is one of the key problems to solve at the edge. And there will not be a case where you're running everything at the edge; there will be situations where something runs at the edge and something runs in the public cloud, so there must be some kind of federation across edge and public cloud. Kubernetes federation is one way of doing it, perhaps, but there could be other ways as well. That's another area that still requires exploration.

I think we'll skip these next couple of slides, which were just trying to be complete, and talk a little more about our Kubernetes release. It is a certified Cloud Native Computing Foundation installer; we just completed the 1.11 conformance certification, and we'll have 1.12 done fairly soon. Again, we'd love collaboration, and quite a few people have started to pick it up already; pretty much everything is on GitHub. We also have the project SNAPS-OO, which provides a bunch of the test tools and is used by a lot of the certification and other tooling within OPNFV; SNAPS-OO is hosted by OPNFV.
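To illustrate the "aggregate locally, forward only the interesting stuff" idea referenced above: a toy sketch of an edge-side filter that summarizes raw samples on site and ships only anomalies upstream. The anomaly threshold and the upstream collector URL are invented for illustration.

    # Toy sketch of edge-side aggregation: summarize locally and forward only
    # anomalies, instead of streaming every raw metric to a central collector.
    # The threshold and upstream URL are invented for illustration.
    import json
    import statistics
    import urllib.request

    UPSTREAM = "https://central.example.org/metrics"  # hypothetical collector

    def summarize_and_forward(edge_id: str, samples: list[float]) -> None:
        if not samples:
            return
        mean = statistics.fmean(samples)
        p99 = sorted(samples)[int(0.99 * (len(samples) - 1))]
        # Keep full-resolution data at the edge; only escalate outliers.
        if p99 > 3 * mean:
            payload = {"edge": edge_id, "mean": mean, "p99": p99,
                       "n": len(samples)}
            req = urllib.request.Request(
                UPSTREAM, data=json.dumps(payload).encode(),
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)

The design choice is the point: thousands of edges each send a few summary records, rather than a central system pulling every raw metric and taking half an hour to consolidate.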
Yeah, we're seeing new challenges with NFV and edge for operators, but we want to take advantage of what's out there today; we don't want to reinvent everything. We want to make something that's accessible to a broader developer ecosystem. You should be able to write an app and have it run on a telco's or a cable operator's infrastructure, on all the operators' infrastructure, and take advantage of those low-latency, high-bandwidth opportunities. We have our own little IRC channel, but we mainly do things over email, so again, reach out. And we have plenty of time for questions, maybe? Any questions?

You mentioned a unified approach for the solution. But what do you think of extending it for these workloads targeted at the edge? Like allowing users to leverage the same solutions the infrastructure uses, and maybe even Kubernetes on top of it. There is really a lot that could be unified there. What do you think?

Yeah, of course. Eventually, having a unified project would be excellent. But I think there is room for multiple initiatives today, because everybody is trying to solve a different part of the edge, a different problem of the edge. We are trying to build a platform that is easily installed and on which you can easily run applications from a developer's standpoint, and that's the domain we are trying to address. We don't know the future; maybe there will be some kind of unification of projects. But as of today we are trying to solve the specific set of problems we defined, and eventually, based on usage, we'll take a decision on that.

Yeah, thanks. I can think of at least one thing that's really common to all of them; it's probably state synchronization.

Sorry, it's what?

State synchronization, like a database or a message queue. That's a nice space for unification.

Exactly. Thanks.

Thank you. You mentioned a couple of pain points around policy engines and authorization. Can you speak a little more to the requirements you're finding at the edge, and how those systems in OpenStack and Kubernetes are lacking in that area?

Yeah, and we've seen a lot of progress in this area over the last few years. But it really comes down to how you get that low-latency I/O performance. You need to coordinate your schedulers across compute (or acceleration), network, and possibly storage. For VNFs that are service-chained together, you really want them on the same system, and then the other chain on a different server. You want that done automatically, but it should also be able to span multiple hosts if it has to (the sketch after this answer shows the existing OpenStack primitive for this, the server group). Some of the other things: we're looking at technologies like RDMA to lower the overhead of communication between VNFs within the data center, so you don't pay the cost of the full IP stack, of establishing a connection, sliding windows, and all of that, when you have a much more reliable link.
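For the co-location point just made, OpenStack's existing primitive is the Nova server group: members of an "affinity" group are scheduled onto the same host (cheap east-west I/O for a service chain), while "anti-affinity" would spread a second chain onto different hosts. A sketch with python-novaclient; the VNF names, image, and flavor IDs are placeholders.

    # Sketch: co-locating service-chained VNFs with a Nova server group.
    # "affinity" keeps chain members on one host; "anti-affinity" would
    # spread them out instead. Endpoint, credentials, and IDs are placeholders.
    from keystoneauth1 import identity, session
    from novaclient import client as nova_client

    auth = identity.Password(
        auth_url="http://controller:5000/v3",  # placeholder endpoint
        username="admin", password="secret", project_name="admin",
        user_domain_id="default", project_domain_id="default")
    nova = nova_client.Client("2", session=session.Session(auth=auth))

    chain = nova.server_groups.create(name="vnf-chain-a",
                                      policies=["affinity"])
    for vnf in ("firewall", "dpi", "router"):
        nova.servers.create(
            name="chain-a-" + vnf,
            image="IMAGE-UUID",    # placeholder image ID
            flavor="FLAVOR-UUID",  # e.g. a pinned NFV flavor
            scheduler_hints={"group": chain.id},  # all land on one host
        )

This only solves single-cloud placement; the cross-node, cross-resource coordination described in the answer still has to be layered on top.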
And there's another thing, about end-to-end authentication. You have a device, which could be a drone, with an application running on it. That application has been written by a developer who must be authorized to run on the platform. Then you have multiple edges, and the operator, and the storage; there are so many different moving parts at the edge. Having a common way to authenticate applications and authorize them to work in a seamless manner is not going to be easy. It apparently looks easy, but we'll soon find out whether the existing infrastructure is good enough to handle that many different authentication chains. And then, of course, there's the access network.

So, like identity sprawl? Too many user accounts, essentially?

Yeah, essentially. Multiple user accounts and multiple trust domains, and you need to span those trust domains.

Got it. Thank you.

Because if you were at these servers, you could see everyone's internet access in that area.

Hello. You have the roadmap for management and orchestration. My question is, do you have a rough idea about dynamic orchestration between different edge nodes?

I'm not sure I got your question.

A rough idea about the dynamic orchestration between these edge nodes.

Yes, so we did some analysis on that. We still don't know which project to adopt there yet. It could be ONAP, it could be OSM, it could be something else. But most of the orchestration systems we have today are extremely centered on the VNF ecosystem and not really meant for edge applications, where the application developer shouldn't have to fill in hundreds of pages of VNF descriptors. We don't want to do that. Dynamic orchestration between edges is very important, in the sense that there will be mobility: applications will move, devices will move. Unlike in NFV, at the edge the end devices will actually be moving around, so there will be orchestration from one edge to another. You might want to move the application one step back to the central office; at times, the application might even work better in the public cloud. So we need to think of a mechanism where applications are placed at the right location based on the right attributes. Today, I don't think this problem is solved by any of the existing orchestrators, but we intend to work on it in our roadmap.

Have you already looked at some specific open source component?

Not yet. The open source components we have are already in our presentation, but our email addresses are there; you can send us your queries, and we'll try to answer with whatever we have today. We have run OSM on the platform in the past, and we'll probably get ONAP running at some point; we just need dedicated resources to support ONAP, which we didn't have initially, and I think we'd have to clear all of our VMs off one of our labs to spin up ONAP.

And another thing: at the edge, cores are valuable property; they're expensive. So we don't want to waste cores on heavy orchestration. We want very lightweight orchestration at the edge, one that still does most of what needs to happen within the edge, with the heavy lifting done from the central office. We don't want to burn edge cores on orchestration overhead.

Oh, thank you. We'll be around for a few minutes if there are more questions. Thank you.