Bob Monkman: Okay, welcome everybody. We have our panel session on multi-cluster Kubernetes networking, mesh, and application orchestration done in an open source way, with representative stakeholders from Linux Foundation projects. They're going to talk about this subject today and go into an example open source project that we're all involved in. My name is Bob Monkman. I work for Intel, I am part of the LF Networking foundation community, and I'm involved in a number of projects, including the project and code that we're going to talk about today. I'm also an open source marketing and strategy person for Intel. So I would like to have everyone on the panel please introduce themselves, starting with you, Ravi; let's go down the line here, top to bottom.

Ravi Chunduru: Good afternoon everyone. I am Ravi Chunduru, associate fellow at Verizon. My key focus is helping to build MEC solutions in partnership with ISV and SI partners, and also designing and orchestrating MEC solutions that use both partner and Verizon services. I'm also a member of the Linux Foundation EMCO project; I became a TSC member very recently. And that is the subject of this panel today.

Arun: Hey folks, this is Arun here, with Reliance Jio. I am responsible for technology development for enterprise and cloud services at Jio, and I've previously been involved in a few open source projects that have furthered the cause of service providers adapting, adopting, and enhancing open source applications and using them for our benefit. So I'm happy to be involved in some way with this open source project as well, and happy to join this panel with the rest of you.

Bob Monkman: Thank you, Arun. Amar?

Amar Kapadia: Thank you, Bob. My name is Amar Kapadia. I'm a co-founder at a startup called Aarna Networks. We are a software company working on orchestration and management for 5G network services and edge computing applications.
Amar Kapadia (cont.): I'm an active Linux Foundation community member, and have been for several years. The main projects that I'm involved with are EMCO, ONAP, Anuket, and Magma, and it's a pleasure to be on this panel today.

Bob Monkman: Thank you, Amar. And last but not least, Kathy.

Kathy Zhang: My name is Kathy Zhang. I'm a senior principal engineer in the central software engineering team at Intel. My concentration areas are Kubernetes, serverless, and edge computing. I'm a TSC member of this project, which will be a Linux Foundation open source project.

Bob Monkman: Thank you very much, and welcome everybody. I'd like to start off by level-setting on what we mean by multi-cluster orchestration and the need for managing network services across geo-distributed locations. What are we really talking about? Amar, could you tee that up, and then we'll talk a little bit about use cases.

Amar Kapadia: Yes, absolutely. What we are seeing with edge computing is a new concept called composite applications. A composite application is an application that is made up of other applications underneath, and each of those applications could in turn have multiple microservices of its own. The interesting thing about these composite applications is that they're not deployed on one site; they're meant to be deployed across multiple sites. That can be multiple edge sites, or edge to public clouds. Let's take some examples. 5G is a great example. Even within 5G, take the RAN, the radio access network, where you have different sub-applications making up the RAN composite application: a distributed unit (DU), a central unit (CU), the RAN intelligent controller (RIC), and perhaps some associated databases. All of these make up the 5G RAN composite application, and these components can be orchestrated in different places. The same thing goes for, say, Azure IoT or AWS Greengrass applications.
Amar Kapadia (cont.): There's a piece that runs on the edge, there's a piece that runs on the public cloud, and the list goes on. There's a whole range of edge computing applications with the same attributes, such as analytics, AR/VR, SD-WAN, and more. Now we have to worry about two things, and I'm sure the others on this panel will get into this. First, where do we distribute these applications? In some cases it's the edge, in some cases edge and core, in some cases edge and public cloud. Second, how do we distribute these applications? The decision making can be based on parameters such as cost, performance, reliability, and platform capabilities as well.

Bob Monkman: Very good. Thank you for that overview; that made a lot of sense. It sounds like a complicated problem. Let's talk a little bit about use cases, Ravi. In your sphere, what kinds of use cases can you tell us about where this would apply?

Ravi Chunduru: Working with our SI and ISV partners, we can clearly call out that Kubernetes and containers are the de facto model for building new MEC applications. Coming to the use cases, there are several: smart warehouse, smart manufacturing, and cashierless store, to name a few. Let us pick the cashierless store that uses 5G public MEC. The customer uses the retailer's mobile app to authenticate and enter the store. This mobile app connects to an application running, say, in the public cloud, and it need not be the same public cloud provider throughout. As the customer picks items from the shelves, the cameras and other IoT sensors stream the data to, say, a computer vision application running in the public MEC that is in the carrier network. These apps process the streams coming from the cameras and IoT devices, identify the picked-up items, and send the data to a billing app running in the enterprise data center. Then the payment is processed and the receipt is generated, as you have observed for this use case.
Ravi Chunduru (cont.): We need to bring up Kubernetes clusters in different clouds: public MEC, public cloud, and even the enterprise data centers. The traffic between the applications, which has traditionally stayed within the data center, is now on the WAN, and of course it needs to be secured. So from this use case, and really all these MEC use cases, there is a need for an orchestrator that is cloud agnostic, can manage multiple Kubernetes clusters across different types of clouds, takes care of deploying the applications (both the 5G core and the MEC applications), even takes care of the network functions, supports flexible and highly available deployment models, and, last but not least, secures the connectivity between the clouds.

Bob Monkman: Very good, thank you. That was a pretty interesting use case. But I imagine that EMCO is not limited just to 5G telco use cases. Can you talk about other sectors, please, Arun? Arun, you are muted.

Arun: My bad.

Bob Monkman: That's okay. It happens, right?

Arun: Right. So there are lots of different use cases that lend themselves very well to orchestration with a very intelligent, capable multi-cluster orchestrator like EMCO. 5G is a very popular use case that every major operator across all geographies is working on, but it is not just limited to wireless or IoT types of use cases. There are a lot of other enterprise use cases that are of significant interest to carriers and operators across the world, and I think those would lend themselves very well to orchestration using an intelligent orchestrator like EMCO. SASE, secure access service edge, is a very overused, hyper-leveraged term these days.
Arun (cont.): In the SASE use cases, you have distributed gateways that need to be deployed in cloud environments, typically service provider private cloud environments across the service provider network footprint. How do you turn them up? How do you orchestrate their deployment along with other enterprise services that need to be delivered to enterprise customers? How do you tie that back into other telco applications and services that the service provider is offering, probably to the same customer? So you have a wide variety of enterprise use cases that you could target using a multi-cluster orchestrator like what we're going to be describing here. The point is, we need to have these applications distributed, like Ravi was describing earlier: not just in a single location, a single large data center, or a cluster of data centers. They need to be deployed out to the edge of the network, the far edge of the service provider network, and they could also be deployed in public cloud locations. So taking all of these use cases, specifically targeted towards enterprise customers, and integrating them with other services that the service provider offers to enterprise customers, this space is ripe for the picking in terms of how we can use intelligent multi-cluster orchestrators to target it.

Bob Monkman: Very good, thank you for that. So with these different kinds of use cases, one of the projects that we're involved in to address these issues is the EMCO project, which stands for Edge Multi-Cluster Orchestrator. Kathy, can you give us an overview? I've got a slide that you prepared as well. What is EMCO? Tell us a little bit about its capabilities and its architecture.

Kathy Zhang: Okay, yeah.
Kathy Zhang (cont.): So EMCO is a geo-distributed application orchestrator that intelligently places a complicated workload onto one or more clusters. The clusters the workload is placed on can be public cloud clusters, an enterprise's private cloud clusters, or IoT edge-site clusters. The workload can be a complicated application composed of multiple simple applications, it can be just a simple application, or it can be a network function, and these apps or functions can be in the form of containers or virtual machines. EMCO provides a self-service portal and one-click deployment of complex applications and network functions across one or more Kubernetes clusters. It also configures the service mesh and security policies, like mTLS, firewall, etc., to enable cross-cluster communication between the deployed applications, or between a deployed application and an external service. It supports multiple placement constraints, such as affinity and anti-affinity, platform capabilities, latency, and cost. It also provides application lifecycle management, including upgrade, and comprehensive status monitoring of the deployed applications is also provided. It automatically enforces security isolation between tenants through tenant authentication and authorization, RBAC, and the logical cloud concept. So basically, by using EMCO, you have one uniform control and management plane to automatically deploy applications, network functions, and security functions, and to automatically set up the needed networking connections.

Bob Monkman: So EMCO is primarily concerned with managing and orchestrating the applications and the network functions themselves, not the underlying clusters, is that correct?

Kathy Zhang: That's correct.

Bob Monkman: Can you talk a little bit about the relationship between EMCO and Kubernetes itself? I mean, it builds upon standard Kubernetes, right?

Kathy Zhang: Right. Exactly.
Kathy Zhang (cont.): You can think of it this way: Kubernetes orchestration schedules a workload onto a node, and EMCO orchestration schedules a workload onto Kubernetes clusters. So EMCO operates one level higher than Kubernetes. It makes the decision on which clusters (it could be one or more clusters) a workload should run, and then it interacts with the Kubernetes API server and hands the workload over to that Kubernetes control plane, so that the cluster's control plane can schedule the workload onto a specific node.

Bob Monkman: Thank you. I want to come back to you, Ravi, because one of the things I've heard is that with this sort of multi-cloud, multi-domain MEC application environment, there are some challenges. Maybe you could speak to some of the challenges that open source communities like EMCO can focus on.

Ravi Chunduru: In my view, there are four key areas that EMCO can focus on. Number one: as was spoken about earlier, end-to-end MEC solutions are needed in enterprise use cases. So we need a designer tool where the services can be stitched together; those services can be partner-provided services, MEC container applications, and any other network function services, and they must all be stitched together. There's a need for a designer tool. We also need a way to manage day-2 operations, which are quite critical when it comes to deployment and provisioning; that's one leg of the orchestration. We also need APIs so that DevOps teams can easily integrate with this ecosystem and perform those day-2 operations. Number two: EMCO can look into securing the new type of east-west traffic that I spoke about in the earlier use case, which is connecting the applications and services between clouds. We need a service mesh, or to enhance an Istio type of service mesh, so there is zero-trust application access between these services.
Ravi Chunduru (cont.): Number three: get network awareness into the application placement, so that we can do intelligent application placement. There is a 5G Future Forum (5GFF) set of specifications that many operators are building; it would be good if EMCO can integrate with that spec and get that network awareness into the application placement. Finally, number four: as most of you are aware, MEC apps run in the carrier network, which means they're not accessible from the public internet. So EMCO can work on enhancing, again, Istio or other service mesh technologies so it can securely expose these MEC services to the public internet. The need is to enable service-to-service communication over the public internet as well.

Bob Monkman: Thank you for that. One thing I wanted to talk about, as we're getting a little bit short on time: there was a lot of interest and a lot of input coming back to us at Intel about EMCO, wanting us to drive it into an open source community. And we have actually done that: we've kicked off a new repo under the LF Networking foundation for EMCO. Here are some of the participating companies; of course, you all here are representatives of three of the companies, but there are others as well, so we've got really good critical mass right out of the gate. EMCO has been around for about a year and a half, and again there has been a lot of interest, a lot of use cases, and a lot of different assessments and evaluations of its capabilities. So we're looking forward to a very robust set of use cases and important features on the roadmap. We meet on a weekly basis, and there's a mailing list that I'm going to talk about a little bit here; I'm going to move this out of the way so that people can share. One of the questions that will come up is: how do we get involved in EMCO? So here on screen is a link to the wiki for the EMCO project, and also a link to the mailing list.
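[Editor's note] To make the placement discussion above concrete, here is a minimal, hypothetical sketch of the kind of decision an orchestrator like EMCO makes when choosing target clusters for a workload. This is not EMCO's actual API or algorithm; the cluster names, attributes, and scoring rule are illustrative assumptions based only on the constraints the panel mentions (latency, cost, and platform capabilities).

```python
# Hypothetical placement sketch (not the real EMCO API): filter candidate
# clusters by latency bound and required platform capabilities, then
# prefer lower latency and, as a tiebreaker, lower cost.
from dataclasses import dataclass, field


@dataclass
class Cluster:
    name: str
    latency_ms: float            # assumed measured latency to the edge users
    cost_per_hour: float         # assumed relative hosting cost
    capabilities: set = field(default_factory=set)


def place(clusters, max_latency_ms, required_caps, replicas):
    """Return up to `replicas` cluster names that satisfy the constraints."""
    eligible = [
        c for c in clusters
        if c.latency_ms <= max_latency_ms and required_caps <= c.capabilities
    ]
    eligible.sort(key=lambda c: (c.latency_ms, c.cost_per_hour))
    return [c.name for c in eligible[:replicas]]


# Illustrative inventory: an edge site, a public cloud, an enterprise DC.
clusters = [
    Cluster("edge-site-1", 5, 2.0, {"sriov", "gpu"}),
    Cluster("public-cloud", 40, 1.0, {"gpu"}),
    Cluster("enterprise-dc", 15, 1.5, {"sriov"}),
]

# A latency-sensitive workload needing SR-IOV, deployed to two clusters.
print(place(clusters, max_latency_ms=20, required_caps={"sriov"}, replicas=2))
# -> ['edge-site-1', 'enterprise-dc']
```

A real orchestrator layers much more on top of this (anti-affinity, tenant isolation, day-2 lifecycle), but the core idea the panel describes is the same: the decision of *which clusters* is made centrally, and the chosen clusters' own Kubernetes control planes then decide *which nodes*.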
Bob Monkman (cont.): One of the things that we're really looking forward to is getting more and more people with different use cases to come in, speak to them, and bring us more requirements and more use cases for multi-cluster orchestration. Of course, we all know this is a really important area in the industry right now, and there are lots of solutions out there, albeit from particular cloud service providers or other vendors; they're sort of their own solutions to multi-cloud, tied to one particular vendor. So one of the things that we think is really powerful, and what I heard when we were talking to people about open sourcing the EMCO project in the Linux Foundation, was that it's really important to have an open, independent, collaborative community forum to drive these issues and pathfind the capabilities that are needed in multi-cluster orchestration. EMCO at the start is about orchestration, but it's also about automation, it's about security, and it's about open collaboration from the stakeholders in the industry to make something really powerful. So we invite folks to check us out on the wiki, and come and join our mailing list and our meetings; all the information is there, they're open meetings, and anyone can participate. We want to drive this project forward amongst the stakeholders. So I want to thank everybody for being on the panel today, and we'll move to questions in a few moments. Thank you.