Hi everyone, I hope you have had a good break. It's great to see so many of you physically here today after the past few years. My name is Yang Zhen, and I'm working as a product owner at Ericsson. And I'm Leonid Zhuan, working as a software engineer at Ericsson Software Technology. Today we're going to talk about Network Service Mesh at scale for telco networking.

First of all, let's talk about the challenges for telco cloud-native network functions. Telco cloud-native network functions, or CNFs for short, require networking scalability and high availability. In addition, CNFs have special demands on networking capabilities, such as end-to-end traffic separation and isolation. It is also well known that network address translation is problematic for some telco protocols, such as SIP or SCTP. Hence, there is a requirement to support NAT-free traffic with L3 forwarding and load balancing as a service. Another important requirement is to allow a cluster-wide VIP address to be used as the source address for outgoing traffic originating from application pods. Last but not least, CNFs need high-performance, accelerated user-plane traffic handling.

These typical telco requirements cannot easily be solved within the constraints of Kubernetes' primary networking. Hence, secondary networking solutions have been developed to get around some of its limitations. The problem, however, is that there is no standardized way to manage cluster-wide connectivity, and the known solutions tend to be highly application-specific: they are not technically interchangeable, not reusable, and not intended to facilitate cross-platform portability.
When there is a problem, engineers will look for a solution, and as an answer to the problem I have just described, Network Service Mesh offers a framework for plugging in additional network services that can handle the challenging requirements of telco CNFs deployed on Kubernetes.

So what is Network Service Mesh, and why Network Service Mesh? Network Service Mesh is a CNCF sandbox project, and the community has been very active, extremely accommodating to user needs, and very responsive. Network Service Mesh solves complicated L2/L3 use cases in Kubernetes that are tricky to address within the existing Kubernetes network model. Inspired by Istio, it maps the concepts of a service mesh onto L2/L3 payloads. One important aspect is that Network Service Mesh does not require any changes to the existing Kubernetes network and can run alongside any CNI, such as Calico or Multus. It supports additional cluster-wide, coordinated connectivity for CNF workloads.

Network Service Mesh provides a framework that allows plugging in additional network services that can handle telco requirements, and on top of this framework we have been developing a project called Meridio. Meridio addresses typical telco requirements such as NAT-free traffic and VPN separation. Its architecture is designed for a variety of network services, such as stateless or stateful load balancing, external traffic attraction, and firewalling. It is also an open-source project published on GitHub.

Lionel will now describe in more detail the design of Meridio, the issues related to NSM that we found during the design and implementation of Meridio, and how they have been addressed by the community. Thank you.

So first, what is Meridio? The objective of Meridio is to facilitate the attraction and distribution of external traffic within Kubernetes via secondary networks.
To achieve this, multiple strategies are provided so that users can control the different concepts Meridio offers. Users can modify traffic attraction through configurable external networks, for instance a VLAN or a host network interface. They can deploy new network services and configure them with traffic classifiers, which separate the traffic into multiple logical groups that applications can subscribe to. In Meridio, everything is adapted at runtime, so all networks and all virtual wires are added or removed based on how the user configures the system.

The last configurable part is on the application pod. A sidecar container runs there and provides a gRPC API that applications can use to connect to or disconnect from the network services. In the same way, based on these requests, the networks are adapted and the virtual wires are attached to or detached from the application pod, so it starts or stops receiving traffic.

To better support Meridio's design, extensions have been proposed to, and accepted by, the NSM community. As a default use case in NSM, a simple point-to-point connection is established between one network service client and one network service endpoint, but this is not sufficient for Meridio. We are therefore using multiple forwarders that provide different capabilities.

The first one is for the front-end service. A forwarder is required to connect the network services to the external gateways, for instance over VLANs. With this specific forwarder, the network service endpoint exists only as a control-plane instance and does not terminate any virtual wire or carry any traffic.

The second one is point-to-multipoint. Point-to-multipoint is required to avoid exposing the full-mesh connection, and also to provide a single network interface per network service to the user pods. The traffic is attracted from the external gateways by several instances of the same network service.
The traffic can then traverse any of the network service instances before it reaches the user pod, so the user pod has to be connected to all available network service instances, forming a full mesh. As a temporary solution, we are using a proxy: the full-mesh connections are created between the proxies and the network service instances, and point-to-point connections are created between the proxies and the user pods. The long-term solution for us is to use the point-to-multipoint forwarder, since the bases are already in place in NSM, and the next step will be to develop it in collaboration with the NSM community.

Finally, the last main design extension is about policy routing. The default routes in a Kubernetes pod point towards the primary network interface. In addition, user pods might want to connect at the same time to multiple network services that handle the same VIP address but with different protocols. The outgoing traffic then has to be routed with policy routes, which are added based on the network service configuration. As part of the request to NSM, the sidecar container specifies the routing policies, and if the network service is updated, the sidecar updates the connection via a new request to NSM; NSM then updates the connection and the policy routes inside the pod with no traffic interruption.

So, with NSM used for cluster-wide connectivity in conjunction with Kubernetes, we have observed in practice that the limiting aspects of Kubernetes networking can be effectively overcome. Those of us working on Meridio are very open to discussion on best practices as well as collaboration on the way forward. For more information about Meridio and Network Service Mesh, you can find both projects on GitHub. On the slide, you can find the link to the Meridio project, where you can find more information as well as how to reach out to us. Thank you. Thank you.