Hi everyone! In this talk we are going to do something that seems like a fairly normal daily activity these days: we are going to announce a brand new service mesh, which we call L7MP. On the surface, L7MP is just like every other service mesh around: it provides connectivity, routing, security and observability to microservice communications. But digging a bit deeper, you will see that this is just not your everyday service mesh. Before that, let me introduce myself. I'm Gábor Rétvári, part-time academic, part-time industry research fellow, and recently I've been leading a joint effort between BME and Ericsson to experiment with brand new ways to build telco applications, fully embracing the cloud-native technology stack. This talk is the outcome of that activity. So why do we need yet another service mesh? The problem is, of course, with legacy applications that several industries have been building, maintaining and marketing well before the advent of Kubernetes, or before the advent of service mesh frameworks for that matter. Think of video game servers, telco applications, VPN gateways and the countless other appliances people want to deploy into Kubernetes these days. These applications are usually built as software monoliths and use their own application-specific protocols. For instance, the 4G mobile core network uses three different versions of the GTP tunneling protocol, all running on top of UDP, with non-standard GTP variants also in wide use. Decomposing these monolithic applications into microservices would greatly benefit from the standard service mesh goodies like routing, observability, security and the like. But there is a problem here: the current line of open-source service mesh offerings does not quite support the legacy protocols these applications talk between themselves.
In response, the industry is actively trying to work around the cloud-native technology stack and develop competing products, with various degrees of success. L7MP is an experimental project we announce today: it takes a completely different approach and sketches a new service mesh design that accommodates legacy applications, with their legacy application-layer protocols, as first-class citizens. But first, let's see what a typical monolithic legacy application looks like. In this case, let it be the so-called session border controller. The SBC is a proxy that stands in the way of voice-over-IP audio streams, which it encapsulates as Real-time Transport Protocol (RTP) voice sample buffers and sends over the network as UDP streams, and it implements some critical value-added features like firewalling, rate limiting or media transcoding. The problem is the same as it typically is with monoliths: how to scale, how to enable resiliency, continuous integration, etc. The solution is, as usual, disaggregation and microservices. First, let's separate out the CPU-intensive part, the media transcoder, into a separate microservice, deploy it behind a sidecar proxy, and remove the embedded state so that we can freely scale it up and down with the current load. Second, let's add an ingress gateway that load-balances and routes RTP audio streams across the transcoder backends, provides retries and timeouts, canary deployments, terminates encrypted streams, etc. Third, let's move the rate-limiting and firewalling functionality from the monolith into the ingress gateway. Fourth, let's deploy monitoring infrastructure to gain per-media-stream visibility into the traffic. And finally, let's write a Kubernetes operator that will program the gateways and the sidecar proxies based on high-level network policies specified as Kubernetes custom resource objects.
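To make the sidecar proxy's role in this decomposition concrete, here is a minimal Python sketch of a UDP relay sitting between a client and a transcoder backend. This is not L7MP code, just a generic illustration of the datagram-forwarding pattern; the echoing "backend" stands in for a real transcoder.

```python
import socket
import threading

def serve_echo(sock, stop):
    """Stand-in 'transcoder' backend: echoes every datagram back."""
    sock.settimeout(0.2)
    while not stop.is_set():
        try:
            data, peer = sock.recvfrom(2048)
            sock.sendto(data, peer)
        except socket.timeout:
            continue

def serve_proxy(sock, backend_addr, stop):
    """Minimal sidecar stand-in: relay each client datagram to the
    backend and relay the backend's reply to the original client."""
    sock.settimeout(0.2)
    upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    upstream.settimeout(0.5)
    while not stop.is_set():
        try:
            data, client = sock.recvfrom(2048)
        except socket.timeout:
            continue
        upstream.sendto(data, backend_addr)
        try:
            reply, _ = upstream.recvfrom(2048)
            sock.sendto(reply, client)
        except socket.timeout:
            pass

# Wire it up on loopback: client -> proxy -> backend -> proxy -> client.
stop = threading.Event()
backend = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
backend.bind(("127.0.0.1", 0))
proxy = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
proxy.bind(("127.0.0.1", 0))
threading.Thread(target=serve_echo, args=(backend, stop), daemon=True).start()
threading.Thread(target=serve_proxy,
                 args=(proxy, backend.getsockname(), stop), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
client.sendto(b"rtp-payload", proxy.getsockname())
result, _ = client.recvfrom(2048)
stop.set()
```

A real sidecar would of course add the policy pieces listed above (retries, timeouts, metrics) around exactly this forwarding loop.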
What is surprising now is not that we ended up with a service mesh; what is surprising is how naturally this design emerged, even considering legacy applications. Indeed, we need the usual combination of routing, observability and security features, but as it turns out this is not your everyday service mesh, and this is because the telco requirement space is quite unique. First, while a typical service mesh handles mostly HTTP and related protocols, telco is fundamentally multi-protocol. This does not only mean that the proxies must understand lots of exotic transport- and application-layer protocols, but also that they must be able to translate between these protocols seamlessly, like translating from UDP to RTP, from GTP to VXLAN, etc. Second, in telco the main traffic profile is typically long-lived real-time audio and video streams with their stringent quality-of-service requirements, as opposed to the short-lived request-response type of workloads that cloud-native applications typically handle. Even the same words mean completely different things in these two worlds: for instance, in telco, rate limiting usually means policing traffic based on the number of packets per second or bytes per second transmitted by a stream, while in cloud native the KPI is fundamentally different, for instance the number of HTTP requests per second. And finally, the scales are also quite different, in that telco often faces extreme performance requirements. L7MP is an experiment to understand how to build a service mesh that is, from the ground up, open to such legacy applications as telco. It concentrates on the main features, like control-plane/data-plane separation as per the best software-defined networking principles, and it is fundamentally multi-protocol. At the same time it tries to be small and easy to extend, so that it can serve as a perfect playground to experiment with new ideas.
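The packets-per-second flavor of rate limiting mentioned above can be sketched as a classic token-bucket policer. The class below is a generic illustration of the technique, not L7MP's implementation; it takes explicit timestamps so the behavior is deterministic.

```python
class PacketRatePolicer:
    """Token-bucket policer in the telco sense of rate limiting:
    admits at most `rate` packets per second on average, with bursts
    of up to `burst` packets."""

    def __init__(self, rate, burst, now=0.0):
        self.rate = float(rate)     # refill rate: tokens (packets) per second
        self.burst = float(burst)   # bucket depth: maximum burst size
        self.tokens = float(burst)  # bucket starts full
        self.last = now

    def admit(self, now):
        """Return True if a packet arriving at time `now` may pass."""
        # Refill tokens for the time elapsed since the last packet.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A 100-packet/s policer with a 10-packet burst: of 50 packets arriving
# at once, only the first 10 (the burst allowance) get through.
policer = PacketRatePolicer(rate=100, burst=10)
admitted = sum(policer.admit(0.0) for _ in range(50))  # -> 10
```

An HTTP request-rate limiter would use the same mechanism but count requests instead of packets, which is exactly the vocabulary gap between the two worlds described above.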
As a teaser, we are now experimenting with a new eBPF/XDP-based sidecar proxy accelerator that promises orders-of-magnitude performance improvements over plain user-space sidecar proxies. Eventually, whatever ends up useful we will rewrite and upstream into the cloud-native stack; for instance, we can target Envoy. To get an idea of what an L7MP configuration looks like, consider the session border controller use case from the previous slides. Here is how the transcoder service is exposed to the rest of the cluster via the sidecar proxy. L7MP uses the familiar service mesh abstractions: for instance, the transcoder backends are made available to the rest of the workload exactly as is done in most service meshes, using the virtual service abstraction backed by a plain Kubernetes service. It's just that the application-layer protocol specification is not HTTP but RTP running on top of a UDP transport. The important part here is the route spec, which precisely details how to hand over the media stream to the transcoder application, which is usually quite picky about how it accepts calls; for instance, it only accepts RTP streams to and from even-numbered UDP ports. Then the gateway configuration makes the session border controller data plane available to users. The slide does not show the configuration for the voice-over-IP control plane message routing, but this is also doable in L7MP. Again, ingress gateways use the same virtual service abstraction, but the route spec is quite different. First, it filters incoming calls based on the source IP address. Second, it load-balances calls using a consistent hash policy, ensuring that sessions are sticky, based on an RTP header field. And finally, it defines a custom retry-with-timeout resiliency policy, in that failed transcoder connections will time out after two seconds and will be retried at most three times.
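The consistent-hash stickiness described above can be illustrated with a generic hash ring keyed on an RTP header field such as the SSRC. The class below is a hypothetical sketch of the technique, not L7MP's actual load balancer, and the backend names are made up for the example.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Generic consistent-hash ring: streams hashing to the same key
    (e.g., the RTP SSRC field) always land on the same backend, and
    adding or removing a backend remaps only a fraction of the streams."""

    def __init__(self, backends, vnodes=100):
        # Place `vnodes` virtual nodes per backend on the ring to
        # smooth out the load distribution.
        self.ring = sorted(
            (self._hash(f"{b}#{i}"), b)
            for b in backends for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        # 64-bit hash derived from MD5; any stable hash works here.
        return int.from_bytes(hashlib.md5(str(key).encode()).digest()[:8], "big")

    def pick(self, ssrc):
        """Pick the backend for a stream: first vnode clockwise from the key."""
        i = bisect.bisect(self.keys, self._hash(ssrc)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["transcoder-0", "transcoder-1", "transcoder-2"])
backend = ring.pick(0x12345678)  # every packet of this SSRC lands here
```

Because the choice depends only on the SSRC, a long-lived call keeps hitting the same transcoder even as unrelated backends come and go, which is exactly the stickiness the gateway route spec asks for.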
The L7MP project is still under heavy development, but we now have a working SBC prototype built on top of L7MP: we can already route plain UDP calls end-to-end, providing load balancing, canary deployments and resiliency, and our monitoring infrastructure is also beginning to fall into place. But we firmly believe that L7MP is more general than telco. So if you are struggling with a legacy application that you just cannot seem to squeeze into Kubernetes networking, we might have some ideas to share with you, so please come and talk to us. And if you are a service mesh vendor and you want to go seriously after legacy apps, we'd like to hear from you too. Or if you just like to play with new service mesh designs, like we do, L7MP is made for you. Finally, we'd like to thank Ericsson for the generous support they provided to this work, and we are looking forward to any comments of yours. Thanks for your attention.