Well, good afternoon and welcome to the first lightning talk of the day. My name is Giles Herron. I'm a principal engineer at Cisco Systems, and today I'll be giving you an introduction to Media Service Mesh. Before I describe Media Service Mesh, I think it's worth discussing why we might need such a thing, and I'm going to start with an application taxonomy. A friend of mine who did an MBA assures me that any problem can be reduced to a 2x2 matrix, so here we have a 2x2 matrix describing network applications. On one axis you have whether an app is non-real-time or real-time. On the other axis you have whether it is interactive, so typically request-response semantics, or streaming, so typically publish-subscribe semantics. I think it's fair to say that service mesh today is really focused on web applications, and those apps are in the top-left corner here. Now, you could use a service mesh to interconnect the nodes of a message bus, but you'd probably just be transporting that traffic as TCP, so you'd be transporting the message bus rather than being the message bus. Where you might struggle with service mesh today is on the right-hand side: real-time applications, and that's going to be our focus for Media Service Mesh. Those apps could be interactive, such as online games, or they could be streaming apps, such as closed-circuit TV. Now, I called this a fuzzy application taxonomy. Why did I say fuzzy? Well, I would contend that these dividing lines between non-real-time and real-time, and between interactive and streaming, are actually quite fuzzy. In terms of real time, I might be watching football on a streaming platform.
It's live football, but it's not quite real time because it's using HTTP streaming, and so if, for example, a friend of mine is watching on cable and my team scores a goal, he might text me to say "wow, great goal", and I'm going to be annoyed because I haven't seen the goal yet, though maybe not as annoyed as I would be if the other team scored. However, even that supposedly real-time feed he's watching isn't quite real time. Those of you who, like me, are old enough to remember analog TV will remember that just before the news came on you might see a clock counting up the seconds towards the hour. We don't have that anymore, because the time taken to decode these MPEG digital streams is slightly nondeterministic, so they gave that up. However, if you're working in a TV studio that's producing that football stream, you're going to want it to be real time. You might have a live camera feed coming in from the stadium; at the top of the screen you'd be adding something to show the score; at the bottom you might have a strip showing news flashes or scores from other games. All of that is going to be put together live, in real time, before it's sent out, and those streams are typically uncompressed multi-gigabit streams, so this is very different from the web applications you have today. In the other dimension, of course, it's also fuzzy: what would you call a multi-party video conference? Would you say it's interactive, or would you say it's just a set of unidirectional streams? I suspect the truth lies somewhere in the middle. So there are three goals that I would identify for Media Service Mesh. The first is simply to extend the benefits of service mesh to real-time apps. Those benefits are going to be things like very flexible load balancing, support for canary deployments, the ability to export metrics and statistics, and the ability to do authentication and encryption from pod to pod.
Now, all of those are things that we'd like to hand off to the service mesh rather than handle in our application. We also want support for interactive and streaming applications, and those have slightly different challenges. With interactive applications such as games, you very often see developers rolling their own protocols on top of UDP. The challenge there is that UDP itself is connectionless: it's just datagrams. There's no connection, there are no sequence numbers, so it's very hard to track connections, to detect packet loss, and so on. What you do see, however, are some de facto standards for games. As an example, I got this from the improbable.io website: in their SpatialOS protocol stack they have two different UDP-based options, KCP and RakNet. They also in fact support TCP, and they have graphs on their website showing how much better the performance of these UDP options is than TCP, and that KCP is in fact slightly better than RakNet. Now, KCP is a newer protocol, and when you look at it, it feels like a really good match for service mesh, in that you have these three layers of reliable transport, erasure coding, and encryption, and these are all things you'd really want to hand off to your service mesh rather than handling them in your application. On the streaming side, most people use RTP, so at least there's a standard there for a layer above UDP. Here we have the WebRTC protocol stack, and you can see RTP, and in fact SRTP, secure RTP, is a key part of that. On the left-hand side you see all of the standard web stuff, so those three layers map very much to the three layers of routing in Istio: TCP, TLS, and HTTP. But this whole right-hand side is things that can't be done today with a service mesh: running on top of UDP, and particularly RTP-based traffic. Now, RTP is generally run in conjunction with another protocol called RTCP that provides metadata feedback for RTP.
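The "rolling your own protocol on top of UDP" pattern usually starts with exactly the bookkeeping raw datagrams lack. As a minimal sketch (the header layout and field sizes here are illustrative, not KCP, RakNet, or any real protocol), prepending a sequence number is enough to let the receiver detect loss:

```python
import struct

# 32-bit sequence number, network byte order; a made-up minimal header.
HEADER = struct.Struct("!I")

def wrap(seq, payload):
    """Prepend a sequence number to a datagram payload."""
    return HEADER.pack(seq) + payload

def unwrap(datagram):
    """Split a datagram back into (sequence number, payload)."""
    (seq,) = HEADER.unpack_from(datagram)
    return seq, datagram[HEADER.size:]

class LossTracker:
    """Count gaps in the sequence-number space as an estimate of loss."""

    def __init__(self):
        self.expected = 0  # next sequence number we expect to see
        self.lost = 0      # datagrams that never arrived

    def on_receive(self, seq):
        if seq > self.expected:
            # A gap: everything between expected and seq went missing.
            self.lost += seq - self.expected
        self.expected = max(self.expected, seq + 1)
```

Reliable delivery, retransmission, erasure coding, and encryption then get layered on top of this kind of header, which is precisely why handing those layers to a mesh proxy is attractive.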
RTP will run on an even port number, and RTCP will run on the next-highest port number, so it might be port 8000 and port 8001. However, those ports are very often dynamic, and so what we'll see is that RTP will run in conjunction with another protocol, which might be SIP or it might be RTSP. That protocol will typically run over TCP; it will hand out those UDP port numbers used for RTP, and it will also use URIs or URLs, which, if we have a layer-7 proxy, means we can start to have routing rules for that traffic. So we can route it in a more fine-grained manner, while at the same time watching those allocations of RTP and RTCP ports and then wiring them into our data-plane proxies. In the SIP world, people talk about pinholes a lot: having these individual forwarding rules for the RTP traffic to get it through a firewall or whatever. Now, the other thing on the streaming side, as I mentioned earlier, is that we have publish-subscribe semantics, so very often what you'll see is one stream and then multiple clients consuming that stream. What we'd really like is for our Media Service Mesh to handle that fan-out, so that the video server, taking an RTSP camera as an example, can send one video feed and the proxy can explode it out to multiple clients. Finally, we want to support three classes of applications: apps that run in one cluster; apps that run across multiple clusters; and, of course, internet apps that run from outside the cluster, where your client might be somebody at home, whether they're playing a game or on a video conference, and you want to tie them into a resource that runs in the cluster. I wanted to give a worked example of Media Service Mesh in operation; the example here is a CCTV camera using RTSP proxies.
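The port conventions, the pinholes, and the fan-out can be sketched in a few lines of Python. The Transport header syntax follows RFC 2326; everything else, including the socket handling, is a simplified assumption for illustration, not the actual Media Service Mesh code:

```python
import re
import socket

def rtcp_port(rtp_port):
    """RTP conventionally uses an even port; RTCP the next odd port up."""
    return rtp_port + 1

# An RTSP SETUP request carries the client's ports in a Transport header
# (RFC 2326), e.g. "Transport: RTP/AVP;unicast;client_port=8000-8001".
# A proxy parses these, rewrites them across Kubernetes address
# translation, and installs them as per-stream forwarding rules -- the
# "pinholes".
_CLIENT_PORTS = re.compile(r"client_port=(\d+)-(\d+)")

def parse_client_ports(transport_header):
    """Extract the (RTP, RTCP) client ports from a Transport header."""
    m = _CLIENT_PORTS.search(transport_header)
    if m is None:
        raise ValueError("no client_port parameter in Transport header")
    return int(m.group(1)), int(m.group(2))

def fan_out(packet, clients):
    """Replicate one upstream media datagram to every subscribed client.

    The camera sends a single feed; the proxy explodes each datagram out
    to all current receivers. A real proxy would keep long-lived sockets
    and rewrite headers; this shows only the replication step.
    """
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for addr in clients:
            sender.sendto(packet, addr)
    finally:
        sender.close()
```

The key design point is that the pinhole state comes from watching the control protocol (SIP or RTSP) rather than from inspecting the media packets themselves.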
I don't have time to show a demo of that today, but there'll be a link to it on the next slide. The first thing the client does when it talks to an RTSP server is ask the server what options it supports, and we'll satisfy that locally from the proxy in this case. More importantly, the next message asks the server to describe the piece of media. Now, in this case the proxy doesn't even know where that media is, far less what it comprises, so it will first have to find where it is, and it will use the Kubernetes control plane and DNS to do that. Now it knows it can send towards this endpoint address here, so the request goes to the remote proxy and through to the camera, and the response flows back. The describe response will list all the streams within that media, and so now the client will send a setup for each of those streams, with a separate stream ID, and what you can see is that it has put its own ports on there. This is because it's running over UDP in this case, so there'll be a port for RTP and a port for RTCP. If it were running over TCP, there'd be no need for port numbers. This example is all UDP-based, but TCP can in fact be useful when we have clients connecting in from home and so on; that all works on the same proxy, and a proxy can mix and match between the two. But in this case, with UDP, what we then see is the camera's response, and now it has its own port numbers as well as the client's port numbers. One thing you need to remember here, though, is that in Kubernetes there's a lot of address translation going on, so those port numbers are being changed.
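The first few requests in that exchange can be sketched as plain RTSP messages. The camera URL, track ID, and port values below are made-up illustrative values; the message framing (CRLF-terminated headers, a CSeq per request) follows RFC 2326:

```python
def rtsp_request(method, url, cseq, extra=None):
    """Build a minimal RTSP/1.0 request with CRLF-terminated headers."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (extra or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"

# Hypothetical in-cluster camera URL resolved via Kubernetes DNS.
CAMERA = "rtsp://camera.media.svc.cluster.local/stream"

# 1. OPTIONS: answered locally by the proxy, never reaches the camera.
options = rtsp_request("OPTIONS", CAMERA, cseq=1)

# 2. DESCRIBE: forwarded through the remote proxy to the camera; the
#    response (an SDP body) lists the streams within the media.
describe = rtsp_request("DESCRIBE", CAMERA, cseq=2,
                        extra={"Accept": "application/sdp"})

# 3. SETUP, once per stream: the client advertises its RTP/RTCP ports,
#    which the proxies rewrite and install as pinholes.
setup = rtsp_request("SETUP", CAMERA + "/trackID=0", cseq=3,
                     extra={"Transport": "RTP/AVP;unicast;client_port=8000-8001"})
```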
Now, the nice thing here is that we can modify what we send, and we can also program mappings, those pinholes, in the RTP proxy. That RTP proxy could be, as in my demo, the same piece of code as the RTSP proxy, or it could be separate, as shown here; that's not really important. The key thing is that those pinholes are set up, so finally the client can issue a play command, that play command will flow through to the camera, and then the camera will start sending the media stream. Now, when another client connects, in fact we don't need to connect all the way to the camera: it's a live stream that the camera is already streaming through the proxy. So now we'll just communicate with the proxies, set up the pinholes, and finally the media will get played out from the proxies. I wanted to finish with a call to action. My email address is there, and please do send me feedback, whether good, bad, or indifferent. There's also a link to a blog post, and I'll put a link there through to the recording of the demo so you can see it. Ultimately, my goal for all this is for it to become an open-source project and to have a community around it, so do please contact me, and please let's all start building Media Service Mesh. Thank you for your time, and do enjoy the rest of the day.