So next up we have Giacomo, an all-timer of this room as well, and he's going to tell us about modern VoIP infrastructures. Take it away.

Yes, thanks. This is a conversation that Federico and I have been having for a couple of years, and we've been working on this concept for some time now. Federico gave a more extended version of this presentation in September at JanusCon, and today you'll see something like a light version, more focused on signaling. The other parts we typically cover are more related to media handling, QoS, debugging tools, and security. I've been in the VoIP area for some time now, in various companies that use open source components, and I'm involved with the Kamailio project and other projects in the area, like Janus, Asterisk, FreeSWITCH, RTPengine, and so on.

So let's see the overview. As I mentioned, we'll cover mainly signaling today: a little bit about the evolution of the infrastructures where VoIP is actually deployed nowadays, which bits of VoIP are impacted, the workarounds you typically see today, and possibly some thoughts for the future.

Why the cloud? Why we are deploying VoIP platforms on cloud infrastructure is easy to sell: there are many advantages, and sometimes the customers or the partners simply expect it. High availability is definitely easier to achieve, and scalability comes more easily. If you are starting small, it's easier to grow later, with a small upfront investment. Geographic distribution, which is very valuable, is easier to achieve even for small deployments. And for the easiest things you get tools that are just there off the shelf, like HTTP load balancers, caching systems like Redis, DNS, and so on.
There are some challenges though. When you choose a cloud provider, if you already have a system, you most probably need to redesign either the entire architecture or parts of it. If instead you are starting on a specific cloud provider, you will probably take decisions that you'll pay for later if you decide to move to another provider. Sometimes resources are shared rather than dedicated, and it's difficult to assess the impact in a real-time context. It's also not easy if you decide to spread your strategy and not rely on a single cloud provider: you may have part of the infrastructure in one cloud, and there is no standard, simple interconnection that works every time. Typically you either do something specific like VPNs or discuss a dedicated solution with the providers, but there isn't anything you can just use. And sometimes the tools you depend on are specific to one cloud provider.

In general, starting from around 2001 onwards, we moved from a server-side world where maximum uptime was a reasonable goal and considered a positive achievement, to focusing on the maximum possible resilience to restarts of the applications. More recently we moved from configuration updates to infrastructure that can be called immutable: when you want to change something, you don't change the configuration, you replace the components involved, for example by deploying new container images.

We grew up in our VoIP experience with very simple infrastructure where everything was under our control. Provisioning wasn't that simple and things didn't move fast, but we could know everything: IP addresses, public IP addresses directly on the machines, and full control over the firewall and what are now typically called security groups.
But then, looking back, we stumbled upon this tweet from Rosenberg last year, pointing out how much time has passed since work on these protocols started: RTP and SIP alone are more than 20 years old. If you compare that with the evolution of the infrastructure underneath, you see that most of the protocols were designed when the infrastructure was very different from today, and I think this is visible in several aspects we're going to look at.

This is closer to what we would like to see: not caring too much about what's underneath, being able to have our systems delivered as containers on any orchestration system, generically, possibly even Kubernetes. In particular, for the inbound part, having a component like the blue "load balancer" thing we drew: a component that is VoIP-aware and able to manage the incoming, and possibly also the outgoing, traffic, but with minimal configuration and minimal work, as you can do for example with HTTP.

The problem is that what we typically end up with today looks more like this. You have elastic or static IPs depending on the cloud infrastructure, floating IPs in general; you need to take care of their allocation, associate them with your virtual machines or containers, and manage the relationship between those floating IP addresses and the services behind them. If you want to maximize reliability, you typically have one virtual machine or container in active mode and another in standby mode, but then you have constraints on how the standby can health-check the active one: in AWS, for example, you can do layer-3 checks only inside the same availability zone. You need to take care of all these details yourself. So in general, something that impacts the architecture is IP addressing, particularly with containers that can be brought up and torn down.
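Where layer-3 checks across availability zones are restricted, a common workaround is an application-level keepalive: the standby periodically sends a SIP OPTIONS request and treats a timely 200 OK as healthy. A minimal sketch of building such a probe in Python (host names and the probing strategy are illustrative assumptions, not something from the talk):

```python
import hashlib
import time

def build_options_ping(target_host: str, target_port: int = 5060,
                       from_host: str = "standby.example.com") -> bytes:
    """Build a minimal SIP OPTIONS request usable as an application-level
    health check, instead of a layer-3 ping the cloud may restrict."""
    # RFC 3261 requires Via branch values to start with the magic cookie.
    branch = "z9hG4bK" + hashlib.md5(str(time.time()).encode()).hexdigest()[:16]
    call_id = hashlib.md5((branch + target_host).encode()).hexdigest()
    msg = (
        f"OPTIONS sip:{target_host}:{target_port} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {from_host}:5060;branch={branch}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:ping@{from_host}>;tag={branch[-8:]}\r\n"
        f"To: <sip:{target_host}:{target_port}>\r\n"
        f"Call-ID: {call_id}@{from_host}\r\n"
        f"CSeq: 1 OPTIONS\r\n"
        f"Content-Length: 0\r\n\r\n"
    )
    return msg.encode()
```

The standby would send this over UDP to the active node and fail over after N missed replies; tools like Kamailio implement the same idea natively with their dispatcher keepalives.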
IP addresses change: not only the mapping to the public interface, but also, if you redeploy a container, it may come up with a different IP address, and in general this doesn't work well with signaling. Typically you only get one-to-one NATting between your machines and a public interface, so you have no direct visibility of your own public IP addresses. Slightly related, but not for this session, is the difference between the bandwidth the cloud providers tell you you have and the packet rate you actually get; typically you don't even know what your maximum packet rate is, because the bandwidth figure is computed with jumbo packets, not with the small packets that codec optimization produces. Also, containers are ephemeral: they can be brought up and die, and then you need to do something for the calls, which do have state. And there are other things that are less architectural but still critical for operations, related to logs and other information like traces.

The main difference between an HTTP-based, web-based world and VoIP is that VoIP sessions are, let's say, sticky. They are not part of a request/response paradigm, and this doesn't cope well with an architecture that adds components and, at the same time, can remove them, which may be even trickier. You need to find a balance between the durability of the call state and the volatility of the components providing the service. And as we mentioned, IP addresses are ephemeral: once upon a time you could decide "this is your box", it would live a long time, it would have a public IP address and possibly one or more private ones, and signaling for ongoing calls could rely on those IP addresses for correct routing. This is difficult to achieve now, with the volatility of IP addresses, so you need to shift the design of the architecture more towards DNS in general.
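Shifting routing towards DNS in SIP usually means SRV records (per RFC 3263). As a sketch of what the resolver side has to do, here is the RFC 2782 selection rule: pick the lowest-priority group, then choose within it by weighted random. The record data and host names below are made up for illustration:

```python
import random

def pick_srv_target(records, rng=random):
    """RFC 2782 selection over SRV records given as
    (priority, weight, port, target) tuples: lowest priority group
    first, then weighted random choice inside that group."""
    if not records:
        raise ValueError("no SRV records")
    best = min(r[0] for r in records)                 # lowest priority wins
    group = [r for r in records if r[0] == best]
    total = sum(r[1] for r in group)
    if total == 0:                                    # all weights zero: any is fine
        return rng.choice(group)
    point = rng.uniform(0, total)                     # weighted random pick
    acc = 0.0
    for rec in group:
        acc += rec[1]
        if point <= acc:
            return rec
    return group[-1]

# e.g. records for _sip._udp.example.com (hypothetical):
records = [
    (10, 60, 5060, "sip-a.example.com"),
    (10, 40, 5060, "sip-b.example.com"),
    (20, 100, 5060, "backup.example.com"),
]
```

A real deployment would get `records` from a DNS library (the standard library has no SRV support) or from a service catalog like Consul, and re-resolve as containers come and go.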
That means, for example, working with Consul and making sure routing is done via DNS and not via IP addresses, which is additional complexity, and it still doesn't really provide a solution for RTP; we'll see later a little of what Rosenberg has been proposing recently in relation to this.

So in general we lack native components for VoIP in cloud infrastructures, and in particular we don't have what we would really like to have, which is basically just a SIP load balancer. What typically happens is that people in your team say, "well, okay, just pick up a load balancer", and I keep having this conversation over and over: it just doesn't work. First of all, the AWS Application Load Balancer works only for HTTP. The Network Load Balancers are very nice and powerful, but for TCP and TLS, which is the best scenario, they are flow-based: they don't balance requests, they balance streams between a source and a target group. And for UDP it just doesn't work, as we'll quickly see with an example. At this level of abstraction, exactly the same can be said of Google Cloud Platform.

To give an example: as long as you have UDP traffic coming in, an AWS NLB will choose one target, route the requests to that target, and also route the responses back, which is very useful. But if the call is long enough that there are requests from the server, like a re-INVITE or a BYE from the server side, after some time you may end up in the scenario on the right: there is no tracked flow in the load balancer anymore, the requests go directly from the target to the client, and most likely the client won't even accept those packets.
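A SIP-aware balancer avoids the problem just described by making routing deterministic per dialog rather than per UDP flow: every replica hashes the Call-ID and picks the same backend for all messages of a call, in both directions. A minimal illustrative sketch (the backend pool is made up):

```python
import hashlib

BACKENDS = ["10.0.1.10:5060", "10.0.1.11:5060", "10.0.1.12:5060"]  # hypothetical pool

def call_id_of(raw_sip: bytes) -> str:
    """Extract the Call-ID header ('i' is its compact form) from a raw SIP message."""
    for line in raw_sip.decode(errors="replace").split("\r\n"):
        name, _, value = line.partition(":")
        if name.strip().lower() in ("call-id", "i"):
            return value.strip()
    raise ValueError("no Call-ID header")

def pick_backend(raw_sip: bytes, backends=BACKENDS) -> str:
    """Route every message of a dialog to the same backend by hashing
    its Call-ID -- the dialog stickiness a generic L4 UDP balancer lacks."""
    digest = hashlib.sha256(call_id_of(raw_sip).encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]
```

This is, in essence, what the dispatcher modules in Kamailio or OpenSIPS do; a production version also needs to handle backends joining and leaving without remapping existing dialogs, for example via consistent hashing or an explicit dialog table.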
Of course we've been reading during these months, looking at suggestions and recommendations. You can pick up, for example, the AWS whitepaper about solutions for real-time communication, and you get more and more excited as you read; but when you eventually need a real solution for SIP networks, you find something like this: it says that if you really want to do layer-4 load balancing with UDP and SIP involved, you basically need to search the marketplace for an application and use it. Which is not what we want, because more or less it's what we are all redesigning, each one on our own.

Media is part of the bigger talk and I'm not going to cover it here, nor debugging. As we said, there is no standard interconnection with clouds, and the workaround we see today is that everybody is rebuilding their own SIP load balancer, so we are not doing common work from this point of view. Of course we can use Kamailio, OpenSIPS, drachtio or other solutions, but then we are more or less duplicating the work, and everyone still has their own ad-hoc scripts around; there's nothing automatic in the infrastructures themselves.

So, quickly, to conclude. Take a look at this proposal: it's a proposal to change the way trunking is done between VoIP providers. It doesn't cover client-to-server communication, but in our opinion it can be extended to cover that. Basically it's a way of setting up trunks using HTTP/3, and of having the media flow through parallel QUIC connections rather than using RTP. This could be something we discuss at community level. And very quickly: SIP and RTP, just as an example, are old, but at the same time even WebRTC is dragging in more and more usage of these protocols, because they do the bridging between all the new WebRTC applications being designed and the good old PSTN world.
For the long term, what we would like to have is a VoIP load balancer: a component that can scale internally, that is aware of the target servers, and that can distribute calls while properly managing the dialogs and the VoIP sessions; avoiding vendor lock-in, so we can move more easily from one cloud provider to another; establishing some best practices; and possibly refining the protocols. And that's all.

Yeah, thank you, thank you Giacomo. Maybe we have time for one question while Sol sets up. One question? No question. Okay, thank you. Thanks.