Up next we have Maxime Bougier, and his presentation is about building an open source streaming platform. Hello everyone. Like he said, I'm Maxime from Radio France, part of the cloud and infrastructure team. We host many things: websites, podcasts, and now streaming. In this context we use a lot of open source software, like Kubernetes, Terraform, etc. We chose to build a new streaming platform with open source software, so that's what we will see today.

So, we stream audio. We're a radio broadcaster; we broadcast radio in France, and we have a lot of radio: 7 brands; 6 national channels, broadcast everywhere in France; 47 local channels, mostly France Bleu; and 23 music web radios, which are broadcast only on the internet. We have a lot of listeners, about 2 million a day. You can see here a graph with the peak in the morning, where we reach about 200,000 simultaneous listeners. So we need to stream all of this.

Before we built the platform, the audio was produced at the Radio France headquarters, sent to a third party that was kind of a black box for us, and streamed with Icecast to listeners. We were not really satisfied with the service provided and wanted to modernize the streaming platform and make it our own. The first step was to put it in the cloud. Why? Because the rest of our technical stack is already there. And we wanted to stream HLS in place of Icecast.

Just to be clear about Icecast and HLS, the two streaming formats: Icecast is a streaming server, and it was the historical way of streaming radio. The client connects to a single-bitrate mount point and the server pushes audio over a long-lived TCP connection. You cannot cache it, and you cannot change the bitrate on the fly; you have to disconnect and reconnect the listener. With HLS, the principle is to chop the audio into small segments, and a sliding playlist tells the client which files to download and in which order to play them.
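As a concrete illustration of such a sliding playlist, here is a minimal HLS media playlist of the kind a client polls; the segment names, durations and sequence numbers are made up for the example:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:1742
#EXTINF:4.0,
segment_1742.ts
#EXTINF:4.0,
segment_1743.ts
#EXTINF:4.0,
segment_1744.ts
```

On each refresh, the `#EXT-X-MEDIA-SEQUENCE` counter advances and the oldest segment drops off the window; this is what makes a live stream look like plain static files to any HTTP cache.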
With HLS, we can have an adaptive bitrate: the client can start at a low bitrate and change it without disconnecting, which provides a much better mobile experience, especially in transport. And one of the most important things for us is that it's cacheable. It's just static files: the client downloads audio files and plays them. It's not a stream, it's files.

So how did we build our platform? The first thing was to transport the audio from the Radio France headquarters, where it's produced, to the cloud; we are on AWS, so we had to get it there. Next, we'll see how to produce the two formats, Icecast and HLS; then how to deliver the content to listeners, and as I said, we have a lot of listeners; and finally how to monitor the platform, because like any system in production, we need to monitor and operate it.

So, first part: how to get the audio to the cloud. At Radio France, the audio is produced in several ways; the national, local and web radios each have their own way of producing audio. But in the end, all this audio is injected into the multicast network of the Radio France headquarters. The first thing we did was to create a Direct Connect, which is a dedicated link between the headquarters and the cloud, basically a fiber link. But how do we transport multicast into AWS? There is a problem here: there is no multicast in AWS, as in any other major cloud provider.

So here comes SRT. If you saw the previous talk, we talked about that. There are three parts in SRT: Secure, because it can encrypt the stream; Reliable, because it can retransmit packets, so we can prevent packet loss; and Transport, to dynamically adapt to network perturbations. The main advantage for us is that it can take a multicast stream and turn it into a unicast stream to the cloud in a reliable manner, because it has to traverse many firewalls and many networks between the headquarters and the cloud. So we have SRT here.
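The multicast-to-SRT gateway that the on-premise servers implement can be sketched with `srt-live-transmit`, the reference tool that ships with the SRT library; the talk does not say which tool Radio France actually uses, and the addresses, ports and passphrase here are invented:

```shell
# Read a multicast UDP stream (the @ marks a multicast group) and
# push it as an encrypted SRT caller to a listener in the cloud.
srt-live-transmit \
  "udp://@239.10.0.1:5001" \
  "srt://srt-listener.example.net:10001?passphrase=CorrectHorseBattery"
```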
So what we did is just put two servers in the Radio France headquarters, in the data center, which take the multicast streams, transform them into SRT and send them to the cloud. But we want to transport the stream in a resilient way; a lot of things in this story are about resiliency. Each radio channel has two sources at the headquarters, which we call here main and backup. And we have a third source, a satellite backup, which I'm not going to talk much about because it's handled by a third party. The principle is that we have two servers at the headquarters, each of which takes the two sources of each radio, and we create a full-mesh interconnection with two SRT listeners in the cloud. So in the end, for each radio channel, we get five inputs here, and the two listeners end up with exactly the same streams.

So we now have a way to transport the sound into the cloud, but now we have to produce the stream formats. The goal is still to produce Icecast and HLS. If you saw the previous talk, you know the software to do this exists: we chose Liquidsoap. For those who don't know, Liquidsoap is an audio and video streaming language: you take some inputs and create outputs, with some control logic in between. We decided to collaborate with the Liquidsoap team on this, to adapt it to our needs.

We use it for several parts of the chain. It can receive SRT streams: the SRT input creates an SRT listener. We then use it to transcode the stream into the formats we want, HLS and Icecast, in multiple codecs, AAC and MP3. And we use it to control the stream. We need to switch between sources: as I said, there are the main sources, the backup sources and the satellite, so we need a mechanism to switch between them. We need fallback logic, because sources can fail. And we need to monitor it, so we use Liquidsoap to expose metrics about what we are streaming right now and which inputs are up or down.
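The minimal shape of that chain (receive SRT, make it safe, transcode, output) can be sketched in a few lines of Liquidsoap; ports, passwords and mount names are placeholders, and the full source-selection logic comes later in the talk:

```liquidsoap
# Receive one SRT input: this creates an SRT listener that the
# on-premise caller connects to.
src = input.srt(mode="listener", port=10001)

# Guard against the input failing so the output never stops.
src = mksafe(src)

# Transcode and push to the local Icecast master.
output.icecast(%mp3(bitrate=128),
               host="localhost", port=8000, password="hackme",
               mount="radio-midfi.mp3", src)
```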
So we put Liquidsoap on a server that I will call a transcoder for now; it's bad naming, but it's not important. We put two transcoders in the cloud. What is inside a transcoder? There's Liquidsoap, which receives the five inputs we talked about before. There's an instance of the Icecast server, which we call the Icecast master, because it has the master stream that we will relay afterwards. And we create the HLS files; as I said, HLS is just files, so we can use a web server like nginx, or any other, to serve static content.

Let's zoom in and see what Liquidsoap does internally. Here we define inputs: the five inputs we talked about before, the mains from the two SRT callers (main caller one, main caller two), the backup and the satellite. I will not talk about the override input, that's just for us internally. All of these sources are available, and they all can fail. And we create another source, a safe blank, that will serve us if everything else has failed: we will switch to blank, because it's better to broadcast silence than nothing.

Next, we need the logic to switch between sources, so we use a native Liquidsoap switch operator, and our predicate functions just play with booleans: we tell it whether a given source has to play, and only one of them can be true at a time. Then we need fallback logic: what happens if the source I want to play is down? I use the fallback operator of Liquidsoap. It tries first the live source, the source we want to play; if that's not available, the fallback operator goes down the list in order, and at the end we have the safe blank we saw before. So this source never fails.

Now that we have a source that never fails, we can output it. Again, we use a classical Liquidsoap operator, output.file.hls. It takes some parameters, like the segment duration (the size of the audio chunks), and the source we created before. Here, I create a stream that has two qualities, midfi and hifi.
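Put together, the selection logic just described might look like this in Liquidsoap; the names, ports and exact predicates are illustrative, not Radio France's actual script:

```liquidsoap
# The five inputs; each one is an SRT listener waiting for its caller.
main1     = input.srt(mode="listener", port=10001)
main2     = input.srt(mode="listener", port=10002)
backup    = input.srt(mode="listener", port=10003)
satellite = input.srt(mode="listener", port=10004)

# Boolean-driven switch: exactly one predicate should be true at a time.
use_main = interactive.bool("use_main", true)
live = switch(track_sensitive=false,
              [({use_main()}, main1),
               ({not use_main()}, backup)])

# Fallback: try the selected live source, then the satellite, and end
# with blank(), which can never fail, so `radio` is infallible.
radio = fallback(track_sensitive=false, [live, satellite, blank()])
```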
I use FFmpeg to encode them with libfdk_aac, and mux them in MPEG-TS, because it's the most supported container for HLS clients. As was said in the previous talk, Liquidsoap uses a lot of callbacks, and we make use of them. For example, the file-change callback is a function that is called every time an HLS segment is created; we use it to upload segments to the CDN, for example, but you can do whatever you want with the file. And we use the segment-name callback to create the file name of each segment, with a timestamp and the position of the segment.

Next, we output to Icecast, which is much simpler. We output to localhost, because the Icecast master is on the transcoder. Again, we use an encoder to make the AAC stream, still with the same source, which never fails.

We now have the two streams: Icecast, and the HLS files, that is, playlists and segmented audio. And we needed to deliver that to our users. So how do we scale it? We can start with Icecast. We have the two Icecast masters, here and there. Icecast has a classical master-and-relay architecture, so we just put a bunch of Icecast relays, and each relay will relay all the streams of the Icecast masters. Icecast has the functionality to switch between masters if one fails. And in front of them, we just put a load balancer in the cloud. There is no cache here, and we can't autoscale Icecast: since it's a connected protocol, when you scale down, you disconnect all your clients. But we have an automatic fallback between the two Icecast masters.

Scaling HLS is much simpler. It's almost the same architecture, with the two transcoders, but this time you can cache the static content, so the cache layer is actually scalable. This is an example architecture, not exactly what we use; you can put the cache layer anywhere you want, like in Kubernetes or in instance groups with autoscaling. And to make the fallback between the two transcoders, you can do it at the CDN level.
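The HLS output with its two qualities and callbacks could be sketched like this. It follows the Liquidsoap 2.x documentation for output.file.hls, but the bitrates, paths and names are invented, codec="libfdk_aac" requires an FFmpeg build with libfdk, and the exact callback signatures may differ across Liquidsoap versions:

```liquidsoap
# Stand-in for the never-failing source built by the switch/fallback logic.
radio = blank()

# Two qualities, AAC muxed in MPEG-TS (the best-supported HLS container).
aac_midfi = %ffmpeg(format="mpegts", %audio(codec="libfdk_aac", b="96k"))
aac_hifi  = %ffmpeg(format="mpegts", %audio(codec="libfdk_aac", b="192k"))

# Name each segment with the stream name, a timestamp and its position.
def segment_name(~position, ~extname, stream_name) =
  timestamp = int_of_float(time())
  "#{stream_name}_#{timestamp}_#{position}.#{extname}"
end

# Called on every segment/playlist change: the place to push files to a CDN.
def on_file_change(~state, fname) =
  if state == "closed" then log("ready to upload: #{fname}") end
end

output.file.hls(playlist="live.m3u8",
                segment_duration=4.0,
                segment_name=segment_name,
                on_file_change=on_file_change,
                "/var/www/hls/radio",
                [("midfi", aac_midfi), ("hifi", aac_hifi)],
                radio)

# The Icecast output is much simpler: same source, local Icecast master.
output.icecast(%fdkaac(bitrate=192),
               host="localhost", port=8000, password="hackme",
               mount="radio-hifi.aac", radio)
```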
Most CDNs provide this. The main advantage for us with HLS is that it operates just like a website, not like a stream.

So let's see how we operate the platform. As I said, on the transcoder we have Liquidsoap, but we don't have just one Liquidsoap: we have one Liquidsoap instance per channel. So we needed a way to automate the installation of all this. The answer is Ansible. We didn't really invent anything here: we created a dictionary with all our radio channels, their inputs and outputs, the SRT ports and everything. With this dictionary, we generate all the configuration needed, so each channel gets its own SRT listeners, its own Liquidsoap script, and its Icecast master and relay configuration.

Next, we need to collect metrics. Again, it's pretty classical: we use Prometheus across our technical stack, for our websites and so on in Kubernetes, and Grafana to visualize. On this platform we mainly use three sources of metrics: the node exporter for server-level metrics like CPU and RAM; Liquidsoap, which can natively expose Prometheus metrics now; and Icecast, which exposes its own metrics too.

So what do we get? Here is an example dashboard of the internals of Liquidsoap. We can see that all the sources are available, that main caller one is playing, and that there is no blank output; this gives a nice history of the stream. In a streaming platform, one other important need is to know how many listeners there are right now. For Icecast, it's easy: it's a connected protocol, so the server knows how many clients are there, and we can just scrape the metrics from Icecast to know how many listeners we have. But for HLS it's more complicated, because the only information we get is access logs. So we had to work with the CDN to push the logs to us, and we had to build a custom log ingester that creates Prometheus metrics and gives us an estimation of how many listeners there are. In the end, we have a dashboard like this, where we can see we have about 50-50 HLS and Icecast listeners.
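The per-channel dictionary driving the Ansible roles might look something like this; the structure and key names are hypothetical, and only the idea (one entry per channel, from which all SRT, Liquidsoap and Icecast configuration is generated) comes from the talk:

```yaml
# group_vars sketch: everything each channel needs, in one place.
radio_channels:
  franceinter:
    srt_ports:
      main_caller_1: 10001
      main_caller_2: 10002
      backup: 10003
      satellite: 10004
    hls_qualities: [midfi, hifi]
    icecast_mounts:
      - franceinter-midfi.aac
      - franceinter-hifi.aac
```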
This is the pattern over one week of radio, with the peak in the morning. And now the platform is pretty complete. If you want to learn more, you can check out my personal project, which is an example of the Liquidsoap scripts used at Radio France, and check out the SRT GitHub and the Liquidsoap website. I would like to give special thanks to the Liquidsoap team, Romain and Samuel, here. Thank you for your attention.

There are two questions here. I'll start with: how do you feed the satellite into the cloud with SRT? The third party provides us with an SRT stream, and we just use it like any other stream. And how does the failover mechanism work? It was in the fallback slide: Liquidsoap actually detects whether there is any content to stream, and if there is no content, it switches to another source. That's the fallback mechanism: if one source is unavailable, it switches to the next one.

Question: have you tested the audio quality of the built-in AAC encoder? Not really. Well, yes, but not in detail, and we don't know if it's better than FDK-AAC or not.

You showed your own CDN, but you still use Akamai at the same time? It wasn't really clear: we are not our own CDN. We do use Akamai, because of public contracts we had to use it, and it's just cheaper than paying AWS.

[A question about personalized streams.] Probably not. Probably not, because you would have to have one Liquidsoap per listener if you want to personalize it, so that's kind of difficult. For HLS, we would create a lot of files, because each listener would have its own files.

Why didn't you offer a DASH stream? Because we don't need two formats; HLS is more popular for the moment. Maybe in the future we will, but not for now.

Question from the floor? At which...? It was a question about the fallback: how do we detect it, and at which level? There is a fallback in Liquidsoap, but there is also a fallback at a higher level, between transcoders. Was the question about the fallback in the transcoder?
In Liquidsoap, it was the same answer as before: it's the fallback operator of Liquidsoap. But at the higher level it's kind of difficult: you need to know whether there is a stream or not. What we do is return an HTTP error code when there is no stream, and the CDN can fall back on the HTTP error code.

[Why keep Icecast?] For legacy listeners. There are a lot of websites that take a Radio France stream and redistribute it, and their players are not compatible with HLS. There is about 50% HLS and 50% Icecast for now. For all of Icecast? Yeah, for now, except Apple, obviously. Thank you.