So thank you for coming here, and thank you for inviting me to do a small talk about the Live IP Software Toolkit. That's what we produced at the EBU, and it's open source, so that's the good part.

Sorry, I have to show you the next slide here as well, but this is actually the reason why we did it. When we talk about the European Broadcasting Union, it's about broadcasters. Normally you saw, well, antennas and that sort of thing, but lately of course it's also about webcasts, podcasts and that sort of thing. For those guys it's a big change, of course not for you guys. Usually I give this type of presentation to broadcasters, saying: well, this is probably the biggest change in 30 years, after going from PAL to SDI. SDI is a professional format to transport video, and now we want to move from this SDI-specific format to an IP format for our professional broadcast plant.

Now that's interesting, because higher management always speaks about, well, this is the cable: we need to rip it all out if we want to move from SD to HD or UHD. Each time we need to break everything out and start from scratch, and those are long projects. So actually we want some sort of flexibility to move from one format to the other as soon as we can, and as soon as needed, or maybe go back as well. This is why we say: IP, for us, is like a container ship. If we put in SD, HD, VR, whatever type of audio or video format, it will work.

On the other hand, as broadcast engineers we were very proud, because in the 90s we had 270 megabits per second as a professional format while IP was at 10 megabits per second. So we were very advanced. But now, of course, due to the scale of the industry, we see switches of 32 times 400 gigabits per second. If you do the calculation back to the 720p60 format, you see that we can have 10,000 video streams in two rack units instead of multiple racks 48U high. So that's a nice advantage for broadcasters to go to IP. Shareability, that's another story, because we're not there yet. Shareability for a broadcaster means: well, we want to run everything on generic hardware. And this is exactly why we did this Live IP Software Toolkit.

Now, to give you a little bit of insight into the industry, this is a very interesting roadmap. Who has seen this before? That's very good, right on track. So, this is our SDI technology and we wanted to move to IP. A few years ago they said: let's take all the SDI, wrap it into IP and transport it. But actually that's not what we need to do, because SDI is audio, video and ancillary data encapsulated together. If you just do that, it's a waste of bandwidth in a video plant, because you need to de-embed and re-embed all the ancillary data and the audio all the time. So then they said: let's go to elementary streams. Let's take the video part and the audio part and let all those streams flow through the network. And of course we need to synchronize them again, so we use the standards from the IT world: IEEE 1588, the Precision Time Protocol.

Now, there are a few standards in this, but not to bore you with all these ST 2110 documents: there's a very interesting one, dash 21, and it talks about the traffic shaping of those packets over the network. That's the important part, and also the part where some politics come in.
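Before moving on: that ten-thousand-streams figure from a moment ago is easy to reproduce. A back-of-the-envelope check in Python; the ~1.485 Gb/s per-stream rate is my assumption (roughly uncompressed 720p60 as carried over HD-SDI), not a number from the slides:

```python
# Back-of-the-envelope check of the "10 000 streams in 2U" claim, assuming
# uncompressed 720p60 at roughly the HD-SDI rate of ~1.485 Gb/s (real
# ST 2110-20 payload rates differ slightly, so treat this as a sketch).
switch_capacity_gbps = 32 * 400       # 32 ports of 400 GbE in a 2U switch
stream_rate_gbps = 1.485              # ~720p60 as carried over HD-SDI

print(switch_capacity_gbps / stream_rate_gbps)  # ~8600, i.e. on the order of 10 000 streams
```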
There's another part as well: who has a device at home that uses SDI? Whoa, that's a good room, that's a really good room. Most of the time, even when we do this course with broadcasters, no, well, nobody raises a hand. The reason why I ask this is, of course: SDI is very secure, because you could almost say it's security through obscurity; nobody has it. And on the other hand, don't forget, SDI was damn reliable: you plug it in and it works. With IP you need to have an IP address, of course, and that sort of thing. So there's a little bit of work on the road, and this dash 21 document that talks about traffic shaping is just the key element for moving our professional media into the IP space.

So if we talk about this reliability, it's time to talk about the network. Let's dive in. This you all know, but it's an important slide, because for data, timing is not critical; for media, it damn well is. And if we miss a pixel somewhere in the video, maybe you didn't see that we missed the pixel. That's a hard message for real video engineers, but well, that's how it is.

It's interesting to dive into the difference a little bit more: you have the circuit-based networks, like our SDI, and you have packet-based networks, specialized for data. But this is the important part here: we can move towards that sort of network for professional media. The only thing we need to take care of is the packet delay variation.

Now, packet delay variation: it sounds all very easy. So what can it do? It can lead to increased latency, but also to dropped packets. And this is exactly the small detail where we want to focus a little bit. So I'll try to run you through this packet delay variation in a quite easy way, I hope, and I like to compare it with this nice racetrack. On this racetrack, as long as the grey car is up front, I would say this is perfectly behaved traffic. Now what will happen when the grey car disappears? You will have... yeah, this is what I would call bursty traffic. Now the real IP guys will say: no, no, no, this is a collision. But sorry, it's not a collision domain; I just want to say it's bursty traffic.

Now, here's another thing that broadcast engineers don't perceive when an IT guy says something about bandwidth. The IT guy says: well, you're consuming 32 megabits per second, or whatever. But by saying it's megabits per second, it's averaged over an amount of time. So the broadcast engineer thinks: oh, there's a packet, there's nothing, there's a packet, there's nothing, all equally spaced. Well, in reality it's of course not like that. Now, on a single lane, who cares about this? But of course a network is just not a single lane; a network is more than this. Oh, by the way: average speed 55 miles an hour, let's say. But if you have bursts, you use the link speed. Packets can be so close to each other that you use the link speed.

So, multiple lanes. Let's try to compare it to road traffic, I would say, but actually the comparison stops here, because in IP networks, when you go down to a reduced set of lanes, well, instead of going slower, you should go faster. Otherwise it's just a waste. So this is the type of network we're talking about in our video plants: spine-leaf networks. Imagine there are hosts connected, and there's an interlink between the spine and the leaf of 40 gig. As long as you don't have more bandwidth on the hosts than you have available on the uplink, we could say this is a non-blocking network. And imagine all this ECMP stuff in the middle is working perfectly, so we can balance packets instead of just streams. Imagine. This means for our broadcasters that we have, for instance, 12 cameras connected on 10-gig links, each using one third of the link. So 12 times one third of the bandwidth, and that equals four times 10 gig: exactly the 40-gig uplink.
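To make the averaging point concrete, here is a minimal sketch. The timestamps and sizes are hypothetical, but the effect is the one described above: a tiny average rate can still mean link-speed bursts on the wire.

```python
# Minimal sketch: why "X megabits per second" hides burstiness. A 1500-byte
# packet on 10 GbE serializes in ~1.2 us, so back-to-back packets always
# move at 10 Gb/s, whatever the average says. Timestamps are hypothetical.
packets = [(0.0, 1500), (1.2e-6, 1500), (2.4e-6, 1500)]  # one back-to-back burst

bits = sum(size * 8 for _, size in packets)
print(f"average over 1 s: {bits / 1e6:.3f} Mb/s")        # looks tiny and harmless

burst_duration = packets[-1][0] + 1.2e-6                 # burst ends when the last packet serializes
print(f"rate inside the burst: {bits / burst_duration / 1e9:.1f} Gb/s")  # link speed
```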
So in theory this all works pretty happily. Now, of course, when I say there is something like packet delay variation, you have another issue, because in reality we connected 120 gig to a switch and we want an uplink of 40 gig. So there's an issue, especially if the traffic is bursty, because at the end of the day you're starting to use the buffer of the switch. By the way, you never know how the switch buffer memory is allocated: it could be sliced into four parts, and copies could be made because you're using multicast and that sort of thing. So at the end of the day there's a small packet that's probably not happy and not welcome in the buffer, and it needs to be dropped. And this is exactly the space where we don't want to be.

Now, this is actually the space where vendors do want to be, and this is a huge political discussion we had because of this. The closer you are to the wire with your implementation, and I'm not pointing at anyone, but assuming an FPGA implementation, the better you are at pacing your packets onto the network. If you have an operating system in between and your application on top, you're less in control. And when we think about virtualizing our IP facilities for broadcast, that's quite far off. So this is why we were interested in the standard: to tweak those packet delay variation numbers a little bit, because they wanted to make them so specific that only FPGA implementations were allowed. So where would you go with open source? That's an issue.

Another part, I just plugged it in because I had something flipping through my mind: let's say, hey, we're using UDP, we're using multicast. So actually we cannot make a non-blocking network; we need an SDN controller. You can imagine a situation where you plug in just another receiver and suddenly you start using more bandwidth between spine and leaf than you have available. That's the way the IP network should work, but it's a bit of an issue for a broadcaster. So this already calls for an SDN controller.

Okay, I promised to talk about traffic shaping, so let's dive in. These SMPTE guys, they did understand it, so they said: we need something to control this traffic shape. So we define virtual buffer models, and a virtual buffer model is nothing more than a funnel. It's like when you want to pour water into a bottle: you use a funnel, you pour the water in, and when it overflows, of course, you start to drop packets. You've been using this in the IT industry for many years, so nothing new for you guys. On the receiver side they define something similar; it's called the VRX, the virtual receive buffer, and it needs to be measured at the sender side. It's much the same as the network buffer model, with one exception: the packets are read out of the buffer according to the PTP timing. This is where the fun comes in.

So, as I just said, we have the Cmax buffer model to protect the network. Let's dive into what that actually means. This model drains packets onto the network as a function of the video format, and the sender is just dumping them in. And when we allow more than three, let's say four, packets in this funnel, you'll get a result on your switch like the one on the next graph.
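To make the funnel concrete, here is a minimal sketch of that kind of virtual buffer model. The drain interval is a stand-in for the format-derived rate, so treat this as an illustration, not the normative dash-21 algorithm:

```python
# A minimal sketch of the "funnel": a virtual buffer that each arriving
# packet enters and that drains at a constant rate derived from the stream's
# video format. The drain interval is a parameter you would derive from the
# format; this is an illustration, not the normative dash-21 model.
def funnel_peaks(arrival_times_s, drain_interval_s):
    """Virtual-buffer occupancy (in packets) seen at each packet arrival."""
    level = 0.0
    last_t = arrival_times_s[0]
    peaks = []
    for t in arrival_times_s:
        level = max(0.0, level - (t - last_t) / drain_interval_s)  # drain since last arrival
        level += 1                                                 # this packet enters the funnel
        peaks.append(level)
        last_t = t
    return peaks

# A well-paced sender hovers around 1 packet; a bursty one piles packets up,
# and anything above Cmax is what ends up stressing the real switch buffer.
```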
So I'll run you through this graph, because it looks a little bit complicated, but it's not. This is a switch with an egress capacity of 3.2 terabits per second and a memory of 16 megabytes, and we say we have a Cmax in this funnel of four. This means that if we load the switch anywhere from 10 percent up to 80 percent, there's no problem at all. When we go to 90 percent, there's just a small patch where, if we load it fully, in the worst case we start to drop packets.

Now, some guys say to me: well, Willem, this is just an old switch; there are new switches. I mentioned this before: there's a 12.8 terabit per second switch with 64 megabytes of RAM. But the ratio between the egress capacity and the memory is the same, so the equation just results in the same situation. The only difference is, of course, that you can run more streams through the switch.

Now, if you move from this FPGA implementation towards a software implementation, which jitters a little bit more and can burst up to 16 packets into that buffer, then you see something like this. Quite interesting, right? At 90 percent: not good. You're starting to drop packets anyway, and your application will fail. Now, the nice thing is: the higher you go in resolution, the more you get into the green. And that's quite simple. This is an application of UHD at 120 frames per second, so the packet rate and data rate are higher, but you have just 58 streams. So you have to divide your memory between 58 streams, and each individual stream can jitter a little bit more. It's quite easy.

So, the VRX buffer: I mentioned this before, it's similar to the C buffer, but you need to have PTP simulated to really see what it does. Now, these numbers are defined, and this was the biggest struggle on the ST 2110 calls, and it's this Cmax. They said: you have a narrow sender, which is strictly defined and shall not go above four packets in this virtual buffer, and you have a wide sender, which shall not go above 16 packets. At that time they were just defining a number because they could: there was no real implementation in software, and those guys had their NDAs signed with the switch manufacturers, so they knew how the buffers work. But actually this was too small for software, even the 16, and I'll try to give you a visual overview of that.

And again, here we are. Well, if you buy something, and this is usual for broadcasters, if you buy equipment, you need to measure this. And there's no test and measurement equipment on the market today that does this. So this was actually the right spot and the right place for open source, and for us, to come up with something, just to tease the market a little bit.

Well, I explained this, and this as well, so I'll go over it quickly. So where did we start? They were defining the standard, so I said: well, let's use Excel to try and figure out how all this works. And of course Excel, well, that's not your best friend when you have this amount of data. Then somebody told me: oh, you can use Python, there's a pyshark library, use that and you're good. Then I went to some interop tests, and there were 60 devices, and Python is nice, but it's a bit slow. So then we asked somebody to write an optimized C++ program, and this is actually how it started. And then we discovered something else, because our broadcast engineers are used to this screen, and the IT engineers are used to the other screen. Well, good luck with Wireshark when all the packets are passing by, for just one stream at three gigabits per second: you don't see a thing. And good luck with the other one, by the way: you don't know what you're looking at either. Well, no. So we needed to bridge those worlds, and this is exactly where we started. Well, that's another example, I'll skip through it.
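Two of those claims are easy to sanity-check with a few lines of Python. The switch numbers are the ones from the slides, and the thresholds are the narrow and wide Cmax limits just mentioned:

```python
# Check 1: the "new switch, same picture" claim. What matters is how much
# buffering time the switch has relative to its egress rate, and that ratio
# is identical for the two switches from the slides.
old_buffer_time_us = 16e6 * 8 / 3.2e12 * 1e6    # 16 MB at 3.2 Tb/s
new_buffer_time_us = 64e6 * 8 / 12.8e12 * 1e6   # 64 MB at 12.8 Tb/s
print(old_buffer_time_us, new_buffer_time_us)   # 40.0 40.0 -> same ~40 us either way

# Check 2: the dash-21 sender classes. A narrow sender shall not exceed 4
# packets in the virtual buffer, a wide sender shall not exceed 16; anything
# above that is what broke the early software implementations.
def sender_class(peak_occupancy: float) -> str:
    if peak_occupancy <= 4:
        return "narrow"
    if peak_occupancy <= 16:
        return "wide"
    return "non-compliant"
```

The peak occupancy fed into `sender_class` would come from a funnel computation like the sketch shown earlier.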
And how did we do it? Well, first we said: let's just take a tcpdump. And the tcpdump we upload through our... oh no, I'll skip this. So we take a tcpdump to produce a pcap file. The pcap file we upload through our GUI, and it just goes to the pcap store. When it's in the pcap store, we start the middleware, and then we use the stream pre-processor to run through the pcap file, because in order to calculate the model we need to know what the stream is. Is it 720p? Is it YCbCr? And that sort of thing. Actually, most of those things you can find in the stream itself, but for a few others, like YCbCr, you need some heuristics to guess what it is. So that was already a tricky one, because those guys defined: well, let's use SDP. But they didn't say how to deliver the SDP, so some manufacturers give you the SDP file via an API, some via email, and some use other mechanisms. Then, once we have this static data, we put it into MongoDB. Then we trigger the middleware again and run through the time series of data, and that's why we use InfluxDB to put it there.

Now, that looks pretty basic, so let me show the GUI a little bit. This is actually what the heuristics do: they look into the pcap file and say, oh, there's an audio stream, there's a video stream, and we found some PTP data. We try to make it very simple: if it's green, it's probably okay; if it's red, there's something wrong. The heuristics show some more details about the video stream, and on the right-hand side, it's pretty small, but it actually says whether the stream is compliant with the C buffer model and the VRX buffer model.

Now, to give you a visual: this is a very good one, very strict. There's at maximum one packet in this C buffer, and that's very nice; on the right-hand side you see the reflection on the Excel-style grid I showed before. And let's see if I have a nice one... this is a nice example of another one. This is the virtual receive buffer, and this is a tcpdump we got. The guys didn't know why their solution was breaking, but actually it's very, very simple: you have underruns. There are no packets in the buffer when the application needs to read the video frames according to the PTP clock. So those are the VRX underruns, and we can actually step through it frame by frame and see exactly: it's on this frame, and this is what happened. You see the typical green line: there's no data available to reproduce the picture.

And it's a collaborative project. So the CBC colleagues from Canada joined, and they said: well, we're actually implementing audio over IP and we see a lot of issues. They wanted low latency, and that's an audio profile with a 125 microsecond packet time. That's really low latency, for in-ear monitoring; otherwise you get a comb filter effect and it's not easy to sing. So they wanted to see the jitter for audio, and ten years ago a model was written for that, TS-DF. We just implemented it, and it matches perfectly with the AES67 spec. And it's all open source.

So what did we need for this application to work? We just need a precise timestamp with nanosecond granularity, and the timestamp must be taken on the network interface card, as close to the Ethernet jack as possible. And of course it needs to be synchronized with PTP. I'll skip this one, and I'll show you the latest development with it. What I told you right now is just offline: you need to take a tcpdump first and do the analysis afterwards.
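To illustrate the underrun idea, here is a minimal sketch of a VRX-style check, not our actual implementation. Arrival and read times are hypothetical, and arrivals are assumed to be sorted:

```python
# Minimal sketch of the VRX underrun check described above: packets enter a
# receive buffer as they arrive and are read out on the PTP-derived schedule;
# a scheduled read that finds the buffer empty is an underrun, which is the
# moment the picture breaks. Assumes arrival_times_s is sorted.
def count_vrx_underruns(arrival_times_s, scheduled_read_times_s):
    buffered, underruns, i = 0, 0, 0
    for read_t in scheduled_read_times_s:
        # admit everything that arrived before this scheduled read
        while i < len(arrival_times_s) and arrival_times_s[i] <= read_t:
            buffered += 1
            i += 1
        if buffered == 0:
            underruns += 1    # nothing to read: no data to reproduce the picture
        else:
            buffered -= 1     # one packet is drained per scheduled read
    return underruns
```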
In the last three weeks we were diving in a little bit: we dove into our lab. On the left-hand side you see our PTP generator, a switch, some SDI senders, a Prism, and that sort of thing. But on the right-hand side you see something simple: that's a NUC. Well, it's powerful, we don't even need such a powerful beast, and this is just an empty cradle. We plugged in a network interface board, connected over Thunderbolt, and now we don't have to just do a tcpdump: we can have the precise measurements live. Open source, nothing special. The difference between this box and another box that you buy on the market? Well, it's about 58,000 euro, more or less. To be sure that we have good results, we also looked at the jitter of the clock and at how the resynchronization of ptp4l worked out. Then we did a capture with those official high-grade boxes and with our own measurement box, and it seems that we're pretty close to all the other captures. So I can say, really confident, that our live implementation is as precise as possible.
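As an illustration of that validation step, here is a minimal sketch of the kind of sanity check involved. The offset samples are hypothetical stand-ins for values you would parse out of the ptp4l output:

```python
# Minimal sketch: before trusting the capture timestamps, watch the ptp4l
# offset samples and look at their jitter. The values below are hypothetical
# "master offset" samples, in nanoseconds.
import statistics

offsets_ns = [12, -8, 5, -3, 9, -11, 4]

print(f"mean offset : {statistics.mean(offsets_ns):+.1f} ns")
print(f"jitter (sd) : {statistics.stdev(offsets_ns):.1f} ns")
```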
If you want to know more, there's a URL, tech.ebu.ch/list. Of course there's the GitHub, and for real broadcasters who are not into, well, "I need to see make and all the things with the code", we have a Docker container as well. And if you don't dare to do that, we just have list.ebu.io, which is the virtualized version of our tool. Any questions?

Hello. Is it just one flow between one producer and one consumer, and therefore does it hash well across the entire network?

No. The problem you could have is: normally, when you have a live production somewhere, someone at the end says, oh, I need to have a monitor there, will you plug it in? Well, you don't know what bandwidth you're using. In that theoretical example there were 20 receivers: if you plug in just one extra, well, you subscribe to it and you get it, but the bandwidth is broken for everyone. So all the receivers will just get stuck.

A small follow-up question: do you know what the average packet size is for the UDP transmission?

It's normal, it's 1500, so it's not jumbo frame size.

Okay, thank you.

It's interesting that you talked about the TS-DF, the timestamped delay factor, for audio. Have you done any work on how the multiplexing of the audio and the video works in practice with some receivers, especially receivers which are not timestamping the audio but just playing it through? There are a few of those, as far as I've noticed. Have you done any work on how a sender should multiplex the audio with the video? Because obviously there's a very precise timing model for the video, but there's no model for how a sender should multiplex the audio with the video. Obviously the hardware senders send it spot on, but for a software sender where the audio is being processed somewhere else, so usually the video is done with kernel bypass and the audio is done with sockets, what actually matters in real life with regards to the multiplexing?

Well, that's a very good question, and we're just at the point where we virtually produced different levels of audio. So you have levels A, B and C, and AX, BX and CX, with 24 bits at 48 and 96 kilohertz, and all of those we created, and then we made them a little bit variable as well. So we have good examples and bad examples, and the next step is to run them through pipes and load those pipes, because this is what we didn't do yet: we don't have enough material or equipment at the moment to really load the switch. But in theory you could also calculate that whatever else is switching through the network already has an impact on the low-latency audio profile.

Okay, one last question? No? I got them all to sleep, sorry. No, you were crystal clear. Thank you. Well then, thank you.
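As a footnote to that last audio discussion: the low-latency profile numbers are easy to put into perspective. A quick check, using the 125 microsecond packet time mentioned earlier:

```python
# At a 125 us packet time and 48 kHz, each packet carries only a handful of
# samples and the packet rate is high, which is why this low-latency profile
# is so sensitive to jitter.
packet_time_s = 125e-6
sample_rate_hz = 48_000

print(sample_rate_hz * packet_time_s)   # 6.0 samples per packet
print(1 / packet_time_s)                # 8000.0 packets per second per stream
```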