After we looked at the VOD video workflow, let's look at an example live video workflow, like the one broadcasters use when they are streaming a channel 24/7. In this case, the pipeline can have different input parts: you can have a camera recording the event, you can have files, for example on an SSD drive, that we then turn from VOD into live, or you can have streaming software like OBS as the input source.

The next step is to actually transmit these sources as a live stream. If the input comes from streaming software like OBS, it can go straight to the encoding part. But let's say we have a static file that we want to make into a live channel: we can then use something like ffmpeg to create a live RTMP stream and pass it through to the encoding part (there is a sketch of this below). Or, if we have a very high-bitrate channel, we could have uncompressed SDI going into the encoders, at 8 or 10 bits per sample and a bitrate of around 1.5 gigabits per second, so quite a lot (the short calculation below shows where that figure comes from).

So the next step, once we have this source configured, is to perform the same operation that we do for VOD workflows, which is to maximize the video quality while minimizing the file size. In this case, we can have software or cloud encoders, for example Bitmovin or Mux, and these can run on Kubernetes clusters, which makes them scalable and allows a lot of videos and channels to be processed in parallel. Or you can have hardware encoders, for example from Harmonic, and these can ingest SDI over IP, for example SMPTE 2022-6, which has a very high bitrate because it is basically an uncompressed input. So we take the input from the previous step and feed it to a hardware encoder that can handle that much data rate. We also have backup encoders, which work in such a way that if one encoder goes down, the other one can substitute for it. In the cloud case, it would be another Kubernetes cluster doing that job instead of a dedicated backup encoder, which is what allows for scalability in the cloud (see the failover sketch below).

Then the next step is to apply the encoding profiles, like we saw in the VOD example. In this case as well, we create different resolutions and bitrates depending on where the user will consume the media. For example, as we said before, we are not going to send a five-megabit video to a cell phone; we might send a 640-kilobit-per-second one, because it is much lighter and can be consumed on a smartphone with a small screen and probably not such a stable connection. We create these versions for DASH or HLS, and we have ABR, which is adaptive bitrate: the video stream gets split into chunks a couple of seconds long, and these chunks allow for quality switching at the player level, based on the playback context. For example, if the user has high connectivity initially, they are going to get a higher-resolution, higher-bitrate file. If they then move with their cell phone to an area of lower connectivity, the player will automatically pick a version at a lower bitrate and resolution so the video can keep playing. Also, a lower-resolution version might be picked first so that the video can start up quickly and then switch to a higher quality (the ladder and player sketches below illustrate this).

Once we have created the different versions of the video file or of the channel that we are transcoding to live, we then place the resulting manifest and video and audio files on a CDN.
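To make the VOD-to-live step concrete, here is a minimal sketch of driving ffmpeg from Python to loop a static file as a live RTMP stream. The file name and the ingest URL are hypothetical placeholders; the ffmpeg flags themselves are standard ones.

```python
import subprocess

# Hypothetical ingest endpoint; replace with your encoder's RTMP URL.
RTMP_URL = "rtmp://encoder.example.com/live/channel1"

# Loop a static file indefinitely and push it as a live RTMP stream.
# -re reads the input at its native frame rate so it behaves like a live feed;
# -stream_loop -1 repeats the file forever; flv is the container RTMP expects.
cmd = [
    "ffmpeg",
    "-re",
    "-stream_loop", "-1",
    "-i", "promo_reel.mp4",        # hypothetical source file
    "-c:v", "libx264", "-preset", "veryfast", "-b:v", "3000k",
    "-c:a", "aac", "-b:a", "128k",
    "-f", "flv",
    RTMP_URL,
]
subprocess.run(cmd, check=True)
```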
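As a quick sanity check on the "around 1.5 gigabits" figure for uncompressed SDI, here is a back-of-the-envelope calculation, assuming 1080p at 30 fps with 4:2:2 sampling at 10 bits per sample, which is typical for HD-SDI:

```python
# Back-of-the-envelope check on the ~1.5 Gbit/s figure for uncompressed HD.
# 4:2:2 sampling means 2 samples per pixel on average (one luma sample plus
# half of each chroma component), 10 bits per sample.
width, height, fps = 1920, 1080, 30
samples_per_pixel = 2          # 4:2:2
bits_per_sample = 10
active_bps = width * height * samples_per_pixel * bits_per_sample * fps
print(f"active video: {active_bps / 1e9:.2f} Gbit/s")   # ~1.24 Gbit/s

# The HD-SDI link itself (SMPTE 292M) runs at 1.485 Gbit/s because it also
# carries blanking intervals and embedded audio, which is where the
# "around 1.5 gigabits per second" figure comes from.
```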
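The backup-encoder behaviour can be pictured as a simple health-check loop. This is only a sketch under assumed conditions: the health URLs are hypothetical, and in a real deployment the switch would be handled by the routing or orchestration layer (for example Kubernetes itself), not by a polling script.

```python
import time
import urllib.request

# Hypothetical health endpoints for a primary/backup encoder pair.
PRIMARY = "http://encoder-a.example.internal/health"
BACKUP = "http://encoder-b.example.internal/health"

def healthy(url, timeout=2.0):
    """Consider an encoder healthy if its health endpoint answers 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def monitor(interval=5.0):
    """Poll the primary and fail over to the backup when it stops answering."""
    active = PRIMARY
    while True:
        if active == PRIMARY and not healthy(PRIMARY):
            active = BACKUP  # route the source to the backup encoder
            print("primary down, failing over to backup")
        time.sleep(interval)
```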
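For the encoding profiles, the ABR ladder can be represented as plain data that drives the per-rendition encoder settings. The exact rungs below are illustrative, not a recommendation; note that the bottom rung matches the 640 kbps smartphone example from above, and the segment length matches the "chunks a couple of seconds long" idea.

```python
# A hypothetical ABR ladder: one entry per rendition the packager will produce.
# Real ladders are tuned per content type and device mix.
ABR_LADDER = [
    {"name": "1080p", "width": 1920, "height": 1080, "video_kbps": 5000},
    {"name": "720p",  "width": 1280, "height": 720,  "video_kbps": 3000},
    {"name": "480p",  "width": 854,  "height": 480,  "video_kbps": 1400},
    {"name": "360p",  "width": 640,  "height": 360,  "video_kbps": 640},
]

def rendition_args(r):
    """Build the per-rendition ffmpeg video arguments for one ladder entry."""
    return [
        "-vf", f"scale={r['width']}:{r['height']}",
        "-c:v", "libx264",
        "-b:v", f"{r['video_kbps']}k",
        "-maxrate", f"{int(r['video_kbps'] * 1.1)}k",
        "-bufsize", f"{r['video_kbps'] * 2}k",
    ]

def hls_args(r, segment_seconds=4):
    """Segment one rendition into HLS chunks a few seconds long."""
    return ["-f", "hls", "-hls_time", str(segment_seconds),
            f"out/{r['name']}.m3u8"]
```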
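On the playback side, the simplest form of ABR switching is throughput-based: pick the highest rendition that fits the measured bandwidth, with a safety margin. Real players (hls.js, dash.js, ExoPlayer) also weigh buffer level, screen size, and switch history; this sketch only shows the core idea, reusing the hypothetical ladder above.

```python
def pick_rendition(measured_kbps, ladder, safety=0.8):
    """Choose the highest rendition whose bitrate fits within a safety
    margin of the measured network throughput."""
    affordable = [r for r in ladder if r["video_kbps"] <= measured_kbps * safety]
    if not affordable:
        # Fall back to the lowest rung so playback never stalls outright;
        # this is also why a low rendition is often picked first for startup.
        return min(ladder, key=lambda r: r["video_kbps"])
    return max(affordable, key=lambda r: r["video_kbps"])

print(pick_rendition(2500, ABR_LADDER)["name"])  # 480p: 1400k fits in 2000k
print(pick_rendition(500, ABR_LADDER)["name"])   # 360p: lowest-rung fallback
```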
For example, we can use Akamai MSL4, which basically copies the files across a number of servers around the globe, so that users accessing the channel from different parts of the world on different devices can get the files faster. For example, if a user is connecting to the channel from Italy on a cell phone, it will be faster to deliver the video if the server is in Italy than if it is far away.

So how does this all come together from the video dev's point of view? The video dev uses the different parts for automation and testing, for example, and for visual quality analysis during the encoding, to make sure that the resulting video files are good enough to be given to users. You can also have KPIs and analytics at the CDN level to see how the files are consumed, and you can have analytics at the player level and at the user level to see which files are being consumed most, at which resolutions, and whether they have any startup problems (a small example of such player-side KPIs is sketched below). All of this information together allows the video developer to then adjust the encoding profiles and the visual quality analysis.

So as you can see, there are a lot of roles needed for this job: we have software engineers, we have operators and architects, because it is a very complex workflow. But at the end, it makes sure that users can get to their videos quickly and without buffering and enjoy them. Thank you.
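As an addendum to the analytics point above, here is a hedged sketch of two common player-level KPIs, startup time and rebuffer ratio, computed from a hypothetical list of player events. The event format is invented for the example; real analytics SDKs (for example Mux Data or Conviva) define their own schemas.

```python
def startup_time(events):
    """Seconds from the user pressing play to the first rendered frame."""
    play = next(e["t"] for e in events if e["type"] == "play_requested")
    first_frame = next(e["t"] for e in events if e["type"] == "first_frame")
    return first_frame - play

def rebuffer_ratio(events, session_seconds):
    """Fraction of the session spent stalled waiting for the buffer."""
    stalled = sum(e["duration"] for e in events if e["type"] == "stall")
    return stalled / session_seconds

# Hypothetical event log for one playback session (times in seconds).
events = [
    {"type": "play_requested", "t": 0.0},
    {"type": "first_frame", "t": 1.8},
    {"type": "stall", "t": 42.0, "duration": 2.5},
]
print(startup_time(events))          # 1.8 s to first frame
print(rebuffer_ratio(events, 300))   # ~0.008, i.e. 0.8% of a 5-minute view
```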