All right, so we're going to start with our next talk, which is about SReview, the video review system we use at FOSDEM to review video. So please welcome the author, Wouter.

Hi, thank you. So yeah, my name is Wouter Verhelst. I've been a FOSDEM staff member since 2010, and I've been involved with FOSDEM video on and off over that period. Since 2017 (this is actually the third year now) we've used a video review system here that has made our review workflow a lot smoother and a little better, and I'm going to try to show you how.

So let me first start with a little bit of background. Why did we do this? At FOSDEM, we've had video since very early on; I think 2001 or 2002 was the first year that we did video. But we never recorded every room until 2014. Before then, we just did about five rooms, and reviewing that is an afternoon's work: after the afternoon, everything's reviewed, and then you just need to transcode, which is scriptable. 2014 was the first year we did every room at FOSDEM. Now, everybody here will know that FOSDEM is huge. We've got 24, 25, 26 rooms; it differs a bit from year to year. If you have 16 hours, two days of eight hours, per room, times 25 rooms, that's a lot of video that you have to get the interesting parts out of. I mean, we don't want to do a lot of editing of the videos, but we still need to keep just the part that's interesting: we don't want to see people walking into the room, people leaving the room, that kind of thing.

Originally, in the first few years, only the video team would do the review. That's a handful of people, they have to do it in their spare time, and it takes a long time. In 2014, the first year with every room, we were finished by September, and that was only because at that point we just gave up and said, well, screw it, we're not going to get ready anyway.
We'll just release whatever is in the database right now, and we're not going to review any further. So if you watch some of the videos from 2014, you may actually see some material in there which is not really what you want. We did a bit better in 2015 and 2016, but in both cases, by the time we had finished releasing all the videos, we had already started organizing the next FOSDEM. Obviously this was not sustainable in the long term.

So in 2017, I decided to write SReview, which is the system we're using now. And it helped: in 2017, we were ready by late March, which is a month and a half after the event, so a lot faster. In 2018 it took a bit longer, but that was because we had a few issues that we couldn't easily fix; the majority of our talks were actually released two weeks after FOSDEM, so faster still. So that was progress, right?

Now, I'm not going to pretend that I invented the whole concept. Other people have written review systems for conferences, because conference video doesn't need a lot of post-processing: you just need to review, cut out the interesting parts, and then transcode and release. These are the three systems that I know of that are all free software.

The first one is a set of scripts (they're not actually named) that hook into PentaBARF. They're very, very basic: it's just a web form, you mount some NFS share, and it assumes DV everywhere, because at that time DV was what DebConf was using. It runs over NFS, so you can really only use it from the event itself, because you can't do NFS over the internet unless you want to be insecure, et cetera. So there were some issues with that.

The CCC, the Chaos Communication Congress, wrote their own thing called C3 Tracker. It's a fairly complicated setup of Samba shares and FUSE file systems, with Kdenlive to do the actual review.
But the downside of that system is that it requires training the people who are going to review. It's not a lot of training, only ten minutes or so, but it doesn't scale to what we wanted to do here, which is to crowdsource the review. We just want to say to people: here's a link, review your own talk, and come back with your review. If you need to train people first, that doesn't work. So I didn't want to use it. Also, and this was quite important to me at the time I wrote SReview, the C3 Tracker was not entirely free software; there were a few parts of it which were not free software. That has been fixed since, but it was another argument for me to write something else.

Finally, there's something called Veyepar, a review system written by Carl Karsten, who used to do a lot of the Python community's video work; I don't know if he still does, but that's where he started. It's written in Python. It does a lot of things really well, but a few other things are not that easy to use, and it's also very difficult to configure because it's not that well documented. I can see people nodding and agreeing with me. So yeah, it's nice, but there are a bunch of issues.

So how does SReview work? Well, it's almost fully automated: every bit that I can automate is automated. It assumes that a room just has a timeline of video files. There can be multiple video files; it just assumes that every video file has a start time and a length, and it goes, well, this one starts at one o'clock and is half an hour long, the next one starts at half past one and is also half an hour long, so we have an hour of video, right? It also knows the schedule for that room (you need to put the schedule into the system), and it goes, well, we have an hour of video, and there's a talk scheduled from ten past one that runs for half an hour, so we have all the content for that talk.
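The timeline logic just described, where every file has a start time and a length and a talk is located by overlapping the schedule with that timeline, could be sketched like this in Python. SReview itself is written in Perl, and all names here are made up for illustration:

```python
from datetime import datetime, timedelta

def locate_talk(files, talk_start, talk_end):
    """Given a room's timeline of (file_start, duration_seconds) video
    files, return (index, offset_seconds, length_seconds) tuples
    describing where in which files a scheduled talk can be found."""
    fragments = []
    for i, (file_start, duration) in enumerate(files):
        file_end = file_start + timedelta(seconds=duration)
        # Overlap between the file's time range and the talk's time range
        start = max(file_start, talk_start)
        end = min(file_end, talk_end)
        if start < end:
            offset = (start - file_start).total_seconds()
            length = (end - start).total_seconds()
            fragments.append((i, offset, length))
    return fragments

# A room with two half-hour files starting at 13:00 and 13:30:
day = datetime(2019, 2, 2)
files = [(day.replace(hour=13), 1800),
         (day.replace(hour=13, minute=30), 1800)]
# A talk scheduled 13:10-13:40 spans both files:
print(locate_talk(files, day.replace(hour=13, minute=10),
                  day.replace(hour=13, minute=40)))
# -> [(0, 600.0, 1200.0), (1, 0.0, 600.0)]
```

The second fragment starts at offset 0 of the second file, which is exactly the "well, we have all the content for that talk" reasoning above.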
It then automatically creates the first cut. That first cut, which in this example would be exactly half an hour, runs from the scheduled start to the scheduled end. It is almost certainly wrong, but it's a first cut, and it's sent to the speakers and the room managers, who get a link. They can watch it, make adjustments, and after they've made those adjustments, ask for a new cut. The system just goes back through the first stage and does the cut again, and the next one is probably right, because we just entered the corrections. Then the speaker can say, yeah, this looks good, and all transcoding and publishing happens fully automatically. There's some pre-roll that we add, an opening-credits clip which shows the title screen; there's a post-roll in which we show our sponsors; and it also does audio normalization, fully automatically. So it's all quite nice, and it should be doable, right?

Oh, right, sorry: we did do a lot of work on the user interface this year. I just gave a talk at two o'clock in the Open Source Design devroom, so if you're interested in how that worked, I recommend you watch that talk when the video gets released; it's not there yet.

How do we do it internally? Well, it's mostly a web interface. The actual transcoding and cutting and everything is done with FFmpeg. The original version that I wrote in 2017 just queried the database and then ran an FFmpeg command line through system(), with shell interpolation, et cetera, so it was fairly basic. Right now, we do a few things differently. I have an object-oriented interface to deal with videos, SReview::Video. You can just say: here's a video file, tell me how long it is, and it will run ffprobe in the background and give you that back.
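That object-oriented layer might look roughly like this in Python (SReview's real implementation is Perl, and these method names are invented): ask a file for its duration via ffprobe, or have it generate, but not run, a cut command:

```python
import json
import subprocess

class Video:
    """Minimal sketch of a video-file abstraction over ffprobe/ffmpeg."""

    def __init__(self, path):
        self.path = path

    def duration(self):
        # Run ffprobe in the background and parse the container duration
        out = subprocess.check_output([
            "ffprobe", "-v", "quiet", "-print_format", "json",
            "-show_format", self.path])
        return float(json.loads(out)["format"]["duration"])

    def cut_command(self, start, length, output):
        # Generate (but don't run) an ffmpeg command for one fragment;
        # stream copy is fast but only keyframe-accurate
        return ["ffmpeg", "-ss", str(start), "-i", self.path,
                "-t", str(length), "-c", "copy", output]

v = Video("room1-1300.mp4")
print(v.cut_command(600, 1200, "talk-fragment.mp4"))
```

Because the class only builds command lines, the FFmpeg dependency stays behind one seam, which matches the claim that the abstraction doesn't strictly require FFmpeg in the backend.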
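The pre-roll/talk/post-roll assembly mentioned earlier could reduce to a single FFmpeg concat invocation; a hedged sketch, again in Python rather than SReview's Perl, with file names and filter options that are illustrative only:

```python
def concat_command(preroll, main, postroll, output):
    """Build an ffmpeg command that concatenates pre-roll, main talk
    video, and post-roll using the concat filter (re-encoding, so the
    three inputs need not share a codec or resolution container)."""
    return [
        "ffmpeg",
        "-i", preroll, "-i", main, "-i", postroll,
        "-filter_complex",
        "[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]",
        "-map", "[v]", "-map", "[a]",
        output,
    ]

print(" ".join(concat_command("opening.mp4", "talk.mp4",
                              "sponsors.mp4", "out.mp4")))
```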
You can also say: I want to create a video that's this long from that input file, and it will generate an FFmpeg command line that does that. It's sufficiently abstract, I think, that it doesn't actually require FFmpeg in the backend. I actually like it, and I'm thinking of maybe splitting SReview::Video off from the SReview code base itself, because it could be generally useful outside of SReview as well.

I've actually rewritten pretty much everything at least once since 2017, except for the database. The database layer was designed quite well from the start. I had expected to have to redo that, because in 2017 I just wanted it to work (it was a quick and dirty hack), but I've cleaned it up quite nicely since. It's still fairly small: less than 10,000 lines of code. When I checked, it was about 9,500, so it's fairly small. I have to hurry, because I'm going fairly slowly.

What is not in SReview? SReview does not have a scheduler. There are lots of jobs that need to be run: you need to transcode things, you need to upload things. We've got one master host and a few hosts that run jobs, but we don't actually schedule that ourselves. I have done some work in high-performance computing, and I just use one of those tools to schedule jobs on all the systems: Grid Engine in my case, but it doesn't require Grid Engine; you can use any DRM (distributed resource management) system. That's actually a major difference between SReview, the C3 Tracker, and Veyepar. The C3 Tracker is mainly a scheduler, and the review part is just "well, we use Kdenlive", so they didn't implement that; in my case, it's pretty much the opposite. Veyepar has a fairly important scheduler part too, although you can work around it; it's different.

There are also a few things that are not in the code yet that I do want to add. The administrator interface is currently fairly basic, because I haven't gotten around to making it nice yet, so that's something I want to deal with.
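Delegating scheduling to Grid Engine can be as simple as submitting each step as a batch job; a sketch, where the job names, script paths, and dependency layout are all my own invention rather than SReview's:

```python
def submit_job(name, script, deps=()):
    """Build a Grid Engine qsub command for one job, optionally made
    to wait for other named jobs via -hold_jid, and return it."""
    cmd = ["qsub", "-N", name]
    if deps:
        cmd += ["-hold_jid", ",".join(deps)]
    cmd.append(script)
    return cmd  # in real use: subprocess.run(cmd, check=True)

# Transcoding must finish before the upload job starts:
print(submit_job("transcode_talk42", "transcode.sh"))
print(submit_job("upload_talk42", "upload.sh", deps=["transcode_talk42"]))
```

Because the dependency handling lives in the DRM system, SReview's own scripts never need to know which host runs what, which is the point being made above.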
Right now, you can create an output profile: you can tell SReview, I want a WebM and an MP4 and an AV1 version of this video, and it will transcode them one after the other. My plan is to make that happen in parallel, so that it uploads each version as soon as it's ready. The database abstraction: we have started work on that, but the intent is to work on it a bit more. Also, there should be an RPC subsystem, which is related to that. Right now, every script just accesses the database directly, and I want to do some XML-RPC or JSON-RPC or something like that. And, well, I actually found some bugs while I was dealing with it right here, so fixing those would be nice too.

That's pretty much it. Actually, I thought I had more slides. I can maybe do a little demonstration of how it works. Let me open this. Where's that browser? Oh, yeah, that helps. Oh, there we go.

So this is the overview screen of SReview itself. These are all the talks that it knows about; there are many, because FOSDEM is a large event. We have an overview at the bottom. Basically, every talk is in a particular state in the system. "Waiting for files" means we're still waiting for content. "Cutting" means that talk is currently going through the cut stage, where we extract just the interesting content. These are waiting for people to review them. Five of them are currently transcoding. One of them has already been released, which is the opening talk of this morning. These are ones where people entered something in the review form saying, can somebody please look at this, because I think it doesn't work; so I'll have to go in there. These are on hold because my colleagues on the video team made a mistake, and I have to keep them on hold for today; we'll fix that today. The final one, "ignored", is there because there is no data for the key signing event, and obviously we don't release a video for that, which would be very boring.
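The output-profile mechanism described above (one source file, several encodes, currently run one after the other) could be sketched like this; the codec options are illustrative guesses, not SReview's actual settings:

```python
# One entry per output profile: container extension -> codec options
PROFILES = {
    "webm": ["-c:v", "libvpx-vp9", "-c:a", "libopus"],
    "mp4":  ["-c:v", "libx264",    "-c:a", "aac"],
}

def transcode_commands(source, basename):
    """One ffmpeg command per configured output profile; today these
    run one after the other, but each command is independent, so they
    could just as well run in parallel."""
    return [["ffmpeg", "-i", source, *opts, f"{basename}.{ext}"]
            for ext, opts in PROFILES.items()]

for cmd in transcode_commands("talk-final.mkv", "talk-final"):
    print(" ".join(cmd))
```

Since each profile's command shares nothing with the others, submitting them as separate jobs (and uploading each as it finishes) is exactly the parallelism being planned.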
But the schedule is imported automatically from the website, so if I just removed the talk, it would be back the next time the schedule import runs. That's why we say: ignore it, and then we don't see it anymore.

I think I can... no, I can't. I was going to show you the review interface, but it requires a link that contains 64 random characters, and I'm not going to type 64 random characters by hand. So that's not happening. Any questions? Go ahead. Yeah, wait for the microphone please. Sorry? Have you considered any sort of physical button for the start and end of a talk, inserting cue markers into the stream, so that at a minimum there's a copy that can go up that is at least trimmed?

So Veyepar does actually have support for that, and it's not necessarily a bad idea. The problem is: let's say I say thank you, and then everybody starts clapping, and somebody pushes the button, and then somebody goes, hang on, but I have a question. Now you've got a marker in the wrong location, so you will need to have the review anyway. For that reason, I don't think there's much of an advantage; it can even confuse you, because the person pushing the button pushes it, say, three times, and which one is the right one now? So it's not a bad idea per se, but it won't be good enough; you will need to review anyway. So in that respect, I thought, let's not go there. But was there something else to it? No? Yeah, it could be useful, but not really. Yeah, well, that too, exactly.

Anyone else? Other questions? I ran through my slides, I guess. Although, what we could do is I could check my mail. Yeah, that's a good idea. We can do a live review here. Sure, sure. So: all the review requests are sent to the speaker as well as to the devroom responsible, and since he is a devroom responsible... Yeah, that's cool. That's good enough. Maybe make it look good again on the... Yes. Something's working on it, making noise.
There we go. Okay, I think it's good. This is not what we want... that is not what I want. You've got the control video. I think the external screen is not on the mirror right now. Oh, shoot. Sorry. Oh, no worries. There we go, there we go.

So basically, it shows you what video we're reviewing, and it gives a little bit of instruction on what to do. If you scroll down a bit for me... So this is the video; you can watch it, and then if you decide something's wrong with it, you select the bottom option, "the video has problems". That one. Yeah, no worries. There we go. And now we can select: it starts too early, or it starts too late. We also have previews of the audio channels, because the microphones over here, like the lapel mic, go through a mixer and then via XLR into the camera, but we also use the on-camera microphone as a backup, which goes to the other channel. So we actually have two channels, and if something is wrong with my lapel mic or whatever, the speaker can at least select the on-camera microphone as a backup. The third option is both mixed together. Yeah, yeah. We used to have some A/V sync issues in past years, so we can fix that up here as well. And if that's not good enough, people can enter an explanation here, and then the talk is marked as broken.

It's a really basic, hand-holding interface. It's much easier to use this time around than it was last year; if you were a speaker at FOSDEM last year, you probably won't recognize this, but it's exactly the same thing underneath. And the backend is just scripts acting on it.

Yeah, okay. Stereo, wow. So, does fiddling with this (if you say, here's the other channel) actually reprocess the video, or does it just send a message to somebody with a clue? No, no. Only if you have a more complicated problem does it send a message to somebody with a clue: if you say, actually, it's completely busted.
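The channel choice shown in the review form (lapel mic on one channel, on-camera mic on the other, or both mixed) maps naturally onto FFmpeg's pan audio filter; a sketch, with option names of my own choosing and the lapel mic assumed to be on the left channel:

```python
def audio_filter(choice):
    """Return ffmpeg -af arguments selecting the lapel channel
    (left, c0), the on-camera backup (right, c1), or a mono mix."""
    filters = {
        "lapel":  "pan=mono|c0=c0",
        "camera": "pan=mono|c0=c1",
        "both":   "pan=mono|c0=0.5*c0+0.5*c1",
    }
    return ["-af", filters[choice]]

print(audio_filter("camera"))  # -> ['-af', 'pan=mono|c0=c1']
```

Whatever the reviewer picks in the form just changes which of these filters the automatic reprocessing run uses; no human needs to touch the video.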
You'll need to take a bit of this and a bit of that, say. The form up there is all processed automatically by the system, so every change you enter there will be used. If you enter free-form text down here, then somebody who has a clue will have to log in and see what's happening. In most cases we'll have to say, well, sorry, we can't help you then. Or we can say, well, here's access to the raw data if you want to go ahead yourself, and then I'll inject the result back into the system. That works too; we do that occasionally.

So someone could get access to all the raw streams if they needed to? Like, if the recording dropped out for part of the talk, so you have to use this bit, flip to the other one, and then put it back in; now you need to do some video editing. Right, right. Is there a way to do that? Well, we don't actually throw the data away. It's on the server, some of it is on SD backups, et cetera. So there is an option to do that. It's more work, and we won't do it ourselves, but if a speaker says, I want to invest the time, then we'll go, yeah, sure, here are the files, have fun with it. And when they give it back to us, we'll upload it into the system and publish it as well. But other than that, we don't do much of that.

Any other questions? We still have five minutes to go... no, we have one minute and ten seconds left. So. Are you doing lots of reviews for FOSDEM next year? I hope to be here next year; I don't know yet. Andy knows that I'm moving to Cape Town in a week or so. Okay, so thank you, Wouter, for presenting.