Great. Hi. My name is Pollo, I'm Nicolas, and we're going to talk about the DebConf video team: all the wonders of how to set things up, how our gear works, all the funny bits you've missed because you weren't there at midnight last night. All the great things about working on the video team.

So what's the video team? We're the annoying people that tell you to stand up and wait for the microphone. What we do is record the events of the Debian community: we record all the conference presentations, and through streaming we allow the participation of people who couldn't come to the event. So we allow remote participation, and we try to have tons of fun working towards a free software and hardware conferencing video stack. Yeah, we're like 90% there. Pretty much.

The DebConf video team has lots and lots of people. The people on this slide are kind of the core of the video team; some of them are here, some of them aren't. But the video team wouldn't be able to work without the dozens and dozens of volunteers that help us at conferences. So thanks to everyone who helps with the video stuff. And also thanks to the sponsors, because the recordings that we do, the hardware that we buy, the resources that we use wouldn't be here without the Debian sponsors. So thanks a lot to all the sponsors of Debian, and to Debian for buying all the stuff. Yeah, sure.

So, system overview. The microphone is dropping out. Anyway, see, fun with the video stuff. Our system is quite complicated, and we normally divide it between audio and video. As you can see on the projector, our audio is all geared around the mixer that is in the back of the room here. We have four microphones in total: two headsets and two handheld microphones, all wireless. So there's a bunch of cables going all through the room for this.

And then there's the video. We have two cameras per room; at larger conferences like DebConf, we have multiple rooms. In a single room like this, we're going to have two cameras. The first one is... I'm going to use a microphone that works. Fun with the video team. So the first camera is to record the presenter, and the second one gives us a shot of the audience for questions. It also allows for a more dynamic recording of people talking, so you don't only have one camera filming things. These cameras are both wired to a video mixing console, which is basically a PC with two capture cards used for capturing the video.

We also capture the presenter's laptop. If you look at the podium here, there's an HDMI cable that goes out from my laptop, and through a mess of cables and equipment this goes to a box that converts the signal and allows us to get it on the stream as well. I think that's pretty much it for the overview.

So, a close-up of the video recording equipment. That's one of our cameras. We got very decent camcorders in 2017. We have six of them, which allows us to record three rooms with a consistent setup. They're 4K-capable camcorders with SDI outputs; we only use the digital output, which feeds into the capture PC. The cameras are 4K-ready, but we don't record in 4K for technical reasons: the footage is really, really large and our mixing PC couldn't handle it. So we're recording in 720p for the moment.
Yeah, I think there have been quite a lot of improvements in the software stack. I know, for instance, that the CCC records at higher resolutions now. But 720p for a conference video is pretty good; at least we consider it pretty good.

We also have two tripods, which give us a full set of hardware to record MiniDebConfs. When we need more, when we want to record three rooms, we hire some hardware.

We have tally lights. Tally lights allow the camera operators to see that their camera is on stream: there's a light that goes on when the camera is enabled. They're basically just an LED that gets activated by the DTR line of an RS-232 serial connection. So we can extend the run very far with just a few RJ45 adapters and a Cat5 Ethernet cable that runs all the way through the room.

For the audio, we've renewed the equipment this year. We got a new mixing desk with more inputs than what we had, plus new microphones and receivers. Thanks to FlutterMouse for sponsoring the mixing desk, and Debian gave us the rest of the audio equipment. As we said: two headsets, two handheld microphones, and four receivers for all of that. Sadly, these receivers work in a specific frequency band, and we couldn't take them to the US, for example, because countries around the world have different laws on radio frequencies. The receivers are locked to a certain band, and the allowed bands change from country to country. We decided on this setup because most MiniDebConfs are in Europe, so this works for greater Europe. But if we went to the US, or to Asia for a MiniDebConf, we would have to hire some more gear. Wireless regulations around the world are a pain in the ass.

We've put most of the audio recording gear inside a flight case that we can just roll around, so it's pretty much all in one place. We used to have a pile of smaller flight cases, and it was quite a mess to ship everything around. So we've streamlined it to four packages and two tripods, which is manageable.

The way we capture the presenter's laptop is through a project called HDMI2USB. It runs on a board called the Numato Opsis, which is open hardware. Isn't it? Yeah, it is. And it has an FPGA inside, onto which we flash the HDMI2USB firmware. The board has two HDMI inputs and two HDMI outputs, so you can feed it two different inputs and route them to the outputs, and it has a USB output as well. We take this USB output and plug it into the recording PC, so it basically gives us a way to capture what the presenter is showing. We need the multiple outputs because one of them goes to the projector in the room; we're using this matrix board to be able to record at the same time. It's really nice, because we have one output going to the projector and one output going to the screen in front, so we can look at the screen in front and kind of peek at the audience rather than turning around like I've been doing since the start of the talk. So I'm going to try to actually stay in the right place.

We've been using that project for a while. In the beginning it wasn't the most stable, but it's really improved, and nowadays we basically don't have any problems with it. It's a really solid project, so thumbs up to those folks. It's HDMI2USB. Yeah.
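[Editor's note: to make the tally-light mechanism concrete, here is a minimal sketch of how an LED hung off the DTR pin could be driven from the mixing host with pyserial. The device path and timing are invented for illustration; this is not the team's actual code.]

```python
import serial
import time

# Opening the port gives us control of the RS-232 modem-control lines;
# the tally LED is wired across DTR, so toggling DTR switches the light.
# "/dev/ttyUSB0" is a hypothetical device path for the serial adapter.
ser = serial.Serial("/dev/ttyUSB0")
ser.dtr = False                # start with the tally light off

def set_tally(on: bool) -> None:
    ser.dtr = on               # LED lights up while DTR is asserted

set_tally(True)                # this camera is now on program
time.sleep(5)
set_tally(False)               # camera taken off program, light off
```

Since DTR is just a voltage level on the serial connector, no data traffic is needed at all, which is what makes long Cat5 runs with simple RJ45 adapters workable.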
And so basically this device presents itself as a webcam over USB, and we have a single-board computer plugged into it: a MinnowBoard Turbot, which I think is an Intel Atom based system. Fanless, pretty nice, tiny. It all fits in a 1U case. So this large case, I think I have a picture of the inside of it, is one tiny board and then lots of space. We had room to put a switch and lots of stuff inside. It's kind of tiny. It's nice.

For the live mixing of the presentations, we use a desktop PC. This desktop PC is connected over the network to the presenter slide capture PC, and connected over SDI to the two cameras. So it needs a fairly big CPU. We managed to make the system work with lower-end i5s, I think, last year in Taiwan. It wasn't great: we had to shrink down the previews and such to keep the CPU usage okay. But with this machine we can have full-size previews, which is pretty nice. The bandwidth needed for the multiple streams coming from the cameras is quite high, so mixing them live takes quite a lot of resources.

This computer is also used as a NAT gateway, so that all the computers on the video network can access the internet through whatever kind of uplink the venue might provide. For instance, here, the uplink is a very long cable run that goes through a switch to another room through a window. It's a bit messy, but we only need one network drop and then we're self-sufficient on the network. So we can make do with pretty much any situation a venue can throw at us. And yeah, it works well.

For the live mixing software, we're using something called Voctomix. It was written by the folks at the CCC, the C3VOC team. It's great. It's built in Python, and it's all in Debian. What more can I say?

When you look at the mixing software, which is this window on the left, you have the two camera inputs and a slide input, and then a preview of the live mixed version. This is the GUI on the desktop that we use over there. Here we've put a few previews of the actual stream, so we can make sure that we are actually streaming stuff. And the team communicates on IRC, so the director has an IRC window open to get feedback from the audience or anyone else who might have something interesting to say. For instance, here people are talking about the scheduling of the next sessions, I guess, in this window. On the bottom left is just a view of the recordings, so we can make sure the recordings are enabled and keep recording. I personally lost half a day of recordings once because nothing was being recorded, so we've added that since. It's always a good thing to check that you're actually recording something. So yeah: Voctomix, all packaged in Debian. Thanks to the C3VOC for building it. It's great.

Live streaming, I guess, is my area of expertise. Basically, the mixing software outputs a raw video feed that we need to push somewhere. So in every room, the mixing PC encodes the feed as H.264 video and AAC audio and pushes it over RTMP to a streaming backend, which runs nginx with the RTMP module. RTMP is the industry standard for pushing real-time audio and video feeds.
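[Editor's note: as a rough illustration of that encode-and-push step, here is the shape of such a pipeline expressed as a plain ffmpeg invocation. The input file, bitrates, and RTMP URL are made-up placeholders, not the team's actual configuration.]

```python
# Hedged sketch: encode a raw mixer feed to H.264/AAC and push it over RTMP.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "raw-mix.ts",              # stand-in for the mixer's raw output feed
    "-c:v", "libx264",               # H.264 video, as described in the talk
    "-preset", "veryfast",           # trade compression ratio for encoding speed
    "-b:v", "2500k",                 # roughly the 2-3 Mbit/s mentioned in the Q&A
    "-c:a", "aac", "-b:a", "128k",   # AAC audio
    "-f", "flv",                     # RTMP carries an FLV-muxed stream
    "rtmp://streaming-backend.example.org/live/room1",  # hypothetical backend URL
], check=True)
```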
And the RTMP module basically slices the video and puts it into the HTTP Live Streaming format, or HLS, which is a standard-ish thing originally developed by Apple that then became the industry standard for streaming on the web. The HLS format is basically a playlist of chunks of video and audio that you put in a directory served over HTTP. So the client just does HTTP requests in a loop: fetch the playlist, then fetch the slices one by one and play them.

What we have for the front-end is basically HTTP caching. We don't need anything more than an HTTP cache, because it's just a plain directory. So it's really easy to distribute front-ends around the globe to reduce latency for the player.

We also do nice stuff like downscaling the stream. The central streaming backend runs FFmpeg on the incoming stream to create downscaled versions with lower quality and lower bandwidth requirements, and HLS allows adaptive streaming: the client can choose which files to fetch according to the bandwidth it actually has. This allows streaming all around the world for low-bandwidth remote attendees. It's worked, I think, somewhat better than what we had before. We used to use Icecast. Yeah, Icecast, what was the protocol? Yeah, I see the light blinking. Which is all right.

Anyway, the nginx RTMP module is packaged in Debian. It's in stable-backports, so it's going to be in the next release as well. Pretty cool. FOSDEM is actually really happy that we packaged the nginx RTMP module, because they used it but had to package it themselves. We collaborated with them to make sure it was included in the Debian nginx package, so FOSDEM uses these packages now as well, which is pretty cool.

So, yeah, geographic distribution. We don't use GeoDNS; we use GeoIP on the server side. Basically, when you connect to the server, it can redirect you to the appropriate front-end. At a MiniDebConf we usually have only one front-end, because there's a smaller audience than at DebConf and less setup time, so we don't do the whole shebang of having 20 front-ends and everything. It works okay; the extra front-ends are just something fancy. I think FOSDEM really needs their front-ends because they have a lot of remote attendees. It's less of an issue for us. We have direct access to the DNS zone thanks to our friends at DSA, so we can just push our changes, and we're autonomous in that area as well, which is nice.

Once the videos have been recorded, one thing we do before publication is review them. The system we're using is called SReview. It was written by Wouter, who is a member of the video team, and it's also in Debian. Basically, SReview manages preview generation, and you can adjust the start and the end of the video. For example, if we started late, we need to adjust when exactly the talk started and when it ended. Then it cuts the video, transcodes it, and archives it for us. We import the schedules of all the conferences we go to into SReview, and SReview manages metadata so we have metadata for the videos afterwards. For example, we'll talk about it later, but we also upload our videos to YouTube, and that gives us metadata to use there.
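[Editor's note: to make the cut-and-transcode step concrete, here is a hedged sketch of the kind of FFmpeg invocation that step boils down to. The file names and timestamps are invented, and SReview's real pipeline is considerably more involved.]

```python
# Hedged sketch: cut a reviewed talk out of the raw room recording
# and transcode it for the archive. All names and times are placeholders.
import subprocess

def cut_and_transcode(raw: str, out: str, start: str, end: str) -> None:
    subprocess.run([
        "ffmpeg",
        "-i", raw,
        "-ss", start, "-to", end,         # reviewed start/end points of the talk
        "-c:v", "libx264", "-crf", "21",  # transcode the video
        "-c:a", "aac",                    # and the audio
        out,
    ], check=True)

cut_and_transcode("room1-raw.mkv", "videoteam-talk.mp4",
                  start="00:04:12", end="00:49:30")
```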
This system is also used by FOSDEM, and the idea behind it is to make reviewing talks as easy as possible. For example, what FOSDEM does is ask the presenters to review the talks themselves. We do it ourselves, because we have a large team of volunteers for it.

So the talks are recorded and transcoded; where do we put them? We have a meetings archive. The main meetings archive is an FTP server. Basically it's a bunch of files on an FTP server, which is kind of a git-annex repo, but sometimes we don't update the metadata, well, whatever. So there's a random mix and match of files: there are videos, there are subtitles, there are slides. Over the past few years, we've tried to fix this mess and build a new Git repository with metadata for all the files in the archive. The repository's tools scan the metadata provided by the conferences: if a schedule has been written in a machine-readable format, they can scan it and ingest the metadata. This metadata can then be used to build a proper front-end for the DebConf video archive. We've had people looking at PeerTube; Tafre has built a plugin for it as well. It's a work in progress; eventually we'll have a really nice, fancy front-end with all our videos. If you're really good with web stuff, talk to us.

The next thing we do: we've come to the conclusion that whatever you do, whatever you say, people will upload your videos to YouTube. So we might as well do it ourselves, properly, with high-quality videos that we control, with good metadata, making sure there's a proper YouTube channel for our videos. Unfortunately, that's where a lot of the viewership is, so we might as well do it. This was kind of the first user of the metadata repository: we have a new pile of scripts that generates and uploads the videos to YouTube from the metadata repository. So you can have a look at our channel if you do YouTube.

One of the things that's great about YouTube is that it automatically generates closed captions. Captioning is something we try to do ourselves, but it takes a lot of time and effort. Most of the time the automatic ones are okay. I mean, the manually-made captions are way better than the automated stuff, but the automated stuff costs zero to make, so it's a compromise. And it's sometimes funny, because it has trouble recognizing accents. Sometimes, for example, when somebody speaks English with a German accent, it'll subtitle everything in German while the person is speaking English. It's very funny. Machine learning, right? The future.

Our setup is done entirely through automation; we try to automate all the setup that we do. It serves two purposes: it gives us a repeatable setup, and it documents the setup and shows what kind of crazy hacks we had to do to make things work. Basically, we have two layers of automated setup. We have a PXE server, which we use for Debian-installer preseeding. This installs just a stub of configuration, enough to start running Ansible on the freshly installed machine. Then we have a full repository with Ansible playbooks, roles, tasks, the whole shebang, on our Git repositories. So for example, when we set up a big conference with multiple machines, the first step is building the PXE server, and then we can boot machines from it, one after the other, specifying an IP address.
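[Editor's note: to give a flavour of what that kind of metadata ingestion can look like, here is a minimal sketch that walks a hypothetical archive layout and pairs each video with its sidecar files. The directory structure, field names, and schedule format are all invented for illustration; the team's real scripts live in their metadata repository.]

```python
import json
from pathlib import Path

def scan_archive(root: str) -> list:
    """Pair each video with any sidecar subtitle/slide files next to it."""
    records = []
    for video in Path(root).rglob("*.webm"):
        sub = video.with_suffix(".vtt")
        slides = video.with_suffix(".pdf")
        records.append({
            "slug": video.stem,
            "video": str(video),
            "subtitles": str(sub) if sub.exists() else None,
            "slides": str(slides) if slides.exists() else None,
        })
    return records

# Merge in titles and speakers from a machine-readable schedule, keyed by slug.
# "schedule.json" is a hypothetical dump of a conference schedule.
with open("schedule.json") as fh:
    schedule = {talk["slug"]: talk for talk in json.load(fh)}

for record in scan_archive("meetings-archive"):
    record.update(schedule.get(record["slug"], {}))
```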
And it'll automatically build the machines according to the different roles we have. The setup of the PXE server is scripted as well: we can just run a script and it creates a USB installer. So it makes for easy installs of a bunch of different machines with different roles and different capabilities. It works pretty well. Yeah, it's all right.

Everything is also documented in the repository; you can go and visit video.debconf.org. It's all built with Sphinx, the same thing Read the Docs uses. That website also has a live player for the MiniDebConfs: for conferences where people don't want to build a whole website, people can just go there and watch the live stream directly. We have two separate documentation sites, one for the actual Ansible documentation and one for our general setup, the hardware we use, some diagrams, these kinds of things. So we have manual documentation and Ansible-specific documentation, basically on two separate websites.

So this is a large mess, and it took a lot of changes to make it work; we've picked up a few lessons along the way. Ansible is great, but having a loose team of ten people working on Ansible when we meet maybe five or six times a year is kind of hard. One of the things we did to make that easier was to start using Salsa, the GitLab instance of Debian, with a merge-request-based workflow. It's way better: we now create merge requests and ask somebody to review them and see if it's okay, whereas when we were using Alioth, we were just pushing to master and expecting things to work.

We're also using continuous integration. We have different sets of tests running through GitLab CI on Salsa, using the Docker executor. First we run ansible-lint to lint the code, to see if we've made some stupid mistakes or if something is going to be deprecated in Ansible, because things tend to change. Then each role is tested individually to see if it works. Sadly, one of the big problems we have is that systemd inside Docker on GitLab CI just doesn't work. There's no way to make it work, at least as far as I know; we spent quite some time on this. So we just skip the systemd tasks and expect them to work.

So, I messed up: this image should have been on the previous slide. It shows, in a very thin, gray-on-gray font, the actual roles that we're testing. So yeah, there's a bunch of stuff that we test. It's pretty nice.

One of the big issues we have is that we have a lot of specific custom hardware, and it's really hard to emulate. And even then, it's not that easy: for instance, if we want to test the video mixing software, it's hard to have virtual sources that would let us do an integration test of our system. We haven't really found a good way to do that yet. Some people have started playing with automating the setup of KVM machines to replicate the network setup and some of the injection stuff. Again, if you want a fun challenge with a complicated setup, this might be something you can help us with.

Documentation is a challenge. It's a technical challenge, and as you've seen through our talk, our setup can be quite complicated. So one of the things we've learned, and encourage people with this kind of setup to do, is to set up good documentation using known platforms like Sphinx, which help you link pages together and tell people what to do.
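[Editor's note: as a rough sketch of the per-role testing idea, here is a loop a CI job might run over the roles. The paths and the per-role test playbook layout are illustrative assumptions, not the team's actual CI configuration.]

```python
# Hedged sketch: lint and syntax-check every Ansible role individually,
# the way a CI job might loop over them. Paths are hypothetical.
import subprocess
from pathlib import Path

for role in sorted(Path("ansible/roles").iterdir()):
    if not role.is_dir():
        continue
    print(f"== checking role: {role.name}")
    # ansible-lint catches style problems and upcoming deprecations
    subprocess.run(["ansible-lint", str(role)], check=True)
    # an assumed per-role test playbook exercises the role in isolation
    subprocess.run(
        ["ansible-playbook", "--syntax-check", f"tests/{role.name}.yml"],
        check=True,
    )
```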
Especially on a project like this, where we build the system five times a year and don't touch it the rest of the time. Yeah, the hardware is on a shelf in my office maybe 85-90% of the time; across a whole year it goes out for maybe a month, a month and a half. So it's kind of hard to make sure we don't forget stuff.

And the number of events we cover is slowly increasing. I think this month is quite impressive: I did Ubuntu Party last week with this setup, we're doing this MiniDebConf this week, and in two weeks there's another MiniDebConf in Hamburg. This is a great thing; there are more and more distributed Debian events around Europe, Debian or otherwise. But sometimes it's hard to find people to cover all these events, because we're not all lucky enough to get a lot of time off work, and travel is exhausting.

So training is becoming a very big part of it. Training is a big challenge, because doing the setup with very seasoned team members is quite fast: you know what to do, you're focused, you go really fast. When you have to tell people what you're doing and explain everything, it takes a lot more time. Just as a comparison: the setup I did by myself last week took maybe three hours; the setup we did here with new volunteers took a whole day. That's okay, but it's time-consuming and we need to account for it. And if we don't train people, the quality of the videos and of the recordings falls dramatically. So we have to do it, and we know we have to do it. We're really happy to have trained a new batch of people who are going to do great work in Hamburg in two weeks, I think.

And finally, the last technical challenge we have is unexpected problems at venues. We sometimes have to build the whole system in a few hours. Sometimes you get things like here, for example, where we had a ground loop, and what that does is go bzzzzzzz on the audio recordings. Quite troublesome. Sometimes you have restrictive firewalls: you arrive at a venue and find out that half of the ports you need are closed, and it's Friday night and the network admins are gone until Monday, so you have to deal with that somehow. Sometimes we arrive at a place, like in Taiwan last year, where they told us there was a PA system with speakers in the room. Yeah, a full audio system in the room, and you arrive and you find... this. And that's it. What was in the room? The mixer and the speakers together in a tiny, tiny small box, for a large room. I don't even think there was an XLR input on it; it was just a mini-jack you could plug your phone into or something. Luckily for us, we had someone with a full audio setup in the back of his van, parked at the university, so we could use that.

And one last thing: shipping this hardware is complex. It's not hard, it's just that there are a lot of boxes, you have to know where everything goes, and we ship around the world. At least the hardware I manage went to South Africa, then Canada, then Taiwan. It's going to go to Brazil next year; well, this summer it has to go to Israel first. So you kind of start to understand how the world's import-duty system works, and how to rent a van in random places around the world. It's actually quite good fun.
So, help is always welcome. If you want to be part of the video team, here or elsewhere, we're happy to train people. You can subscribe to our mailing list or find us on IRC in the #debconf-video channel. And one of the things we're failing to do is subtitles, so if you want to work on subtitling talks, talk to us. We can put you in touch with the right people who know how to do it, because I certainly don't, but I know where to find the people who do. We can root for you. Yeah.

And that's it for our talk. So we have, I guess, ten minutes for questions. Five, ten minutes. No? We don't? No. We set that timer to 35 minutes to leave time for questions. The video team knows. I mean, I think it's okay, but... All right. So, yeah, there's a microphone in the middle of the room if you want to ask something. Please use the microphones. One of the reasons we ask people to use the microphones is that if you don't, people on the stream won't hear you. We do have two ambient microphones in the room that can pick up noise, and if you push the gain really high you can hear someone shouting across the room, but we try to avoid that, because it's way better to get a clear voice through a microphone. But if everything was clear, then I guess we can go have some more coffee.

Yeah, I have a question. Aw. Have you ever used or tried or compared the mixing with OBS, Open Broadcaster Software? Is it different? Is it better?

I've touched OBS once. It's quite different. I think Stefano has done most of it. I guess it's better for streaming different outputs live, and it's mostly geared towards people with maybe a simpler setup: they want just one laptop and different cameras or screen grabbing. I don't think that's quite true. I've seen people with very complex setups, like a side cam, a face cam, a capture of their huge gaming laptop. OBS can handle that. The way I'd answer it is that OBS is really well suited to someone who wants a custom setup that they build for their own stream, whereas we want something that's repeatable, where every room has exactly the same setup, automatically configured, giving the operator as few options as possible. We don't want them having to figure out how to add a new camera.

Right. I'm not sure I understood: are the multiple streams coming from the venue, or are they split off an external server? That would change the bandwidth requirements, and maybe you don't always have enough bandwidth to push four streams from the venue.

The output bandwidth requirement is, I think, two or three megabits per second. Sorry? It is configurable; we can push the bandwidth requirement down if we need to. Usually when we're streaming several rooms at a DebConf, we've asked the venue for a very decent internet connection, because that's needed for the work that happens at a DebConf regardless of the video team. I don't think we've had issues with bandwidth, like with pushing our streams out.

One issue we had in Taiwan: we had the streaming backend server inside the university, open for the front-ends to talk to. The university firewall thought the front-ends were DoSing the backend server, so the connections were shut off. I think it was during the week, so we managed to get hold of the network admin to turn off the DoS protection on that machine.
We maybe lost ten minutes of streaming because of the DNS TTL while pointing things at another server. Yeah. Mostly, overzealous firewalls are more of a problem than actual bandwidth requirements.

So for bandwidth, you need only two to three megabits, not even megabytes, per second? Yeah, I'm pretty sure. That's it. Per room, yeah.

What was wrong with your microphone during the talk, Nicolas? I have no idea. I think we hadn't changed the batteries before starting. We changed the batteries last night, and now I see the receiver light flickering. I did it just now. I think there might be interference between the channels of two of our four microphones, because the lights are flickering in sequence. So, yeah. Oops.

As an organizer, I have to say that it was very, very pleasant to work with you guys. Thank you very much. Thanks a lot for organizing great events all the time. It's cool to be around.

So maybe time for one more question. Yeah. If you have a last one, make it count. If not, we can get more coffee before the next talk. That works. Thanks a lot. Bye.