Okay, 9:30 on a lovely Friday New York morning. This is our first talk of the Debian multimedia track, and we have Eric here, also known as Edo Rosetta, and he's going to present his talk on conference video. Eric. Okay, so as Adrian said, we'll talk a bit about conference video. A little bit about me: I've been a Linux user since 2000, Debian since 2001. I've worked on video this year at this conference; this is my eighth time working on video for free software conferences. Just a few words about why to do it. Not everybody from a community can get to an actual physical conference... so what is ringing? Do you hear that? Are the ambient mics in the room? The ambient mics are on the piano. Okay. What about that? All right. You certainly can't hear that. Yeah, well, you know, parallel walls. Yeah, look up above your head. Yeah, well, that's... Okay, so anyway, not everybody can get here, so it's good to be able to provide those live streams, if you can, with IRC feedback, so that you expand the audience. The recordings provide a historical record. Having the archives up online and having the streams increases the exposure for the event. And the live streams: for me, the first time I listened to the Linux Audio Conference live streams back in 2003, it was just audio then, but suddenly the people that had previously just been text and email had faces, they had gestures, they were so much more real; the whole community, there were real people there. That's why I've been doing this. And then travel is really expensive, and it's going to keep getting more expensive, so it's going to become more and more difficult for people to travel internationally to conferences. If we can do a good job of making videos and covering the events, I think that helps everybody. Here's a quick run through some of the conferences I've worked on, and what we did there. 
As I said, the first two years of the Linux Audio Conference were audio only, with just a webcam image being captured every 10 or 15 seconds, so it was like four-frames-per-minute video, if you want to call it that. Then 2005 was the first year we did video. This was a really simple setup: just one camera, one camcorder, and a laptop, the laptop doing the stream encoding and sending it out to an Icecast server. Pretty much the same setup in 2006 and 2007. The first four years were in Karlsruhe at an audio/video institute, so we had a really nice facility with dedicated audio engineers, and we didn't realize until we left how much we relied on them. We got to Berlin and we had to do everything ourselves, and it was in a run-down old university, with wiring that nobody knew where it went or what it was connected to, and it pointed out just how important a large team is, first of all, to get this stuff done, and the importance of testing the network beforehand. And then DebConf7 in Edinburgh was the first year that DV Switch was available, and my first DebConf. In 2008, Ryan back there, get a shot of Ryan; Ryan's on the DV Switch machine here today, but I went and helped him a little bit with the video at linux.conf.au. And then two weeks later, with no advance notice, he came to Germany and helped out at the Linux Audio Conference. Later that year, DebConf8 was in Mar del Plata, Argentina, again using a multi-camera setup with DV Switch, and finally, this year, here in New York City. So, well, most of you are on the video team, so you've already seen this, so you're probably not going to panic, but maybe somebody on the stream might. Often, the first time you introduce new volunteers to what we've got here, it can be a little overwhelming. So I'm going to break this down into smaller components. 
But this here is a rough outline, a basic schematic, of what we've got downstairs in Davis, and this is a very similar, just slightly less complex setup here in this room. These then are the servers over in the next building: file storage, transcoding, and the Icecast server. And then we have what's out there on the net: various DebConf machines for streaming and for our archive. So, to break that down, I'm going to go through some example setups. First a minimal setup suitable for one person to operate, something you could easily walk into a user group meeting with and set up with limited lead time: just a camcorder, a tripod, and a laptop, and if you want to stream, a streaming server somewhere off-site to connect to. Then, making it just slightly more interesting, capturing the projector output, then adding an audio setup, because in general, even with really good camcorders, the mics aren't all that great. It's better if you can use these headsets, and some room mics, and mics to pass around the audience, to get good quality audio into the stream. And from there, it's just adding more cameras, which of course adds more people and more setup time and so on. But here's a really basic setup. Just one camcorder, and a laptop connected to the camcorder by FireWire. Use either the internal hard drive of the laptop for storage, if it's big enough, or if not, just use a USB drive. And then a network link to an Icecast server and, hopefully, some happy viewers. The laptop just needs something like dvgrab or Kino to capture from the FireWire output. DV eats up a lot of storage, so depending on how long you intend to be recording, you need to plan for it. That buzz is... sorry. It's roughly about 13 gigabytes per hour, so just plan ahead for that. And finally, if you don't tell people where the stream is going to be, it's fairly obvious they're not going to tune in. We're somewhat fortunate with DebConf. 
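As a back-of-the-envelope sketch of that storage planning, using the roughly 13 GB per hour figure above (the number of talk hours below is just an example value, not a DebConf schedule):

```shell
#!/bin/sh
# Rough disk-space estimate for raw DV capture.
# Raw DV runs at about 25 Mbit/s, which works out to roughly
# 13 GB per hour of recording.
DV_GB_PER_HOUR=13
HOURS_OF_TALKS=6   # e.g. one full day of talks
NEEDED=$((DV_GB_PER_HOUR * HOURS_OF_TALKS))
echo "Plan for at least ${NEEDED} GB of local or USB storage."
```

For a multi-day, multi-room conference the same arithmetic quickly pushes past a single laptop drive, which is part of why the bigger setups record to a file server.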
We've got an established relay network that stays at the same place, so people know where to find us. Kino is just a basic video editor capable of capturing to disk. It's perfectly adequate if you don't want to stream, and it also gives you a view of what it is you're recording. If you are streaming, you can use a command line piped together like this: dvgrab to capture; ffmpeg2theora converts the DV into Theora and Vorbis and wraps it up in Ogg; and then oggfwd just talks to the Icecast server to pass the data through. But that's very simplified. Doing this from the command line gets pretty ugly pretty quickly. Just to briefly go through it: the stuff in blue is your metadata, speaker, title, timestamps, and so on. And the green, if you were to do it this way, is where you would be writing your files. The top one, in the dvgrab, would capture the full DV output, and the tee captures the Ogg. But, like I said, it's better to wrap that up in some sort of script or a more user-friendly piece of software, which I'll get to in a moment. So, putting up one camera and pointing it at the presenter is nice. I mean, it's better than nothing; it's a good record of the event. But it's really difficult to get the presenter and their slides from the projector with one camera and have both be visible in a useful way. So it's good to have some way to capture the output from the presenter's laptop, which is what we're doing here. We have the VGA out going to a device called a TwinPact, sitting up on the AV cabinet over there. That captures the video out and converts it to DV, and also passes it through to the projector in the room. From there, it appears to the laptop as just another FireWire input, just like another camera, and from there, it's the same. But since we have two sources now, you need a way to choose between them and possibly mix them. 
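Spelled out, that one-laptop pipeline looks roughly like this. This is a sketch, not the team's exact command line: the Icecast host, port, password, and mount point are made-up placeholders, and ffmpeg2theora's bitrate and metadata flags are omitted. By default the script only prints the pipeline, since actually running it needs a real FireWire camera.

```shell
#!/bin/sh
# Capture -> encode -> stream in one pipe (sketch).
# dvgrab reads raw DV from FireWire and writes it to stdout;
# ffmpeg2theora converts it to Theora+Vorbis in an Ogg container;
# tee keeps a copy of the Ogg on disk; oggfwd pushes the same Ogg
# stream to an Icecast mountpoint.
ICECAST_HOST=icecast.example.org   # placeholder server
ICECAST_PORT=8000
ICECAST_PASS=sourcepassword        # placeholder Icecast source password
MOUNT=/talk.ogg

PIPELINE="dvgrab --noavc - \
  | ffmpeg2theora - -o /dev/stdout \
  | tee talk.ogg \
  | oggfwd $ICECAST_HOST $ICECAST_PORT $ICECAST_PASS $MOUNT"

if [ "${DRY_RUN:-1}" = 1 ]; then
    echo "$PIPELINE"   # show what would run
else
    eval "$PIPELINE"   # actually capture and stream
fi
```

Note this tees off the encoded Ogg; for a full-quality archive you would also keep the raw DV, as the blue/green slide shows.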
And depending on the capabilities of the laptop, you may need a separate machine to do the stream encoding before you send it off to the Icecast server. But as I said, when you've got two sources or more, you need a piece of software to help you mix those. So this is a screenshot from DV Switch, which is what Ryan's back there operating for us. What you see here, this main window, shows the program out; that's the video that's actually going out to the recordings and to the live stream. And down here you have thumbnail images of... something funny happening on IRC back there? No, nothing, nothing, nothing. Picture in picture of the picture. Picture in the picture of the picture in the picture. Yeah, something like that, okay. So here's one camera, the main camera from the back, on the presenter; an audience camera with an empty audience; and the capture from the TwinPact of the slides. We also have the ability to capture the audio directly from an ALSA-supported audio device, and to just generate blank, or in this case black, DV frames. And this final one is the loop file, just played as a loop while we're not live. So yeah, we've got the two inputs, picture in picture, and everything that was in that earlier command line is still happening; it's just been split up between these different programs in blue here. So we have dvsource-firewire, which runs the capture on each of the DV sources; dvsource-file plays the loop; and the dvsinks are where the main output goes. dvsink-files is where we record: you can either send that to a file server somewhere else, or you can record locally and transfer the files later. And dvsink-command is used to run the encoder and send it off to the Icecast server. So once you've got multiple video sources, the next thing you might want to consider is improving your audio. Like I mentioned, the built-in mics on camcorders tend to not be all that great, and they also don't give you much flexibility in where you place your mics. 
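To make that split concrete, here is a sketch of what a session like this one might launch. The flags are from memory of the dvswitch-era tools and may differ between versions, and the host, port, ALSA device, and filenames are invented, so treat it as a plan rather than a copy-paste recipe; the script only prints the commands.

```shell
#!/bin/sh
# Sketch of a multi-machine DV Switch session. The mixer GUI acts as
# the server; every source and sink connects to it over the network.
HOST=192.0.2.10   # made-up address of the mixer laptop
PORT=2000

MIXER="dvswitch -h $HOST -p $PORT"                     # GUI: program out + thumbnails
MAINCAM="dvsource-firewire -h $HOST -p $PORT"          # main camera box
SLIDES="dvsource-firewire -h $HOST -p $PORT"           # TwinPact box (VGA -> DV)
LOOP="dvsource-file --loop loop.dv -h $HOST -p $PORT"  # holding loop between talks
AUDIO="dvsource-alsa -h $HOST -p $PORT hw:1"           # USB audio from the mixing desk
RECORD="dvsink-files -h $HOST -p $PORT talk"           # program out to disk
STREAM="dvsink-command -h $HOST -p $PORT -- ffmpeg2theora - -o /dev/stdout"

# One process per line; on the real rig these run on separate machines,
# partly because of FireWire's short cable limit.
for cmd in "$MIXER" "$MAINCAM" "$SLIDES" "$LOOP" "$AUDIO" "$RECORD" "$STREAM"; do
    echo "$cmd"
done
```

In the real setup the dvsink-command encoder would pipe on into oggfwd, as in the single-laptop pipeline; that part is omitted here.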
So as you can see here, I like these wireless headset mics. They prevent the problems you get with handhelds, which speakers sometimes don't manage well. If they've got the handheld mic in their hand, they might gesture with it while they're talking and you lose their audio. Or they might hold it where they need to, but turn away to look at their slides, and then you lose them. With this, it's attached; they can't get away from it. Then these condensers in the middle of the room give us a chance to give the recordings a bit of a sense of the ambiance of the room. And if the audience starts to ask questions before they've been given a mic, you can bring that up a little and get a little of what they're saying. And of course, having all these various audio inputs, you need some sort of a mixer. Ideally, you want one that can do two independent mixes: one to go to the speakers in the room, and on that one you never want to send the room mics into the room, because you get the feedback loop and it's painful; the other mix is for the recordings and the streams. But this does then get into more than one person can reasonably handle on their own, and also more time for setup and testing. So how do we get that audio? As we mentioned earlier, we're using dvsource-alsa here and a USB audio device, which is, I think, the ideal solution. The TwinPact has audio input, but it's not the best option, because if the laptop output gets disconnected, it'll stop sending DV, and so stop sending audio into the stream. And if you've got nice prosumer cameras, like the one in the back there with proper balanced mic inputs, you can send the audio directly into that. But if you are running tape in the camera just as a backup, it's nice not to do that, and to have the camera mics go to tape instead, in case everything else in the room just falls apart on you. Then from there, it's making it more complex. 
Adding a second camera gives you a chance to show shots of the audience and provide an alternate angle. Provides some variety. But then of course FireWire has a five meter cable limit, so depending on the size of the room, and in a room even this size, we need at least two capture machines. In this case we've got three: one for the TwinPact, one for the audience camera, and the DV Switch machine is grabbing from the main camera. And yeah, I said it in the opening session on Sunday: this just can't happen without a team. Once you've gone to this level of complexity, with this many machines and cameras and mics, yeah, it takes a team. As far as the software side, each new source is just another dvsource-firewire running somewhere. So we're back to the overall big diagram of what we've got here this year for DebConf. Hopefully it makes a little more sense now. There are other things to talk about; how am I doing on time? Oh, I'm out of time. Yeah, it takes a good deal of planning to make things work, and even then things don't always work. It's necessary to have a workflow established for reviewing and encoding the files afterwards and getting them posted up online. In many cases, getting those recordings up shortly after the event is almost as good as the live streaming, to keep the sort of momentum and interest around the topics discussed at the conference going. Just a note on speaker wrangling. Often people will suggest: wouldn't this all be easier if you just made the speakers all use the same laptop? Wouldn't this be easier if you got the speakers to all use the same slide format? And it just doesn't work. Everybody wants to use their own laptop, write their own slides, do their slides in the talk before, or, if there's no talk, come set up and do it like I did today. It's really important to protect your cabling and your equipment by taping it all down. 
It also protects your team members from tripping and falling on stuff. Just make sure you've got enough space. And some thank-yous: to Jörn, for starting the streaming at the Linux Audio Conference all those years ago and getting me inspired. The PyCon video team has started using the same tool set as us here at DebConf, and we've gotten a lot of help from them, and they're here today; show them. Yeah, Ryan and Carl back there. And the organizers of the conference, without which, of course, we wouldn't have anything to record and stream. And, as I've said many times, the video team: you guys rock. And Holger: there was a time earlier this year when it wasn't clear if he was coming or not, and I said to a number of people, I just can't imagine DebConf without Holger. I'm really glad he decided to come and help out again. And Ben Hutchings, the author of DV Switch. Everyone at Xiph.org, for the codecs we use and for Icecast. And you, for coming. So, that's it. Thank you, Eric, for your excellent job. Are there any questions? Did they ever fix dvgrab to actually work with the new FireWire stack? Yeah, we're using it now. Okay, so the whole kit works with the new stack? Yeah, so this year the DV Switch, the DebConf video machines, are all running a Squeeze snapshot and a 2.6.32, I think, kernel. So it is now working with, what do they call it, the new stack? Juju. Yeah, so yes. Oh, I find it impressive, what you're doing, to have a mobile studio like that. But do you have stats on how many people look at the videos, at the video content? Is the red light on at the bottom? Yeah, it's on. It's on. Keep talking. So do you have stats on how many people look at the videos that are produced? It's a little bit difficult, because we've got this distributed network of streaming servers, and so we have to try and pull that together. 
And I'm not always sure that Icecast reports those statistics completely accurately, but earlier in the week one of the server admins was saying there were several hundred people at peak, amongst all those different servers, watching the streams. But we also had someone who came up with a nice page with the videos, where he was pulling from the Icecast servers, and people came through him. So if someone sets up their own relay, we don't have those stats. So it's a little difficult to tell, but it's in the range of several hundred, at least for the live streams. And I think, Carl, for PyCon, with the videos you uploaded, you've got good stats there; you've got hundreds of thousands of downloads. Yeah, we don't actually stream live, we just post it. Yeah. Use the mic? Use the mic? We post this stuff to blip.tv, and each year we've had about a quarter million hits on the recorded videos. So yeah, it gets out there. There's a need; I see that. There's a need there. Yeah, we get a lot of good feedback from people, so people appreciate the work we do. I think one of the biggest problems is actually advertising that this stuff is available. Periodically I run into people going, yeah, I've heard about PyCon, boy, I wish I could come. Well, did you watch the videos? What videos? Yeah, someone pointed out, I think Zack, the DPL, pointed out that we really need a DebConf video website that explains what we do and advertises what we do. I would suggest Miro Community. It's a nice video aggregation platform, free software. Yeah, yeah, so: getting the word out. There's one more talk about that. Any more questions here in the room? If not, I have a question. Eric, do you see room for improvements, and if so, where? Oh yeah, definitely room for improvement. One of the biggest things is just good audio equipment that doesn't buzz and hum. No, one of the biggest things is really organizational. 
And just making sure that the overall organizing team and the rest of the conference understand what we're doing and what our requirements are in advance, and also understand how important it is for the people who can't get here. For an organizing team that's mostly volunteers, who have never done this before or only done it a few times, it's sometimes hard to understand, I think, why the video team needs so many resources. So on the social side, there's that. But then technically, there's definitely room for improvement. In the development version of DV Switch, Wouter, who's on the camera back there, wrote a fading function so that we can fade between shots. As with so many things in Debian, and in free software in general, it's just people power, people time. Ben Hutchings, the main author, has in the past year become part of the Debian kernel team and just hasn't had time to work on DV Switch. The reason we're not using the fading function this year here at DebConf is that he didn't have time to make a release. The development branch is also in the middle of a transition that inverts the sense of the connections. Right now, DV Switch, the user interface, acts as a server to which the sources and sinks connect. But it's in the middle of a transition to using RTP and, what's that library called? Live555. Yeah, 555. So that each source and sink becomes a server, and the interface, the switching software, becomes a client to all of those. And I think once that transition is completed, that'll give us more flexibility. We could have Ryan sitting in Australia doing the directing through some sort of web client or something, I don't know. 
I have long talked about the need for a dvsource-jack, to get the audio from a JACK stream, so that we could both record the audio off the mixing desk separately, independently of DV Switch, and also process the audio in any way we needed within JACK. That would be interesting. There's a long to-do list on the Alioth project page too. So what would be the advantage of having the audio stream recorded separately? Because I would think you would... Just as a backup. Just in case you lose the whole video setup, if DV Switch dies, the audio is really... Oh, right. Not to mix it in afterwards, but basically... So if you don't have the video, at least you still have the audio; that's what you mean. And then, yeah, you could also possibly use that afterwards in an editing situation. Do you know how much work that is? Yeah, that's why it never happens, but I'm just saying it would be nice. But I don't know. Do you guys have any, Carl? Ryan? There's a dozen or so issues on the issue tracker. Yeah. Make the fade button work. The most requested feature is the picture-in-picture swap, so that the main and the picture-in-picture can be reversed real quick. In the interface, these buttons here, the little speaker icons, select where your audio is coming from. So in our setup, we would generally have it on this blank video source, with the audio coming from the mixer. In what you see here in this screenshot, the slides are the A source, and the picture of me there is the B source. And what Carl's talking about is some way to click a button and have those just reverse. Right now, we don't have that. I'd also like to be able to crop the picture-in-picture, so that I'm not having to use the full frame of that feed for the little thumbnail overlay. Yeah, so that you could click in the thumbnail, drag, and select just this much. Actually, the backend already supports that; it's just that there's no UI for it. That would be very useful. Yeah, lots of room for improvement. 
How about A/V sync? Do we have to take care of audio latency and compensate for it, or is what we're streaming right now perfectly lip-synced? dvsource-alsa has a delay setting, so that... what's the tool? With JACK, you can run jack_delay. Yeah, to check your latency, and adjust the sources to match. Are you worried about it drifting, where it progressively gets worse? I think the way this works, dvsource-alsa becomes the clock master, and the camera sources just drop or repeat video frames, steering by the audio from it. What's that? Well, part of it is that because this is all being mixed in real time, whatever sources are coming in right now are all in sync, because it's all right now. And if something starts drifting, the extra data just gets tossed and everything gets realigned. So it's probably not scientifically precise, but it's close enough. That reminds me, not specifically related, but I just wanted to mention that we are also using FFmpeg 0.6. It's not in Squeeze, it won't be in Squeeze, but it's in experimental. The experimental. Whatever. What's the? Yeah, that repo. We backported it to the Squeeze snapshot that we're using, and also the latest ffmpeg2theora, 0.27. Anything else? What do you think about using VP8 in the future? I would love to. I don't know how long it'll be until the streaming tools are there, but it would certainly be possible. I mean, DV Switch is modular. If, say, there's a GStreamer pipeline that could be used to encode VP8 and send it to Icecast, or some other streaming solution. There was actually talk of doing it for DebConf10, but that would have been a bit too soon. But I believe there was a decision that we were going to transcode the videos to VP8. So we might not get it done while we're here, but I intend to make that happen. I'll be taking home a full copy of all the DV, and as time permits, we'll do some VP8 encodings too. 
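For those VP8 transcodes, a sketch using the FFmpeg 0.6-era option style (the old -vcodec/-acodec flags, since the newer -c:v form came later). The filenames and bitrate are placeholders, not the video team's actual settings, and the script only prints the command, since there is no real DV file here:

```shell
#!/bin/sh
# Sketch: transcode an archived DV file to VP8+Vorbis in a WebM
# container, using the libvpx support that arrived in FFmpeg 0.6.
IN=talk.dv      # hypothetical file from the DV archive
OUT=talk.webm
CMD="ffmpeg -i $IN -vcodec libvpx -b 1000k -acodec libvorbis $OUT"
# Print instead of running:
echo "$CMD"
```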
I'm also interested in, what's the BBC codec? Dirac, yeah, trying to encode some of the files in that too. And do you think we will run into problems once the screen size increases, when we have projectors supporting 1600 by 1000? Yeah, for this kind of production, I'm just wondering how long we'll have miniDV cameras to use. We're sort of at the point where there aren't that many new models, if any, coming onto the market; the market has sort of moved on. Actually, I did see that currently the only kind of cameras you can really buy that have DV out are these kind of prosumer things. But I did see that Canon was bringing out a slightly less expensive model that also had DV out. What I think the problem is, is just that doing HD and DV requires some extra chips that are currently too expensive to put in really cheap models. But as time goes on, it will probably reappear. That's my belief, anyway. Did this die? Did this die? The other thing with HD over FireWire is that it's interframe compressed. Yeah, interframe compressed. Whereas DV, the compression, it is compressed, but it's just intraframe: each frame is compressed, but each frame remains independent of all the others. So to go to HD over DV, DV Switch would have to do a lot more work to decode it. I just don't know what the next step is beyond miniDV cameras. Would you say it makes sense to have on-presenter-laptop capture software, to get rid of the TwinPacts? It would be wonderful, but as I said before, speaker wrangling, at least at free software conferences, I haven't seen it happen. Trying to get everybody to submit their slides in advance and make sure they all work in the same format on the same machine: I mean, it would make our lives easier, definitely, but I've not seen it work. I don't know how to make it happen. You will never find a speaker that is going to let you install some whack piece of software on their laptop right before they're ready to speak. 
Yeah, it could only work with a common laptop in front. And then, Eric, pop out to a shell to show us something... you would then never see that, because this is Eric's laptop that he's got his stuff on, and he wants to show us something. And if it's not his laptop, he's like, well, I'd love to show you this. And so yeah, I considered the whole software thing, and once I found the TwinPacts and how well they worked... which, by the way, I'm going to give a little plug to the TwinPact. I've worked with some other hardware devices that were complete crap compared to the TwinPact. So yeah, TwinPact, Canopus, Grass Valley, whatever you guys call yourselves these days: you're great, go TwinPact. Yes. So this is pure customer advice, not paid advertising. Yes, more questions? If not, I would like to end the talk here and thank Eric once again for his presentation. We now have a slight break, and we'll be back after this. Thank you.