Okay folks, here we go, and only one minute late, so we're doing better than ever. This is Ryan and Ben. They're from our AV support people, so they've set up all of what's streaming out of here and so on. So without further ado I'll just hand over to them.

Hi, my name is Ryan; that's Ben, as he just said. This is a two-part talk: I'm doing about 15 to 20 minutes, and then Ben will do about 15 to 20 minutes. I'm going through a very broad view of budget conference AV, and Ben, being the author of DVswitch, the central software we're using, will go through that. And we're setting up a BoF, I think, afterwards. When is that? Okay, there's a BoF somewhere; look at the wiki, it should be up there.

All right. Myself, I've been working in AV on and off since 2001. I originally started doing a lot of Apple-based work back in 2001, so that was all single- or multi-camera stuff with traditional hardware mixers: very expensive, but it was good fun. Since 2004 my day job has primarily been running an open-source-centric IT solutions development company, and I recently formed a company called Next Day Video to get back into doing a lot of this sort of stuff.
Most of it won't be using DVswitch, but it'll use a lot of the workflow ideas I'll be going through in my presentation.

My first volunteer experience was LCA 2004. This was interesting because I wanted to use the Apple-based setup I was very familiar with; that wasn't approved, so we ended up doing DV tapes. It sounded simple. I'd done DV tapes before: get a bunch of cameras, record to DV, no issue at all. It turns out doing this stuff at scale is pretty much impossible. I think 2004 was four streams over five days; you're talking 200 tapes or something like that. It's just ridiculous. And volunteer AV is a very, very unique challenge, with very different constraints: budget technology, equipment that generally isn't as robust, and a different skill base. Typically with commercial AV you can put skilled people in each room, so if there are issues they can solve them; with volunteers you obviously haven't got that luxury, so it's very different to manage. It's been quite interesting, and I've learned a lot doing it.

These are the conferences I've been involved with. LCA '04 was DV tapes; '06 used Flumotion; in '07 Silvia ran the DVD recorder method, which was really good; '08 used DVD recorders but with an improved workflow, essentially adding a feedback loop to ensure the quality from volunteers. I've done FOSDEM, the Debian stuff, and DebConf, which is where I met Ben. They use DVswitch, which is great stuff that he'll go through, and they also use software called Penta as their workflow software, which I'll run through soon. Now, they're very different conferences: DebConf is two rooms, and generally speaking they have the same flow of volunteers every year, so you end up with a core team. Most volunteers are quite skilled, with only some newbies, so there's not so much focus on training. Whereas LCA, being a
very big conference with a different team every year, is a very different challenge: you need to effectively work out how to train people who have never done this before, and manage that at scale. PyCon US 2009 took the DVswitch stack from DebConf and mixed it with some of the stuff we'd developed for LCA 2008: things like recording sheets, better training, a feedback loop, and dedicating people to specific tasks. DebConf generally has a whole bunch of people who know what they're on about, whereas LCA was more about having a dedicated central team with specific tasks, getting things down to an AV team per room. I won't go over that too much, though.

LCA has typically had very mixed success. Most years since 2004 actually had video recorded; most of them actually had the video successfully recorded somewhere, whether to DV tape or hard drive, but it took months to come out, and sometimes it didn't come out at all. Why is that? It turns out many of the issues we've had aren't LCA-specific. Most conferences, in fact I think all the conferences I've been to, have had the same issues at some point in the past, so I'll run through some of those.

Each component of AV is very, very simple, especially as a geek. It's no problem at all: you can get the mixer, you can plug mics in, you can get sound, that's easy. You can record stuff from a DV camera, no problems. You can plug in the VGA, no problems. You can encode a recorded video and upload it for a LUG. These aren't difficult problems. The issue is dealing with this stuff at scale. Especially with LCA, you've got five rooms; you have a minor issue and it's completely impossible to resolve. So effectively, one of the biggest issues from the big-picture perspective is understanding how you piece all this stuff together in a way that actually makes sense, so you get an A-to-Z
workflow occurring. This is one of the issues with LCA and other conferences: an incomplete workflow. There was a way of recording video and a way of publishing it, but they didn't necessarily marry together very easily. When you test it at a LUG, for example, it isn't a problem, but of course when you're dealing with two or three hundred videos it's completely insane, so a small issue tends to throw a spanner in the works.

There are tech issues. Delegates, and sometimes speakers unfortunately, can make life interesting with understandable mistakes: things like unplugging cables, or unplugging AV machines. Things like hard drive corruption occur. Very simple things can happen that make the AV team's life very difficult. And typically a lot of AV teams make small variances in hardware: you get to the venue, something wasn't going to interface the way you envisaged, so you make a small change to the hardware, and you don't pick up that there's an issue until you've recorded a couple of days' worth, and it just bites you. A good example is LCA 2004. We used DV cameras borrowed from people, and we also had Speex boxes recording audio directly from the lecterns into Speex, which was great: we could get recorded audio, we could get recorded video, we'd stitch them together; in theory it shouldn't be an issue. But it turns out the timing was slightly different, so there was a variance of about two seconds: by the time you'd played a one-hour talk there was a two-second difference, so you effectively had to stretch the audio, and that was a complete nightmare. Other things too: sometimes the venue AV doesn't have a cable you envisaged, so you end up plugging in a different cable that leaves you with left-
channel-only audio, which gets encoded. Stuff like that. There are more things, but dealing with them at scale is a nightmare, so it's about getting that stuff right.

Now, the workflow is the most important part of any video strategy. Again, the stuff I'm covering is effectively having an A-to-Z solution for dealing with video. Avoid dealing with data manually; it's a complete nightmare. Automate as much as possible, and componentize every room setup: effectively come up with a room setup that works, is solid, is documented, and replicate that in every room. And test for all potential issues and have procedures defined for dealing with them, because it's impossible to be in many places at once.

Things that work should include managing the schedule against the recorded video data. One big logistical issue which you don't foresee is: okay, you've got all these video files, what do you do with them then? You need to work out which video file is associated with which talk, what it should be named, generate a title for it, stuff like that. If you have software that will take the schedule off the website and associate it with the video files, that makes your life a lot easier; it's something that's very easy to underestimate. Also, control the quality and provide a feedback loop. This was something I learned the hard way, I guess: it's very, very hard to pick up whether volunteers are making mistakes, because you can't be in five places at once, so there needs to be a way to actually control that quality. Allow rapid communication between rooms, use recording sheets (which I'll show you later), stuff like this, so you're effectively getting feedback from the volunteers. Allow for rapid post-editing. One of the biggest mistakes people make is wanting to post-edit this stuff in a video editor. It's nuts, especially for
this sort of stuff. You can record a LUG and have no problem editing the video, but when you're trying to post-edit 200 talks it just doesn't happen. Distribute tasks out to the volunteers; there should be a way to do that. And allow automated transcoding, uploading, title generation, stuff like that. Some workflow issues can be solved by software, but others are management issues.

Okay, common mistakes people make. "I can record a VGA feed and a camera feed; mixing should be easy, right?" I think I already covered this: you're talking about a huge amount of data, and it's just logistically impossible. Manually editing an LCA video would take over two hours of man-time; it's just crazy. And distributing the work out is difficult as well. Say you've got DV tapes, or stuff recorded to files: you've got people from LCA physically distributed around the world and away in the country. How do you get the data to them? How do they get data back? All sorts of stuff like that. Here's another one: "I'll record mixed video using Kino or a camera source, and upload very roughly cut video; should be simple." Again you've got issues logistically farming that out, the same sort of stuff, and manually dealing with this just takes ages. There are many examples I could keep giving.

Okay, again, this is a very rough overview, so I'm not going into much detail, but this is an example solution. A VGA capture device: these things are fantastic. These here are the TwinPacts; they take a VGA signal in from a laptop and convert it to DV. The reason we do that is that pointing a camera at the screen is generally not great; especially once you encode it down, you just can't see what's going on. They're only 600 dollars, and a brilliant investment; in fact you can get them cheaper if you look around. A basic audio mixer, lapel and handheld microphones, a
USB sound card: about 300 dollars. These things are important. I've seen video done with just the camera and its built-in microphone, and the problem is it's an ambient mic: it picks up a lot of background noise, it's very distracting, and you can't hear what's going on. Getting clean VGA and clean audio are very, very important for successful video. A basic FireWire camera: this is something people overestimate. You can just pick up a basic FireWire camera, and as long as your lighting is decent (this room is fantastic for that) it's good enough. You don't need to spend big money on a camera. The most important things to get are the speaker's slides and the audio; the camera helps communicate what the speaker is doing, but it's not crucial. Then the DVswitch software, which Ben will go through: essentially this takes multiple audio and video sources and spits out a single DV file, which avoids the whole post-editing situation; you edit on the fly, which is very, very important. The way we distribute the encoding and uploading is that we rsync the laptops to some giant NFS storage; I think this year we've got eight terabytes or something. And we use some software, in this case Veyepar, which does all the workflow stuff: effectively taking those video files, associating the files with talks, allowing us to do basic QA, automating the encoding, allowing QA of the encoded files, and uploading them. The encoding farm is kind of neat because we just reuse the capture machines as encoding machines every night: they're all connected together and Veyepar dispatches the jobs. Essentially we just NFS-mount the share and run a script, and it goes and does the rest from our Postgres database.

I've been through some of this, but audio is very, very important to get right, absolutely crucial. Record something with a normal camera and try
listening to it: it's just distracting. You get about five minutes in and you're not paying attention anymore. Video quality: again, get raw VGA. Pointing a camera at the projector works, and it's better than nothing, but if you can, definitely go for something like a capture device. And the AV loop deployed in the rooms needs to be significantly tested before the conference. I've seen this many times: a team will record a small meeting and have some technical issues, but they're not big, because you're in the room and you can solve them. They're not a major problem. But of course when you're deploying this stuff across several rooms, small issues are big issues. You're not there; someone else is, and they don't necessarily recognize the issues, or alternatively they'll solve them the wrong way. So ensure the loop is tested as well as possible, and by loop I mean everything from putting the equipment in place, running it, recording, saving, putting it somewhere, dealing with it, and uploading it: the entire A-to-Z process is important. And train your volunteers; this is really key. You can't just set up equipment and expect them to go and use it; they need to understand how the stuff works. Especially with conferences like this, you have geeks, people who are capable of this stuff, so training them on how it works and how to problem-solve is really, really key. Strong team management: you definitely need clear roles and responsibilities defined. I've seen a few AV teams completely tank because there were many skilled people involved, but no one knew who should be doing what, so you had multiple people running around trying to solve the same issues, and it just tanked, unfortunately. On the same note, though, you need to make sure that there are multiple
people capable of solving these issues, and that you've defined how to solve them. The two points I'm making there are: don't have a single person as a bottleneck, where one person is responsible for a few things and gets busy, stuck on one issue, and can't solve the others; and don't have the second issue I ran through, where you've got too many people solving issues in conflicting ways. A big reason a lot of AV teams tank is just multiple people trying to do multiple things in different ways.

Example video: that website up there is really, really good. That's Miro Community, a video aggregator; what it does is aggregate video from various sources. We've done a Python video site, and the stuff from PyCon US 2010 and PyCon Australia 2010 is a really brilliant example of the DVswitch and Veyepar webcast stack I'm talking about. All these things allow for really high quality video done by volunteers on a budget, and most of these videos were published within two days of being recorded. It could have been done quicker if there weren't some other logistical issues, but you always have those with AV; it's just part of it.

Some URLs there. The first one is the conference video collective: it's a wiki, and an attempt to document a lot of this stuff centrally. The idea is for various AV teams to document the way they do things and the way they've solved issues, as an open community, basically. The second one is the git repository for Veyepar, the workflow software I was speaking about. The third is DVswitch, but I'm sure Ben will cover that. And the last one is an example of things like recording sheets, which I'll actually show you now. How are we going for time? Cool.

Okay, so really important things I have for the teams are things like this: you've basically got things called cheat sheets. They run through the individual components of
each AV system. Training is crucial, but a lot of it is going to go over people's heads, because it's a lot of information to take in, so provide them with information on how to do individual things, separated onto different sheets. This one explains how to run the software out the back, for example: how to start it, how to record, how to switch sources, how to do picture-in-picture, stuff like this. The TwinPact, which is the device at the front there: how to use it, recommended settings, and a series of troubleshooting steps if things aren't working, because assuming things aren't going to go wrong is silly; it's AV, things will go wrong. Same thing with general tips and info: before each talk, during each talk, general suggestions and tips. Very important to have for the volunteers.

And these things are fantastic: these are recording sheets. They're now automated from Veyepar, which is fantastic; it actually pulls out the schedule and prints them automatically, but these are the older ones. You write down the session name, what it is, when the recording started, when the recording ended, and then you note any problems. Now, the last thing: DVswitch puts out a different file name every time you save a file, so you write the file names down here, because the file names are date-time stamps. What effectively happens is the recording team records stuff, and when we get to QA, the QA team takes these sheets. They've got the files; they can look at a sheet and go: okay, that talk has these files associated with it, and there's an audio issue in the middle there. So they can QA quite quickly: these are the files I want, I'll cut that bit out, done. You basically take the QA for a video down from maybe an hour and a half or two hours in a video editor to five minutes, so you can get videos out very rapidly.

Let's see if I
can show you, if it's still there. This is our current server running videos, so you can see... can you see that? Yeah, cool. We've got a 7.2 terabyte data store; this is the NFS server everything's being synced to. We've got DV, then all the room numbers, so say N519 has all the dates there, and there are that day's talks, for example, synced nightly from each machine. That makes it very, very easy to deal with; even dealing with this manually would be much, much easier than just having a whole bunch of DV files or tapes or stuff like that. And each one of those files will correspond to a recording on the recording sheet, so it's much easier to deal with.

And this thing is called Veyepar. Again, it's an example of workflow; there are different ones. Debian use Penta, which ties into the schedule and stuff like that. So this is an example. This effectively associates the talks with their video files; you'll see the CSS is broken here, I don't know why. These are our talks, and the nice thing about it is we've actually got a script that will go through, and if I take, say, one from the 24th, it's already gone through and associated videos with it, because we run a script which looks at the actual schedule, looks at the start and end times, and associates files with each talk. This makes QA very rapid: you basically go through here and go, all right, play that, play that, all right, we want that file, we want to cut off the first two seconds of it, click click click click. And we've got various states, so it can change the state. Right now it's in... okay, normally it would start in an edit state, which means "please edit this file": you go through, you select what files you want, and you then put it in the encode state. Then there's a little script that runs in a
loop, looks for anything in the encode state, grabs it, encodes it, spits it out, and puts it in a review state, which means we then review the actual final file. If we think it's good we stick it in the post state; another script running in a loop will sit there, look for anything in post, and then go and post it, then tweet it, and then it's done. So it's completely automated, and it basically takes QA down to five minutes per video rather than hours and hours. Yeah, this was meant to be a very rough overview; I haven't got through as much of the detail as I'd like. Come to the BoF; it should be on the wiki, I'm sure.

Okay, right, well, I've already been introduced, so let's move on. I've been working professionally as a programmer since 1998, now mostly contributing to the Debian project. Aside from my professional work, I've been involved with the Debian video team since 2005; I got involved after that conference and have actually been involved at the conferences several times since then. I started the videolink and DVswitch projects specifically for DebConf, and as Ryan says, DVswitch has been used at several other conferences since then.

So DVswitch is a software system for video mixing, which is done live; recording to files; and streaming, which hopefully you've seen here. It's primarily designed for free software conferences, starting with DebConf, as I said. The concerns of free software conferences are community benefit; they're mostly not commercial enterprises, so the aim is to make videos available to the maximum number of people, and both recording and streaming are important. They have very limited budgets, but actually you can do a lot with that, with some loss of quality. The first conference that Debian used DV
switch at was DebConf 7, where we had pretty much nothing left over after hiring projectors and PA, so all the cameras were borrowed, the tripods were borrowed, and the computers were borrowed; some money was spent on DV tapes as a backup. The resulting videos aren't brilliant, but they're still better than the average at the time. We have eager volunteers but not much time to train them, so DVswitch is, I would say, fairly easy to use and fairly easy to learn. Many conferences don't actually care about streaming, but they've still found the live mixing useful. Live mixing obviously saves editing, as Ryan said, and it's interactive. That sounds obvious; most video mixers are interactive, but some software used for streaming has very limited options to change the way mixing is done while it's running.

I'm the primary developer of DVswitch. I get some help, some patches from other people, but it's mostly just me at the moment, and I have other projects to work on, so I'm limiting what I want DVswitch to do. It's not a general video editor: I'm not going to support any kind of non-linear post-editing. It's not doing audio mixing; you can get pretty cheap audio mixers that do a good job, and in theory you could use JACK to do this in software. It's not a complete recording and publishing system; for that you can use Veyepar, or Debian's Penta extensions, or something I quickly whipped up for PyCon. It's not too difficult to do that sort of thing.

A brief explanation of what DV actually is. Sorry, I'll get the mic on. Am I on? Is that better? Higher up? Yeah, okay. There are several variants of the DV format. A lot of consumer cameras support it; less so today, where you're seeing a lot of HD cameras using the H.264 format, but it's still pretty easy to get cheap DV cameras.
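A handy property of the basic DV format, which comes up again below: every frame is a fixed-size, independently compressed unit (120000 bytes for NTSC, 144000 bytes for PAL at 25 frames per second), so a raw DIF recording can be topped and tailed with plain byte arithmetic and no re-encoding. A minimal sketch of that idea; the function and its name are illustrative, not part of DVswitch:

```python
# Illustrative top-and-tail trim of a raw PAL DV (DIF) recording.
# DV25 PAL: 144000 bytes per frame, 25 frames per second, and every
# frame is compressed independently, so cutting is just byte arithmetic.

PAL_FRAME_BYTES = 144000
PAL_FPS = 25

def trim_dif(src_path, dst_path, skip_seconds, keep_seconds):
    """Copy keep_seconds of video starting skip_seconds into the file."""
    start = int(skip_seconds * PAL_FPS) * PAL_FRAME_BYTES
    length = int(keep_seconds * PAL_FPS) * PAL_FRAME_BYTES
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        src.seek(start)
        remaining = length
        while remaining > 0:
            chunk = src.read(min(remaining, PAL_FRAME_BYTES))
            if not chunk:  # ran off the end of the recording
                break
            dst.write(chunk)
            remaining -= len(chunk)
```

Cutting MPEG-compressed video at an arbitrary frame, by contrast, means decoding back to the nearest keyframe first, which is why a frame-independent format suits live mixing.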
Most prosumer and low-end professional cameras also support some flavour of DV. The cameras I've dealt with have used the basic DV format, which is sort of meant for consumers. There are DVCAM and DVCPRO, which you may see around and which probably work, but I haven't had a chance to test them; there are several other variants that just won't work with DVswitch. The nice thing about DV is that each frame is compressed separately. MPEG codecs make use of the fact that each frame is usually similar to the previous one, so there are dependencies between the compressed information for each frame, which makes it significantly harder to do editing in real time. With DV we can just cut between frames.

So there's an overview of how the various software components fit together. You have sources: cameras (a VGA grabber is pretty much the same as a camera for our software's purposes) and an audio mixer. Those all feed into DVswitch, which then outputs to sinks: file storage, a streaming server, potentially any command you like, although I don't know what else you'd want to use. You can also use file storage as a source: you can have a sort of ident or logo as a holding screen for a stream while people are waiting for the talk to begin.

The source types are a FireWire or USB DV device. I haven't seen them, but I know some DV cameras have a USB connection you can get DV over; mostly, though, you'll see FireWire connections for DV cameras. For audio you can use any device supported by ALSA. What we've been using lately are USB audio devices: the trouble with the audio inputs on most laptops is that they're designed for directly connected microphones, and those don't work with the voltage you get out of an audio mixer, whereas the USB audio devices normally have a line input, which works fine. The physical sources, of course, are
around the room. We've got a camera here, and you can't run a FireWire cable well to the back of the room; you might be able to get away with it, but it's out of spec. So instead we have a computer below the desk here connected to that, and then Ethernet connecting the several computers together; the DV streams are encapsulated in TCP/IP. The current protocol for DVswitch has a network server running on the DVswitch mixer, which is a little weird because this is a GUI application running as a server, and so there are some scripts to start up the mixer and the sources in the right order.

A little enhancement to this protocol, which is not being used here but has been used at DebConf with some success, is tally lights. When you have several camera operators around the room, away from the mixing desk, they need to know: is the view from my camera being used? If it is, they shouldn't be moving the camera very much; if the view is not being used, they're free to pan and zoom as fast as they want without making the viewers sick. So the tally light indicates whether the camera is in use, and there's an extension to the protocol to support that.

What I'm in the process of doing, and hope to include in the next release, is using the standard RTP and RTSP protocols for connecting sources. It's more extensible, it's standard, and it would allow DVswitch to interoperate with other software, not just its own programs. It would also mean that the mixer would be a client, so you could have the sources running as services: just start them up when the source computer is switched on, with no need for scripting; you'd just have a configuration file for DVswitch.

So the sink types are for recording: a file sink records to DIF, the DV interchange format, and you
also have a sink that pipes to any command, and that's what's used for streaming. They're also connected using TCP/IP, which is possibly not a good idea for recording, because if you have any disruption to the network you're going to end up dropping frames; it's really best to record locally and then move the files over later. The original protocol is very similar to the protocol used for connecting sources. As a later enhancement, a sink tells the mixer whether it's a recording sink or a streaming sink, and a recording sink will get information telling it when to start and stop recording and when to cut, so as to create a new file. The next release will have a built-in file sink just for local recording; a pipe sink, which you can then connect up to netcat or whatever if you want to send the data elsewhere; and also an RTP/RTSP server which you can use for remote monitoring. I believe with the current version of VLC you will be able to use that as a monitor, in addition to the display in the main GUI.

So there's what the user interface looks like. It's fairly simple. We have the recording control buttons, hopefully self-explanatory, and mixing effects, which are very limited at the moment: there's no real bling, it's just for recording information. Here's picture-in-picture, in fact: you can see the speaker in the bottom-right corner, overlaid over the slides. There's also, although it's not shown here, a fade for slightly nicer transitions between sources. There's an audio level monitor, which is basically good enough to let you know whether the audio is actually plugged in properly, and to see if the audio is clipping: if the level is too high it will be limited and will sound awful; if it's too low, again, people won't be able to hear. There's a monitor for each video source, and selection buttons, so
you can independently select the video and audio sources, and there's a secondary video selection which is used for selecting what goes in the smaller picture in picture-in-picture. So here we have source four as the primary video selection and source two as the secondary, and source one, which has no picture: that's the audio source, just connected to an audio mixer. At some point in the future DVswitch will actually know the difference between an audio and a video source, and it won't display that pointless black thumbnail. I could explain the mixer internals if anyone's interested, but... how are we doing for time? Five minutes? Yeah, I think we'll just move on to Icecast, which I promised to talk about.

The Icecast streaming server serves streams over HTTP, which is not really the most efficient way of streaming, but it works, and anyone can use it in HTML5 video and audio elements, except in Chromium, where it's broken. Is that going to be fixed in Chromium? I hope so; I think someone pointed to a bug report for this, so I hope it's going to be fixed. Each stream the server handles can either come from a source client or be relayed from another Icecast server, and in that way you can start to build a network. We have servers inside and outside the conference which provide the same set of streams. So here, coming in from the left of this diagram, you have the DV stream coming from DVswitch. It goes into a sink, a command (pipe) sink, which is running ffmpeg2theora to transcode it to Ogg Theora and Vorbis; that's piped into an Icecast client that sends the Theora stream to the internal master server. Then there's an external master that relays all those streams, and a bunch of public servers out there in the cloud,
that's relayed from the external master. So when you connect to a stream from here (actually it's a bit more complicated here, but let's ignore the complications), if you connect to a stream from inside the conference you're connecting to the internal master, not someone out there on the internet. If people out there connect to the streams, they'll get one of these public servers, and there's very little bandwidth being used between the inside and the outside. That's possibly not a huge concern here, thanks to AARNet, but at a lot of conferences you can have quite a thin pipe, so that flexibility in the way you can set up Icecast servers is rather useful.

That's really all I've got to say. I've got some links for further information about those two programs, and now I think there's a little bit of time for questions, to me or to Ryan.

Does someone have a... sorry, I'll comment first: at least in Chromium it seems to be working okay now, so maybe it's just something they broke recently and fixed again, because I could see your feed all right on my laptop.

So the question was: with the picture-in-picture, is that configurable, like where it shows up? Is there some kind of template you could configure?

It's configurable: you just hit picture-in-picture and drag out the rectangle where you want the smaller picture to appear.

Sorry, so it's in the GUI? You just drag and drop it wherever you want? Cool, thanks.

And on Ubuntu, I think, apt-get install dvswitch? Correct, that's right, yep. Others?

When you were talking about QA, what kind of activities actually go into the QA process?

It's fairly simple, what we do. We used to do a lot of editing in a non-linear editor, and yeah, that was just a nightmare, but generally what tends to happen is you basically do top-and-tailing, so chop off the excess video. You occasionally might have
things inside the video that you want chopped out, very very rarely, but you get those requests sometimes. There's audio normalization, which can be automated, and some colour balancing, and that's about it really. Mostly it's just chop, chop, encode, off you go.

I notice you've got the mix of the screencast and the speaker camera; are you able to move that around when the slides get in the way and things like that?

Oh yeah, absolutely; the picture-in-picture is configurable. You've got a picture-in-picture button there, and you've got your sources. All right, we should probably explain; did you explain the UI? Yeah, you did. Okay, well, to explain it again: the large picture, the background, is your primary video selection; that's the top video selection button. The picture to be displayed inside, on top of that, is the secondary selection. Then, when you press picture-in-picture, you drag out an area in the video monitor where you want that secondary video source to appear, and then click apply or press enter. So one is primary and two is secondary: we click picture-in-picture, drag out the rectangle, apply it as an effect, and then source two becomes the inset.

I mean, two minutes into the video the speaker changes the slide so it's covering where you've got the video, and then the effect doesn't work, so you just have to keep playing around. You'd hope you could set it once for the whole thing, but in reality you might have to adjust every few minutes. Yeah, you do really have to pay attention, and you may have to keep switching throughout a talk.

And is that process of moving the picture just a case of clicking on the picture and dragging it around, or do you have to specify it each time? You have to drag out a rectangle;
you can't actually move a rectangle once it's placed. Any others? Yep.

Some of us use Ogg Theora for small things, but you guys are probably the biggest users of this sort of technology. How have you found encoding into Theora?

It's not terribly fast, and the results used to be pretty bad at low bit rates; that's got better with Theora 1.1, aka Thusnelda. There are promises of further improvements, but I would guess in future we'd be looking, you should probably be looking, at WebM.

One more: I was going to ask, with a lot of the consumer cameras now (you said that's what you're using), a lot of them coming out now are recording to AVCHD.

Yeah, that can be a problem, because it's H.264. So this is going to be a problem, yes; it's not open source friendly. Well, not so much that, but it's the technical problem that you have this dependency between frames, so everything gets a lot harder. With DV it depends on whether it's PAL or NTSC, but you have a fixed size per frame, and you can just say cut here and cut here in your video, just like you could cut tape. With AVCHD the remuxing is a problem, because it's MPEG encoded, so that's hard. The major thing, I think, is that the processing requirements would go up, because you'd need to be decoding all frames of all sources, just in case they're going to be needed in a moment's time. At the moment the display of the small source monitors doesn't need to be updated for every frame, so if we're short on CPU time we drop some frames from the monitors; it doesn't matter really.

Okay, last question over here.

I saw, a number of years ago, a chip that was doing real-time Ogg Theora compression; I think it was 640 by 480 at 60 frames a second, and 800 by 600 at 30. And obviously different machines are more powerful. Is the encoding in the back end, or is that
handled elsewhere? All the transcoding is outside of DV switch. As I said, it's just a command sink; it runs any command, and that might be ffmpeg2theora today, or it might be, I don't know, anything you want.

Just to give you an idea of CPU (you said your computers are massively powerful these days) for the encoding of each stream: we've got two machines in each room. Right up the back we've got the mixing machine; that's taking in a source from the TwinPact, a source from the camera and a source from the audio mixer, and mixing up the back there. I've moved the live encoding basically down here: there's a machine down the front of each room, and that's running ffmpeg2theora piped to oggfwd, I think. That's using, I think it's a Core 2 Duo or something like that, anyway a lot of them are Core 2 Duos, using I think 50 percent of one CPU. That's for the full PAL stuff, so yeah, it's not too bad on a modern CPU.

Okay, can we just thank Ryan and Ben for explaining all that.
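The internal/external master relay topology described in the talk maps onto Icecast's master-relay configuration, where one server mirrors every mount from another. A public edge server's icecast.xml might contain something like the following; the hostnames and password are made up for illustration.

```xml
<!-- Hypothetical edge-server fragment: relay all mounts from the
     external master, as in the diagram from the talk. -->
<master-server>icecast-external.example.net</master-server>
<master-server-port>8000</master-server-port>
<master-update-interval>120</master-update-interval>
<master-password>hackme</master-password>
```

With this in place, listeners hitting the edge server pull the streams from it, and only one copy of each stream crosses the link from the master, which is what keeps the inside-to-outside bandwidth small.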
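The per-room chain described in the talk (DV switch's command sink piping the mixed DV stream into ffmpeg2theora, then into an Icecast source client) could be sketched roughly as below. The host names, port, password and mount point are invented, and the exact dvsink-command option names should be checked against the installed version; this is a sketch of the idea, not the speakers' exact setup.

```shell
# Sketch of the encode/stream leg with invented hostnames and credentials.
# dvsink-command receives the mixed DV stream from the DV switch mixer and
# pipes it into the given command; ffmpeg2theora transcodes DV to Ogg
# Theora+Vorbis on stdout; oggfwd pushes the result to the internal
# Icecast master as a source client.
# Wrapped in a function so it can be inspected without a live mixer.
stream_room() {
    dvsink-command --host mixer.example.net --port 2000 \
        sh -c 'ffmpeg2theora --output /dev/stdout - |
               oggfwd icecast-internal.example.net 8000 sourcepass /room1.ogv'
}
```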
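The QA pass Ryan describes (top-and-tailing plus automated audio normalization, then encode) can be approximated in a single ffmpeg invocation. Filenames, cut points and quality settings here are hypothetical, and the loudnorm filter is a stand-in for whatever normalization tool they actually automated.

```shell
# Rough sketch of "top and tail, normalize, encode, off you go".
# Cut points would come from a reviewer's notes; loudnorm (EBU R128)
# stands in for the automated audio normalization mentioned in the talk.
top_tail() {   # usage: top_tail in.dv start end out.ogv
    ffmpeg -i "$1" -ss "$2" -to "$3" \
           -af loudnorm \
           -c:v libtheora -q:v 7 -c:a libvorbis -q:a 4 \
           "$4"
}
# e.g. top_tail talk_raw.dv 00:01:32 00:46:10 talk_final.ogv
```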