Talk about mixing music with free software. How about now? Yeah, that better? Yeah, I'm Mike Tarantino and I'm here to talk to you about mixing music with free software. I am a musician and a recording engineer, and I have been a fan of free and open source software for a long time. As many of you know, I live in a free-software-loving household. And I came and spoke last year in Perth on a similar topic. I wanted to give that talk to explore how possible it was to incorporate free software tools into a music production environment, and that was a lot of fun. So I'm back here now to talk about mixing specifically. I've been doing that for about 15 years, working in the music industry. When I started out, I moved to LA and I got a job at a studio, which is kind of how everybody does it. The place that I was at was weird in that it was this fantastic room with this storied history and this gigantic beautiful space and all the gear you'd ever want, but when I got there, it was being run by a guy who mainly used it as his own personal writing room. So the good part was that they had one of everything, and in downtime you could learn how whatever you wanted to use worked, and record your own projects and whatever else if you were sly about it. But you didn't get that kind of lots-of-people-coming-in, checking-out-how-everybody-worked networking opportunity that you're kind of supposed to get in those situations. But those writing sessions were great. They were enormous, you know: two engineers and a MIDI guy and synced computers and tape machines and a 108-channel console with every channel filled, and you're constantly trying to combine channels from the computer to free up one more slot on the mixer to bring in one more drum machine or something. Oh, still? My face too big? Alright. So there were these massively complicated sessions and it was great to be a part of it. 
Whenever I got to do it, I was, you know, this tiny little peon, and I was running the vocal switcher, which was this custom-made box for switching between different takes. You could do that on the console, but they didn't like the switching apparatus, so they had to build their own device. And they'd be sitting there and say, okay, we want to audition the first three words from this line, and then the next word from this take, and then the next 11 words from this take, and they'd hit play, and then they'd make the edit they wanted to make. Those kinds of jobs are funny, because there's nothing really complicated or involved, you're not really using your skills, but at the same time, when it's going well, you feel like you're part of this incredible machine that's making music possible, and that's just kind of magical. But after 18 months, that was kind of enough of that; I was ready to go do something else when my next session came in, and the band was Coldplay, and it only lasted three days. There was this weird communication issue where they were sending files back and forth from LA to London but not getting any response. 
So when they packed up, I asked the producer what I should do and explained my situation, and so we went out to lunch and he gave me like an hour of boilerplate, you know: what you do is you work the studio and you make the connections, what I had been trying to do before. And then at the end he said, oh, by the way, I'm starting up a studio at my place. And so I did: I helped him wire that up, and I helped the engineer out on the session that came into the place, and when that guy was booked and the session kept going, I slipped into his chair. I ended up working with that producer for a long time, for the next nine years or so, and in that time I worked on a lot of big projects. That was when I did the big James Blunt record; this is me with him outside of Sunset Sound, the Sound Factory rather, although it wasn't big at the time we were working on it, and we were getting yelled at all the time and wondering if it was going to get pulled out from under us. In downtime I worked on a lot of tiny projects. This is me in my basement at the time with Baltimore recording artist Victoria Vox, who, if you haven't heard of her, you should, because a couple of months ago she played with the Kiwileles, which is this year-long educational program where kids learn to play the uke and then give a performance at the Auckland Uke Festival, I think. And that's her doing her thing, not the guy in the yellow pants and purple hat, but the other person on stage. So after LA I moved to New York to be with Karen, and I was still going back to Los Angeles a lot and still doing projects at home. That kind of got scaled back with the birth of our child, where I now still work on things and still mix, but mostly do it in nap-sized increments. But in those 15 years I have mixed a lot of projects, and I want to talk a little bit about what that means and how free software can be used to do it. I've been sticking my toes into the free software waters since I had a distro, that I unfortunately cannot remember, running on a Power Mac 7200, and I could barely check 
email, and I thought I was super cool regardless. And I've looked in from time to time on what you can do audio-wise with this stuff, and it's really been cool and encouraging to see the steady progress. When I did the talk last year, I was really amazed, actually, at how far Ardour specifically, the digital audio workstation program, had come. And this year, in going over that talk to prepare, I was very much heartened again to see that the usability and stability and kind of everything that you want has taken another jump, which is great. And I'm also pulled in the direction of free software, of course, by Karen, as I mentioned before, who is one of the smartest people I know, and I have no doubt that when she turns her attention to mastering the ins and outs of audio production she will excel at it as she has at everything else, but that has not yet transpired. So when I told her that I was going to give this talk and what I wanted to do, she said, okay, great, so what really is mixing? Well, in the most sort of top-down way of looking at it, mixing is turning a multitrack (I have "file" written here, but it can also be a collection of different audio on tape) into stereo. And in effect it doesn't matter whether you're dealing with a two-inch tape machine, with a thing to play it back the size of your dishwasher and a head stack that's this big, or a digital audio workstation program that requires a lot of specialized experience to know what to do with. The point is that you need this specific playback environment to listen to it, and you're trying to turn it into something that anybody can listen to in their car or on their stereo or on their phone. And when you do that, you have a lot of decisions to make. The basic ones: you've got an idea about getting the relative volumes of all the elements together, and panning, where they go in the stereo field. You will be using effects to correct problems, to transform the elements into something else, to 
give them more impact, more of a sense of what they are supposed to do. And you're telling the story of the song and not getting in the way. What I mean by that is, when you listen to a successful track, there's kind of a sense of inevitability about it, where it's hard to imagine it any other way; it just sounds like what it sounds like and you wouldn't want to change anything. And if you're listening to a less successful one, if you're listening with technical ears, you might think to yourself, I didn't hear that kick drum and now the beat's not working as well, or you might think, that line in the vocal, I would have raised that 3 dB and then you could hear it better. But if you don't have that sort of technical background, you still hear the same stuff, but you might think of it like: the beat kind of went away and I'm not grooving anymore, or, I was following the lyrics and now I can't because I didn't hear that line. And all that stuff takes you out of the moment; it makes you think about it instead of feeling it, and all of a sudden you're outside of the song, and that's bad mixing. So if you're going to do it with free software, if you're going to do it like I did, you're going to use Ardour (I used version 3.5.143) and the Calf suite of plugins. There are a ton of plugins in the free software world, and that, last year, was actually a pretty large barrier to getting into using Ardour as a mixing tool. 
You may have been able to get a better organized view of it last year, but I was not able to figure out how. So when I went to put a plugin on a track, when I wanted to try to find an equalizer for example, you got this huge list, and maybe a third of them would, say, have the word "equalizer" in the title, but they weren't sorted by name. And if you did sort them by name, some of them were called TAP EQ, and one was called something else that starts with M EQ, and one was just called EQ, so it was very hard to get a kind of mental picture of what the options were. And then to top it off, when you would try something that didn't have a very descriptive name and you weren't sure what it was, about 25% of the time it would suddenly make no noise, or give you intermittent static instead of the track you were trying to affect. So it was hard, just from a... I don't know, it's not a user interface thing exactly, but it was kind of impenetrable to sort of get into what you could actually accomplish. That's been much improved. This time, without even remembering the problems from before, the first time I tried to add a plugin I got a list of folders with things categorized by what they do, which is great. The Calf plugins, however, which I knew were the ones I wanted to use, were all sorted into a folder that was cryptically called "Plugins". But if you know that, you can find them all and get to what you want to do. The other issue that I had was with the version of the Calf plugins that came with Ubuntu Studio: you couldn't add too many, or you became increasingly likely to cause some condition which would make your session not open. And it wouldn't say, there are too many Calf plugins, you can't open the session anymore; it would say, I don't know, something cryptic has happened and you must start again. But when I figured out what was wrong, the answer was to get the latest version from Git and build it from source, and so I did. And I don't really expect that to get a big 
reaction from this room, but among my friends in the audio world this is a huge deal. Thank you. That went some way towards answering the question that I had from my talk last year: can I recommend this to colleagues of mine? The answer last year was pretty much no. I think now, in the right context, it would be something that somebody without a huge interest in hacking and building things from source code could get into and get some real work done with at a fairly high level, and that's exciting. That's really cool to me. So what are the Calf plugins? Ways to apply effects to your tracks, to change the sound of things. Some basic categories of those effects: equalization and filters, which affect the frequency information of the sound, give you more low end or high end. Delay and reverb, time-based stuff, where you're creating more copies of the signal but delayed in time. Compression affects the dynamic range, makes the distance between the quiet parts and the loud parts less. Modulation effects sort of fall into delay and reverb as well, but with the delay time modulated up and down, which causes these weird swishing metallic phase effects. So let's talk about them. EQ first, used to emphasize or deemphasize frequencies that you're trying to get at. If you know what that is, bear with me; if you don't, it's just like the low and high knobs in your car stereo. If you've ever done that and turned up the low end, all of a sudden you can hear the bass drum and it all sounds full and warm, and then you turn up the high and the guitars probably speak a little better and you can understand the words and hear the cymbals. You're doing the same thing with an equalizer plugin, with more sophisticated control over the frequencies that you're affecting. The goal is to clear out what you don't want to hear and emphasize what you do. The best thing of course, and any time you talk to anybody about recording they'll tell you this, is to get it right at the recording stage, so that you don't need to, you know, 
add a bunch of low end later. You should try to capture it at the beginning with the microphone, and that's true, but oftentimes you're not mixing your own stuff, and other times it's impossible. If you're recording at home, you probably have a pretty limited space, and it's going to be tough to figure out exactly how to get the sound that you want without annoying the neighbors, and also tearing out all the walls and putting in weird amounts of insulation and foam and everything. So yeah, EQ can help you sort out a lot of those problems. The Calf EQ plugin (let's see, Karen told me not to do this but I'm going to try it anyway) looks like that. If you've seen one, you can probably figure out what to do with it pretty intuitively, but it's worth looking at the bands. There are three types. The filters are the ones on the outside, high-pass and low-pass; I have the low-pass one engaged, which is the rightmost knob. The frequency is going to be where it starts to cut off everything, at the slope, which there is 24 decibels per octave. It actually starts attenuating the signal a little bit before that frequency point, and I believe the frequency point is where it's down by 3 dB. You can also on this one go to 12 decibels per octave, or 36; steeper is going to be a more pronounced effect, more filtered, and less steep is going to sound more natural. There's one thing missing. Do people know what else you might expect to see on a filter that's not here? Cool. Resonance is a control that's on a lot of them, and what that does is put a little bit of emphasis right at the point where the filter engages. I see Bdale nodding, and I'm sure he knows exactly what the engineering justification for building analog filters that way is and what this is replicating. That's fantastic, and I'd love to hear that talk, but not today. What that does in audio terms is really emphasize the frequency where the filter is, and it's going to make a less natural sound. If they do like a breakdown in a dance track and there's a 
snare drum going, and it goes... that's the filter. It goes up and, you know, leads you to the breakdown where you start to dance. It makes you dance; that's the filter. The shelving bands are the ones on the inside, just the next one to the right from the high-pass filter. I've got the low shelf on just to show you what it looks like. You can see it raises frequencies below the frequency that you select, with a slope, because it affects everything below that point. And then the middle bands are the parametrics, where you set the frequency that you want to affect, the level, how much, and then Q, which describes how wide the curve is going to be. Higher values of Q go really narrow; that can be really useful for correcting problems. If there's, I don't know, a singer with a really annoying bit of presence in their voice, or a microphone that picked up a lot of that annoying presence, you can kind of notch that out with a narrower Q. A boost with a narrower Q is going to be more apparent, so if you want a more transparent effect, lower values of Q will widen it out. Let's see, this is probably good. Now, there's no way to tell you what frequencies you're going to use to make something sound good. It's going to be different for every instrument and every song and every person doing it who has an opinion about what sounds good. So, with that caveat, here's what I do. For the kick drum, I tend to like 80 hertz a lot, and it seems like it's hard to capture that without also getting a lot of low-mid information, a lot of like 200 to 400 hertz. So I end up boosting a couple of decibels at 80 hertz on almost every song I mix. Next on the kick drum, the 200 to 400 hertz I generally end up pulling out. That's pretty typical on drums; those low mids are real problem frequencies, because kind of everything, the fundamentals of all the instruments, the acoustic instruments anyway, are going to be in that range, and so you can fill it up really quickly and end up with a track that sounds kind of dull. 
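For the technically inclined, the parametric band just described (frequency, level, Q) maps onto a standard biquad filter. Here is a sketch of the well-known Audio EQ Cookbook peaking-filter recipe; this is the textbook math, not what the Calf EQ actually implements internally, and the function names are my own:

```python
import cmath
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad coefficients for a parametric (peaking) EQ band,
    following the Audio EQ Cookbook recipe."""
    A = 10 ** (gain_db / 40.0)           # amplitude term
    w0 = 2 * math.pi * f0 / fs           # center frequency, radians/sample
    alpha = math.sin(w0) / (2 * q)       # bandwidth term; higher Q = narrower
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    # normalize so a[0] == 1, the usual convention
    return [x / a[0] for x in b], [x / a[0] for x in a]

def gain_at(b, a, fs, f):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# the kick-drum move described above: a +3 dB boost at 80 Hz
b, a = peaking_eq_coeffs(fs=48000, f0=80, gain_db=3.0, q=1.0)
```

Evaluating the response at 80 Hz gives back exactly the 3 dB you asked for, and well away from the band it falls to essentially nothing, which is what makes a narrow boost surgical.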
But if you carve that out of your drums, then you have room for guitars and pianos and voices and everything else that wants to try to cohabitate in there. And it's kind of the same thing with toms; with tom drums you end up cutting those out. If you do too much, you sound like you're stuck in the 80s, and that may or may not be a bad thing. For the snare drum, it can often be a problem frequency that gets into vocal intelligibility: like 1 to 3k is where a lot of the frequencies that help us understand the words people are saying are located, and snares seem to have a lot of that information. So you can get into weird problems where you try to turn the snare up to where it's supporting the groove in the right way and it sounds annoying, and you pull it back and suddenly the groove is not happening. That issue can be sorted by cutting out a little bit of 1k. Overheads on the drums: the same moves as before; cutting out 200 to 400 can help make room for everything else, unless you're trying to use the overheads for the main sound of the kit. I couldn't say that one way is better or worse than the other; it's not really like that. Sometimes you'll be leaning on the close mics: you'll want a really present, articulated kick and snare, and in that case you're going to use the close mics mostly and cut out the low mids from the overheads. Other times the sound of the kit in the overheads is really nice, in which case you'd probably cut out less of the 400. On cymbals, a really wide boost at 10k is often pretty nice, but every cymbal is really different of course, and you end up having to use your ears, because often when you do that there will also be some kind of peaky ringing frequency that, as you turn it up, gets emphasized and has to come down. 300 hertz on electric guitar often sounds good. And 2k, I almost always end up boosting 2k on electric guitars; it just makes them sound more like guitars to me. I believe that's where the treble knob on Fender amps is located, 
around, and it kind of, yeah, it just makes them sound like I expect them to sound. Piano is problematic, because it covers such a huge frequency range; it wants to take up all the space. So in a rock context, or something where there's a full band arrangement, you often end up carving down a lot of that with filters. Around 1 to 2k, again, often can be problematic, and a lot of problems can sometimes be sorted by cutting it out. And voices: that one kilohertz range is obviously important for intelligibility, but also, with too much, it starts to get annoying and nasal, and you know, you just figure out which is happening and either boost or subtract. Also 6k, for some reason, seems to add a nice little sheen that I always find appealing. I wanted to play some things to illustrate what I was talking about, mostly about the drums. So this is a song I worked on for a bit; I pulled it up in Ardour. This is the intro with no EQ on the drums and the bass guitar. I have no idea what it sounds like standing parallel to the speakers like this, so I hope you can hear it at least a little bit. I'm going to play just the drums and the bass, still with none of the EQ that I finally settled on. So to me that kick drum is really flabby; there's a lot of information in those low-mid frequencies that we were talking about before, and the bass also kind of extends down below where I want it to, and it kind of obscures the low-end punch of the kick drum. This is just the drums and bass again, but with the EQ applied. And here is the whole track. So, is that instructive at all? Let's see. You know, when the bass extends down that low, there's a kind of counterintuitive thing that happens where it ends up feeling like it has less bass, or it can. And the point is, I'm trying to get the main thump of the kick drum to happen pretty low down, and if the bass is there too, then instead of getting a sort of rhythmic injection of around 80 hertz, you get just this sort of 
constant barrage of information, which reduces that rhythmic effect. So that's basically the idea of what I was trying to do. And conversely, if there's a bunch of those low mids in the kick drum, then it can affect where the bass guitar is speaking, and you get less of its rhythmic function, which is to go dum dum dum dum. So, compression. We use compression, like I said, to reduce the dynamic range, to bring the soft parts closer to the loud parts. That can help things not get lost in a mix: if somebody's voice, for example, is loud but then sometimes gets quiet, you can imagine that if that's just playing along in the mix, then you suddenly won't hear it, and the compression can make it easier to hear it the whole time. It can also be used to manipulate the attack, which is the first thing that you hear when an instrument plays: the sound of the pick on the guitar string, for example, or the hammer on the piano, or the stick on the drum. And it can be used to totally suck the life out of any recording. To illustrate that, here are waveforms from a couple of popular songs. The top one is Blueberry Hill by Fats Domino, and what do we notice about that? It doesn't extend all the way to the top or bottom of the available bit space; it doesn't even approach the maximum volume that it could, which is fine. There's a lot of space between the peaks; the things that are hitting and punching are happening relatively infrequently, and the signal is allowed to reduce back to a natural level in between. Compare that to the next one down, which is Michael Jackson's Thriller, from about 30 years later: I think it was '82 versus '56. Now, they're different-sounding tracks, obviously; they're different types of songs, so they sound different regardless. But it's a lot fuller. If you had to describe the waveform compared to the one above, it's just fatter, it's bigger, there's more information, and the average volume is going to be a lot louder. And further, the peaks are a lot closer together: 
there are a lot more of them, and they don't extend back down as low, although they're still very clearly defined, so there's still transient information in the stereo file. And the last one is Shake It Off by Taylor Swift, which, in spite of that bit in the middle where she tries to rap, is basically a square wave when you're zoomed out this far. If you zoomed in very close there are peaks and valleys and stuff, but there's also going to be a lot of places where the waveform is just completely lopped off, and if you do enough of that it becomes audibly distorted. I listened to it on a laptop, which kind of increases the harshness that you would get from lots of distortion anyway; those speakers are kind of designed to make you go deaf. It's good, it's well mixed, it sounds like it's supposed to. It also fatigues your ears really fast and probably is bad for your mental health after a while. But the trend, obviously, has been to get an increased average volume, and a lot of that is achieved with ever more extreme amounts of compression, on both the individual sound sources and on the stereo file that you're mixing to. So for you to do that, you may use the Calf Compressor. I'll talk a little bit about what the controls are. The graph there is trying to tell you what it does: it's input gain on the x-axis and output on the y. Can you see the faint line going from the bottom left corner to the top right? That's the unaffected signal: whatever the level in is, it's going to be the same as the level out. What do we notice about the solid line above it? 
Well, it's higher, which is showing the effect of the makeup gain, which is the bottom right knob, and that's just going to be the amount that you turn up the output to compensate for the amount of compression that you've got. The threshold, next to that, is also very faint... no, it's not a line, it's just the point on that graph where the slope of the solid line changes. And what the threshold does is, as the signal enters the compressor, if its volume exceeds where the threshold is set, then the amount of gain that is output above that threshold is going to be attenuated by a ratio, which you can set with the next knob over. 2 to 1 is a good place for subtle effects, 4 to 8 for more aggressive, and if you go as high as 10 you're basically limiting, where you're just not letting anything go above that threshold. Like Taylor Swift. The knee is set to zero there, which means that as soon as the signal hits the threshold, it immediately has the ratio applied to that input gain. If you turn it up, then that sharp change of angle on the graph becomes a curve, which just means that as the signal starts to approach the threshold, it's going to be attenuated a bit, and by an ever-increasing amount, until it reaches the full amount of attenuation. So let's talk a little bit about how to affect the attack, because those are the two controls that we didn't talk about: attack and release. What that means is how long it takes for the gain attenuation to take effect, and then the release is how long it takes to return back to its normal level. If both of those are very low, that makes it more like just a limiter that stops the gain from increasing above a certain amount. If the attack is at zero, it's just going to kill any transient information at all, any sense that there's a sort of peak sound that happens before the sustaining part of the sound. With lower values, but still not zero, you can create a sense of attack that wasn't there before, or emphasize whatever is there, because the initial transient 
happens quickly enough not to get attenuated by the compressor, and then the sustaining part does get pushed down, if that's also above the threshold. So by setting it right, if you've got a lifeless bass guitar, for example, just kind of thumping away and not particularly functioning rhythmically, you can put a compressor on it to increase that attack and get more rhythmic drive out of it. And higher values are going to make it more transparent and also make it do less. Release: oh yeah, if you set a very low release time, it's going to increase the average volume, because the gain will immediately rise back up instead of waiting to do that until after the level of the input goes down below the threshold in the first place. Let me try to play you what I'm talking about. This is the drums from that same song with no compression. Here are the same drums with an extreme amount of compression applied to them. Do you hear what that does? The snare sounds a lot longer; that's the main thing I was trying to get out of that, that and more cymbals. And the reason is, the whole kit is above a certain level and then it's all getting crunched down, so the decay from the snare that you get in the excitation of the room ends up happening at about the same level as the initial attack on this extremely compressed version. And what that lets you do then is combine them back at the same level, and when you combine the two, the uncompressed one with all the unaffected transient information and the compressed one with the sort of extended, emphasized decay of the snare, then you get that, which sounds to me more like what I expect drums to sound like in a rock context. How are we doing? I didn't talk about delay at all, but I think we have about five minutes. Do people want to ask questions, or should I keep going, or what should we do? 
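To make the threshold, ratio, knee, and makeup controls concrete, here is a sketch of the static transfer curve they describe, plus the parallel blend used on those drums. This is generic textbook compressor behavior, not the Calf Compressor's actual code, and the function names are mine:

```python
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0,
                       knee_db=0.0, makeup_db=0.0):
    """Static transfer curve of a feed-forward compressor: below the
    threshold the signal passes unchanged; above it, each extra dB of
    input yields only 1/ratio dB of output. A soft knee blends the two
    slopes around the threshold instead of switching at a hard corner."""
    over = level_db - threshold_db
    if knee_db > 0 and abs(over) < knee_db / 2:
        # inside the knee: quadratic interpolation between the two slopes
        return level_db + (1 / ratio - 1) * (over + knee_db / 2) ** 2 / (2 * knee_db) + makeup_db
    if over > 0:
        return threshold_db + over / ratio + makeup_db
    return level_db + makeup_db

def parallel_mix(dry, compressed, comp_level=1.0):
    """Parallel (New York) compression: sum the untouched bus with the
    crushed bus, keeping the dry transients and the compressed sustain."""
    return [d + comp_level * c for d, c in zip(dry, compressed)]

# -30 dB in is below a -20 dB threshold, so it passes through unchanged;
# -8 dB in is 12 dB over, and at 4:1 only 3 dB of that comes out above it
print(compressor_gain_db(-30.0))   # -30.0
print(compressor_gain_db(-8.0))    # -17.0
```

Muting one bus or the other in `parallel_mix` is exactly the demo from the talk: the dry list alone, the compressed list alone, or the sum of both.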
Question: just in terms of, if I get a vote in what we do now with our time, I'm loving being able to hear the differences between two different tracks, and for me, if we could have the comparison side by side, so like play one track and then play the other track right after it, and then talking before and after but not in the middle. Is that possible? Am I being too picky? Yeah, no, I can not talk and just play stuff. Because I'm loving being able to hear the differences. Sure. Thank you. I'm going to talk for 15 seconds about what was going on there for the audio geeks, which is just: does anyone know what parallel compression refers to? Yeah, kind of. Okay, so in this setup all the drums are feeding two buses, and one of them is compressed and one of them isn't. So if you mute the compressed one, you get the first thing we listened to; if you mute the uncompressed one, you get the second thing. When they're both playing, you get the combination, which is what we want. We want both compressed and uncompressed signal, for the transients and the sustain and the ambience. Yeah? In the compression plug-in, down on the bottom left, I think I saw a mix knob; I was wondering how doing parallel compression is different to adjusting the mix. Literally reading my mind. Yes, the mix is a handy knob that lets you just adjust the amount of unaffected versus the amount of affected signal. Now, in this setup, you can't see it, unfortunately; the plug-in window is broken. Okay, the buses are over there towards the right, and there aren't actually any other plug-ins going on except the compression. So in this case there is no difference, except usability. I personally would rather look down and immediately see what the level of one is and what the other is, and be able to adjust them individually, rather than having one knob that affects the level of each. But what the parallel setup would let you do is apply different effects to the compressed versus the uncompressed. 
So if you want to EQ them differently, you could. If the compressed one was giving you a lot more thump in the kick, or really harsh cymbals, which could easily happen (with extreme compression especially, it ends up emphasizing the stuff that's already emphasized), then you could use some EQ to dial that back, but not have that EQ affect the uncompressed tracks. So I have a somewhat tangential question. I don't know very much about audio engineering, but from things that you've told me, and things the producer of my podcast, Dan Lynch, has said, there's kind of a garbage-in, garbage-out truth about audio engineering: if your source tracks don't sound good, there's only so much you can do with a tool like this to clean them up. And given that a lot of people in this room, if they're going to do any of this, are probably going to do it with whatever equipment they have around, whatever mics they have around, do you have suggestions about how to get the source tracks good enough so that when you start feeding them into this process, it's going to give you the best results? That's my recording talk. So come next year to... is it Canberra next year? One thing I'll say is: control your space. We all know what it sounds like when you record your voice on the microphone on your laptop, and you can hear it reflecting off the walls, this sort of not-quite-right horribleness, and if you try to put that stuff into a track, all those artifacts get in the way. So, I mean, I've done ridiculous things like set up some mic stands with towels over them to keep the reflections down. I knew some people who had a closet where they just put foam on all the walls and were able to get kind of a deader sound that way. And you don't necessarily want the deadest sound possible, but if you have that, you can then apply reverb and delay effects to get a sense of space, whereas the space that you actually had available was terrible. 
You know what I mean? So that's a big issue: just controlling the space and kind of eliminating it from your recordings. I was interested when you said you had all these different libraries that you could use, and I know that people will always lament that with digital you lose all this wonderful retro analog stuff, and I was wondering if you could talk about whether some of the libraries in there are sort of designed to give you that back, like simulate ribbon mics and Leslie effects and all those weird retro kinds of things. Yeah. That stuff is not so great, at least as far as I was able to determine, within the plugins that are available in Ardour. There is a bunch of distortion plugins that you can kind of add in a little bit, actually, speaking of which, to get you into that sense of equipment running near its limits, hearing the effect of circuits that are kind of starting to maybe have a problem or not. And the compression does some of that same stuff; it can start to correct for something that was recorded conservatively, where you don't get that sound and it sounds kind of limp and anemic, and if you run it through a compressor, that can be one way to start to get some of that life back, even though I said before that compressors are going to suck the life out of everything. For everything I say, the opposite is also true, which is, I don't know, very frustrating about recording, but it's also one of the things that keeps me interested in it. Now seems like a good time to play the bass; there's also some parallel processing on it. And what's that? Just a follow-up on that: the Ardour software will load Windows VST plugins, and that opens up a much vaster library of effects that you can possibly use. Yeah, it absolutely will. And why didn't I? I don't know that there's a... I didn't honestly look into it all that hard. I don't know if there's a huge library of free VST plugins. Yeah, that's great. Yeah, cool. 
So yeah, worth noting, there are absolutely better options there. I also had stability issues still doing this, even this year, so adding another layer of...

It doesn't run them natively, does it? You need to install Wine, the Windows emulation layer.

Right. But I think you can actually load them straight in now without hooking up JACK or anything.

Yeah. If things had run swimmingly and I hadn't had any crashes or weirdness, I would have gotten more adventurous about adding in more layers of software, but I was scared off by somewhat frequent crashes and issues. You're absolutely right, though, and there are many more options besides the Calf stuff, even just among what comes with it, and many other options you can find online to enhance what you've got.

What were we doing? We were thinking about... oh yeah, listening to the bass. So this is what the bass guitar sounded like without compression. Sounds like a bass guitar. Sounds exactly the same to me from here. This may not be the system to demonstrate the effect in a really pronounced way. However, there was also some parallel processing applied to that: there was a second bus of bass that's distorted, and the sum of them sounds like this. What that second bus running in parallel did was add some distortion. It's actually funny that you talked about getting those analog sounds back, because there is one, I think they called it the tube saturator or something like that, and that's what was used there to get the distortion. What distortion tends to do is increase the upper harmonic content, so from that very low bass guitar you suddenly get it speaking a little bit more in the midrange, and in this track that seems to be beneficial.

We have time for one more quick question. I'm very conscious of the time, but I'm wondering if, in about three minutes, you could tell us how not to go horribly wrong with reverb? Yeah.
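As a hypothetical illustration of that last point (a generic soft clipper, not the actual plugin used in the talk), running a pure low tone through a tanh waveshaper flattens its peaks and generates upper harmonics, which is what pushes a bass up into the midrange:

```python
import numpy as np

N = 1024
n = np.arange(N)
fundamental = 8  # cycles per window, so it lands on an exact FFT bin
clean = np.sin(2 * np.pi * fundamental * n / N)

# Soft clipping: tanh flattens the peaks of the waveform, adding odd harmonics
driven = np.tanh(3.0 * clean)

clean_spec = np.abs(np.fft.rfft(clean)) / N
driven_spec = np.abs(np.fft.rfft(driven)) / N

# The clean tone has essentially no energy at the 3rd harmonic;
# the distorted copy does, an octave and a fifth above the fundamental.
third = 3 * fundamental
```

Summing this distorted copy with the clean bass on a parallel bus, as in the session described, keeps the original low end while the added harmonics carry the note in the midrange.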
When you say horribly wrong, do you have specific problems in mind, like turning it into mud?

I'm just conscious that you do tend to use reverb in recordings, particularly on vocals, and it's an area where things can go very pear-shaped very quickly. With what you've talked about so far, I wonder whether you have similar recommendations, or comparative recordings, around reverb.

Yeah. I don't have recordings, but the big thing reverb will do is muddy things up really fast. It can, anyway. So often what I find is that in places where you'd think you'd want to use reverb to create a virtual room to put the sound into, you're going to use delay instead. That kind of gives the same sense of space. It's not really the reflections you'd get from a reverb plugin, or from an actual room, where when I'm talking to you, you're getting the sound from the speakers directly but you're also getting it bounced off the walls, which takes longer to arrive. But putting on just a little bit of delay, on the order of under 20 milliseconds, not super loud and without a whole lot of feedback, can give you the sense that something is seated a little further back, put it a little more in a space, which can help sit things into the mix. It's the classic thing with vocals, right, where you do all this work: you compress it a lot and you EQ up the frequencies that let you hear it, and all of a sudden the vocal sounds like it's right in front of you and the band is back here. Adding a little delay to push it back, once you've brought it too far forward, can be a useful thing. And also: use it when there's space, not when things are really full and there's a lot of information happening across all frequencies, and not when the tempo is too fast.

That's all we've got time for, unfortunately. Thank you very much.
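A minimal sketch of that short-delay trick, assuming NumPy and a 48 kHz sample rate (both my assumptions, not stated in the talk): one quiet copy under 20 ms behind the dry signal, with no feedback.

```python
import numpy as np

def seat_back(x, sr=48000, delay_ms=15.0, mix=0.3):
    """Mix in a single quiet, short echo (< 20 ms, no feedback) to push a track back."""
    d = int(sr * delay_ms / 1000)  # 15 ms at 48 kHz = 720 samples
    y = x.astype(float).copy()
    y[d:] += mix * x[:-d]           # one delayed copy, well below unity gain
    return y
```

Feeding an impulse through shows a single quiet reflection 720 samples later; at delay times this short the ear hears placement rather than a distinct echo.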
I can present you two with one of these beautiful things. Awesome, thank you. Thank you very much for your attention.