is that the bug wasn't in Blender itself, but in a library that Blender used. It was pretty annoying to spend a whole day searching the Blender code, and I found out that the Blender code base regarding audio was, let's say, ugly. So I decided to do the whole thing from scratch, and since 2.5, Blender has been using my audio code. So, Blender audio. I'd like to know who of you has used Blender audio before. Okay, not that many people. And did you have a problem with it? No? Okay. So, as everybody knows, Blender audio is one of the main features of Blender and it has worked all the time, and yes, I'm being sarcastic, but I hope that this presentation will introduce you to Blender audio and its use cases, and that you might want to use it later. The contents of this presentation: first I will give a short introduction to what sound, audio and music are, and since most of you are computer graphics people, I will draw some comparisons to how things work in graphics. Then I will talk about Blender and audio: what works, what doesn't, and what it can be used for. Then I will show the video sequencer, and then 3D audio, which is the project I worked on during this summer. As a last point, I have some other topics that are possible with Blender. So, audio, sound and music. Sound is a wave, a pressure wave travelling through the air, and it works in some ways similarly to light waves. The difference is that light is a transverse wave: if the light travels forwards, the wave oscillates left, right, up and down. Sound is a longitudinal wave: if the sound travels forwards, the wave also oscillates forward and backward. The digital representation of sound is called audio: you have a physical wave, you take samples of it, and then you get audio and can use it in your computer.
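That sampling step can be sketched in a few lines of Python; the rate, frequency and duration values here are purely illustrative:

```python
import math

# Digitizing sound: take discrete samples of a continuous pressure wave.
SAMPLE_RATE = 44100   # samples per second, comparable to frames per second
FREQUENCY = 440.0     # a 440 Hz sine wave (concert pitch A)
DURATION = 0.01       # capture 10 milliseconds of audio

num_samples = int(SAMPLE_RATE * DURATION)
samples = [math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
           for n in range(num_samples)]
```

Each entry of `samples` is one measurement of the wave's amplitude; play the list back at the same rate and you reconstruct the sound.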
And the properties of these audio waves in the computer: first of all the sample rate, which you can compare to the frames per second you have when doing an animation. Then you have the sample format, which is 16-bit most of the time, and which you can compare to how your pixel data is stored: if you have 8 bits per color channel, that's exactly what the sample format in audio means. And last but not least, you have the channel count. You have not only one speaker, but probably stereo, or in a home cinema you have 5.1, for example: five speaker channels, front and rear, plus a subwoofer. That's what the channel count tells you. You can compare that to the two images you need for stereoscopic 3D visuals, which are used quite frequently now in cinemas. And then we come to music. For music, people use notes, and that's probably what you can compare to pixels in graphics. Every note has a pitch, which roughly compares to the color; the volume, which is used for dynamics, you can compare to the value, so in the hue, saturation, value color model, frequency would be the hue and volume the value. The duration is also important. For still pictures you might think there is no duration of how long a pixel is displayed, but actually the duration is infinite. You could do the same with sound, but if you have the same sound playing for an hour or so it gets pretty annoying, so you normally use a shorter duration for sounds. The last note property is the timbre, or texture, which is a quality of the sound that lets you tell apart a piano playing the same note as a violin, for example. Okay, and now, what does Blender have to do with audio? Here you can see a list of use cases. The first three have basically been available before 2.5.
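Two of these properties can be made concrete in a small sketch (the function names are mine, not Blender's): pitch maps to a frequency via equal temperament, and the 16-bit sample format stores each amplitude as a signed 16-bit integer, just as 8 bits per channel stores a color value.

```python
def note_to_frequency(midi_note):
    # Equal temperament: A4 (MIDI note 69) is 440 Hz, 12 semitones per octave
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

def to_16bit(sample):
    # 16-bit sample format: map an amplitude in [-1.0, 1.0] to a signed
    # 16-bit integer, analogous to 8 bits per color channel in an image
    return max(-32768, min(32767, round(sample * 32767)))
```

For example, `note_to_frequency(69)` gives 440 Hz and `note_to_frequency(81)`, one octave up, gives 880 Hz.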
The video sequence editor has its own audio strips, the game engine had its own sounds, and you had the audio window, which basically only shows the waveform and which you could use for lip sync. All of these systems were pretty disconnected and had their own code, and this is unified now. The new feature since this summer, since 2.6, is 3D audio, where you can place speakers in your scene and render a 3D audio animation; we will come to that later. Sound-based animation is another topic: you can use any sound file to create an F-curve and base your visual animation on the sound. And then we have non-use cases. People have asked me for some of these points here, but Blender is not intended to be a digital audio workstation that can do everything. In German we have a word for this, the "eierlegende Wollmilchsau", which translated word by word means egg-laying wool-milk-sow: an animal that provides about everything. But Blender is mainly a 3D graphics application, not an audio application. There are open source audio applications, though, and you can use them in combination with Blender, which is also what I'm going to show. Okay, the sequencer is pretty straightforward in its usage: as with any movie strips, you can add sound strips and do some cutting, moving them around, trimming, et cetera, and since 2.6 you can display the waveform directly in the sound strips, so we don't need the audio window anymore. I'll show you an example of this in 2.6. I just open the sequencer, add a sound effect, and have a music file here. Now I can move this around, I can cut it and move the pieces around too, and you can hear the weird effect, because I just moved it so that the sound basically plays twice.
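That "plays twice" effect is simply what any mixdown of overlapping strips does: the samples are added together. A minimal sketch of the idea (not Blender's actual mixing code):

```python
def mix(track_a, track_b):
    # Mix two sample lists by per-sample addition, clamped to [-1.0, 1.0];
    # where only one strip has samples, the other contributes silence (0.0).
    length = max(len(track_a), len(track_b))
    mixed = []
    for i in range(length):
        a = track_a[i] if i < len(track_a) else 0.0
        b = track_b[i] if i < len(track_b) else 0.0
        mixed.append(max(-1.0, min(1.0, a + b)))
    return mixed
```

If a strip overlaps a copy of itself offset by a fraction of a second, the summed result is exactly the doubled, echo-like sound heard in the demo.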
Then, under the properties, you can enable drawing the waveform of the sound, and this is exactly what you can use for lip sync: you basically see what the volume at each position is. So if you have a speech audio file, you can tune your animation to match these volume structures. Okay, the next new thing in Blender 2.6 is sound animation. Volume animation has been there before, but we now also have pitch and panning, and I want to quickly demonstrate this. I've moved these two strips above each other, and now I would like to blend them so that one gets silent and the other gets louder. So I just animate the volume here, set it to zero there, and I have a smooth transition. That has been possible before; what I can also animate now is the panning. But for panning to work I need a mono sound file, because a stereo sound file already carries the channel information, so I should have added this one as mono. I can change the setting afterwards, but I have to use the outliner: we have the sounds here, it's the second one we loaded, and I set it to mono. Now we can animate the panning. I start at zero, and just to demonstrate, you should hear how it goes from left to right. That's panning. And the probably most interesting-sounding animation we can do is pitch animation. A pitch of one basically means that the playback speed is normal, and if you play faster, the pitch also rises. What basically happens is that the sound file is played faster, and this changes the frequency of the sound. Here it gets lower, and back to normal again, just for effect. Okay, this is how pitch animation works, and unfortunately we have one problem with this animation type.
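The volume crossfade and the pitch/speed coupling just demoed can be sketched like this (illustrative helper names, not Blender API calls):

```python
def crossfade_gains(frame, start, end):
    # Linear volume crossfade between two overlapping strips:
    # strip A fades 1 -> 0 while strip B fades 0 -> 1 over [start, end].
    t = min(1.0, max(0.0, (frame - start) / (end - start)))
    return 1.0 - t, t

def played_length(num_samples, pitch):
    # Pitch in Blender scales playback speed: a pitch of 2.0 plays the
    # strip twice as fast, so it covers half as many timeline samples.
    return int(num_samples / pitch)
```

`crossfade_gains(50, 0, 100)` returns `(0.5, 0.5)`: both strips at half volume at the midpoint. `played_length` also hints at why an animated pitch makes the effective strip length hard to predict when seeking.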
Ton already said that there's no real dependency graph for animation yet, so we have some difficulties calculating all this, and we can't yet calculate the actual change of the strip length. So with pitch animation, if you seek somewhere in the middle, the playback position may not be correct anymore; to make sure it works correctly, you might have to start from the beginning to get the right playback positions. I hope we can solve this problem at some later point. Okay, and then, as I've already said, I'd like to show how you can use Blender with other open source audio software, so that you can do all the fancy stuff that pro audio applications provide. For this there is an audio server called JACK, which you can use on Linux, and here I have OpenOctave Midi, or OpenOctave Studio, another open source project from a more or less well-known Blender user called Christopher Charrette, and I've got an example session here from him. Ah, it doesn't work: JACK needs exclusive control over the sound card, so I just have to kill any application that's using audio. You can see that this is a composing application where you can place notes for different musical instruments, and in the background the pro audio server JACK is running. I want to show how you can use this with Blender, so I start Blender again, and in the user preferences I set the audio output to JACK. Now I can combine the playback of Blender with the playback of all the other JACK applications. So what I'm going to do is add a movie strip, this one, let it start at frame zero like the animation, and then enable audio/video sync, which basically connects Blender to JACK so that every application that uses JACK plays back at the same time. I would like to show this to you: I just press play, and you can see it's playing the video, and at the same time OpenOctave is playing the audio, and I
can, for example, move the playback cursor in here, and if I stop the animation, every other program also stops playing back. The next thing is 3D audio, and you might wonder what 3D audio actually is. These are the basic effects: we have different calculations of distance-based volume; then there's the Doppler effect, which you might know; directional hearing, so that you can hear the sound coming from the left or right, or from behind or in front if you have more than two speakers, which is basically multi-channel panning; directional sources, which are kind of like spotlights, so if you add a speaker object you have something like a point light, and you can set angles on it to turn it into a directional source; and last but not least there is simple volume and pitch animation as well. Okay, so what properties do we have here? For distance-based attenuation you can set a distance model in the audio settings of the scene, here. The default model is physically correct, but you can also use an exponential or linear falloff of the volume. Linear would mean: if you are one meter away, the volume is 100 percent, and as you move five meters away, the volume falls off linearly rather than more strongly. Then you can set a maximum and a minimum volume: no matter how far away you are, you will always have at least the minimum volume, and at most the maximum volume. You can also set a reference and a maximum distance. The reference distance tells where the sound has a volume of 100 percent, and the maximum distance tells where the lowest volume is reached. Then you have an attenuation factor, which works differently depending on the distance model and influences how fast the volume drops the further away you go. Then the Doppler effect is an effect based on the speed of sound. You have surely heard it already: if a Formula One car, for example, passes you, you can hear the sound, and basically the engine is playing a constant sound, but
what you hear is a pitch shift, which is created by the Doppler effect and is based on the fact that you are standing still while the other object moves. To demonstrate this I have an example prepared. I'll first stop this and use normal audio again, and now I can demo this scene: we have an ambulance which is moving at, let's say, high speed, and the camera is following it. I just play it like this. Okay, now you hear some strange effects; that's because, due to the animation problems we have, we sometimes get wrong animation values and have to update the animation cache, which is what this button does, and that should have fixed it. I think you heard the effect when the ambulance passes. Okay, we can hear the Doppler effect. This is the original sound, so the pitch of an ambulance siren is changing anyway, but I think you can hear the difference: exactly when the ambulance passes, the pitch gets lower. Okay, that was the Doppler effect. The settings you have here are the speed of sound, and you can also set the Doppler effect factor, which is not physically based, but you can exaggerate the effect a little with it. Okay, then we have directional sources, where you can basically define a cone that tells how loud the volume should be. There are two cones: inside the inner cone you have full volume; outside the outer cone you have the specified outer volume; and between the inner and outer cone you have an interpolation of the volume. To demonstrate this I want to show the test file we have. All these settings also work in the game engine, but in the game engine you have to use the sound actuator instead of the speaker objects; they have mostly the same capabilities. You can hear the same siren sound again, and you can hear a difference when you're inside the cone and, though it's a bit difficult to see, when you're outside the cone, but
this doesn't have that many use cases on its own. But imagine, for example, you're in a room, and when you enter through the door you want the volume to get louder: you place a speaker with a cone angle of 180 degrees, so that on one half you get full volume and on the other half less volume, and you hear different volumes depending on whether you're inside or outside the room. Okay, other topics. We have the sound-to-F-curve operator, which can bake a sound to an F-curve for animation. It has a lot of properties, and it's quite difficult to explain them all, so my tip is to just play around with them and see what they do. Just a quick example: I have a cube here, I quickly insert a location animation, and in the graph editor I now have three F-curves. I can now bake a sound to these F-curves; I will use this sound, bake it, and now you can see that the animation is based on that sound. If I quickly add the same sound to the video sequence editor, you can see the cube moving to it. I could show you a video of this, which is online, but I fear the internet is not fast enough, so I'll let it load and maybe we can watch it later. Then one more thing we have in Blender is the game engine audio. Here you can't use the new speaker objects, but you have the same settings with the sound actuator, and additionally you have a playback mode: you can set the mode to play and stop as soon as the sound ends, or stop at another time, and you can say that the sound should loop or play ping-pong. For this I will also show you the test file we have, where I have keys to demonstrate the playback modes. This is our test sound, and if I press the first button you hear the sound playing exactly once and stopping as soon as I release the key. The second playback mode is "end": it plays until the sound ends, so if I release the key, the sound finishes
playing. Then we have the looping mode: if I hold the key down, it loops all the time, and this also comes in two variants, stop immediately, as you just heard, or stop as soon as the next end of the sound is reached. And then we have ping-pong, which plays the sound forward and backward in a repeating manner, again either until I release the key or until the complete sound has played. Okay, the last thing I have here is Python scripting, which was my Google Summer of Code work the year before, so not this summer but the summer before. During that time I wrote the Python API for the audio library, and I will try to get the demo I did for this; it's loading slowly, let's see. This is the demo on audio animation, a short demo I made, so we can have, for example, an equalizer effect by using the sound-to-F-curve operator. But I see that it's loading really slowly, so we'll switch to the Python demo now. What I have to do is create a script, register the script, and... okay, something is wrong here, I don't know, maybe it's missing the main function. Let's try... same problem. Let's try the text editor. The text editor works, so I just import the script and run it. All this script does is synthesize the sound with the Python API: I have no sound file in the background, I just have the script with a string of notes, and then it uses the Python API to play the sound. Just to quickly walk through it: I create a sine tone with a frequency based on the string, so on the notes; then I set the volume to zero for a pause, or use a square wave so that we get the original Tetris effect; then I limit the sound and fade it in and out; and then I chain the different notes together and get the Tetris tune out of the script. The advantage is that you can use the Python API everywhere: in the game engine, or in Blender itself. Okay, let's
see, time is nearly over, so to finish, some future points. We have head-related transfer functions; that would be interesting. This is basically a technique to simulate the direction of the sound, whether it's coming from in front or behind or above or below someone, through headphones rather than speakers. It's a bit difficult to explain, but it's based on the anatomy of our ears: the sound is changed depending on the direction it comes from, and head-related transfer functions use exactly this to create directional sound from any direction with just the two speakers of your headphones. Then we have automatic lip sync as a possible audio target: you just import the sound file, Blender detects the phonemes in it, and you can connect the phonemes to your rig animation, so that you import the sound file and immediately see your face rig speaking. And one really advanced topic is reverberation, so that you can hear the sound bouncing around in a room; it's basically a physical simulation, similar to light bouncing around, which can be done with audio too. The sound is different when you speak in a cathedral than when you're outside, you get echoes and such things. But the problem is that these are really advanced topics, and as I had no audio coding knowledge before I started working on Blender, they're a bit out of my scope; if anyone's interested in joining the Blender audio development team, I would really appreciate that. Okay, and just to show a final example, which you have probably seen already: this space scene, with models from Blend Swap, that I created to demonstrate the 3D audio effect for my Google Summer of Code project. It's a bit slow on this laptop, but I hope you can hear, for example, that depending on where the laser shot is coming from, you hear it right or left. The Doppler effect in this example
is pretty physically correct, so it's not really easy to hear, but you can also hear that the volume depends on how far away something is; you have to listen very closely, for example, to hear the shots from the other spacecraft, they're really quiet. Okay, so that's 3D audio. Do you have any questions? [Question from the audience.] There is basically no limit; it depends on the output device you have. The normal output device in Blender is OpenAL, and on Windows it depends on your sound card and on your driver, which supplies OpenAL, how many sources can be played. On Linux we have OpenAL Soft, which is a software audio renderer, and it basically has unlimited sources: as long as your CPU can calculate it, or as long as you have enough memory, you can use as many sources as you wish. Further questions? [Question from the audience.] So, you want mixing of audio in Blender? Well, what the sequence editor basically does is mixing audio. I'm not sure I understand you correctly; you want an external audio source to play the animation inside Blender? For real-time applications, where you can't use a sound file? Okay, so you want a more interactive version of this, so that you can change it dynamically. The reason this is a bake operator is that doing it in real time is pretty slow; we had a modifier before, so you could dynamically change the sound, but it was basically unusable because it was really too slow. This is partially caused by the animation system, which unfortunately is laid out for graphics, not for audio, and there are some problems with that; it's also the reason we have this "update animation cache" button. I wish it were different, but maybe we can tackle this when we have a better dependency graph and such things. Any other questions? [Question from the audience.] Yes, there is a setting, you can see here the
format, where you can choose how many channels you want to output. Yes, those are presets; you want a dynamic speaker setup? Basically it's possible, but it's not coded yet. Further questions? [Question from the audience.] That's a good point: the ears are always on the active camera at the moment, and as we don't have head-related transfer functions yet, we don't need to define where the ears exactly are; it's just a single point where the camera is. With head-related transfer functions we will probably need another mechanism, but maybe we will add microphones later, so that you can set where listeners should be. At the moment we just listen from the camera's point of view. [Question from the audience.] Head-related transfer functions were the last, optional point of this year's Google Summer of Code project. I'd like to implement them, but it's quite complicated and at the moment I don't have the time. I also had automatic lip sync as a Google Summer of Code proposal, and I think that would be the better feature to implement, as more people will need it: all that audio stuff is fine in Blender, but there are other applications that are also capable of it, and you can already link Blender to them, whereas automatic lip sync is something that really has to do with the things people do with Blender. Nobody uses Blender to play audio; they use it for animation, and for that, automatic lip sync would be my primary target next. [Question from the audience.] That shouldn't be a problem as soon as automatic lip sync is there; it depends on the algorithm. I don't have it implemented, so I don't know if it's real-time capable, but if the algorithm supports it, then why not; you could use it in the game engine too, for example. [Question from the audience.] Yes, the audio library supports it, but so far there's no use case for it in Blender. Okay, any other questions? Then I thank you for listening.