What I'm starting up is Rosegarden, which is a fantastically good MIDI editor; Qsynth, which is a synthesizer; and we should also get JACK running in the background. That's the system capture device, so that's your input microphone. I'm just disconnecting it from Rosegarden in case it feeds back like it did in testing. Okay, so we just heard a piece by Mozart. I'll take you quickly through what Rosegarden has to offer, and then I'll move on to Ardour. Just here we see a small preview of what's going to be played, so you can get an idea of what you're looking at and what you're about to hear on screen if you're going to do some editing. If we have a look at the Notation Editor, this is the actual notes being played, and this is probably the strongest point of Rosegarden for me: you can write notation as you're thinking about it, and then have it played back for you whenever you need it, which is just beautiful. So for example, if I take a note, I can drop it on the screen. So that's the Notation Editor, which is very handy, but for most uses of Rosegarden you'd use the MIDI editor, which looks like this. You've got your keyboard down here, and you write your note by placing a value against the keyboard, with the MIDI data: note duration, velocity, everything that you need to instruct the computer how to play that note. That note gets passed from Rosegarden through to the MIDI synthesizer, which in this instance is Qsynth, and I'll show you a little bit of that later on. There's more than a few super good synthesizers for you in Linux; this is one of its strong points. Just to give you an idea of what sort of things we're looking at here in MIDI: this is an example of a note, one E note. It's a bit harder to see at this resolution, but let's take that one.
So that's the time the note starts, the duration the note goes for, the pitch, D4, and the velocity that it comes in at, plus various other bits of information that you're passing through to say: this is exactly how I want the note played. Obviously it takes a lot of time to put in MIDI manually. Most people don't do it; they'll use a MIDI controller, a piano keyboard or guitar or something like that, which can pass MIDI signals live into your computer. There are some very, very patient people who like to program computers for music, but I'm not really one of them. I'll show you a little bit about Qsynth now, just to give you an idea of how the synthesizer works. Each one of these tabs is a preset sound, let's say. Let me show you JACK. JACK is the thing that's connecting everything together, so I've got two instances of Qsynth up, I've got Rosegarden, and I've got my system inputs, and this is all under the Audio tab; there's also a MIDI tab and an ALSA tab. Under the MIDI tab I just added a virtual keyboard, which I can connect into my Qsynth. And it's just like a patchbay: basically you've got every single input and output device of your open and running applications, and you can connect them all to each other however you want. It's particularly useful when you're processing sound or making effects or connecting up your effects rack to your recording device, and it's very, very easy to use. So in that way I've connected this keyboard up to Qsynth, and I could play that just like an ordinary piano if I had really great skills with a mouse. So normally you'd have a MIDI device plugged into your laptop and be playing it that way, because it's a hell of a lot easier than using the on-screen keyboard. That's about all. I'll show you one more thing; I'll just kill a few of these extra applications. So Rosegarden is excellent for MIDI work. A lot of the work I use it for is recording audio directly into the computer.
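The note fields just described (start time, duration, pitch, velocity) all end up as ordinary MIDI messages on the wire. As a rough sketch, not tied to Rosegarden's internals, here's how a single D4 note would be encoded as raw note-on and note-off bytes in Python; the function names are mine:

```python
def note_on(channel: int, pitch: int, velocity: int) -> bytes:
    """Raw MIDI note-on: status byte 0x90 ORed with the channel,
    followed by 7-bit pitch and velocity."""
    return bytes([0x90 | (channel & 0x0F), pitch & 0x7F, velocity & 0x7F])

def note_off(channel: int, pitch: int) -> bytes:
    """Matching note-off (status 0x80), velocity 0."""
    return bytes([0x80 | (channel & 0x0F), pitch & 0x7F, 0])

# D4 is MIDI note 62 (with C4 = 60); velocity 100 on channel 0.
msg = note_on(0, 62, 100)
```

The note's duration isn't in the message itself: the sequencer simply sends the note-off that many ticks after the note-on, which is why the editor stores start time and duration separately.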
A lot of my gear is very cheap; like Jonathan was saying, the FireWire devices are quite expensive. This is my first version of a home studio on Linux, so the hardware, as we've seen earlier, ain't that reliable, and the interface that I'm using is just the AC97 controller that comes on the laptop, and the instruments that I use are, you know, a $100 guitar and a $100 keyboard with a cord between them. So it ain't high-tech stuff, and it does have a lot of hiss on the line. But as you heard before, if you're just working within the laptop, the sound quality isn't too bad. So this is just a test piece I did: three guitars running one after each other, with a rhythm section on top of it. So you get the hiss from three guitar lines and a keyboard line coming through on every single track. So bear with it, it's quite raw, but you kind of get the idea of what the thing can do. It's not bad, and Rosegarden's pretty good for this sort of stuff, but there's no real control over the inputs that you've got and the mixing and the sound quality that you're getting out of it. So if I'm going to do recording of audio, I usually use Ardour. Mind you, this stuff is just me playing around, so it's not being used for anything except for my fun. It's a little bit hard to capture it all on screen here, but this is a rather large piece, which I'll play to give you an idea of what it sounds like. So with Ardour, each track here, not including your master track, which is what everything gets mixed through in the end on the way to your outputs, is designated as an instrument. So I've got a Misa loop; this is a piece of music that I stole from the internet. There's an instrument out there called the Misa digital guitar, which is basically a MIDI guitar with a MIDI interface on it. It's very, very naff looking and is an open-source, Linux-based machine. So check it out, it's quite funny. And also quite good, mind you.
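One common cleanup step for the kind of line hiss the speaker describes is a noise gate. This is a deliberately crude sketch of the idea in Python, not something Rosegarden or Ardour is claimed to do internally; a real gate smooths the transition with attack and release envelopes rather than hard-zeroing samples:

```python
def noise_gate(samples, threshold=0.02):
    """Zero out any sample whose magnitude falls below the threshold,
    so low-level hiss between notes is silenced while the signal passes.
    Real gates add attack/release smoothing; this is the bare idea."""
    return [0.0 if abs(s) < threshold else s for s in samples]
```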
So I stole a bit of that loop from the internet, used a looping piece of software called FreeWheeling to generate something like a rhythm track, and laid drums and keyboards and guitar over the top of it. I'll mute these to start with so I can bring them in one by one to make it a little bit more interesting. Music going in. So each one of those is an iterative take: I've started off with a drum track, or a rhythm track; I've added drums on top of that for the next track; I've routed guitar on top for the next track, keyboards on top for the next track, vocals on top, and so on. And the nice thing about working with Ardour instead of Rosegarden for this sort of work is that when you have, say, a couple of different instruments, you can do a lot better balancing in terms of the sound stage. So you can shift your guitars all the way to the right, your keyboards all the way to the left, you can have the instruments out the front. It makes mixing a lot easier. It does have a very nice mixer, if you can see that there. Each column here represents each row within the waveform editor, and you've got pan controls down there and individual volume on each channel. Plus each channel can be separately routed, or separately looped, in and out of Ardour. So let's just say I wanted to have one of my keyboards going out to an effects rack and then coming back to re-record at a slightly different pace, say maybe to simulate an echo or a reverb. I can pass that out to an effects rack and then pass it back into Ardour, and do any number of processing bits on that sound wave. I'll shut that down for now. So once I've finished with the track using Ardour or Rosegarden, I'll export it to a file. Usually I'll use a WAV file because it's easy for me to pass around to all my other laptops or sound devices. Audacity is what I'll use to do final mixing or mastering of the work. It's nice and basic and easy to use.
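The sound-stage balancing just described, guitars hard right, keyboards hard left, comes down to a pan law in the mixer. A common choice is the constant-power law; this sketch is illustrative only and isn't claimed to be Ardour's exact implementation:

```python
import math

def pan_gains(pan):
    """Constant-power pan: pan ranges from -1.0 (hard left) to 1.0
    (hard right). Returns (left_gain, right_gain); because the gains
    are cos/sin of the same angle, left^2 + right^2 is always 1, so
    perceived loudness stays even as the source moves across the stage."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)
```

At centre both channels get about 0.707 (-3 dB) rather than 0.5, which is why a centred track doesn't sound quieter than a hard-panned one.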
It lets you see the waveforms easily and it doesn't complicate things too much. So I think this is an example of that piece of music there. You can see the waveform there on both channels, left and right, has been clipped at the front so that it fades in and clipped at the back so that it fades out. Because I was working with that really rough loop at the beginning, I had to make a nice big synthesizer squeal over the top so you didn't notice the jarring start of the loop, and then I had to fade in over that because it was too loud and was drowning out everything else. It's been slightly compressed, because the work I did was so loud when I recorded it the first time. I'll just show you what it is; start from the beginning. Just to give you an example of the sort of effects that are available in this, we might run a phaser over it. This is not normally the thing I'd do to a, you know, finished song, but it would be useful if I had a particular guitar track or line or something that I wanted to give a little bit of a different sound. So you can see the differences in the waveform from all of this. So that's basically the process. The main thing, I suppose, that I want to bring out of this is the quality of those applications that you're using: Rosegarden and Ardour and Audacity, including JAMin, which I didn't show you because it doesn't run that great on my low-CPU computer, are all really, really good for your sound processing jobs. Using Macs, using PCs, I've never been able to get the quality or the stability of the system that I do with Linux. It's not generally recognized as a sound platform in the pro audio world; not being a pro audio technician I can't say for sure, but there are very few studios out there that will be using Linux.
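The fade-in and fade-out clipping described above is just an amplitude envelope applied to the edges of the waveform. As a minimal sketch of what an editor like Audacity does with a linear fade (the helper name is mine, and real editors usually offer curved fades too):

```python
def apply_fades(samples, fade_in, fade_out):
    """Scale the first `fade_in` samples up from silence and the last
    `fade_out` samples back down to silence, a linear ramp at each end."""
    out = list(samples)
    n = len(out)
    for i in range(min(fade_in, n)):
        out[i] *= i / fade_in              # ramp 0.0 -> ~1.0 at the front
    for i in range(min(fade_out, n)):
        out[n - 1 - i] *= i / fade_out     # ramp ~1.0 -> 0.0 at the back
    return out
```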
There was one in Melbourne for a long time called Laughing Boy Records that had a very nice setup, but they closed down last year or so. But really, if they knew the capabilities of this stuff and had the ability to use it, with the kind of support that you get in the Windows and Mac world, then considering how rock-solid the platform is and how good the software that runs on top of it is, it basically kicks ass over the rest of it. So that's just to give you an idea of all the different applications and all the utilities that are available. So that's about it for me. There's a lot that I could show you, but I think we've probably run out of time, so have you got any questions? Have I? They gave me the five minute card. So if we've got a bit more time, we might have a look at maybe some of the loopers and things like that, because they're quite interesting in what they can do, if we can get it to run properly. So this is a looper called FreeWheeling. Basically every single key on the keyboard allows you to hold a loop, so if I get some input of some sort, and this is going to sound really daggy because I think the only input I've got is my external mic. I'll just use one channel to try and stop it. So that's just running in a loop by itself, and you can hear me saying I'm recording that. These sort of things you can use over and over for beats or whatever, plus you can also feed your instruments into it. We'll also have a look at maybe some of the synthesizing software that comes with it. ZynAddSubFX is a pretty old synthesizer for Linux; it's been around for a while and it's good. Like most synthesizers it's kind of based on a DX7 style of thing. You can choose a whole bunch of banks and preset them up so that you've got them ready to play if you're doing a performance or something like that. And as before, it needs to be connected up with JACK so that you can actually use it to do anything. Audio out.
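The looping behaviour just described, where a phrase keeps playing back while new input gets layered on top, boils down to an overdubbing ring buffer. This toy Python class sketches that core idea; it is not FreeWheeling's actual code:

```python
class Loop:
    """Fixed-length overdub loop: each pass through the buffer mixes
    new input on top of what's already recorded, then plays back the mix."""
    def __init__(self, length):
        self.buf = [0.0] * length
        self.pos = 0

    def process(self, sample):
        self.buf[self.pos] += sample      # overdub the incoming audio
        out = self.buf[self.pos]          # play back the layered loop
        self.pos = (self.pos + 1) % len(self.buf)
        return out
```

Feeding silence just replays the stored loop; feeding new audio layers it in, which is how a beat gets built up pass by pass.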
I suppose the nice thing with working on Linux is that when you do work with the instruments inside it and you want to do something a little bit more interesting, you've got an immense number of tools to work with. We might maybe have a look at JACK Rack. So JACK Rack is a series of plug-ins, like a guitar pedal with a whole bunch of different noises in it; this is basically the same sort of thing. So we're going to throw a bit of distortion upon that nasty keyboard sound that we had before. That's what it sounded like. When we plug it into JACK, we just connect it from the playback, plug it into the JACK Rack in, and put the JACK Rack out to the system playback. In that way you can build up loops and loops and loops of your rack FX, or the different applications that you want to run through, to make as many sounds as your heart desires. I think I might leave it there. If anybody else has got anything they want to ask, go ahead. Someone's composing using Sibelius for orchestral score; how would what you can do here compare with what he can do with that? Sibelius is a sampled orchestra package. So a company's gone out and gotten all the different instruments from the orchestra, sampled every instrument within every range within certain parameters, and then produced that as a package which you can then load into your system on whatever platform, PC or Mac. There's a huge industry built around Mac and PC sound systems. Vendors who produce things like Sibelius market directly to those industries and produce their packages solely to run on those platforms. Linux doesn't compare in that direction. When you're talking about full orchestral scores, there is some stuff out there, and there's some very good stuff out there. As you heard, that piano sound that I played first off is not that bad. But yes, there is a difference between Linux and the PC and Mac world in that respect.
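Patching the playback through JACK Rack and back out to the system, as described above, is just effects processing in series. This sketch models a chain of per-sample effect stages in Python, with a tanh soft-clipper standing in for the distortion plug-in; the stage names are mine, not JACK Rack's API:

```python
import math

def gain(g):
    """Simple gain stage: scale the sample."""
    return lambda s: g * s

def distortion(drive):
    """Soft-clipping waveshaper: tanh squashes peaks toward +/-1,
    the rough character of an overdriven distortion pedal."""
    return lambda s: math.tanh(drive * s)

def chain(*stages):
    """Run a sample through each effect stage in order, like patching
    JACK clients in series: source -> rack -> rack -> system playback."""
    def run(sample):
        for stage in stages:
            sample = stage(sample)
        return sample
    return run

fx = chain(gain(2.0), distortion(3.0))
```

Because each stage is just a function from sample to sample, adding another "rack" to the chain is one more argument to `chain`, which mirrors how the patchbay lets you keep inserting applications into the signal path.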
There's a lot of products out there that just won't run on Linux, that a lot of money and a lot of development gets put into. Under Wine? Under Wine, yes. There's a package called FST. It will load a VST plug-in and expose JACK ports for the plug-in's processing. Nice. It is very nice, actually. You can use that as the core of your audio setup and then actually use it. It opens you up to a lot of software, because there's a lot more made for those platforms. I'm still working through the plethora of stuff that I've found in the past couple of years, and I'm nowhere near exhausting the stuff that I'm playing with. Probably the other thing that was just pointed out, too, is that in a lot of cases Linux has the programs to be able to play back your orchestral stuff and whatnot, and when you look at programs like Sibelius, the application itself is fairly complex in what it can do, but I would probably hazard a guess that, in terms of the total cost of investment to develop it, the sample libraries are probably the biggest component. And at least in theory we should be able to make use of a lot of those sample libraries, because they're just samples, but because they want to keep these things proprietary, they lock them up in their own formats, generally encrypted, decryptable only by their own engine. And that's really where the problem lies: recording orchestral samples, as some of you would be aware, is extraordinarily expensive, and in the open source world, unless somebody does a Paul Davis and effectively donates hundreds of thousands of dollars' worth of time and effort to the community, we are unlikely to find those quite high quality samples that we could then plug into synthesizers like FluidSynth and stuff like that. And, you know, I think it's a problem, and I don't know that there's an easy solution to it. I mean, Ardour is only where it is because Paul Davis was extremely generous with his time in the early days
of the development, in that a lot of that early groundwork was done and self-funded by him, and we'd be nowhere near where we are now if it wasn't for that. And with sample libraries I think it's a similar thing. We've got projects like the Open Sound Library, sorry, it's called OpenSound, I think, OpenSound.org, which is like an effects library on the net, and for effects and stuff it's really, really good; I use it all the time for theatre work. But the quality of those recordings isn't consistent and really isn't of the same quality as you get if you hire a studio and actually record that stuff. Again, though, that's where the expense is, and I don't know if there's an easy solution to that. The experience I've had is: you try the demo, find it does exactly what I want, I'm prepared to pay money for it, get the finished product, and find that it doesn't work under emulation, not because of the emulation, but because the licensing instructions aren't emulated. Yeah, so the comment basically there was that the emulation will often work in terms of the program functionality, but the licensing methods that they use, like iLok and stuff like that, basically don't work through that emulation layer, and that's true. Yeah, exactly, and that's a problem with a lot of this proprietary stuff. The pro audio industry has mostly standardised on iLok as the method of licensing a lot of this stuff, and of course it's a proprietary encrypted thing, and you've got no way of...
we really don't have a way of being able to deal with that under Linux. So even, as you said, you can get the code to run, but because of the issues with iLok it won't license itself and it will refuse to run, and yeah, it's a problem, and I don't think anybody has a solution to it that I've heard of at the moment. It's one of those things, and there's nothing inherently problematic about using the samples or anything like that; it's just that the licensing is difficult and awkward in the context of the way that we're forced to have to use it, so yeah, agreed. On your earlier talk on real-time audio: one thing that I thought was interesting to notice is that you talk about the necessity for low latency in a production environment or a particular performance environment, but what's worth mentioning is what low latency actually means in the context of audio and musical production. For musical playback, a latency in the range of 5 to 10 milliseconds is workable, ideally under 2, and that is an order of magnitude different from what a stock kernel is going to deliver you out of the box. So this is the major benefit of moving to a real-time kernel: it's actually going to give you that kind of performance. That being said, there's another problem, which is that you might be able to get that performance at that latency in ideal conditions, but almost always you find a situation where there are bottlenecks in the applications that you are using, and at that stage they start dropping out and producing these xruns. I've actually never had a problem with xruns on Linux, ever. I've chewed up too much CPU from trying to do too much processing on an audio file, but I've never had an xrun on an input and I've never had an xrun on an output. Mind you, my recording is one line only. Something like Laughing Boy Studios down in Victoria, I know his production environment has six processors, or six CPUs, joined up with NetJack; he can get 40 input and
40 output channels going at once, he gets sub 2 milliseconds as his response time, and absolutely no xruns whatsoever. I'm not a pro audio technician, so really I can't give you a really good answer on that; for my uses I've found it to be absolutely awesome, it kicks ass over my PC. I think with the latency, the issue of the latency as well, it does vary a lot; the minimum latency you can get on any Linux system very much depends on the hardware that you're using to get your inputs and outputs. Just to put some numbers to it: I haven't personally done it myself, but I've heard of people who have managed to get sub 5 millisecond latency on the FireWire stuff with FFADO, which is pretty amazing, although admittedly the last time I pushed it was a couple of years back and I only had single-core CPUs then, so sub 5 milliseconds on that is actually pretty impressive. With PCI, in theory, you should be able to do better than that, and there are ways of increasing your channel count without compromising some of this stuff as well. Even at the higher end, the Fireface UC from RME, which has just been released, purports to have 60 I/O, 60 in, 60 out; really, once you start needing that number of channels you should just get a MADI card and use MADI to get it in and out of the PC, because it's designed for that sort of thing. It's 64 channels per MADI port, so you whack a couple of those in and you've got 128. And with MADI cards, in theory at least, you should be able to get that level, but that assumes, of course, that you've got enough CPU power to be able to handle everything that you're doing in the time allotted, and to a point that becomes less of a real-time kernel thing and more of a CPU loading thing. Yeah, that's it. I mean, at the moment our current recommendation with the FFADO thing is that if you're wanting 5 to 10 millisecond latency with the low-latency desktop, that's where it
is now. 18 months ago, 2 years ago, that was not the recommendation, and that's because of what Roderick was saying in the earlier talk about how a lot of that real-time stuff has actually trickled down into the mainline kernel now, and continues to do so. What we're hoping for is that eventually most of the stuff that's important to audio does trickle down there. That you can now get sub 10 milliseconds on FFADO with a stock kernel illustrates just how much of that stuff has actually trickled into mainline, finally. So yeah, what isn't in the mainline kernel now? Did you want to take that? I can try. With the low-latency desktop settings, basically most of your threads and processes and your interrupts are brought under scheduler control, so there's much more chance that your real-time audio application is going to run as you expect it to, within certain predefined limits, by being able to get access to resources soon enough. The real-time patches also bring in more of the kernel code: they add additional preemption points. I'm not an expert on the RT patch myself, it's been a while since I've looked at it since I haven't needed to, but I believe that it adds additional scheduling points into blocks of kernel code that would previously be non-preemptible. I believe that there's some RCU work that went in, although actually preemptible RCU is in now, that was very recent. They changed the way the interrupt handlers work, I think, still a little bit, although when they changed the stock kernel to threaded interrupt handlers about 6 or so months ago, that helped heaps. So I personally view the RT kernel now, for audio work, as more finessing, rather than as a major "we must have this to get any sort of real work done" as it was 2 or 3 years ago. A lot of the stuff that was important for audio work has now managed to get through and get in, and for certain workflows there's still some of that
upper-level stuff from RT that hasn't got into mainline yet that we still use, so users like Roderick still need it, but in a lot of cases now a lot of the big-ticket stuff has actually got through. The big kernel lock removal has been another thing that's sort of come to fruition in the last 2 releases. So I think at the moment we're at the point where RT sort of finesses things for audio work, whereas 2 years ago it was almost essential if you wanted to get that sort of thing happening. If it doesn't work for you, there are a dozen kernels you can download and try again. For the purposes of the video, the comment was that for somebody starting out with this stuff there's no need to recompile a kernel, and I would agree with Roderick that that is true. Down the track, if you start pushing things and you discover that yes, you need to because of the way you're working or what you're doing, then that's down the track. But I think it's at the point now where certainly you can get into this stuff, you can start having a play and messing around with stock kernels, and you don't have to get into that sort of thing at the present moment. I'll take one or two more questions, but it's afternoon tea now. That was before I started looking at it, so you might well be correct. The status of the RT kernel patches, as I understand it, is that there was a long hiatus because Thomas and Ingo got sort of sidetracked with a few other bits and pieces, but they started up the maintenance of them again around the .32/.33 line; I think .33 is the latest that's out at the moment, maybe. But my understanding is that it's still their intention to keep them maintained; it's just that obviously it's not a high priority in their work at the moment, and so they basically get to it when they get to it. But I haven't seen any announcement that says that they don't intend to keep supporting it at this present moment. And that's true, because more and more of it has got
into mainline, you're correct, but the pressure for it isn't as great anymore; probably 90% of audio users are no longer complaining and saying "we need this stuff to do our work", and audio users were one of the loudest voices. But a lot of the embedded guys still need some of that stuff in there, and so on and so forth, because we're certainly not the only users of that stuff. And I think it's recognised, and my understanding is that it's still maintained, but it may not be maintained quite at the frequency that it used to be, simply because a vast proportion of the use cases have been taken care of now, so it's just a case of prioritising, and the fact that there's not that many people who are working on it. So, any more questions for Roderick? No? Awesome, well let's thank Roderick again.