Alright, I'm going to give a follow-up to a talk I gave last year, explaining where the FFADO project is at with device support, where we're heading and what our future plans are. I'll give a brief outline of what FFADO is for those who haven't encountered it before, then talk about what we've been up to lately, what we're actually working on at present, and what we hope to achieve in perhaps the next 12 months. A bit of background: I'm one of the three or four primary developers of the FFADO project. We've got a few other people contributing what you'd call, not necessarily bug fixes, but code tidy-ups and janitorial-type work, and we've also got people who are now starting to contribute extra translations for the project.

So the FFADO project essentially provides audio drivers for FireWire-based audio interfaces. You don't tend to see these in the consumer world, but in the pro-audio area they are very, very common. There are a number of reasons for that, one of which is that in pro audio there's a propensity to keep audio signals as far away from the inside of a computer as possible, away from all the digital hash and other rubbish that goes on inside a computer. And as soon as you have an external box, you've got to get the audio data into the computer somehow. There are a number of ways of doing that, and FireWire is one of them. FireWire tends to be used on devices with a large number of audio channels, and by large number we're talking 50 or 60. The reason is that once you get up to those sorts of channel counts with low-latency settings, the CPU load demanded by a USB-type solution is excessive and basically makes it a non-starter.
That's changed a little bit with USB 3, but there are still issues there that probably mean the FireWire devices are going to be around for a while yet. At the moment the FFADO project plugs in as a backend to the JACK server. So instead of sitting at the same level as ALSA, it acts at the level of ALSA but within the JACK framework. So at the moment the only applications that can effectively use the FireWire devices are those that talk through the JACK sound server. At present that hasn't been regarded as a big issue, because these interfaces are precisely the sorts of interfaces that users of JACK-aware applications are going to want to use anyway. I'll have a bit more to say about that soon.

In terms of recent developments, well, recent-ish: version 2.0 was released in December '09, as it says there, and about midway through last year we released a bugfix release, 2.0.1, which basically just did the usual point-oh-one release thing: clean up all the things you got wrong the first time. At the moment we're heading towards a 2.1 release, which will add some extra devices to the device set we support, fix up a few old things, and provide additional infrastructure for some of the more complicated devices, which include things like onboard mixers and onboard effects processors that are becoming particularly popular on these interfaces these days.

So, the work in progress. As I said, we're heading towards 2.1, and primarily that involves adding additional device support to the framework we've got. I've got a couple of examples of where that's heading, starting with the MOTU drivers, which are the drivers for almost every MOTU device they've released for FireWire. The main area I've been working on there is support for their so-called Mk3 devices. They have what I've termed three generations of device: G1, G2 and G3.
The original FFADO MOTU driver basically did G2, because those were the most common at the time and also because there was only ever one G1 device released by MOTU. About 18 months ago, MOTU released G3 and, helpfully, went and changed pretty much the entire protocol, which meant our driver no longer worked with the newer devices. I've now been able, mostly thanks to somebody lending me a Mk3 device for about three months last year, to deduce how to control these Mk3 devices, and I'm slowly getting up to speed with that and getting the code into FFADO as it goes. The main issue with the MOTUs is the control of the device: how do we start and stop the streaming system, how do we control the onboard mixer, how do we control the onboard effects units, and so on. It's that protocol that they completely changed for the Mk3 devices, which is a real pain. But we'll get there, I think.

The other significant difference in 2.1, hopefully, will be that the RME drivers will make it in. RME make the Fireface 400 and Fireface 800, and these are widely regarded as some of the best interfaces you can in fact buy, pipped only by Metric Halo, possibly. That's been a long time coming. I actually spoke about RME at LCA last year, at which point we had just managed to secure some programming information from RME. I had hoped to have something functional by mid last year, but a few things cropped up in real life that slowed things down considerably. I'm now at the stage where I'm hoping that in a couple of months' time I'll actually have something that supports those RME devices fairly well. The onboard mixer may not quite be there, but certainly the basic functionality, getting audio in and out of the device and actually being able to use it for real work, should be.
And that's significant, because a large number of people have been wanting this for a couple of years, so to finally be in a position where we can deliver it is a good thing. It's just a bit frustrating that it's taken quite so long to get there, and that's unfortunate, but we'll get there.

We're also now starting to pick up additional translations. Now, at first glance you might wonder why a driver needs translations, and the answer is that the issue is not with the driver itself but with some of the support applications we ship. As I mentioned, a lot of these devices now have onboard mixers, and we ship a mixer application that allows you to control the different levels within the mixer. That has to be done on a per-device basis, because every single one of these devices is very, very different in the way it structures its onboard mixer: what can be mixed into which outputs, which inputs can be crossfaded between, and all this sort of stuff. And that mixer application is where translation becomes an issue. We're starting to pick up some additional translations. It's going to take a long time before we get things covered to any great extent, but it's good to see some extra people now cropping up. I think the last one I saw trickle in, about a week or so ago, was a Russian translation someone did for us, which is kind of cool.

And then finally, Echo Audio. There's a lot of work going on there all of a sudden, because we've got a couple of extra people with Echo devices who are now contributing bug fixes and helping to extend and shore up support for some of the interfaces that the primary developers didn't have access to. Echo have actually been very good, because they themselves have supported us with some pretty good documentation. That's now moving ahead quite quickly, and it's good to see.
There are still a few Echo devices that need a bit of work, but the important point is that we're now finally getting to them: we've got people who physically have those devices and can try things and debug things for us. And as a consequence of that, they've actually even found bugs in Echo's firmware which Echo didn't know about.

As far as our future plans go: the RME driver we want to get into a usable state for non-developers, and I'm hoping to have that done in the next couple of months. Echo devices are going to be more completely supported. And there's also a fair bit of work being done on the DICE platform, an embedded platform which is being used by a considerable number of manufacturers for their newer-generation devices, Echo included.

The biggest change that's going to come, possibly during the early part of this year, is a kernel-mode streaming interface. At the present moment, the entire FFADO device driver is in user space and plugs into JACK, as I mentioned. So if you want to use these interfaces, you run JACK, JACK finds them on the FireWire bus and makes them available through the JACK system, and everything is theoretically good. The problem is that with the streaming of audio the timing is extremely tight, and it's been a lot of effort to simply get things to work reliably from user space. There are a lot of issues regarding the interaction between the kernel's lower-level FireWire drivers and a driver running in user space with very tight timing requirements: it must run at very strict time intervals, otherwise things fall apart, more so with some interfaces than others. It works, but it consumes quite a bit of CPU time, depending on your CPU.
And it's pretty clear that it's going to cause a lot of maintenance problems going forward, because as the kernel changes, the threading model has to change, and it just falls apart very, very quickly; it's simply not sustainable. It was interesting that somebody came onto the mailing list at one point and worked out that the FFADO system has, I think, six or seven threads in it, and they couldn't comprehend why something would need six or seven threads. The reason is to make sure this stuff actually holds together under the timing requirements we've got. It's very, very complicated.

There is a better way, and the better way is to put the streaming, the core code that moves the audio from the system onto the bus, into kernel space. By doing that, we remove a lot of the kernel/user-space timing interaction; the scheduler pretty much drops out of the equation. One of the biggest problems we've got with this is how exactly to keep the kernel code lean and easily maintainable, because it's not feasible to simply take all of the FFADO functionality we've got and drop it into the kernel: that code is bigger than ALSA. The reason is that these devices are very disparate. There's not a lot of common code between the different device families, which means there's a lot of code that conceptually does the same thing but has to do it in totally different ways. So the code base is huge and very complicated. What we've decided to do instead is have a more generic streaming engine in the kernel, which takes care of making sure the data gets put into packets at the right time, has timestamps applied correctly, and all the other streaming-type things that need to happen to the data, while all the other control of the device still resides in user space. Yeah?
[Audience question, partly inaudible, about whether the kernel streaming code might be useful to other projects.] Sure. The question was whether there's been any interest in this from other parties, basically, for a generic streaming interface. Not that have contacted us yet, and to be honest I don't see that it's going to be so generic that others will be able to use it. The reason is that there are some very, very specific requirements for the stream that actually comes out of that kernel module and goes onto the wire. The format of the data packets going onto the FireWire bus is very device-specific, so there's a fair bit of variation from device to device, and they also contain stuff that really isn't relevant in any other context. So basically we'll see how it goes, obviously, and maybe down the track it'll be made more generic and develop into something different. But at the moment the focus is still on the audio devices, because they themselves have enough complexity in them, even just at the streaming layer, that we'll keep going with this and just see where it heads. And sure, if other projects can leverage this, or we end up being able to leverage others, that'd be great.

Yeah, the comment is that FireWire video has been around for a long time and that they may be interested. There's a big difference between audio and video on FireWire, and that big difference is that while almost every FireWire video vendor followed the spec, all the interesting audio manufacturers didn't. And that's... [Audience: That doesn't sound unfamiliar.] It's not unfamiliar, no. And this is why the project doesn't support all devices, and why we actually have to have specific code for all these different devices. There is an AV/C spec for FireWire, and some devices did in fact follow it fairly closely, things like the Edirol FA-101.
But a lot of the devices people are interested in in the real world simply decided they could do better and went and implemented their own control protocol. And not only did they implement their own control protocol, but their streaming format also differs from device to device. So MOTU do things in totally different ways to RME, and RME do things in totally different ways to the AV/C standard, and so on and so forth. So we really have a very different problem to video. With video, once you could get data in and out of the kernel, it was all good: there was a well-defined standard, and it was followed. There are a few quirks in the kernel to deal with particular devices, but that's at the device level, not the protocol level; some devices didn't adhere to the timing spec and things like that. So, yeah, the video problem is mostly solved. Unfortunately, audio is a lot more complicated because of this divergence from the standard. In fact, that divergence has happened far less with USB audio devices: with USB there is a class definition, and most USB devices actually follow it, so it's much easier to get broad support for USB audio devices with a given amount of code than it is with FireWire. USB has other issues compared with FireWire for this kind of device support, but that's mostly a low-level bus protocol issue.

Okay, the question is whether there exists some MIDI control protocol over FireWire, like there is with USB. I haven't had a lot to do with the MIDI side of things, but it's rolled in, I believe, in some way in the AV/C spec; I haven't read that section of it. But again, most of the interesting devices don't follow that spec, so they do it their own way. Certainly the MOTUs have onboard MIDI ports, and they do pass the MIDI through the FireWire, but they do it as part of their own protocol. They don't leverage anything else.
In the case of MOTU, it's actually embedded as sub-channel data on the main audio stream that gets sent to the device, and RME do the same thing: it's embedded in the audio stream. Some of the other devices, I think the FA-101 has MIDI, and there is a part of the standard that allows you to put MIDI through, so some devices use it and some devices don't. It's a device-by-device thing.

The question was: is the onboard mixer, if the device has one, controlled through explicit MIDI messages sent to the device? Almost never. Off the top of my head, I don't know of a single device that does that. Under other operating systems it may give the impression of that functionality being available, but if you dig deeper you'll find there's actually a local MIDI sink running, which translates the MIDI messages and fires them through the proprietary protocol. Correct. Yes. Device by device. And yeah, it's all part and parcel of this whole difficulty that we have. We'll get there in the end.

So, just to continue: we're actually going to export an ALSA PCM interface from this new kernel-mode streaming interface. The motivation for doing that is that people will then be able to use these interfaces even if their chosen applications aren't JACK-aware. I mentioned before that at the moment, if you want to use the FFADO devices, you have to use JACK; there's no other way of doing it. By exporting an ALSA PCM interface, we give people the option of using these devices with other applications. There will be some restrictions, in that the sample rate and things like that won't be settable, because that's part of the device control, but it's expected that in the long run this will make for easier maintenance, and it will also simplify JACK, because JACK no longer has to maintain a separate backend just for the FFADO system. So we can drop that off.
JACK just plugs straight into the ALSA PCM, and we're hopeful that will broaden the appeal of these devices for the people who've got them, make them easier to integrate into the system, and remove the need for JACK for those who don't actually need it. And yes, as I mentioned, by going into the kernel, we remove a lot of these timing-related complications that we currently have.

So, as far as what people can do to help the FFADO project: the biggest single thing is to purchase devices from the manufacturers that are actually cooperating with us. On our project website there's a whole list of manufacturers that are friendly to us, that have either provided us with in-kind support, given us sample devices or devices at hugely discounted rates that we can actually afford, or even given us internal documentation and that sort of thing. We're talking the likes of most of the DICE manufacturers; Echo have been very good to us; RME are now very good to us and have provided the information we need. We can't release that information itself, but I can use it to write a GPL driver, so they're all good. MOTU aren't. They keep saying, nope, it's proprietary information that we can't release, and that's just the roadblock we keep hitting. I've used various techniques to deduce how to control those devices, and that's how we've got the support in. The motivation for doing that is that there's an awful lot of people with MOTU interfaces who, if we don't support them, can't move to Linux even if they're interested in doing so. So although, on the one hand, it's not a great thing to be supporting companies like that with device drivers, on the other hand, if we don't, we're going to lock ourselves out of adoption by a lot of people who would be interested if only their existing multi-thousand-dollar investment could be used under Linux.
I'm giving a talk at the main conference about how I've gone about that whole protocol analysis of these devices, at the nuts-and-bolts level, for those who might be interested. The other big thing that's useful to us is for people to actually download the beta releases from Subversion and test them for us on devices that we may not necessarily have. At the moment, no one is paid to work on FFADO; it's entirely a volunteer effort. As a consequence, we don't have the ability to just buy new interfaces when they come out, because these interfaces are often well over $1,000 each. So for people who've got the interfaces, if they can test the new code and tell us what works and what doesn't, it can greatly assist our development efforts.

And that's just some acknowledgements for completeness, and some final links, with the project website, and that's my email address. So if anybody has any questions about FFADO, or what they can do to help, or has offers of help or can donate anything to the project, contact me via that link; that would be great. And that does me for the update. So there's enough time for some quick questions, I think, if anyone has any. Otherwise we'll take a very brief five-minute or so break so Roderick can set up for his music-making demo, and we'll continue with that.

Yes. [Audience question: what consumer-grade devices are available with FFADO?] There are really no consumer-grade devices that have a FireWire interface. There are dozens and dozens of interfaces with FireWire, but they're all very clearly aimed at the prosumer or professional area. Probably the lowest-end devices in that respect are the likes of the Edirol FA-101s, the PreSonus FireBoxes, things like that. But even then, you're still looking at $600 or $700 for the interface, so they're not really aimed at consumers in the true sense of the word.
They really are aimed more at the prosumer or, generally speaking, the professional audio user who's actually doing this stuff semi-seriously. So, to answer the question of what consumer-level devices are supported: none, really, because there aren't any to support; they're all aimed at that next level beyond. Having said that, they're really nice interfaces. The quality out of them is quite a significant step up from what you get out of consumer-level devices, and that's where the money goes; that's why you're paying the money.

Yeah. [Audience question: could you give a few thoughts on the motivation of the manufacturers who don't follow the standards?] My views on the manufacturers that don't follow the standards, very quickly: it is very annoying that they don't get it. The official line is, we can't release that information because it's our proprietary information, and it will disadvantage us in the competitive marketplace if we release it. I think everybody in this room knows that's complete bollocks, but getting this through to the manufacturers is difficult. We're basically about five or six years behind the graphics device people, and about 10 or 15 years behind the network card manufacturers. Those hardware manufacturers got it fairly early, and by and large we don't have problems with network cards anymore; there are exceptions, but you know what I mean. I think eventually we'll see most of the manufacturers understand that this stuff doesn't give them a competitive advantage. I mean, the fact that we've now got working MOTU drivers in there is proof that it's not that impossible to get that information anyway; they just slow us down. So I think eventually we'll get through. The biggest single problem with this stuff is getting through to the right people.
If you contact the manufacturers, what you're actually contacting is their marketing arm, and they are basically told: anything that involves technical information, say no. And they never pass it on to anybody who might actually be in a position, with their own knowledge, to recognise that what we're asking for is not their trade secrets. The biggest trick is getting through that roadblock, and I think if we find a single person who knows an engineer in the company that MOTU subcontract to make the devices, we'll be in a very good position to talk to the right people. That's what happened with the RME devices. We had like five years of asking and being told no, no, no, and then a chance encounter with the actual person who does the engineering on these devices: somebody in Germany mentioned it to the right person, and they said, oh, well, here you go, bang, here's the information. They themselves had run out of time and couldn't do it, so they passed it on to me, and that's how I got involved with the RME devices. So really, I think in general the people who do the real work would understand the community and would understand that it's not a problem to release this information; it's just a matter of getting through to them, past the marketing people, and the only way of doing that is to keep on sending the emails: we want Linux support, we want Linux support. That shows up in their customer support statistics, and if it shows up enough, maybe enough questions will be asked and things will happen down the track. But it is a long, time-consuming process. It's very frustrating, but we have to go through it. Eventually it won't be like that, but at the moment this is what we've got to put up with.

All right, so thank you very much for that, and I'll give Roderick five or so minutes to get himself set up for the next talk.