So, welcome to the BSD devroom again. Right, oh, I started it. Hello, welcome everyone. When I switched from Windows to Linux, and then from Linux to FreeBSD for a studio, I tried to gather as much info as possible about whatever is needed for a FreeBSD studio, a music production studio, in one blog post called "Sing, Beastie, Sing". I had been using FreeBSD for a year, and at that time I didn't know that Beastie is actually the mascot of BSD in general, not of FreeBSD. So the title doesn't quite work: this doesn't work on all BSDs, I tested it only on FreeBSD. And I thought no one was interested in a studio based on BSD, but if you look at the Twitter statistics, that's probably the most popular thing I have ever done in my life. Then it kicked off, and it's a pretty amazing studio. But let's start with who I am, as I'm pretty new in the FreeBSD community. My name is Goran Mekić. By now, I think everyone calls me meka. I come from a hackerspace in Novi Sad, Serbia, called Tilda Center. I was lucky enough to have the support of my beloved wife to actually do it, so we are co-founders. Education-wise, I come from an electrical engineering high school; I think that's the proper translation from our education system. I studied something vaguely called business IT at a university, which actually translates to 80 percent math and the rest. I'm a band member, which means I play guitar, the hardest instrument to record in a rock and roll band. I used to sing, but I hope I won't do it anymore, because we have a guy who is actually trained in singing and does it way better than I do. If you ask any person, they listen to all kinds of music, but if we narrow it down, for me it's metal on guitar (guitar music is usually metal, if you ask me), and that type of electronic music called goa trance. They say there are like 70 subgenres of metal, but if you actually want to simplify things like I do, there are basically three types of metal.
I play the "djun-djun" style, and in electronic music there is "uts-uts" and "piu-piu". I'm interested in chickens, of course, so the "piu-piu" style. More elaborately, I like complex melodies and arrangements, so I needed something that is robust and works. So, let me show you a typical layout, or rather my layout, of a studio. This is it. This is everything FreeBSD, or any operating system, sees. Ideally, it's a USB digital mixer, so everything is in one place: the routing, the mixing, the effects and whatnot, which you can control via computer. I'm not going to say FreeBSD, because we still lack a bit of support for devices, not audio-wise but control-wise, and you probably need to hear what you're doing. So, there are some speakers. There are some instruments that are sent through USB so they can be recorded clean, without any effects or whatever. There's an effects unit; well, this is my favorite. So, what does the mixer, as hardware, do? It splits: one copy of the sound goes to the effects and one goes over USB for recording. Then when the sound comes back, it's split again: one copy goes over USB and one goes to the speakers. So, it's a monitoring system; if you're playing or singing something, you can hear yourself. For the "piu-piu" part, there is a synth, and it goes straight to the effects processor, so from the mixer's perspective this is all one device. And there's a floor pedalboard, so you can switch between different presets on the effects rack. What I usually do is use three sounds: clean, "djun-djun", and "piu-piu". So, in reality, this is how it looks. This, together with the keyboard, is the synth. The glowing yellow stuff is the effects processor. On top there is the mixer, although it doesn't look like it: it's the thing on top of the pile there, below the tripod. Luckily, we didn't need it this year for these devrooms. So, after the talk, you can ping me and I can show you how it works and we can fiddle with it.
And I don't know if you can see this, but this is the pedalboard, and the glowing thing here is the desktop. This is actually a 10-year-old computer: an i5 with 8 gigs of RAM, and it used to run with a hard drive; now it's an SSD. So, it's a really, really low-end computer these days, and it's possible to use a low-end one thanks to FreeBSD being optimal in this way. So, the software that I use. I started with FreeBSD, and it all started working after a lot of fiddling, because there are not many people you can actually ask how to run a music production studio. Later on, I decided: OK, how about we go into totally uncharted territory; nobody is using HardenedBSD for a music studio, let's give it a try. And that's what's installed on my desktop right now. It works perfectly. And for the sake of this talk, when I say HardenedBSD, you can read it as FreeBSD, and vice versa. There is this audio driver on FreeBSD called OSS. And there is this little thingy called virtual_oss that does complicated things like resampling, routing, and whatnot. It's a userspace program, because if you have a big number of channels, userspace is more efficient at that kind of operation. Ardour 5 is currently the only digital audio workstation available on FreeBSD that gives you the power to actually have a production studio. In the Linux world, I don't even know how many alternatives there are. I tried porting some of them, and they all have these Linuxisms, and it's really hard; maybe in the future we will get alternatives. Software also handles the drumming, because I live in an apartment building, and having drums in a building is almost impossible, although I had them for years, and I don't know how I'm still alive. They were actually e-drums, not a full acoustic kit; probably that's why my neighbors kind of let me do it for a while. There is this thing called LV2, which is short for LADSPA Version 2.
It's a framework and set of libraries for implementing your own effects and sound generators, or for porting someone else's work. And there is JACK, unfortunately. I personally don't like it, but currently Ardour 5 explicitly works only with JACK on FreeBSD, so we have to use it. And if you're talking about music, you're actually talking about real time, from a FreeBSD perspective. What does that actually mean? One part is latency: if you pluck a string on a guitar, you don't want to hear it in five seconds. You want to hear it right now, so you have proper feedback and you know what you're actually doing. And the second part is jitter. Shortly: if a sample is not on the sound card at the right time, you're not going to get your signal right; you get these little distortions, called jitter. And jitter is actually harder to get right, from a real-time perspective, than the plucked string: everything below, I don't know, six milliseconds of latency, no matter how trained your ear is, you cannot hear. But if a single sample is late by five milliseconds, that's a disaster. So what does it mean? A lot of sysctl. The first one: I joined the DrumGizmo team, and we have tests, and the tests showed that FreeBSD was so slow that they were failing. That should not be possible. I mean, if you read about FreeBSD, I think it's in the handbook, the kernel actually has a concept of a real-time priority queue for threads. So if your application asks for real-time permissions, the kernel is going to grant them, but only to root, though there are ways to get exceptions. And then there's this tunable; I asked about it on a mailing list, and its default is 5. Now, I'm not a kernel developer, so I might be off, but from a musical point of view: when an interrupt happens, FreeBSD will not react right away.
That's because FreeBSD tries to save your battery, for example on a laptop, and not reacting right away gives it the ability to say: OK, I have these five interrupts, let me handle them in one sweep. Not entirely true technically, but that's roughly what's going on. Lower this to zero, and every interrupt is handled right away, and the tests in DrumGizmo magically start passing instead of failing. If you have a USB audio device, you can tune how many milliseconds of buffer it is going to use, but only for USB, and it goes from 2 to, I think, 64. But nobody is interested in the upper limit, only the lower one. When applications open the sound device, they can choose the size of the internal buffer. There is also a buffer in the application itself, because of the copying of the data. And if an application doesn't know about that on FreeBSD, you can force it to be as real-time as possible, and that's what latency=0 does: give me the smallest possible buffer. So if, for example, JACK doesn't know how to handle buffers in FreeBSD's OSS: OK, give it the lowest one. And the last one is bitperfect. It's not strictly real time; it has more to do with jitter. If you know that you're not going to resample or in any way transform your audio in the kernel or in virtual_oss, you can set bitperfect=1, and everything written to the driver is just going to be sent to the device without any care taken of, well, resampling for a start. There are devices that can do hardware resampling, so they can use this too. And it's important: currently, if you have more than, I think, eight channels on your audio device, and the reason I know is that I have 18, you have to use bitperfect and virtual_oss to do your audio stuff. So bitperfect is actually needed for virtual_oss. And this is one quarter of a command, actually. What does it do?
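Collected in one place, the tunables just described would go into /etc/sysctl.conf. The talk never names them, so the sysctl names below are my reconstruction of the usual FreeBSD audio-tuning knobs; treat both the names and the values as assumptions to verify on your own system before the command itself is dissected.

```shell
# /etc/sysctl.conf - FreeBSD real-time audio tuning (sketch; verify each
# name and its range with `sysctl -d <name>` on your FreeBSD version).

# Give the kernel less slack when scheduling timer events, so interrupts
# are handled right away instead of being batched to save power
# (the default is 5):
kern.timecounter.alloweddeviation=0

# Smallest USB audio buffer, in milliseconds:
hw.usb.uaudio.buffer_ms=2

# Force the smallest internal buffers for applications that do not tune
# themselves:
hw.snd.latency=0

# Bypass in-kernel resampling/mixing and send samples to the device
# as-is; per-device, shown here for pcm0 (device number is an assumption):
dev.pcm.0.bitperfect=1
```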
-S says: OK, if I'm working at 44.1 kHz and there's something like 48 kHz coming in, do the resampling, and then send it to the driver. -i 8 sets the real-time priority. -b 32 means: use 32-bit samples, please. -s 708 is an experimental value for the buffer. It depends on many variables, but mostly on your audio frequency and what your hardware can push; in reality, this translates to a few milliseconds on a very, very old, decade-old computer. You can say: OK, my card has multiple inputs and outputs, but please use only two. That is what -c 2 does. And you can tell it, with -m: don't use the first and second inputs, use eight and nine. Why do that? Because eight and nine, for me, are the FX processor. So if you're, for example, talking on Viber, Skype, or your flavor of VoIP of choice, you can sound like Darth Vader or something, because you're going through the effects. As a matter of fact, a friend who doesn't live in the same country as me once asked: did you create something new on the guitar? Yeah. Can I hear it? Of course. And that's the way to tell your FreeBSD that the default inputs are the ones from the FX processor. -d dsp tells it: create a device /dev/dsp, and that's the default device for audio in FreeBSD. So whether you've got multiple devices or only one, it doesn't matter; it's going to create a default one for the device you're actually using. And this capital -M thingy, the last line, copies my first input to outputs 8 and 9. What it actually does: when I play guitar, I want to record it and send it straight away to the effects, and that's what it does. The options for virtual_oss go to infinity, and it really does a lot of work in a studio. But if you choose FreeBSD for your studio, you're going to learn, whether you like it or not, some non-musical stuff. There's going to be a lot of GDB, unfortunately, or LLDB, choose your preference.
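Putting the flags described above together, the invocation looks roughly like this. The device path and the exact -M monitoring syntax are assumptions (check virtual_oss(8)); this is a sketch, not the speaker's literal command. The executable tail of the block shows the buffer-to-milliseconds arithmetic that "a few milliseconds" rests on.

```shell
# virtual_oss sketch; FreeBSD-only, so the command is commented out.
# The -f path and the -M rule are assumptions; the rest of the flags are
# the ones described in the talk:
#
#   virtual_oss -S -i 8 -b 32 -s 708 -c 2 -m 8,9 -f /dev/dsp1 -d dsp
#
# plus a -M monitoring rule that copies input 0 to outputs 8 and 9
# (guitar straight to the FX processor); see virtual_oss(8) for the
# exact -M argument format.

# Rough buffer latency: frames divided by sample rate. The values below
# are illustrative, not measurements from the talk:
frames=256
rate=48000
echo "$((frames * 1000 / rate)) ms"   # prints "5 ms" (integer division)
```

Halving the buffer halves the latency but doubles the rate at which the hardware needs servicing, which is why the lower bound is a property of the machine, not of the software.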
There are all kinds of glitches and failures that can happen, because the community running FreeBSD in the studio is really, really small; I think I can count six people that I have met on the internet in the last two years or so. You're going to be part of that community. It's small, but a very, very fast-answering community, and because it's really technical stuff, they are really technical. I don't know if I should count myself in. You're going to learn system administration, meaning you're going to discover a lot of ways to improve your real-time behavior: it's always managing buffers, but either by managing sysctls, or by turning some things off, or maybe by compiling a custom kernel. I don't use a custom kernel, and it works great. I don't know if anyone should admit that, but who knows. And when I said that the USB driver has a sysctl tunable for its buffer, it actually didn't have one when I started. So I started poking around the kernel, trying to find which buffer was too big. I didn't know what I was doing; I was just lowering the numbers, and when I got the result, all right, I can play now, so it's that buffer. So you're going to learn kernel development, at least a little bit, whether you like it or not. And you like it, right? Actually, I didn't write the sysctl for this USB driver buffer myself. I asked an actual developer: can we please make it a tunable? So that's an option too. Maybe you don't know how to do development; you're just a musician. You can still ask people: can we do it like this or like that? You might have some idea of how it's done. I promised you a lot of GDB, but I'm not going to go into the details of GDB. This is like 80% of the GDB that I use. Why? Because imagine that a variable in audio.cpp is holding your sample, and you don't get a nice voice out of your PA system; you never get just a little distortion, you get: aaah! And you need to search for why that sample is not right. And this is what is used 80% of the time.
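The slide isn't reproduced here, but from the description (stop the program exactly when the variable holding a sample goes wrong) the "80% of GDB" workflow is the watchpoint. A minimal session sketch, with invented binary and symbol names:

```shell
# Hypothetical GDB session; ./mysynth, audio_callback, and `sample`
# are made-up names for illustration.
#
#   $ gdb ./mysynth
#   (gdb) break audio_callback    # stop once inside the audio path
#   (gdb) run
#   (gdb) watch sample            # watchpoint: break whenever the
#                                 # variable holding the sample changes
#   (gdb) continue                # run until the bad value shows up
#   (gdb) print sample            # inspect it
```

Hardware watchpoints make this cheap enough to use on a running audio process; the same `watch` command exists in LLDB as `watchpoint set variable`.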
You could probably use other techniques, but as it's audio, I didn't want to go into debugging too much. And probably DTrace would be easier to use, but I hadn't used DTrace before. I had used GDB, so I picked my known enemy instead of an unknown friend. And it was really nice, and I hope that in the following month we, as a band, will publish something on the internet, so this is not going to be a totally empty talk. We are actually doing the recording these days, and it works. It works beautifully. So, there are a few things that, at least, I would like to see implemented or done next. There is this thing called Maolán. I started it as a library; it's in C++, and I wanted to explore buffer management in an audio application. I didn't know how to name it, and my wife suggested: how about you go to Google Translate, enter all the keywords for your software, choose different languages, and see what you like. For some reason, the Irish language is beautiful for naming. I put in "buffer" and went through all the languages, and in almost all of them, buffer is "buffer" and audio is "audio". Oh my god. So I put in "buffer", chose Irish, and it's "maolán". I don't even know if that's the proper Irish pronunciation, but that's what the software is called. It currently runs only on FreeBSD. Actually, it doesn't run anywhere, to be honest, because it's only a library. But if you know how the inner workings of the library do their stuff, you can program your own digital audio workstation. And that's the direction I would like to take, as I don't know of any other effort to create a specifically FreeBSD audio workstation. But it's implemented in such a way that if you implement, for example, ALSA for Linux as a class, you have your ALSA input and output. And yeah, it's an experiment in using standard C++ library structures to do audio.
You can be much more optimal than that, but I chose to go this way because if it works, it's less work for me. And it works wonderfully. So wonderfully that if I have two channels with one sample of buffer on the input and output, it takes around 30% of the CPU on my computer, probably around 10% to 15% on a modern computer. What I'm not satisfied with is hardware mixers, digital hardware mixers. So in Tilda Center we are exploring real-time operating systems, audio, and AD/DA chips. We want to implement our own digital mixer, or audio interface, because it's going to be equipped with USB. And what I want to address with that is: currently, I use the audio card to route my signals, which means virtual_oss does it. If you look at the pathway, the signal goes to the driver, then to virtual_oss, then you route it, and it goes back. It would be much more optimal if the routing could be done in hardware, and that's what I want to do. Tilda Center is conceived as an educational center, and we are really, really loud about FreeBSD tuning; education about FreeBSD, for the sake of audio, is what we do. And there are at least three people listening to my gibberish talks about, hey, what if we do this or that audio-wise. And there's RAVENNA, I think that's the pronunciation. It's an AES standard, and there's no open-source implementation of that protocol. What the protocol brings is the ability to use your network for routing audio. So you can, for example, have multiple machines without audio cards that all route their audio through Ethernet, while one machine does the reproduction of the sound. It would be really wonderful if we could have that on FreeBSD, and there are some FreeBSD developers who are tackling the problem, slowly. And we hope, I mean, they hope, that it's going to be implemented soonish. I mean, with drivers it's never fast, so: soonish. There are a few people and teams I would like to thank. First is Hans Petter Selasky.
He implemented the USB stack for FreeBSD, the USB audio drivers, virtual_oss, and I don't know what other plethora of software that is needed for FreeBSD to be a viable music production alternative. And he has a crazier studio setup than me, so he needed more real time than me, so, great. Yuri Victorovich is the guy who ported almost all the LV2 plugins; if you go to FreshPorts and type LV2, it's Yuri all the way. Tony Pernella helped me with the presentation, proofreading, and some of the crazy setups for audio. The DrumGizmo team actually taught me how to do DSP, real time, and whatever is needed for audio, which 80% of the time is GDB. The Ardour team has really done great stuff: it has wonderful audio mixing and routing, and not-so-wonderful MIDI, but it's usable. It's still used in at least my studio, but it could be better. And the Tilda Center crew, for supporting me, giving back ideas, implementing a few bits and pieces, and generally not being annoyed by me when I talk about this stuff. That would be all. And if you have any questions? Yes? [Audience] Do you use this setup only for recording music, or are you also using it to mix in the box, basically, with plugins like virtual instruments or compression and EQ? So, the question was whether I use it only for mixing, or also for some effects in the box. The effects are good, but the sound generators are not so good. For example, there is, I would say, a crossover called Guitarix, which doesn't generate the sound itself, but if you need a metal sound, it's going to generate a totally different sound on the output because of the distortion. And it's still not really good for rock, maybe for clean sounds. Yeah, so the sound generators are not perfect, but the effects are wonderful. Sometimes I use software, sometimes I use the rack. Yes? [Audience question about Linux] Yes, actually, the reason why I switched from Linux to FreeBSD... I didn't switch, actually. I had a dual boot, and FreeBSD was in a worse situation, because it was on a hard drive while Linux was on an SSD.
And still: there is no such thing as a real-time priority queue for threads in the Linux kernel, no preemptive threads. There are patches, but when I used them, VirtualBox broke. So FreeBSD is better for real time, but you probably have much better support in Linux for other audio workstations, like MusE, which I would really like to have on FreeBSD. [Audience question] I try to avoid it by all means. I don't know; even on Linux it's a problem, so I never tried it. [Audience question] There is one song on SoundCloud, so if you ping me on anything, I will send you a link. I wanted to put a link in the slides, but it's huge. Anyone else? Be loud, now is the audio time. [Audience] I was a little bit lost with your way of using the FX processor. You're basically sending the clean guitar signal to the box, to the input, and then you have the FX processor as a send and return? Almost, yes. What I do is, let's say it's a guitar: when I play, the signal gets into virtual_oss, which splits it in two. One half goes to the computer for recording, because you want to record the clean sound and then re-amp it and whatnot. The other half goes to the effects, because you want to hear it in real time while you're playing. And when the samples return from the FX processor, they go both to the computer for recording and to the PA. We have a minute and a half left to be loud. OK, that would be all then. Thank you for coming.