Hello, my name is Goran Mekić, and I'm going to talk about FreeBSD audio for professional and amateur setups and equipment. It's mostly going to be about the non-audio stuff: going through the system and its components, and how they all come together to give you a better audio experience.

The agenda. First I'm going to talk about virtual_oss. You could say it binds everything together and makes advanced hardware less advanced for the applications; it does routing, and it has a lot of other features. JACK2 is one of the audio servers, but it's a special one, because it's designed to run in a studio. There are some other audio servers that I'm going to talk about briefly, and how they relate to the applications and to the environment. Then there is this mystical thing called real time, which got a lot better in FreeBSD over time, and I'll show how to tune FreeBSD for more real-timeness and a faster response from your audio device. I also have some wishes for the future, but we'll see what the future brings.

virtual_oss is written by Hans Petter Selasky, thank you very much. It's a really amazing piece of software that extends the capabilities of the in-kernel OSS. It can do things like handling a big number of channels. More than eight is probably not a problem, but mixing and resampling are faster in user space, so with a big number of channels, like the 18 I have, or the 32 Hans has, it gets sluggish if the kernel does it. virtual_oss helps with that.

Combining and splitting is a slightly weird concept. Let me talk about combining first. What I mean by combining: today you can easily find hardware like, for example, a USB microphone and Bluetooth headphones.
So you kind of have to combine them into one virtual audio interface that applications know how to talk to and communicate with. That's where virtual_oss really shines with, let's say, consumer electronics. And splitting: if you have something like the 18 channels of audio I have back home, and you tell, for example, Chromium, "okay, deal with this weird thing", it usually won't work, because applications that are not written specifically for audio are not going to handle a weird number of channels the right way. Having the ability to split the card in two, or maybe "clone" would be the better term, gives you one huge card represented as one stereo device plus one device with the rest of the channels. That way Firefox, Chromium, or whatever common application uses audio won't get confused. And the beautiful thing is that virtual_oss can create /dev/dsp, which is the default audio device, so your applications don't have to deal with whatever they find weird, in the audio sense. It can also resample and adjust the audio, so your application doesn't have to know about that either. Usually you don't want that in a studio, but you do want it on your laptop or some consumer device.

When I say routing and utilities: whenever you have a system with multiple channels, point-to-point routing becomes important. A network is the obvious example of that, but if you imagine 32 channels, or even 64, then what goes where is really important, because sometimes you don't want all your channels to go to the main mix. Sometimes you want to listen to something in the monitor without sending it to the PA. Routing in virtual_oss gives you that flexibility: which channel goes to which output, and so on. And when I say utilities: there is a built-in equalizer, and there is compression almost by default, because sometimes two programs are going to output their audio at once, and it would normally clip.
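As an illustration, the combining case described above might be started with something like the following. This is a sketch based on virtual_oss(8); the device names, channel counts, and buffer sizes are placeholders to adapt to your hardware, and the option set has evolved over time, so check the manual page of your installed version.

```shell
# Hypothetical: combine a USB microphone (/dev/dsp2) and Bluetooth
# headphones (/dev/dsp3) into one full-duplex virtual /dev/dsp that
# ordinary applications can open.
virtual_oss -C 2 -c 2 -r 48000 -b 16 -s 1024 \
        -R /dev/dsp2 -P /dev/dsp3 \
        -d dsp -t vdsp.ctl
```

A split works along the same lines: point the backing device at the many-channel card and create a plain two-channel device on top of it. The control device created by `-t` is what later lets you re-route which hardware channels feed which virtual device.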
If it's too loud it would normally just be cut off, but virtual_oss has a compressor, so it gradually lowers the output volume, and if it's still too loud it's going to hard-clip anyway, just to protect the device.

There is a really weird consequence of not having your device turned on while virtual_oss is running: no audio callback is ever going to be called, so all your videos just stand there. They're not stuck; they are just waiting for a callback that never arrives. The consequence is that if I don't turn on my mixer, which is also a USB audio device, all my ads are just stuck. So, funnily enough, virtual_oss works as an ad remover. It's a funny coincidence.

It also enables easier development. One of the features virtual_oss has is a dummy device, so it doesn't have to be connected to any hardware at all. You just create a dummy device and use it, for example, in your audio tests, so you don't have to listen to all the noise that would otherwise go to your speakers, and you can have end-to-end tests and be much more sure that your application is working.

As I mentioned, JACK2... well, there was a JACK1 before; I think this year we got JACK2. JACK2 is the more actively maintained JACK implementation. It started when somebody filed a problem report on Bugzilla saying: hey, MidnightBSD has JACK2 in their imports. I was very interested in that, because I had been following JACK development since before I switched to FreeBSD. I found the patch, updated it to the current version, and it just compiled and worked on my machine. But Florian Walpen did the proper porting, the balancing of buffers; the really good implementation was done by Florian.
I just made it compile, but I was happy with it for a while. One thing that I'm going to return to is the one thing JACK has known since it was conceived: real time. I'm going to explain what real time means for audio in just a second, but JACK always knew what real time is, and used it on Linux, because it was conceived on Linux. It had problems on FreeBSD. Not in the implementation, the implementation was good, but a non-root user couldn't use real-time priority for its threads, so it was kind of in limbo: it kind of worked, but not quite, and it was weird.

JACK is intended for the studio, and Florian really did a good job there. It runs on weird hardware, like 24-bit devices, which is weird because that's three-byte data. It really shines in the studio. You can of course use it on a laptop or wherever you want; it's just not going to bring you much if you're not in a studio. That's why, although it's an audio server, I mention it separately: its intention is different from the others.

The others are PulseAudio and sndio, and they all have a back-end for OSS, so they can use either virtual_oss or your in-kernel OSS device, depending on what you want and what you have. All of them are supported. And when I say portable audio development, you can do things like this: first you create a dummy device with virtual_oss, then through devfs you assign it to a certain jail, and you can do your development in the jail.
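The dummy-device-in-a-jail setup might be sketched like this. All names and numbers here are hypothetical, and whether your virtual_oss build accepts a null backing device in exactly this way is something to confirm against virtual_oss(8).

```shell
# Hypothetical: a DSP device backed by no hardware, for end-to-end
# tests; test audio goes nowhere instead of to the speakers.
virtual_oss -C 2 -c 2 -r 48000 -b 16 -s 1024 \
        -f /dev/null -d dsp_test -t dsp_test.ctl &

# A devfs ruleset (e.g. appended to /etc/devfs.rules) that shows the
# jail only the dummy device; the ruleset number 100 is arbitrary:
#   [devfsrules_jail_audio=100]
#   add include $devfsrules_hide_all
#   add include $devfsrules_unhide_basic
#   add path 'dsp_test*' unhide
# and in jail.conf for the development jail: devfs_ruleset = 100;
```

With that in place, the tests inside the jail see only the dummy device, so they can exercise the full audio path without ever touching real hardware.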
You don't have to use a jail, but it's a possibility if you want it, and all of those audio servers also work with either virtual_oss or the in-kernel implementation. So if you're a DSP or audio developer, FreeBSD gives you a really nice ability to be portable and to be nice to other operating systems and support them. Even if you've never seen, for example, sndio on OpenBSD, you can still develop for it and, well, be nice to OpenBSD.

Real time is a really weird concept, because it depends on where you define it. For example, WebSockets are considered real-time communication, but they can have, I don't know, a 300 millisecond delay or maybe even more, depending on where your server and clients are. So the definition of real time really depends on where you use it. In the industry, on embedded devices, real time means: give it to me now, don't be late, not even by one millisecond if possible. For audio it's impossible to achieve that, because the different components are going to have different delays. Your analog-to-digital converter has a small buffer, the digital-to-analog converter also has a buffer, and you have a buffer in JACK or in your application, or maybe your application uses OSS directly. There are always small buffers that add up. So in the audio world, real time is usually five to six milliseconds round trip. And if you have six milliseconds, that's okay, in the sense that if I pluck my guitar, for example, the boom is not late. Right?
It's really hard to play if something is lagging too much, and "too much" in the audio sense is over six milliseconds.

Real time in FreeBSD is not a new thing. It was always there, or at least for as long as I've been using it, but non-root users couldn't use it. So Hans and Florian wrote a patch for the MAC framework that allows users in the realtime group to use the real-time scheduler. So it's not that real time needed to be added to the kernel; it was already there, just the permissions were wrong, in the audio sense.

Why is this important? Well, take playback: say your playback is a second late. If all playback is exactly one second late, you don't care, because the music sounds just fine. But if you're playing an instrument, you really want it to be fast and to get feedback right away, if possible. One thing you want to avoid is jitter. If there is no real-time implementation at all and your system is under load for whatever reason, your audio can be late, and your sine wave, or whatever you're playing, can distort, because a sample lands not here but there on the time scale. It plays too late, and your signal distorts. Now, if only one system distorts, you're probably not going to hear it, because the human ear can't hear distortion under 3%. But if you chain a few devices, like the chain we have here, you can end up with really lousy audio at the output, and you don't want that.

And it's beautiful that even applications without real-time support can utilize real time. That's a concept I've only encountered on FreeBSD, but then, I'm only using FreeBSD; some other operating system might have it too and I'm just not familiar with it. This is all you need to enable your user to utilize real time. Well, plus a logout and login to actually make it active, but that's it. This got into the GENERIC kernel in 13.1.
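The slide content behind "all you need" boils down to one group change plus the rtprio(1) wrapper. A minimal sketch, with "alice" as a placeholder username:

```shell
# mac_priority(4) is in GENERIC as of FreeBSD 13.1; members of the
# "realtime" group may use the real-time scheduling class without root.
pw groupmod realtime -m alice
# After logging out and back in, the user can wrap any command.
# Real-time priorities run from 0 (most urgent) to 31 (least urgent).
rtprio 20 audacious
```

Running `id` after the new login is a quick way to confirm the group membership actually took effect.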
For versions before that, I think there is a port that Florian made, but I'm not going to lie to you, I didn't use it, so I'm not going to talk about it much.

Then, audacious is my favorite audio player, but you can swap in whatever command you want here. What `rtprio 20 audacious` does is run audacious with a real-time priority of 20. Real-time priority is like nice: there is a range of real-time priorities, and 20 is towards the less urgent end. Then, if your system is, for example, building something and all your cores are busy, you can still have non-choppy audio, and it's all nice. I actually use this on a daily basis on really old hardware, an 11-year-old i5 with 8 GB of RAM, so literally nothing special. You could probably achieve the same goal differently, maybe by CPU pinning or some similar technique, but audacious on the mentioned system takes about 0.13 of load, and having it sit on a dedicated CPU is kind of overkill. So this has worked really nicely ever since 13.1 came out, at the beginning of summer.

There are a few tunables that you're probably going to want in a real-time setting. This is totally unimportant for consumer devices, but it gives you a better working environment in the studio. This one here goes from zero to, I think, a hundred. The way I understand it, and I'm not a kernel developer, I just try to understand things the best I can: when an IRQ appears, if you don't have the deviation set to zero, the kernel waits a bit, so it can handle all the IRQs that accumulated over time in one batch. But you don't want that in a real-time scenario. You want: okay, there's an IRQ, go for it, do something right away. This gave us a problem. When I say "us",
I mean that I'm also a contributor to a drum sampling software called DrumGizmo, and its tests were failing: the tests detected that the audio was always too late. It took us about four months to find out that this tunable even exists. I think this speeds up audio more than anything else on a not-fully-loaded system; on a highly loaded system, real time is going to give you the best results.

USB, like any device, has a buffer, and the lowest setting currently is two milliseconds of buffering. So to achieve real time, the rest of your system has to do its work in four milliseconds. It's possible, but it's kind of limiting, so you have to tweak more there than just this. I've been talking to Hans about lowering this to one millisecond. Maybe my hardware can't deal with that, maybe his can't, but who knows what the future holds; maybe we're going to have devices that can do one millisecond of buffering. So this is probably going to be lowered to one, maybe in some distant future even to sub-millisecond, but we'll see when we get there.

This next one actually has nothing to do with the studio environment. It's for when your application doesn't know how to handle buffers with OSS and configure the device properly. You're saying: okay, this is the lowest latency. Not literally zero; zero denotes "give me the smallest buffer possible". I can't tell you how big that buffer is, because it depends on the number of channels, the number of bits in a sample, and so on, but this is going to give you the shortest one.
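Collected in one place, the knobs from this part of the talk might be set like this. The values are studio-oriented suggestions rather than defaults, the pcm unit number depends on your hardware, and which file each knob belongs in (boot-time tunable versus runtime sysctl) is worth double-checking against sysctl(8) output on your release.

```shell
# /etc/sysctl.conf: fire timer events immediately instead of batching
kern.timecounter.alloweddeviation=0
# /boot/loader.conf: USB audio buffering in milliseconds
# (2 is the smallest accepted value at the time of the talk)
hw.usb.uaudio.buffer_ms=2
# sysctl: smallest driver buffers for latency-unaware applications
# (a 0..10 scale; 0 means "as short as possible")
hw.snd.latency=0
# sysctl: bit-perfect playback on pcm0, no in-kernel conversions
dev.pcm.0.bitperfect=1
```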
So, for example, if you use MPV, it's going to get the shortest buffer and be as real-time as possible for an application unaware of real time.

This next one I'm not sure you should use, because sometimes you want it and sometimes you don't. Bit-perfect mode says: okay, whatever stream I get, I'm just going to send it straight to the output without any kind of processing, any kind of sample rate conversion or whatever. That means all your audio applications have to deal with the fact that only one sample rate and bit depth is supported, although your device can probably do more, just not at the same time. And there are devices that can do sample rate conversion, and whatever other conversion is needed, in hardware, so they are faster and you get faster responses. I had no idea such devices exist, but Tommy Pernella was nice enough to tell me: okay, I have one, so you can use that. I personally don't use bit-perfect mode, because my device doesn't handle resampling and adjusting of the audio. But if you use, for example, JACK, it knows how to resample and do all the conversion that's needed, so bit-perfect mode can speed things up a bit by avoiding any extra processing.

And the future... the future is bright, actually. Why do I say that? The reason I switched to FreeBSD in 2016: well, there are two reasons. First, I come from Serbia, and in 2016 EuroBSDCon was in Belgrade, so I was kind of embarrassed to show up with a laptop that was not running BSD. I tried it before the conference and it was great. The second reason is that I had a Dimension desktop back then, with a hard drive and an SSD. The SSD had Linux, and the hard drive was for whatever I was experimenting with at the moment, and at that moment I was experimenting with FreeBSD. Even on the slower drive, FreeBSD gave me less jitter. I don't know why; maybe it's my setup, maybe I did something wrong on Linux, so I don't want to blame Linux for being worse.
That's just what made me switch to FreeBSD in the studio. So even the past is bright, but the future is even better, because we have all this support: everything that Linux has, software-wise, or most of it, compiles on FreeBSD, and you can use it, effects, synths, whatever you desire.

What we need now is more docs and examples. I started learning about DSP and audio development a few years ago, and only this year did I write my first full example of audio. It's really hard to start. There is hardly any documentation, and it's not only FreeBSD; pick any operating system and it's really hard to start with audio. Just playing a WAV file is one level of the problem, but really dealing with buffers and sample rates, and doing it properly in an audio-studio sense, that's really hard to find. The documentation is growing, and I'm striving to write examples of audio scenarios. Today we have just one basic audio example in /usr/share/examples/sound, but there is a MIDI one in review, and there is going to be one combining MIDI and audio through polling and select(), which is what you probably want in your studio: MIDI for control and audio for, well, hearing. We really need more of those, and better ones.

When I say more ports: it's not that we don't have enough ports, but the audio ecosystem has really been growing recently. I'm really happy to say that open source audio is starting to shine and to do what proprietary software can do. Not in all regards, unfortunately; we are not ready yet to replace proprietary solutions with open source ones, but we are really, really close. There is a really small gap that needs to be filled with the ports. Linux naturally has more audio developers than FreeBSD, so porting needs to be done to catch up.

There is also some optimization that can be done. Maybe. I talked with Hans in the lobby, and we said: okay, how about we profile things? One of two things is going to happen. Either FreeBSD has audio so perfectly implemented that nothing can be done, and then the profiling will be the tangible proof of that. I kind of suspect that's not the case, but we're going to see. Or the second thing: okay, FreeBSD is not as optimal as it can be, and we can make it better. Profiling, with flame graphs, is probably going to show us where we are losing time. That's the easy part of optimization. The hard part is: what if we need, I don't know, a different structure, for example? There is no profiler that's going to tell us: maybe you should align these variables better, and so on. That needs to be done by somebody who is really into it and understands it. Currently, I'm afraid I'm not the one, but I would like to learn it and to contribute in such a way.

And there is a really new concept. Well, new for audio; the idea itself is a few decades old. Network-based mixers: they don't have any audio ports for input, just for output, and everything is done over the network. Maybe that's the future; maybe we as an audio community will decide that it doesn't really work well. But I would really like to have a driver for such a mixer. They are out there on the market, you can purchase one and get an early introduction to them, because I don't think they are common enough yet to be as stable as they can be. For example, what happens if I don't run a dedicated network just for audio?
And if I do run a dedicated network for audio, does that mean all my machines have to have two Ethernet cards? So it's new and a bit uncharted, but it's there, and I think it's a great concept. Gigabit speeds give us a nice ability to be real time, and in the future maybe 10 gigabit is going to be so fast and efficient that normal audio mixers are going to be replaced. Who knows? We're going to see how all that plays out.

I would like to thank quite a few people who brought us here. As you might have heard, I am presenting other people's work; I had really little to do with the implementation of this, more with the documentation. Hans Petter Selasky is the first one: he is an amazing developer, he helped me so many times and answered all my questions, and I'm already embarrassed to ask the next one. I guess at some point he's going to say: okay, can you even read the code? Come on, try it, man. But you're amazingly patient, thank you.

Florian is another developer, who gave an endless stream of answers about how advanced audio works. Because it's complicated. It sounds simple, but as the newcomer I was, I discovered: oh my god, what have I gotten myself into, this is so complicated. And there is no debugger for audio: either you're not going to get sound, or you're going to get the proper sound, or you'll get something else, and there is no debugger to tell you what that third thing is doing wrong. You have to stare at the code and see that you're off by one, or something similar. I created all kinds of errors before I started programming, for this particular reason: if an error occurs, I want to be able to recognize it, not see it for the first time when it happens. So I try to make all the errors I can, to prepare myself to recognize them.

Yuri Victorovich has been a ports committer for a few years, and he's already second on the list of committers with the highest number of ports. For some reason he's interested in audio, but not so much in studio equipment; if I remember our conversations correctly, he's interested in measuring. There are tons of ports that he did, so thank you, Yuri, it's wonderful.

And all the LV2 developers. LV2 stands for LADSPA version 2, which is a framework and a set of libraries for developing plugins, digital audio workstations, and hosts for those plugins, so that you can have delays, flangers, and whatever you want. I don't know of any BSD developer who is developing LV2 as a framework, so most of those people are Linux developers. We are borrowing from them, so thank you, developers. It's been a pleasure to extend it, to go where no LV2 has gone before.

Right. They told me every presentation should end with a cat, and this is my special reviewer. I don't know if you can see, but he's actually looking at the code, staring at it. He never sleeps when we do this. He's a fearsome reviewer, and I still have a mark on my thumb from a review he left two weeks ago. Unfortunately, it was his last review: he passed away a week and a half ago, on Wednesday. I am sure my contributions to audio and FreeBSD are never going to be the same; this little fellow helped me with that.

If you have any questions, don't hesitate; these are my contacts. And if you want the presentation, it's the last link. Because LibreOffice broke for me last week and I couldn't use it to create the slides, everything is now in LaTeX, and you will need that to build a PDF so you can view it. But from now on everything is going to be in LaTeX for me, and most of my work and research is in audio, so if you're interested, bookmark it.

Thank you. Any questions?

[Question from the audience about the JACK2 port]

Sorry, can you help me with the mask? I have a problem understanding. Mostly about JACK... oh yeah. The person asking is maintaining the JACK2 port on OpenBSD, and they're interested in whether it can be upstreamed, if I understood well. There is falkTX, I don't know how it's pronounced, who is the main developer behind JACK2, and on FreeBSD it's Florian. The IRC channel #jack on Libera.Chat is where most of the communication is going on. I know for a fact that falkTX is on Mastodon too, but I don't know what the preferences are; maybe IRC is the best, because it's real time. Again, within its own context.

[Another audience member]

Sorry, can you remove the mask for the question? Okay. So, it's not a question, it's a remark: audio over network is already established in live streaming and live venues. Well, then I need to update myself, thank you.

[Question about favorite bands]

And I can't say "mine", right? Yeah, well, I like bands that are in between, like between rock and metal: Godsmack, Volbeat, Clutch, something like that. Rock and something else, either rock and metal, or rock and blues, but rock is the foundation.