I'm Wim Taymans, I'm a principal software engineer working for Red Hat. I started PipeWire some time ago, and this talk is a short demo and overview of what PipeWire is and what it does.

First, a little bit of background. This is about the multimedia stack, a bit focused on audio, but it's in general like this. In Fedora 33 we used to have PulseAudio mediating access to ALSA devices and also Bluetooth. There was an option to run JACK as well, but it wasn't configured out of the box in Fedora 33, so you needed to manually configure things to get it working. There are Fedora spins, like Fedora Jam, that set this up properly, integrated with PulseAudio and things like that. But in general, if you try to do anything with multiple apps accessing devices, you get permission denied; only one app can use these things. Typically with ALSA here, it fails.

There's a bit of a joke about all the audio APIs currently available. It's not really true: currently it's basically only ALSA, and as a server you use PulseAudio or JACK; the other ones are basically all frameworks using one of those two.

So what's the problem with the situation? Well, PulseAudio is good for consumers, but it doesn't exactly scale for pro users. It has a limit on the number of channels that it can do, really only eight. It doesn't work very well in low-latency situations. And then there is extra trouble because it wasn't designed with security in mind, so with Flatpaks this becomes a problem. It's also a bit hard to customize, but those are things that can eventually be overcome. JACK is another audio server. I think it has the right idea, but it was built for one purpose only. It does that very well, but it's not very usable for consumers: it's very static, not extensible, hard to set up. And of course, also pre-Flatpak, so no security was built in. So the question was: can we do anything, can we do better than this?
Can we catch up with Windows or macOS and get a better audio subsystem? So then I came up with an idea to first implement video sharing, but later on it turned into sharing of any multimedia content and processing. I'll show you how that works.

The idea of PipeWire is that you place a daemon in between the hardware and the applications: the drivers, currently ALSA and Video4Linux, but also Bluetooth. The daemon doesn't only manage the devices, it also maintains a graph of how these devices connect to each other and to applications, but also how applications connect to each other. So basically a graph of a whole bunch of multimedia nodes that can exchange data. This is not unlike what JACK does: JACK also allows you to share between applications and devices, but PipeWire extends this to also do video and Bluetooth.

So that's the daemon. It's very simple, well, very simple in what its functions are. And on top of this engine you can start building other things. For example, there's a replacement PulseAudio server that services PulseAudio clients and basically converts the PulseAudio protocol to a PipeWire graph. There's also a JACK replacement library that transforms all the JACK API calls into PipeWire calls. And there's an ALSA plug-in to interface with it. So the three main audio APIs have a compatibility layer built in: you can run any PulseAudio, ALSA or JACK app, using these compatibility layers, on the PipeWire daemon. So those are the apps.

Another crucial component in PipeWire is the session manager. If you run a PipeWire setup, it runs a daemon, a PulseAudio replacement daemon, and also a session manager. The session manager manages all these devices and how they show up in the graph; it does udev detection to add or remove them. But it will also manage things like when VLC or something comes into the graph.
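The session-manager role described here (watch the graph, connect new streams to the right device) can be sketched as a tiny policy loop. This is a toy model, not the PipeWire API; the node names are made up, though "Stream/Output/Audio" is the kind of media class PipeWire actually uses.

```python
# Toy model of a session-manager policy: when a new audio playback
# stream appears in the graph, link it to the current default sink.
# Illustrative only; real session managers work against the PipeWire API.

class Graph:
    def __init__(self):
        self.links = []          # (source_node, sink_node) pairs
        self.default_sink = None

    def set_default_sink(self, node):
        self.default_sink = node

    def add_stream(self, node, media_class):
        # Policy: audio playback streams get wired to the default sink.
        if media_class == "Stream/Output/Audio" and self.default_sink:
            self.links.append((node, self.default_sink))

graph = Graph()
graph.set_default_sink("alsa_output.usb-headset")
graph.add_stream("vlc", "Stream/Output/Audio")
print(graph.links)  # [('vlc', 'alsa_output.usb-headset')]
```

The point is that the daemon only moves data; all the "where should this go" logic lives in a separate, replaceable policy component.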
It will say: you want to play audio; it knows about default devices; it will make the right connections to those devices and so on. The session manager is a big component in this setup. It basically has all the logic for the actual setup of things. PipeWire itself doesn't really have any dependencies, just libc.

The engine itself is built with the latest, newest things. It's built to be zero-copy, using shared memory via memfd, but it also does DMA-BUF for sharing textures in a zero-copy way. It uses eventfd with an epoll loop to wake up clients and things like that. It has security built in: each application can have a different set of permissions on all the objects in the graph. So, for example, Flatpaks can only see certain things, or they can only connect or do certain actions on these objects. For example, we block volume controls on hardware devices for Flatpak clients by setting those permissions, right?

It's built to be comparable to JACK in performance. You can have low latency, lower than a millisecond, very small buffers. It should be able to scale up to that in the same way as JACK; it does things differently than JACK, it's not exactly the same.

So what's implemented currently? This was already in Fedora 32 and 33: the screen sharing. Well, sorry, this first part is the Video4Linux capture. You capture from a Video4Linux device through PipeWire into an app like Cheese, or a browser. And you can have PipeWire manage a tee element, so that the same stream can be used by different applications at the same time, by multiplexing. You can't do this with Video4Linux natively, but PipeWire can, so you can apply effects on one stream and things like that. For Wayland it's something similar: you go through a portal to share the compositor-rendered images.
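The per-object permission model mentioned above can be pictured as a small bitmask check per client and object. PipeWire's real permissions include r (see), w (modify) and x (call methods) flags; the code below is a simplified, hypothetical model of that idea, not the real API.

```python
# Simplified model of per-client, per-object permissions, loosely
# following PipeWire's r/w/x flags. Illustration only, not the
# PipeWire API; client and object names are made up.

PERM_R, PERM_W, PERM_X = 1, 2, 4

# A sandboxed (Flatpak-like) client may see and use a device,
# but not modify it (e.g. no hardware volume changes).
perms = {
    ("flatpak-app", "alsa_card.hw0"): PERM_R | PERM_X,
    ("native-app",  "alsa_card.hw0"): PERM_R | PERM_W | PERM_X,
}

def can(client, obj, perm):
    # Unknown (client, object) pairs default to no access at all.
    return bool(perms.get((client, obj), 0) & perm)

print(can("flatpak-app", "alsa_card.hw0", PERM_W))  # False: volume change blocked
print(can("native-app", "alsa_card.hw0", PERM_W))   # True
```

An object a client has no permissions on simply doesn't exist from that client's point of view, which is how sandboxed apps get a filtered view of the graph.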
And then it's dispatched through PipeWire to applications. You can do things like screen recording or remote desktop in the browser.

The audio support is what's new in Fedora 34. You could test it in Fedora 33 (I'm going to check the chat here to make sure I'm not missing anything), but in Fedora 34 it's now activated by default. It uses a pro-audio processing model: all channels are split into mono streams of floating-point samples, and all the calculations are done in floating point. You can have multiple devices in the graph; they are automatically slaved together, with resampling to match their audio rates. There's also MIDI integrated, so people that use JACK have all the JACK MIDI features too. It still uses timer-based scheduling like PulseAudio, so we can dynamically change the latency. It uses a copy of the PulseAudio card profiles, a piece of code that was lifted out of PulseAudio, to detect cards, devices and profiles. So support for hardware should theoretically be exactly the same. I say theoretically; there are variables. So it ends up being a hybrid between JACK and PulseAudio, and that's exciting. I'll show you what you can do in a couple of demos later.

The session manager, I talked about these things. What we have now is super bare bones, but the possibilities are enormous; we don't know exactly what to do with it yet. As it is right now it's simple, it does exactly what PulseAudio does, but you'll see from the examples that it's very powerful.

So, as I said, we support PulseAudio with a drop-in PulseAudio server replacement. It should be compatible: not a hundred percent, but pretty close. ALSA apps, that's pretty compatible too. And for JACK apps, some things are not implemented yet; it's getting there. This is to be fine-tuned and improved for Fedora 35.
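The "split all channels into mono float streams" step can be illustrated with a few lines of code: deinterleaving 16-bit stereo samples into two mono channels normalized to [-1, 1]. Purely a sketch of the idea; PipeWire does this conversion internally in its processing graph.

```python
# Deinterleave signed 16-bit stereo frames [L0, R0, L1, R1, ...]
# into two mono channels of floats in [-1, 1], mirroring the
# "everything is mono float internally" processing model.

def deinterleave_s16(frames):
    left  = [s / 32768.0 for s in frames[0::2]]   # even indices: left
    right = [s / 32768.0 for s in frames[1::2]]   # odd indices: right
    return left, right

left, right = deinterleave_s16([16384, -16384, 0, 32767])
print(left)   # [0.5, 0.0]
print(right)  # [-0.5, 0.999969482421875]
```

Working in mono float channels is what makes arbitrary routing, mixing and resampling between devices with different channel counts straightforward.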
So let's have a look now at the demo. I'm going to have to share my whole screen for this, the entire screen. Yes, share it. There we go. So let's have a look here. I should actually move this Firefox window away, it's very annoying, the inception.

If you look at the sound settings here, it's still all the same; everything is PulseAudio compatible. You can run things like pavucontrol, it's all there. I have three input devices; it should all look pretty familiar. It reveals itself if you go to a command line and start doing things like pactl. These commands work as well, but then you start seeing things like: it's running on PipeWire. There's a couple of interesting tools as well that you can use. pw-top gives you an overview of all the components that are running and the CPU that they're using.

It becomes more interesting if you actually start using JACK tools. Then you can actually take a look at the graph as it is currently running here. You have GNOME Settings here, recording stuff from my capture devices, but also some stuff going into Firefox and coming out. There's some other things hanging around here; these red boxes are the MIDI ports. Let me stop this thing here to get it out of the way. You can play with this graph here, it's all there. You get the JACK tools, for example, and you can start using these things, make whatever links you want. This one is a little spectrometer that just follows whatever I say. So you have access to the complete graph of your audio subsystem here, not only what PulseAudio used to give you.

So I can... yeah, what else can I do? You can start equalizers and things like that. In the JACK ecosystem there's a lot, a lot of stuff: a lot of interesting equalizers that are sometimes hard to use, but extremely powerful. So I can, for example, take the output of this equalizer and route it back into my Firefox here, if all goes well.
It's a bit difficult, of course, because I don't really hear anything. Now I should hear myself. Do I hear myself? I do. Anyway, let's disconnect this bit here. Carla sometimes does not like it very much. And I'll stop this for a while.

So right now it's a drop-in replacement; everything should still work. And there is support for JACK apps, so that's interesting. Stuff that is not implemented: the network stuff, and echo cancellation. That will come later. The other modules are all implemented, so that should all be compatible. For the JACK emulation there's a couple of things missing, like freewheeling, and no latency reporting yet. So if you use Ardour and you need those JACK features, you're going to have trouble for now; it's still to be implemented. Like freewheeling: if you want to export a project in Ardour, you have to switch to the ALSA backend and then export, because we don't have freewheeling in PipeWire yet. To be implemented. The ALSA plug-in should be good. Gamers usually use that, and I've heard that the latency is much better, so it's good for games.

So, yeah, it's almost time, so I'm going to see if you have questions first. There's a lot of scrolling going on, but I haven't been able to read anything. So if you have a question now, please do ask; otherwise I'm going to play around a bit more with Carla.

Yeah, so echo cancellation: just yesterday, two days ago, the framework for implementing echo cancellation landed. Now we just need to add the actual method in there, but the basics are there, so it's going to come soon. So, these situations where things go away:
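The latency claims in the talk ("lower than a millisecond", "much better for games") come straight from buffer size over sample rate: one processing cycle of a quantum of frames at a given rate. A quick sanity check of the arithmetic:

```python
# One graph cycle takes quantum / sample_rate seconds. PipeWire,
# like JACK, processes the graph in fixed-size quanta, so this is
# the per-cycle latency floor.

def cycle_latency_ms(quantum_frames, sample_rate_hz):
    return 1000.0 * quantum_frames / sample_rate_hz

print(cycle_latency_ms(1024, 48000))  # ~21.3 ms, a comfortable desktop size
print(cycle_latency_ms(64, 48000))    # ~1.3 ms
print(cycle_latency_ms(32, 48000))    # ~0.67 ms: below a millisecond
```

Because PipeWire uses timer-based scheduling, the quantum can be changed on the fly, which is how the same daemon serves both a relaxed desktop default and a pro-audio sub-millisecond setup.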
Usually you can do a restart of the services. You can run systemctl --user restart pipewire; that's usually a command that just restarts the services, and it usually brings back everything the way it was before, if it gets into a weird state.

Freewheeling, when will this be done? I don't know. A month or two? Probably something like that.

Bluetooth devices: everything that we can do on Linux currently should be supported. High quality for speech is always troublesome, because it's hardware dependent. You have to enable it manually, because we can't enable it automatically unless we start implementing a whitelist, which we don't want to do. But you can set a property in the Bluetooth config file to enable the high-quality bidirectional mode. It's still low quality, but it's the best Bluetooth can do; it's better than old-telephone quality.

Small things that are not working: we need to figure out what they are. I know in KDE they use the device manager, and the device manager extension is not implemented in PipeWire yet, so you can't have that feature. But I think KDE disables the option if it's not enabled in the server, so it should just be blanked out.

So, can you record the output of an audio source like Firefox into another app, without recording from a device monitor? Yep. Let me show you something cool. Here's Carla again. I'm going to start Qsynth; Qsynth is a MIDI synth with a MIDI input port. And I have, here, my audio card, so I can do MIDI like this. You don't hear this, but it's going to my speakers. Well, it probably echoes back into Firefox, but I should be able to just route this into Firefox, right? You should be able to hear something.
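For reference, the high-quality bidirectional Bluetooth mode mentioned above is enabled through a property in the Bluetooth monitor configuration. A hedged sketch of what such a fragment looks like; the file location and exact key names vary between PipeWire versions, so treat this as illustrative rather than authoritative:

```
# Illustrative fragment for the PipeWire Bluetooth (bluez) monitor
# config, e.g. under the media-session bluez-monitor settings.
# Key name and placement may differ in your PipeWire version.
properties = {
    # Enable the higher-quality bidirectional headset codec (mSBC)
    # instead of the old telephone-quality one, where supported.
    bluez5.msbc-support = true
}
```

After changing the config, restarting the user services (systemctl --user restart pipewire) picks it up.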
I don't know if you heard that. Oh, sorry, the screen sharing was off. So what I did here was, in Carla, just start Qsynth, and then I hooked it up to my card here, which has basically a MIDI cable going in, and this is the capture going through. It usually goes back to the output here, but I can also manually route it into Firefox, and then you get it.

And you can do the other thing as well: I can just take the output of Firefox and route it somewhere else, like to an equalizer, or no, a spectrum analyzer. Let's get our spectrum analyzer again. The only thing is that now somebody needs to say something, because this is the output that I receive from Firefox. But yeah, so you can route. Oh, where am I? No, there we are. Okay.

So, some more Q&A. Can you record the output? Yeah. Freewheeling? Yep, that's coming. Okay, then we leave it at this. I think my time is up. So thanks for watching.