I'm Wim Taymans, Principal Software Engineer at Red Hat. We're working on PipeWire, so this is a little overview of what happened. For those who don't know what PipeWire is: it started a long time ago and went through a lot of things, but currently it's a multimedia sharing and processing engine. What do we do with it? I'll tell you later. It has been in Fedora since Fedora 34. Well, it was in there before for screen sharing, but since Fedora 34 it's the audio server, replacing PulseAudio. It has compatibility with PulseAudio, but it also combines the features of JACK, so it's basically a consolidation of pro and consumer audio into one audio server. Pro use cases usually want lower latency and more flexible routing of application sound, while consumer audio usually needs better plug-and-play of hardware and easier integration with everything. It also does video and MIDI, so basically anything multimedia, but we are mostly doing audio now, and some video for screen sharing.

So briefly, I don't have a lot of time, but PipeWire is basically a daemon that interconnects applications on one side. You can have applications talk to each other, but you can also have applications talk to hardware and exchange multimedia: you can send audio and video frames around, things like that. And it's graph-based, so you can build all kinds of filters and complete processing graphs: you can split signals and merge signals together, all of these things. The current state of affairs is that there is the daemon, and a pipewire-pulse process running alongside it that basically emulates a PulseAudio server on top of PipeWire, serving all of the legacy PulseAudio applications. Not really legacy, because we don't really plan to change them. But now you can also run pro audio apps.
Those typically use libjack, for which you would normally need a separate JACK server; you can now run them all with a PipeWire replacement libjack on top of PipeWire. There's also an ALSA plug-in to run applications using the ALSA API on top of PipeWire as well. And there's a session manager running; I'll talk more about that in a minute. So yeah: modular, and so on. It's written to be very performant, zero-copy where possible, using all of the modern Linux stuff: memfd, DMA-BUF for screen sharing, eventfds for waking up processes and threads. It has security built in. It needs an external session manager that implements policy; I'll talk more later about what we did there since the latest Fedora.

In Fedora 34, the current state is that the PipeWire release is fairly up to date, but we stopped at that version because of dependencies. At the time of the Fedora 34 release there were many things missing; if you saw the talk then, I listed them all on a slide. Since then they're mostly all implemented: the network stuff, echo cancellation, latency compensation, freewheeling for JACK apps. In the course of Fedora 34 they got implemented, and you gradually got those in Fedora. In Fedora 35 and 36 we actually changed the session manager to WirePlumber; Fedora 36 is basically a continuation of 35, the versions are exactly the same as of now.

So what changed since Fedora 34? The big change was really the session manager, which is now WirePlumber. What exactly does a session manager do? Usually PipeWire is just a blank canvas: there's nothing there, it only provides a way for applications to share data, but there are no applications and no devices. It's the task of the session manager to actually load the devices into PipeWire, configure them, and all of those things.
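As a small illustration of the replacement libjack mentioned above, a JACK application can be launched through the `pw-jack` wrapper, which points the application at PipeWire's libjack instead of a separate jackd. This is a minimal sketch; the application name is only an example, and the command is guarded so it does nothing destructive when PipeWire's JACK support isn't installed.

```shell
# Sketch: running a JACK application against PipeWire's replacement libjack.
# `pw-jack` adjusts the library search path so the app loads PipeWire's
# libjack instead of needing a separate jackd. "ardour" is just an example.
run_jack_app() {
    if command -v pw-jack >/dev/null 2>&1; then
        echo "would run: pw-jack $1"
        # pw-jack "$1"    # uncomment to actually launch the application
    else
        echo "pw-jack not found; install PipeWire's JACK support"
    fi
}

run_jack_app ardour
```

The same pattern works for any libjack client; no configuration of a separate JACK server is needed.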
For Bluetooth, for example, it watches the bus and creates the Bluetooth devices. It does the security: it decides what an application can access and what not. It also decides where applications get routed and how that works: do they get passed through or not, do they get their channels remixed, all of these things. And it does saving and loading of settings. So the session manager is actually one of the most important parts of PipeWire, and we have now replaced the somewhat temporary solution that we put in Fedora 34. WirePlumber itself is a bit more extensible and a bit more modern. It's built on GNOME technology, so it'll be better integrated with everything, and you get bindings. It's also scriptable with an embedded Lua engine, so we write most of the rules and all of that stuff in Lua, which is nice, better than writing it in C. You can just swap out a bunch of scripts, or make small changes to scripts to write custom rules, which is quite nice: it's a lot easier to write, and it doesn't crash as easily, because it's a scripting language.

One problem currently with WirePlumber: you can't use PulseAudio anymore. You used to be able to switch the audio back to PulseAudio, but WirePlumber grabs all the audio devices, so you can't really do that anymore. We're figuring that out.

Other than that, what's new in PipeWire? We did a lot of things, actually, most of it for pro audio, to make that integrate better. You get lots of settings to change things at runtime, like buffer sizes and sample rates. There's also sample rate switching now: you can make a list of all the allowed sample rates that you want to use, and then switch between them. A bit like PulseAudio, but PulseAudio only allowed 2; here you have 32, I think. And we also have config fragments.
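To give an idea of the Lua scripting mentioned above, here is a sketch of what a small WirePlumber (0.4-era) rule fragment looks like: a file dropped into the user's `main.lua.d` directory that matches an ALSA device and overrides one of its properties. The matched device name and the file name are illustrative only; the `alsa_monitor.rules` pattern follows WirePlumber's documented rule style.

```shell
# Sketch: a WirePlumber Lua rule fragment, written as a drop-in file.
# The matched device name is only an example; adjust it to a real card.
mkdir -p "$HOME/.config/wireplumber/main.lua.d"

cat > "$HOME/.config/wireplumber/main.lua.d/51-example-rename.lua" <<'EOF'
-- Give a friendlier nickname to a specific ALSA card (example name).
rule = {
  matches = {
    { { "device.name", "equals", "alsa_card.pci-0000_00_1f.3" } },
  },
  apply_properties = {
    ["device.nick"] = "Built-in Audio",
  },
}
table.insert(alsa_monitor.rules, rule)
EOF
```

Restarting WirePlumber picks the fragment up; no recompilation is involved, which is exactly the point of scripting the policy in Lua.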
You used to configure things by copying your whole config file and then changing it. Now, if you want to change the sample rates for example, you can just make a little fragment in the config directory, and it will be merged with the main config file. People had problems when things got upgraded: the config file changed, and they didn't get the changes because they had a complete copy of an old version. That's not necessary anymore.

Some of the sound processing got improved. We actually now do upmixing by default, which is a little bit controversial, perhaps. What that means is: if you have a laptop with four speakers, which some newer laptops have, with a subwoofer, we will actually upmix and generate the subwoofer (low-frequency) channel, and sometimes also a front channel, because it's better than just leaving those speakers without sound, I think. There's also an upmix algorithm for 5.1. The reason, again, is that if you set your hardware device to surround, I think you expect to have surround sound. You can disable this, but by default this is something we do. A little bit of improved stuff all over the map.

We also got something that we don't know exactly what to do with yet, but it's there: filter chains. You can build arbitrary, complex filters, using existing LADSPA filters or built-in filters, and place them in front of other things: for example, virtual surround, or an equalizer specifically tuned for your headphones, things like that. Noise cancelling, for example, is easily doable. One of the plans is perhaps to automatically generate these things based on what kind of hardware you have, or to have config panels where you can say: I want to enable noise cancelling on this microphone. So that all comes from these filter chains.
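The config fragment mechanism described above can be sketched like this: instead of copying the whole `pipewire.conf`, you drop a small file into `pipewire.conf.d/` and only that section gets merged over the defaults. The rates chosen here are just an example.

```shell
# Sketch: a drop-in config fragment instead of a full copy of pipewire.conf.
# Files in pipewire.conf.d/ are merged with the main config at startup,
# so upgrades to the main file still take effect. Example values only.
mkdir -p "$HOME/.config/pipewire/pipewire.conf.d"

cat > "$HOME/.config/pipewire/pipewire.conf.d/10-rates.conf" <<'EOF'
context.properties = {
    default.clock.rate          = 48000
    default.clock.allowed-rates = [ 44100 48000 96000 ]
}
EOF
```

With `allowed-rates` set, the graph can follow the sample rate a stream asks for instead of resampling everything to one fixed rate.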
You can just run a PipeWire process with a config file that basically lists the chain of filters you want to run; it creates these things in the PipeWire graph, and you can directly use them. You can start a bunch of those, make a systemd service to start them automatically, or do all kinds of crazy stuff in the session manager. It's experimental, we don't really have anything hooked up automatically yet, but it's the future.

Echo cancellation as well. Echo cancellation is for when I speak into my microphone, it goes over the internet to somebody who's listening, and then that signal gets captured by that user's microphone and sent back to me. That's very annoying, because I hear myself talking with a certain delay. On the receiver side, if you run an echo canceller, it filters the sound that goes from the speaker back into the microphone out of the microphone signal, so that this echo doesn't happen for the other party. It's a module that you can now load, and it does this using WebRTC, the same as in PulseAudio, really.

Bluetooth: we have one of the best Bluetooth stacks now, with the most features. We've got almost all of the codecs that you can have. We also have automatic switching now: if a phone call comes in on a browser, we can switch to the bidirectional profile of your Bluetooth headset, and switch back when the phone call ends. We also have some codecs in separate modules, because some codecs are patented or have an unclear license, so we can ship them separately. For example, aptX is in RPM Fusion, not in standard Fedora; the other ones are.

Something we also have is pass-through of DSD. DSD is a high-fidelity audio format. There's actually no native player for DSD; there's a DSD play program in PipeWire, the only one on Linux that I know of that plays native DSD. It's an unusual pass-through.
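The "run a PipeWire process with a config file" idea above can be sketched as follows. The fragment below writes a minimal filter-chain config (here just a built-in `copy` filter, so it does nothing audible) and shows the launch command. The node names, sink name, and file path are illustrative; a real chain would plug in a LADSPA equalizer, noise canceller, or similar.

```shell
# Sketch: a minimal filter-chain config for a separate PipeWire process.
# The graph here is a single built-in "copy" node, purely as a placeholder;
# all names and paths are examples.
mkdir -p "$HOME/.config/pipewire"

cat > "$HOME/.config/pipewire/my-filter.conf" <<'EOF'
context.modules = [
    { name = libpipewire-module-filter-chain
      args = {
          node.description = "Example filter"
          filter.graph = {
              nodes = [ { type = builtin, name = copy, label = copy } ]
          }
          capture.props  = { node.name = "effect_input.example"
                             media.class = Audio/Sink }
          playback.props = { node.name = "effect_output.example" }
      }
    }
]
EOF

echo "run with: pipewire -c $HOME/.config/pipewire/my-filter.conf"
```

Once running, the chain shows up as a new sink in the graph ("Example filter"), and anything played to it comes out the other end after passing through the filters.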
Some of the other pass-through formats actually don't work yet; I don't know why. Lots of stuff for JACK, which is much more stable, you can probably call it usable now. Everything is implemented as far as I know. You can do all kinds of fancy stuff, like changing buffer sizes and sample rates on the fly, but you probably don't want to do that very often.

We're starting to move to libcamera; this is for future Fedoras. Currently all of the camera stuff, in this browser for example, uses the Video4Linux API directly, with ioctls, so sharing cameras and things like that doesn't really work yet. We need to move all the browsers over to PipeWire, and then we actually also need to use a new library, libcamera, to access the cameras instead of using ioctls directly, because cameras are getting a bit more complicated and need a bit of a framework. We're having a hackfest in two weeks where we'll talk about how to do this; this is probably something for the near future. There's a little hack, pw-v4l2, that intercepts all the ioctls; you can sort of run Firefox on top of PipeWire using the camera, but it doesn't work with everything. There are other ways of doing this, but the question is: do we want to pursue this hack, or do we want to just port apps over to PipeWire? We'll see.

Screen sharing: improved negotiation of DMA-BUF. Screen sharing is now pretty much zero-copy if you have all the right components in the compositor, a newer PipeWire version, and in the browsers; you just transfer DMA-BUFs. You can negotiate the layout of the buffer so that you can use an optimal format for the hardware, stuff like that. There's incremental negotiation: you can get a couple of DMA-BUFs, say you don't like them, renegotiate a new set, and so on, until you find something you can agree on, and then stream in the most efficient format.
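The on-the-fly buffer size and sample rate changes mentioned above go through the graph's "settings" metadata, which `pw-metadata` can poke at runtime. This is a hedged sketch: the values are examples, and the helper is guarded so it degrades to a message when PipeWire tools or the daemon aren't available.

```shell
# Sketch: forcing the graph quantum (buffer size) at runtime via the
# "settings" metadata. clock.force-rate works the same way for sample rate.
# Values are examples only.
set_quantum() {
    if command -v pw-metadata >/dev/null 2>&1; then
        pw-metadata -n settings 0 clock.force-quantum "$1" \
            || echo "pw-metadata failed (is the PipeWire daemon running?)"
    else
        echo "pw-metadata not found"
    fi
}

set_quantum 256    # force 256-sample buffers for a low-latency session
# set_quantum 0    # 0 = release the override, back to automatic
```

The whole graph follows the new quantum immediately, which is the "changing buffer sizes on the fly" capability the JACK work enabled.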
For example, OBS uses this: you can now play a game and record the screen with minimal lag, zero-copy, which is pretty cool. There are also things in there for headless compositors: for remote access, a client can ask the compositor for new frames based on the client's frame rate, so there's a pull mechanism for screen sharing as well. And there's another mechanism where the compositor can notify a client when something has changed. So we've been adding a bit of new stuff there.

Network support is also mostly implemented. For low-latency, reliable streaming over RTP and UDP we recommend ROC; it's pretty good, it works well. There's also the native PulseAudio network streaming that works well and that people were using before, so that works too. And there's some support for Apple AirPlay. You also have Bluetooth streaming, of course, but these are the main new things that were implemented.

We finally made Zoom work, which was not so easy. Zoom was using the PulseAudio API to set up a whole bunch of virtual devices to capture microphones and route audio around, and we had to reverse engineer a little bit to understand what it was doing and what it was expecting. It also needed a few quirks, because it didn't like some of the properties on devices, things like that. But I think it should finally work now.

So that's most of what we did. What's next? An audio portal is one of the things we should start looking at. Right now the situation is that we are starting to ship Flatpaks with JACK applications, like Ardour, because they can use the PipeWire JACK library to go out of the sandbox and do audio stuff. But this is all not very controlled. What we want there is a portal: just like when you share your screen, we want to pop up a dialog for the user.
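To make the ROC streaming concrete, here is a sketch of a config fragment that loads PipeWire's ROC sink module, which then appears as a normal output device whose audio is streamed to a remote ROC receiver. The address, ports, and node name are placeholders, and the exact argument set should be checked against the module's documentation for your PipeWire version.

```shell
# Sketch: load the ROC sink through a config fragment so PipeWire streams
# audio to a remote ROC receiver. Address, ports, and names are placeholders.
mkdir -p "$HOME/.config/pipewire/pipewire.conf.d"

cat > "$HOME/.config/pipewire/pipewire.conf.d/20-roc-sink.conf" <<'EOF'
context.modules = [
    { name = libpipewire-module-roc-sink
      args = {
          remote.ip          = "192.168.0.10"   # example receiver address
          remote.source.port = 10001
          remote.repair.port = 10002
          sink.props = { node.name = "roc-sink-example" }
      }
    }
]
EOF
```

After a PipeWire restart the sink shows up next to the hardware devices, and anything routed to it is carried over RTP with ROC's loss-repair on top.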
"This application wants to access your audio hardware." The problem is usually with capture, because you don't really want just any app capturing audio, especially not Flatpaks, so we need something in between there. There are some proposals for that, so maybe this is something that will be done for the next versions. And, as I said, cameras with PipeWire. Two big things, really; I don't know if they're going to be done in one Fedora cycle. There are some other things here: NetJack, which I'm working on, for sharing real-time audio in a pro audio setting, at very low latency, much lower than what the existing protocols do. And eventually also AVB, which is a standard for streaming audio; that would be cool too, but that hasn't been successful yet.

So that's pretty much what I had to say. I think we have some time for questions now; I see a whole bunch in the Q&A, so I'll start with that.

Any plans to make quirky emulated audio devices work out of the box? No, no complete plans. It's a matter of quirks for those devices, and it's not only emulated devices but other devices as well. We need to use a different ALSA API to actually make these work somewhat better: the timer-based scheduling doesn't really work that well, because the timing these devices do is not very accurate. There's another way of using ALSA, with callbacks, that might work better. So that is the plan; I think this is actually what PulseAudio does when it runs in a VM, it doesn't use timer-based scheduling anymore.

Are there features for improving playback of lossless compressed audio formats such as FLAC? Not exactly FLAC, but there's pass-through of AC3 and DTS.
There is a bug report open for LE Audio support, but that also involves kernel changes and BlueZ changes. Actually, good news: the codec that is used for LE Audio is already supported, so that problem is already solved. Now it's just waiting for the other bits to fall into place, and then hopefully it all falls into place. Documentation: an ongoing process, of course.

With PipeWire, how much or how little of the Bluetooth stack do we need to go with it, a Blueman applet or such? You need to run BlueZ, and then there's also stuff for configuring and pairing devices; that's a D-Bus API as far as I know, so Control Center does that, and I guess other things work as well.

EasyEffects does not use filter chains; it actually uses separate filters, which is also an option. Does it make sense for them to use filter chains? Maybe not, maybe yes; filter chains were made after EasyEffects made their design.

Profile switching on playback: this is probably about Bluetooth. When something requests a microphone and it is tagged as a phone application (we have some quirks to tag specific apps as phone applications) and it tries to access the Bluetooth microphone, then we will switch profiles and actually activate that microphone.

Plans for showing the battery level of Bluetooth devices? This is implemented, but as far as I know it's behind an experimental flag in BlueZ, so I think we just need to turn on that experimental flag in BlueZ to have it work out of the box. Maybe something for a next Fedora.