I think you're good to go. Can you enable your microphone now? Sorry, yes. Thank you. Better, yeah.

So, I'm Wim Taymans. I work for Red Hat on the graphics team, and I've been working on a project called PipeWire. What is it? It's basically a daemon that allows you to share multimedia between processes. That's a big concept, I guess, so what is the point?

If you look at the current multimedia stack, we have the kernel with drivers and APIs, which may or may not come with libraries; for Bluetooth, for example, it goes through the BlueZ stack, and so on. Then you have your applications, and in between there are several things. For example, a Wayland compositor manages how different applications get access to the graphics. There is also something for audio: PulseAudio. For video there is nothing: only one app can capture from the camera, and that's it.

So what is the idea? The idea is to put a daemon in between that mediates between the devices and the apps. It allows you to share the same device between local apps and, as a result of how it's built, apps can also exchange video directly with each other. So it's not only device capture: an app can produce video and send it to other apps. Things like JACK and Wayland stay where they are, because we're not trying to replace those yet.

There is also a session manager, which is important, because all of this plugging and unplugging, and whatever happens in the graph, needs to be managed by something, and that should be an external process. Traditionally, in PulseAudio for example, everything was managed inside the daemon, and it's very hard to change that behaviour. So we'd like to have that outside.

I'm skipping most of the features, but it reuses a lot of ideas from GStreamer, and it uses things like shared memory, DMA-BUF and eventfd file descriptors to signal apps and to exchange data between apps. One interesting thing is security per application. In a world of containers, the traditional Unix security model, where the user has permissions on a device, doesn't work anymore: when you have multiple containers all running as the same user, they all need different permissions, so you have to manage that in a different way. PipeWire is built from the ground up with that in mind. It's also built to be real-time and low latency; for video that is not that important yet, but we would also like to do audio.

The session manager is responsible for setting up the initial graph for all the devices. For example, you might want to have some DSP processing going on, like mixing of streams or applying equalizers and things like that; all of that plumbing ends up in the graph.

Yeah, I don't have a lot of time, but I'm going to show you a couple of things that are currently working. If you install Fedora you get this out of the box, and then the things I'll go over now work. First of all, the video4linux device is shared: the video is not only captured once and shared between applications, but underneath PipeWire is also capable of doing permission checks. That means that if a sandboxed application wants to access the camera, the request goes through the portal and a dialog pops up: this application wants to access the camera, allow or deny. If you say yes, the app can actually get to the camera; otherwise it gets permission denied.
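To make that portal check concrete, here is a minimal sketch of the D-Bus side, assuming the org.freedesktop.portal.Camera interface with its AccessCamera and OpenPipeWireRemote methods. A real client would wait for the Response signal on the request object before opening the remote; that part, and most error handling, is omitted here.

    /* Minimal camera-portal handshake (GLib/GIO); compile against
     * gio-unix-2.0. Error handling and the async Response signal
     * are trimmed for brevity. */
    #include <gio/gio.h>
    #include <gio/gunixfdlist.h>

    int main(void)
    {
        GError *error = NULL;
        GDBusProxy *camera = g_dbus_proxy_new_for_bus_sync(
            G_BUS_TYPE_SESSION, G_DBUS_PROXY_FLAGS_NONE, NULL,
            "org.freedesktop.portal.Desktop",
            "/org/freedesktop/portal/desktop",
            "org.freedesktop.portal.Camera",
            NULL, &error);
        if (camera == NULL) {
            g_printerr("cannot reach portal: %s\n", error->message);
            return 1;
        }

        /* AccessCamera is what triggers the "this application wants to
         * access the camera" dialog; the user's answer arrives later as
         * a Response signal on the returned request handle. */
        GVariantBuilder opts;
        g_variant_builder_init(&opts, G_VARIANT_TYPE_VARDICT);
        GVariant *req = g_dbus_proxy_call_sync(camera, "AccessCamera",
            g_variant_new("(a{sv})", &opts),
            G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);
        if (req == NULL) {
            g_printerr("AccessCamera failed: %s\n", error->message);
            return 1;
        }
        g_variant_unref(req);

        /* Once access has been granted, OpenPipeWireRemote hands back a
         * file descriptor for a PipeWire connection that only exposes
         * the camera nodes this app may see. */
        GUnixFDList *fds = NULL;
        g_variant_builder_init(&opts, G_VARIANT_TYPE_VARDICT);
        GVariant *ret = g_dbus_proxy_call_with_unix_fd_list_sync(camera,
            "OpenPipeWireRemote", g_variant_new("(a{sv})", &opts),
            G_DBUS_CALL_FLAGS_NONE, -1, NULL, &fds, NULL, &error);
        if (ret != NULL)
            g_print("got a PipeWire remote fd\n");
        return 0;
    }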
But you can also have multiple clients getting the same stream. It works a bit like JACK: we have a source, the video4linux source, that has ports which are enabled and linked to other ports, and data flows from one port to another. What is important here is that those red boxes are the points where data goes out of the daemon into an app; that's where the links cross over to the client side.

So if you start Cheese, for example, on the latest Fedora: Cheese uses autovideosrc in GStreamer, which will automatically pick the PipeWire source, which goes through PipeWire to video4linux to get the video. You don't really see a difference. It uses systemd socket activation to automatically start PipeWire even when it isn't running yet. There is also a device monitor, so through the GStreamer API you get an enumeration of all the devices that are available in PipeWire.

The thing is, you can start Cheese and then also start a GStreamer pipeline, and actually get the same picture, even though the device was already opened by Cheese. This is going to be nice for the case where you have Cheese open, you try to do a video chat, and it's like: this camera doesn't work, what's going on? Oh yes, Cheese is still open. Today you have to close it first.

The most visible thing that currently works, in Fedora 29, is screen sharing. Under Wayland you can't just make an app that grabs the content of the whole screen; that's rather the point of Wayland, there's a big security wall. But there is now another, much more controlled way to get at the screen content, through the portal. So if you want to make, for example, a screen recording app, or a remote desktop or something, you ask the portal for a PipeWire session where you can get that screen, and the portal does all the checks and such. It sets up the stream in coordination with Mutter: Mutter sends the screen content into PipeWire, where the other apps can then connect to it and get the screen content. All of this setup is done by trusted components, so you can make it secure. The latest bits on this add, for example, the cursor information, which is sent separately as metadata together with the bitmaps. It looks like this. Firefox and Chromium are being patched to use PipeWire, so that you can do a video conference where you capture your desktop inside the WebRTC session.
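To give an idea of the consuming side, here is a minimal GStreamer sketch that pulls such a stream, the way a viewer app could after the portal hands it a node; the path value of 42 is a made-up placeholder, and using pipewiresrc's path property to select the node is an assumption about the element.

    /* Sketch: consume a PipeWire video stream with GStreamer. */
    #include <gst/gst.h>

    int main(int argc, char **argv)
    {
        gst_init(&argc, &argv);

        GError *error = NULL;
        /* "path=42" stands in for the node id obtained from the portal
         * or from the device monitor. */
        GstElement *pipeline = gst_parse_launch(
            "pipewiresrc path=42 ! videoconvert ! autovideosink", &error);
        if (pipeline == NULL) {
            g_printerr("pipeline error: %s\n", error->message);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        /* Block until an error or end-of-stream shows up on the bus. */
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(bus,
            GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
        if (msg != NULL)
            gst_message_unref(msg);

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(bus);
        gst_object_unref(pipeline);
        return 0;
    }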
So what I've been working on next is audio support. The video support has been out there for a while, and I thought, OK, maybe I can try to do audio as well, but without rushing it; it has to be done properly, and that takes a while. I'm trying to do that now. I've been looking at JACK to see how things could work in a world with PipeWire, and I came up with a system that works like this.

The session manager sets up the graph, and for each of the audio devices there is what's called a DSP node that gets created. It basically does the format conversion, converting to 32-bit float, and splits out the channels. So it is the session manager that sees the hardware devices and creates these DSP nodes for them.

If you have a JACK app, there is no conversion needed: you basically link your JACK app directly to those DSP ports. But for, like, the regular audio stream from your desktop, there is what is called an audio stream, which is basically an asynchronous stream, because you don't want those apps in the real-time graph, for various reasons. It also does all the resampling and the channel mixing. So all the client APIs, like for example the ALSA API, create these stream nodes: they take the format as it comes from the app and then convert it to the canonical format of the graph. There's a bunch of stuff on these things that you can tune, of course. A lot of experimentation went into this, and it seems to work pretty well.

So what have I done? For PulseAudio applications, I have a libpulse drop-in replacement library that basically maps the whole PulseAudio API onto the equivalent PipeWire API. So all the PulseAudio apps, like the volume control and all of these things, just work; some details behave a little bit differently, but that can be tuned. Then there's also an ALSA plugin, which was easier to write. So basically you can run existing PulseAudio apps on top of PipeWire, and ALSA apps as well. None of this is ready to be packaged up yet; it's all development stuff.

For JACK support, I'm not exactly sure what to do. The third thing that I have is a libjack replacement that basically maps the JACK API onto the PipeWire API, and then you just run your JACK apps, and you can do really nice things and see all these worlds work together. Another idea, which is maybe better because there are things you can't do exactly like JACK does, is to keep the real JACK: when jackd starts up, we insert our own source and sink and bridge into it. That still needs experimentation.

I have some screenshots; I'm not going to do a live demo, but it works in real life as well. You can run a JACK patchbay, for example. There's no JACK running here: this is using the JACK API, with the replacement library, to go directly to PipeWire, and you see all the same things there. And you can start something like VLC, which has both an ALSA output and a PulseAudio output; here VLC uses the PulseAudio output. And from the JACK patchbay you can see how these PulseAudio apps integrate into the graph. You can of course use Carla to rewire those things, so you can insert filters and all that cool stuff. There are also all these JACK tools that automatically connect things and so on; I don't know exactly what to do with all of that yet. And you can have some effects and filters in there.
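To show what "JACK apps just work" means here, this is a minimal JACK pass-through client: it is written against the plain JACK API, so a replacement libjack could route it straight to those float-32 DSP ports without recompiling. The client and port names are made up.

    /* Minimal JACK pass-through client. */
    #include <jack/jack.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static jack_port_t *in_port, *out_port;

    /* Real-time callback: copy one period of 32-bit float samples,
     * the same canonical format the DSP ports carry. */
    static int process(jack_nframes_t nframes, void *arg)
    {
        float *in = jack_port_get_buffer(in_port, nframes);
        float *out = jack_port_get_buffer(out_port, nframes);
        memcpy(out, in, nframes * sizeof(float));
        return 0;
    }

    int main(void)
    {
        jack_client_t *client =
            jack_client_open("passthru", JackNullOption, NULL);
        if (client == NULL) {
            fprintf(stderr, "cannot open client\n");
            return 1;
        }
        in_port = jack_port_register(client, "in",
            JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput, 0);
        out_port = jack_port_register(client, "out",
            JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
        jack_set_process_callback(client, process, NULL);
        jack_activate(client);
        sleep(60);                /* run for a minute, then quit */
        jack_client_close(client);
        return 0;
    }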
So what is the state of this story? Apart from what is in Fedora, which is the working video stuff, the latest changes live in a branch, because they need to mature a bit more, I guess. I've been working on the audio part and on the permission API, and I actually spent the last month on unit tests, benchmarks and API cleanup. The license is MIT now; it used to be LGPL. And I worked on test cases for the PulseAudio API.

So there is a lot of work still to be done. It would be nice, because if you compare this to PulseAudio, for example, it uses quite a bit less CPU, especially once you start going to lower latencies. PulseAudio has a tendency to start using 20% CPU: if you do a chat or something over WebRTC, it spikes the CPU, whereas PipeWire stays at 5%. Still, it would be very difficult to move away from PulseAudio, because of the policy and the security handling it has; that is very important. So there is a whole to-do list of items before we actually have the same features as PulseAudio, and I won't even talk about the video effects and so on. What I'm going to do is just continue working on the audio features, adding them one by one; there is a little bit of Bluetooth support that needs to be brought up to the same level, and so on. Part of the idea was also to move that compatibility into a separate daemon, with a systemd unit or something, to do all the PulseAudio stuff, but for now that's open. You can find more on the PipeWire website, if you want to read more. That's all I have, if there are questions.

Security-wise, who is going to be maintaining this session manager? Is that an external process, with an API that someone can use to write their own?

So the question is: the session manager, who is going to maintain that? The session manager is an external process. That was actually a request from the GNOME people, because they wanted more control over how these things work: how the volumes work, how things connect, what happens when a device is removed. So the GNOME people will have their own session manager.

But currently, as you said, if you're using something like that, can't someone just run something that grabs my camera?

The thing is, the idea for the future is that you don't run apps without any protection at all: you run them in a container, and in that container they don't have access to anything by default, except that, importantly, there is a check that asks the user whether access is granted. So asking becomes the normal thing. The session manager has complete control: it can see everything, and it can grant and take away permissions. So the session manager is very important, and you have to trust it to do the right thing; I'm not quite sure yet how to make that completely safe.

What do you have: is there one daemon running on the system, or is it just per user? As with many things today, it's always a problem with multiple users on the same machine. Is this a global daemon? What's the plan here?

It would be a per-user daemon. A global daemon is too complex to do with all the infrastructure that we have: there are rules for which devices can be seen by which users and so on, and if you do that as a system daemon, you have to redo that logic yourself depending on which user connects. So right now it's per user. In certain cases a system-wide daemon would probably make sense, if you only have one user.

Could it be the task of the session manager, then, to give up control of the hardware when you switch to another user, so that the other user can then use the camera?

Well, the session manager would also be per user, the way this works. So if you just leave Cheese open with the camera on one user and then switch to another... I don't know, you'd have to define the policy for that, but in any case there are going to be two PipeWire daemons, and one of them is not going to be able to get to the hardware anymore; I guess the one that was switched away from.

Since it's very similar to JACK: do you think it could replace JACK in the future? And do you try to support other platforms, like JACK does?

So the question is: can it replace JACK? I don't know; it's difficult to do. And will I support other platforms? Not me; and it uses a lot of Linux-specific things, so I don't know.

I know the integration with JACK would be quite complicated, but regarding the session management: there are concepts that already exist within the JACK ecosystem, such as Non Session Management. Would you be looking at that? And at a backend for FireWire devices, because there are still a lot of people using FireWire devices with JACK?

Yeah, so for JACK the definition of a session manager is quite different from this one: a session manager in JACK means restoring a whole bunch of apps into a certain state, so that you can start your project and find a pre-configured graph. For that kind of session management I don't plan to implement anything myself; there are many session managers available, so the idea is to implement that API so those existing tools can be used. I don't want to redo all of that myself. The other part of the question is support for more kinds of devices, like FireWire cards and such. PipeWire itself has a plugin system, so you can write plugins for devices quite easily; it would be a matter of writing that.

Could it also be used not just for webcams, but for hardware-accelerated decoding of video as well? Can you have encoders and decoders running in the PipeWire graph?

I don't know; I don't think so, to be honest. Theoretically it would be possible to have a whole chain of filters behind the capture nodes, but you could just as well move that part to the consumer, so I don't know exactly where it should go. Right now there are no decoders.

Can we expect worse latency than JACK, or is it going to be the same?

For now it's worse, because what JACK does is configure the hardware with a very small buffer and two periods, and if you do that, the driver will also interrupt at that rate, which reduces the latency. But since PipeWire has dynamic latency, it configures a big buffer and no periods, and then the driver says: my delay is 20 milliseconds. So I can't do it in the same way; I may have to fall back to the JACK way of doing it, but then again you have lots of interrupts per second.

Yeah, well, I really hate the fact that I cannot use Ardour and then, like, start a video somewhere, because sometimes it was the only application that was using the device.

So let's say Ardour is the only one that is generating audio: then we could probably reconfigure the hardware for that, and go back to another mode when we stop needing to be real-time.
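To spell out the hardware configuration that answer is describing, here is a rough sketch in plain ALSA of the JACK-style setup, a tiny buffer split into two periods; the device name, format and sizes are only illustrative, and error checks are trimmed.

    /* Sketch: JACK-style ALSA hardware setup (small buffer, 2 periods). */
    #include <alsa/asoundlib.h>

    static void configure(snd_pcm_t *pcm, snd_pcm_uframes_t period,
                          unsigned int periods)
    {
        snd_pcm_hw_params_t *params;
        snd_pcm_hw_params_alloca(&params);
        snd_pcm_hw_params_any(pcm, params);
        snd_pcm_hw_params_set_access(pcm, params,
                                     SND_PCM_ACCESS_RW_INTERLEAVED);
        snd_pcm_hw_params_set_format(pcm, params, SND_PCM_FORMAT_S16_LE);
        snd_pcm_hw_params_set_channels(pcm, params, 2);

        unsigned int rate = 48000;
        snd_pcm_hw_params_set_rate_near(pcm, params, &rate, NULL);

        /* A small period means the driver interrupts often, so the
         * wakeups track the hardware and the latency stays fixed. */
        snd_pcm_hw_params_set_period_size_near(pcm, params, &period, NULL);
        snd_pcm_uframes_t buffer = period * periods;
        snd_pcm_hw_params_set_buffer_size_near(pcm, params, &buffer);

        snd_pcm_hw_params(pcm, params);
    }

    int main(void)
    {
        snd_pcm_t *pcm;
        if (snd_pcm_open(&pcm, "hw:0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;

        /* 64 frames x 2 periods at 48 kHz is roughly 2.7 ms of latency.
         * The timer-based mode discussed above would instead pick a big
         * buffer and schedule its own wakeups, trading fixed latency
         * for a dynamic one. */
        configure(pcm, 64, 2);

        snd_pcm_close(pcm);
        return 0;
    }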
Well, thanks a lot for working on this. We're out of time for more questions, so thank you.