We have a minute to go, so let me just introduce this. This is a meetup for the NetworkManager community. It's scheduled for 55 minutes, but of course after that you can continue in the WorkAdventure if there is more to discuss. This is a meetup, so if someone asks to join this session, I will just let them in; I don't have to wait for your permission, I hope. Everyone is definitely welcome to join the discussion. You can join by clicking the purple button in the top right corner, share your audio and video, and participate. I will just check the time now and then and tell you how much is left, but otherwise I will be here silently behind the video stream. So here we go, we can start. Have a nice conversation. Thank you for the introduction, Pavel. Hi, everybody. Do you want to make an introduction and say a few words, or should I? Please go ahead. Beniamino, you can also join the video chat. I was just checking who is here from the team. Filip, if you want to join. So anyway, welcome, everybody. This is our DevConf NetworkManager meetup; we had this in past years as well, and recently it was also virtual. What we usually do is just talk about whatever we want to discuss, so it is particularly interesting if you have any questions, suggestions, or topics you want to talk about: raise them and we can talk about them. Otherwise, we don't have much prepared. Maybe I should introduce myself, and other people can introduce themselves too. I am Thomas. I work at Red Hat on NetworkManager. I don't want to call other people out, but maybe you will want to introduce yourselves. Sure.
Yeah, I also work at Red Hat, and I manage the network management teams, which are the NetworkManager engineers and also engineers working on some other projects such as nmstate, or the network system role for Ansible. Hi. Hi, I'm Beniamino. I work for Red Hat, and I also work on NetworkManager. Lubomir, who are you? Hello there. Surprisingly, also working on NetworkManager, also from Red Hat's Brno office. That's about it. Maybe I should say that the core team who works on NetworkManager, we are all from Red Hat, in case you didn't know. Then there are a few other people, who were already mentioned. And we also have QE; I'm not sure if they are here live. But the people who make NetworkManager are mostly at Red Hat and here today. So I saw that Lubomir already pasted a surprise change. You blew my mind, Lubomir. Hey, so I'm actually very pleasantly surprised to see Neal around, because he commented on a FESCo ticket dealing with the dropping of ifcfg support from future releases; it's still under review. And Neal pointed out that cloud-init would potentially break if we relied on ifcfg files. It seems to be pretty neglected, actually, because upstream cloud-init always sets NM_CONTROLLED=no in the ifcfg file, and a downstream patch makes it somehow work in Fedora. So yeah, I looked into what could be done, today and yesterday, to make it work, and it seems plausible. The tests are now passing, and I think it works. It needs more testing, but yeah. I am very happy to hear that, Lubomir, and I am very, very happy to test that however I can. If we can get this going in cloud-init, then we can roll this out with the F36 cloud-init as well. And if it works out, do you think we'll see it in CentOS Stream as well? Because CentOS Stream also defaults to this, but then reverts it.
And in cloud-init, they have downstream patches that force cloud-init back to the ifcfg plugin and the old way of doing things. Yeah, so I don't know about CentOS Stream. It might be that it's a bit too late for such an invasive change there. But then again, cloud-init has multiple backends, which they call renderers, that can be switched. So we can ship the NetworkManager code disabled by default and optionally enable it if anyone needs it. I'm not sure whether it's going to be possible to switch the default. But what will it need? Are you a member of the Cloud SIG? Yes, I am a member of the Cloud SIG and the Cloud Working Group, as well as Hyperscale in CentOS. And I actually would probably want to move everything to NetworkManager as much as possible, because I like NetworkManager as a network management solution. I started using it for my non-wireless devices around 2012 or 2013, a little bit before I started using CentOS 7 in some places, or CentOS 6. And while it was kind of bad in 6, it was much better in 7. The only thing I'm a little surprised about, and this is a bit of a topic switch: I noticed that NetworkManager pulls in some parts of systemd-networkd code, but it doesn't actually support reading systemd-networkd units. Why have you never considered adding support for the systemd-networkd configuration? Thomas, would you like to answer this, and explain what parts of systemd we pulled in and what the relation is? Yes, I'll try. Is my sound OK? It's just that Firefox doesn't work. It's a little distorted, but you're at least mostly understandable. If you have a Chromium browser, please switch to that, because then it'll work better. I will do that later. So, we use systemd code mostly for internal libraries, like the DHCP client and LLDP. But that is kind of just internal.
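Going back to the cloud-init renderers mentioned a moment ago: cloud-init lets a distribution or admin pin the renderer search order through its configuration. A minimal sketch, assuming the NetworkManager renderer is named `network-manager` and using a made-up drop-in file name; it is written to a temp directory here so the example runs without root:

```shell
# Hypothetical sketch: pin cloud-init's network renderer preference via
# a drop-in of the kind that would live in /etc/cloud/cloud.cfg.d.
confdir=$(mktemp -d)
cat > "$confdir/99-nm-renderer.cfg" <<'EOF'
# Prefer the NetworkManager renderer; fall back to sysconfig (ifcfg).
system_info:
  network:
    renderers: ['network-manager', 'sysconfig']
EOF
grep renderers "$confdir/99-nm-renderer.cfg"
```

On a real system the file would go into /etc/cloud/cloud.cfg.d/, and cloud-init would pick the first renderer in the list that is available on the image.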
And why don't we use the configuration format, the unit files? Well, one reason is that nobody really suggested it and nobody was working on it. Another reason is that the model is a bit different for networkd: you have three kinds of units, the .network file, the .netdev file, and the .link file, which in NetworkManager are all combined in one profile. So they are different in that regard. Another reason might be that we actually want to move away from having multiple configuration backends, because they make everything more complicated; we would rather have just one. Yeah. So I guess the reason is nobody implemented it, and it's not planned. There is actually this proliferation of various ways to describe network configuration: there are NetworkManager keyfiles, traditionally ifcfg files, but also nmstate, Netplan, and cloud-init using two different schemas to describe network configuration. I think os-net-config by OpenStack is something else again. A disparate set of software seems to be invested in setting up networking, with a lot of effort spent on converting between all these formats. And in the end, it seems to me like the only reliable thing that actually works is slapping an IPv4 address on an Ethernet interface; the rest has a high risk of being lost in translation. So the reason I asked about the systemd-networkd unit stuff is because I've heard from people, about NetworkManager versus networkd, that they like how the layered configuration model for networkd works. And we don't really have that in NetworkManager. I know that you can drop network configuration files into /usr/lib; I believe Thomas told me this a little while ago in a NetworkManager GitLab issue. But it doesn't have the same modus operandi that systemd units do, where you can do layered overrides and the base configuration remains the same: you do partial settings in /etc, you ship the main config in /usr, that sort of thing.
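To make the point about the differing models concrete: networkd splits configuration across unit types, while NetworkManager keeps the same information in a single keyfile profile. A rough sketch with minimal, hypothetical file contents, written to a temp directory so nothing on the system is touched:

```shell
# networkd: interface matching and addressing live in a .network unit
# (.netdev and .link units would describe virtual devices and hardware).
d=$(mktemp -d)
cat > "$d/20-wired.network" <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=yes
EOF

# NetworkManager: one keyfile profile carries all of it.
cat > "$d/wired.nmconnection" <<'EOF'
[connection]
id=wired
type=ethernet
interface-name=eth0

[ipv4]
method=auto
EOF
ls "$d"
```

The example only illustrates the shape of the two formats; real deployments would place these under /etc/systemd/network and /etc/NetworkManager/system-connections respectively.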
And that, to put it bluntly, comes in very handy in the hyperscaler model that I'm working in: I'm pre-baking my images to have some basic default setup, and then I only want select overrides done, maybe through Ansible, or pre-boot configuration discovery, or something along those lines. That kind of thing seems to be not really supported in NetworkManager's configuration model. I think that's why a lot of people tend to ask why we don't support the networkd model: because networkd has this layered configuration convention, and even though NetworkManager supports configuration files at different directory levels and has a precedence, it doesn't support a layering model. So what's the use case where you actually need this instead of setting up the configuration directly? I have two primary use cases. The first is VPN configuration, where I want to set a generic base configuration that will work on machines that I'm sending out to the field, like laptops or edge devices or whatever. I want a base VPN configuration, just enough VPN to hit an endpoint; then it downloads overrides, maybe sets new credentials or whatever, and that layers on top, still using the same VPN configuration but changing to something more unique, like a user and password combination or a custom search domain or whatever. The second thing is basically always having a factory-reset case for fallback network configuration. And this is for edge devices: at my workplace, we develop an edge device, and we're actually in the transition of moving from legacy ifupdown to NetworkManager, partly for better network configuration and partly for distro-agnosticism.
And one of the things that would be super nice for us is if we could pre-bake some basic settings that we can at least say will work in most cases when it bootstraps itself; afterwards, the administrator or user or whatever can do partial or full reconfiguration, but in the event that they hit the reset button, we go back to a state that will always work. Now, NetworkManager has some heuristics already that make the second case kind of OK, where it'll do auto DHCP or whatever. But in the auto-DHCP case, sometimes it'll do things like set certain host-name or prefix configuration stuff, and we want to be able to set some of those settings at the base level and then only have them overridden when specifically requested. So those are basically the two cases that I'm thinking of. And the second case could also be considered part of what I was talking about with cloud images, where I'm trying to make cloud things behave a certain specific way, that sort of thing. Hopefully that is comprehensible. But it seems like at least the reset feature would already be possible with the current model, because you have your default profile in /usr/lib somewhere and then your updated one in /etc; of course, you need to copy everything. Right. And then for reset, you remove it from /etc again. But what if the system is updated and the base defaults change for whatever reason? For example, if the default name servers that we want everything to hit change, I want those changes to be inherited. That's basically the kind of thing I'm talking about: name servers, netmasks, DNS name prefixes, search orders. Those are the kinds of settings where, right now, the way it works in NetworkManager, you've got a connection-oriented thing, and you can set a connection config, but that's wholly overridden as soon as you make a new config file in /etc. None of it propagates if it's not set. That's basically what I'm talking about; sorry if I didn't make that clear.
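A sketch of the whole-file override model being discussed, as I understand it: NetworkManager's keyfile plugin reads profiles from /usr/lib, /run, and /etc, and a same-named file in a higher-precedence directory shadows the lower one entirely rather than merging with it. The profile name "uplink" is hypothetical, and the commands only print paths without modifying anything:

```shell
# Assumed standard keyfile locations for one profile shipped three ways.
baked=/usr/lib/NetworkManager/system-connections/uplink.nmconnection   # shipped in the image
runtime=/run/NetworkManager/system-connections/uplink.nmconnection    # generated at boot / in-memory
admin=/etc/NetworkManager/system-connections/uplink.nmconnection      # local edits
# Shadowing is whole-file: /run shadows /etc, which shadows /usr/lib.
# A "factory reset" would be: remove the /etc (and /run) copies and
# reload, after which the baked /usr/lib profile is effective again,
# with none of the removed files' values kept.
printf '%s\n' "$runtime" "$admin" "$baked"
```

This is exactly the "full override, no layering" behavior the discussion is about: deleting the override restores the base wholesale, but nothing from the base propagates into an override while it exists.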
Yeah, I think that's basically the one additional requirement, for which you probably need this stacked configuration. I think the reason why this is not so easy for NetworkManager, but easy for networkd, is that for networkd the configuration is read-only; I mean, networkd only reads it. So it doesn't matter to it when it merges it from multiple locations. And it's actually always a human administrator who edits those files, at least from the point of view of networkd. Maybe you have software that writes the override files, but networkd doesn't care. It just says: you, or the software, need to stitch them together right; you need to deploy them in a way that makes sense together. It doesn't care. But NetworkManager has a D-Bus API where you can modify a profile. So when you do that, it needs to write out a profile to disk, and then somehow you would have to do the inverse of the merging, and it's not very clear how that would work. That's why when it writes a profile to disk, it replaces it wholly; it writes it completely. With these partial files, it would have to write it partially. Or you could say: well, if this is a merged profile, then you cannot modify it via the D-Bus API. But that doesn't seem useful either. So I think the problem is that it's not clear how you could write such profiles, and if you disable writing, you lose what seems a very useful part of it as well. That's fair. Just checking on this: it seems like the normal copy-on-write file-system use case, where for any changes you write to the new profile, and everything that's unchanged stays in the old one. No, but it would be tough to ensure that the connection is still valid. What you're proposing is to remember what you were going from and only write the changed properties. But in Neal's use case, it could happen that someone updates the baseline.
And then at that point, nobody guarantees that it's going to result in a connection that's valid in any sense. So this is a tough one. I think I can almost promise you we're not going to help you here, because it would mean compromising the configuration model. What I see as a path to deal with this is to push the logic into a client utility: to have something that takes care of the bookkeeping and just hands NetworkManager the finished, resulting connection. So then for the first case that I mentioned, the VPN stuff, would it be possible to do something along the lines of straight up splitting out some aspects of VPN configuration so that it could be set up in this model, like a special case for VPN stuff? Because that one is particularly painful as it currently is. I don't know; I would need to understand the requirements more. It might help, unless you've done that already, to file a Bugzilla and explain the technicalities of the requirements there, because I think this needs some decent work and consideration. I think what would be interesting, for example, is if you say: well, at least I would like to override the DNS configuration. Then we would need some partial or pseudo-profiles where you could modify the DNS configuration that then gets inherited. We were kind of discussing this a few months ago, building on what we already have: certain properties can be left at the default value, and then they get a default value inherited, which is configurable. Which property, for example? The DHCP timeout, for instance, can be overridden that way. You leave that property in the profile at a value that says "I'm the default", and then it looks up the default somewhere. Currently, that default has to be in the NetworkManager configuration on disk. But what we were thinking, or discussing, is that there could be a base profile that is just linked.
You have another profile, and your current profile says: I have this base, it's that other one. Then it would look up the default value in that other profile. And I like that, because the base profile as such would also just be a normal profile; there would be no new concept like DNS configuration as an entity that needs to be stored on disk. It would still be only profiles. So I liked that idea. And regarding the problem that profiles might become invalid if the base profile is changed: I think there's always the problem that NetworkManager cannot guarantee validity. If someone writes a new file on the file system, it can be invalid, because it's not going through the API. So I don't see why this would be different. Yeah, the thing is, it would be difficult, given the sort of users we deal with, the desktop users and the desktop API we provide, to handle the problem. Suppose someone uses their desktop. They have this base connection provided by their IT department. They use the GNOME Control Center to modify some aspects of the connection. And then there's an update from their company's IT that changes the baseline. Now the connection suddenly doesn't validate and it disappears, and the user is just baffled as to what happened. They cannot be expected to consult the logs to look for an error message, try to resolve the problem, and restart NetworkManager to see the connection come back. I mean, compare systemd, which has a similar concept: they have unit files, and you can drop in snippets of unit files that extend or remove things from the unit, and you can corrupt a unit that way as well. When that happens, systemd ignores the unit and logs an error message. And that's probably good enough for systemd, because their target audience is system operators who edit the configuration with a text editor, reload the configuration from a shell, and then consult the journal to see what went wrong.
But we can't reasonably expect this from NetworkManager users. But again, what if the company publishes an invalid profile on the system, even without overrides? But I think they don't publish anything invalid in the first place; they provide a reasonable, valid configuration. It's just that combined with the previous user changes, it becomes invalid. I mean, we have validation that does things like: if you have this property set, then you can't have this other property set; you set one or the other, and if you have both, then we give up, we don't know what to do about it. A situation like this could easily arise. And the thing is, the IT department has no way to check whether their baseline is going to work with all the customer systems, and the user, the customer, can't possibly know that their changes are going to collide with some future baseline. So nobody is able to spot an error: everybody does the right thing, and combined it doesn't work, and it's difficult to troubleshoot. But in general, this would be opt-in. The IT department would need to have some knowledge about this mechanism, because otherwise they wouldn't use it. By default, they would publish the configuration to some other location, and nothing would happen; only if they publish it in a way that uses this base feature would the users even change it in this way. So I think it's wrong to assume that they don't know this mechanism and at the same time use it. Maybe this kind of conflict would not come up that much. For example, currently you cannot configure DNS search if you disable the IP method; then it will say the profile is invalid. So you could say: well, if the DNS search gets inherited from somewhere else, then suddenly the combination could again be invalid. But maybe this kind of validation is just too annoying and strict.
And we should just say: well, if the IP method is disabled, then we ignore the DNS search that is present. So it doesn't matter where it gets inherited from; it will just not be considered. That way these merged things would not necessarily result in something broken together, just ignored. And I guess in many cases that would be good enough, especially for DNS search: you use it if you can, otherwise it's fine. Of course, on the other side, we usually want to be very strict and validate, right? And say: well, this profile is wrong, you configured inconsistent things, you can't have both. But I think it would be solvable. Being so strict about it is also a bit annoying. Say I have a profile and I want to temporarily disable the IP method: I disable it, and then it would say no, invalid, you still have the DNS search. And I would say no, it's fine. So I think this would not be such a problem; we should not validate too much, that's what I mean. The next option, if you want to make it easier to set up these valid profiles: you could also provide an API to update the base profile, instead of having it updated via the file system. Then the validation would work on the combined profile. Yes, that's what I meant before: those base profiles would also just be profiles, so we already have that API. Yeah, my understanding was that those would be like read-only profiles that reside in /usr/share somewhere. Yeah, because the /usr/lib NetworkManager system-connections path, or whatever it is (NetworkManager, not networkd), exists, and if you drop files in there, it works, according to Thomas. But it's not a layering model. It is a full-on, whack-it-with-a-stick override model, which makes things painful if, say, the VPN DNS address changes for some reason, which is something that actually just happened at my workplace.
If it changes, then everyone is screwed until someone manually goes and updates everything. That's really the case I'm trying to deal with: in, I don't know, three years, we've had three different VPN providers and we've had the endpoints change, and the only thing that's actually remained relatively the same is the credentials, and those would be at the per-user level anyway. And I would really like for this to work, and it doesn't. I'm pretty sure all of you can sympathize with my pain here, because I'm pretty sure y'all use the VPN at Red Hat this way as well. It comes from your IT people, and it's probably the same kind of problem. Yeah, I think there are some opportunities to improve this. Another use case, very similar to this inheritance, would be to make sure that there could be system defaults that are available through the API in general. That would be slightly different: we'd have maybe some kind of default profile that every other profile inherits from, and then you could change things like the privacy for DHCP requests and so on, and every new profile would use these defaults. Yeah, I haven't been a fan of this, because I've seen some value in the connection profile being the ultimate description of what's going to be configured, not relying on anything else, because it makes debugging easier. I thought it makes it easier to keep the mental model of what's actually going to be applied on the interface simple: you do an nmcli connection show, and what you see is what's going to be applied, not depending on anything else, on the phase of the moon or NetworkManager.conf. That's not exactly true, and, yeah, I have probably been wrong before, but... Well, I would think that using the /run directory path to export the finalized configuration that it actually executes with would be a good way to compromise on that.
Like, regardless of whether it's in /usr, /etc, or the home directory or whatever (I know there are some configurations that can actually be in the home directory), exporting it all out to /run as the finalized config would, I think, let you get that same ease of developing a mental model of how the final connection works. Yes. I think your point was different, though. Yeah, but reacting to what Neal said: I agree completely that the nice thing about /run, and how it's used currently, is that it allows generators to exist, like the thing in the initrd that generates connections. And cloud-init, sorry, cloud-config, also generates configuration that's supposed to go into /run. So probably a thing that just merges keyfiles, validates them, does some reasonable thing with them, and dumps the result into /run might be a viable option here. Not sure, it needs more thought, probably. I like the idea of using the in-memory /run. But I think these were two different aspects. What you initially said, Lubomir, was that you don't like that we have some defaults for profiles in the NetworkManager configuration, outside of profiles, which is something that I don't like as well. And then Neal was proposing a solution for having multiple places that are the source of the profile, by having the other representation, the final profile, in the file system, using /run for this. Yeah, but that would also cover the stuff he's talking about, where NetworkManager has implicit settings that you don't know about: basically everything would be exported to the run state. Because right now that's not true for network configuration, even today.
Like, there's a whole bunch of implicit settings that take effect if you don't set them in the keyfile, and having every property that is supported for a config type exported into the run config means that you can build a complete mental model of how the configuration is actually set up. And today we don't actually have that anywhere. I'm saying that even if you don't do my fragments idea, having the /run export would be a really good idea. I see. So a question for the engineers: if you have some defaults in NetworkManager.conf that change something, do we see the effective setting in the D-Bus API and nmcli, or is this hidden? No, you don't. And so basically this would change then. Well, if you change it on disk, in the first moment nothing happens; you need to reload, and then it gets reloaded and remembered in memory by the daemon, but you wouldn't see it from outside. But what gets loaded? The NetworkManager.conf file. No, what I mean in general: the NetworkManager.conf file supports, I think, setting the method for how IPv6 addresses are generated. And if you create a new profile, do I see that in the API? No, you don't. In the API you will see that the address generation mode is set to default, which means it will look at the configuration, but you wouldn't know what it looks up at runtime. And it actually even depends on the device, right? Which you might say is a misfeature, but in NetworkManager.conf these defaults can depend on the device. So it's a misfeature because there's not only one default; it depends. That's why, in the current model, you could not just say: this is the default value. No.
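The "configurable defaults" mechanism being described looks roughly like this: a [connection*] section in a NetworkManager.conf drop-in supplies values for properties that profiles leave at their default, optionally scoped per device. A sketch written to a temp directory so it is runnable; the section name and values are hypothetical:

```shell
# Hypothetical NetworkManager.conf drop-in (would live in
# /etc/NetworkManager/conf.d/ on a real system).
d=$(mktemp -d)
cat > "$d/30-ethernet-defaults.conf" <<'EOF'
[connection-ethernet-defaults]
match-device=type:ethernet
ipv4.dhcp-timeout=30
ipv6.ip6-privacy=2
EOF
# Count the key=value lines; note the per-device matching, which is
# why there is no single "the default" the API could report without
# naming a concrete device.
grep -c '=' "$d/30-ethernet-defaults.conf"
```

A profile that leaves ipv4.dhcp-timeout at 0 ("default") would, on a matching Ethernet device, effectively use 30 seconds, without that value ever appearing in the profile itself on D-Bus or in nmcli.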
What would certainly be possible is to augment this: to have a profile that has all these values set to the default, and then, on the D-Bus API, to say: give me this profile, but with the defaults set to what would actually be used. But you would have to say what would be used on which device, because it depends on the device and on the time. Or we could solve half of the problem by just adding the information to the active connection, the way we do for a couple of properties, so that when you do nmcli connection show and the connection is active, you actually see, for example, the IP addresses that were discovered over DHCP and whatnot. The downside would be that this only works for active connections, but it would involve minimal changes to the D-Bus API, as I've just said: add a property. Like: here is a property that's not in the connection profile but still takes effect when the connection is active. Of course, it wouldn't answer the question of what would happen if I activated this. Yeah, so basically we now have three problems, or challenges: one is the layered configuration in general, one is an API way to change defaults through the API, and the third one is to actually see the effective profile even the way things currently are. So Neal, the reason I asked you before about the Cloud SIG is that I'm somewhat afraid about how the cloud-init thing would be tested, because it seems difficult: there are tons of various cloud providers, and I certainly don't have access to the majority of them. And I was wondering, if I roll a package or an image with the updated cloud-init, is it possible to ask the Cloud SIG: hey, tell me if this works or if it breaks, at some reasonable cost? Yes, yes, to all those things.
Actually, the easiest thing to do, once you have it, is to ship it into Rawhide and have it generate the images; then they will just get uploaded, and we can talk to David Duncan and Dusty Mabe and get access to all the different cloud providers to actually test them. You can even include it in a test week. That's probably the right course of action: we have a test day for the cloud edition, and we could just roll that into there and make sure everything works. But we can also do straight-up quick smoke testing to make sure all the basic stuff works. I can also produce custom images for my internal OpenStack stuff at work, all that sort of thing. We can make sure that it works; just file an issue on the Cloud SIG issue tracker and ask for that, and we will take care of it. Yeah, I was thinking whether I should do this; I wanted to do at least some testing before I submit this upstream. So yeah, I will be able to provide an RPM. I would submit it upstream but make it a draft PR initially, because that gives them the opportunity to review it, and if they have feedback, you want to get that early. Then we can keep going in parallel on both of those, because I would like this to be in, done, released, and out basically as quickly as possible, and the easiest way to do that is to parallelize everything. Oh yeah, I'm afraid of breaking things to the point that they are not testable. Well, actually, don't worry about it. Cloud-init upstream has a very strong test suite that they run for PRs on the upstream project, so you should be fine. Oh, that's good, because quite honestly, as far as cloud-init goes, I'm more concerned about accidentally fixing something instead of breaking something, because there seems to be some question about things there. Do people actually... Sorry. Go ahead.
I was just gonna say that the cloud-init Red Hat maintenance has been, the nicest way I can put it is, subpar for the past 10 years. What is it? A quick look suggested to me they did good things there. Well, no, what I mean from the maintenance perspective is that if you look at the cloud-init package in RHEL, you will see that there are dozens of patches that change the behavior very significantly. These patches also wind up in the Fedora package from time to time, and they rarely go back upstream, like at all. Is it because upstream requires a CLA and not everyone is happy to sign it? Red Hat signed the CLA many, many years ago, so that's already been taken care of, because cloud-init was originally written to support Eucalyptus and then later OpenStack, and OpenStack actually requires cloud-init for being able to boot images in the first place. So they got over that hump a long time ago. But there was some antagonism, because this also happened around the same time that Red Hat was using Upstart; there was some antagonism between Red Hat and Canonical long ago about various things, and I don't really want to say too much about it, because I'm not supposed to. But because that happened back then, there was a lot of demotivation on the Red Hat side for contributing to these projects. And combined with the CoreOS acquisition a few years ago, there was a shift in focus for a lot of the people who would ordinarily be trying to make cloud bootstrapping with cloud-init work correctly. They all decided: well, we're going to try to do everything with Ignition, even though Ignition is not generally compatible with most cloud providers' infrastructure, and not generally compatible with most VPS providers' setup models, and things like that. There's just been this weird mental split when it comes to how they think about it.
And the result is cloud-init has gotten a lot less than stellar maintenance over the years compared to some of the other parts of the cloud stack that matter. So I'm hoping that trend will reverse now that people care a lot more about the cloud again, especially the public cloud, where things are considerably less flexible when it comes to what kind of tools you're allowed to use to get systems booting and running. You can't really use Ignition well in, for example, OpenStack or AWS or whatever without having to somehow bootstrap it externally through some other means, and that makes things a little painful. So cloud-init is here to stay. It is a Canonical project, but my experience is that they're nice people, and they actually do a lot of work to make sure that the projects they develop, like cloud-init, have a good test apparatus to go with pull requests and things like that. Oh, their test suite is stellar. So you need to do this. Well, I don't really write Python and I had never used it before, so I was very impressed, actually. Yeah, and they also have a very comprehensive integration test suite using a tool they call spread, which they wrote originally for testing snapd. They also use it for a lot of other things. What it does is it can connect to cloud providers and boot machines up; it'll build images or packages or whatever, and then it'll boot machines up and use that content to test it. They can do all kinds of pre-baked steps, they can orchestrate cloud providers with it, and if you look at Mir or snapd, and I think also cloud-init, you'll see dozens of spread lines where they're testing every single permutation and every single configuration on every single cloud provider that they support. They are very, very good with testing, to an extreme degree that I am really shocked about.
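The spread setup being described looks roughly like the fragment below. This is purely illustrative: the keys follow spread's general shape, but the backend names, systems, and paths are assumptions, not cloud-init's actual configuration.

```yaml
# Hypothetical spread.yaml fragment; values are invented for illustration.
project: cloud-init

backends:
  google:                  # spread can drive real cloud providers...
    systems:
      - ubuntu-22.04-64
  qemu:                    # ...or plain local VMs for quick runs
    systems:
      - fedora-rawhide-64

suites:
  tests/spread/:
    summary: Boot-time integration tests, one task per configuration
```

Each backend/system/suite combination becomes one test run, which is where the "dozens of spread lines" of permutations come from.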
So I wouldn't be that worried. If you think it works with your unit tests, send the pull request and let them look at it. The worst they can do is say, oh, you missed this little thing, go fix it here and there, here are some suggestions or whatever. That's not the end of the world. It's fine, right? And we're going to ship it in parallel in Fedora and go from there. That's cool. So, Stanislas, would you like to introduce yourself or ask a question or anything? Yeah, so my name is Stanislas. I'm really happy to join you and listen to you. My main question is regarding the main features you are currently working on. Are there some challenges more related to Wi-Fi, say, related to the Wi-Fi handling of NetworkManager? Is there some main challenge you are working on, or something that is coming in the future? Well, apart from, well, what was there, the WPA3 support? So, the key exchange algorithm. Yeah, or Wi-Fi 6; I don't know if there was something specific done on the NetworkManager side, for example. Well, I think it mostly boils down to adding a property to express the key exchange algorithm, or what else is there. Because the crux of supporting Wi-Fi basically lies with wpa_supplicant, right? We're merely bookkeepers here. We just keep the configuration and send it out to the supplicant, and that's about it. I think something that's missing is for users to easily and automatically migrate from WPA2 to WPA3 for profiles that support it, because that's something that currently needs to be done by hand. I guess it would be nice if NetworkManager recognized that WPA3 is available for a given access point and then started defaulting to it, so the user benefits from the improved security from then on. Yeah, that's a fair point.
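The WPA2-to-WPA3 auto-migration idea could be sketched as a small decision function. To be clear, this is not NetworkManager code; the flag names and function are invented here purely to model the heuristic being discussed:

```python
# Toy sketch of the migration heuristic discussed above: default to
# WPA3-Personal (SAE) when the scanned access point advertises it,
# otherwise stay on WPA2-PSK. NOT NetworkManager's API; the flag
# names below are invented for illustration.

def pick_key_mgmt(ap_key_mgmt_flags: set) -> str:
    """Return the key-management scheme a client could default to."""
    if "sae" in ap_key_mgmt_flags:
        return "sae"      # WPA3-Personal: prefer when available
    if "psk" in ap_key_mgmt_flags:
        return "wpa-psk"  # WPA2-Personal fallback
    return "none"         # open network

# A mixed-mode WPA2/WPA3 AP advertises both, so the sketch picks SAE:
print(pick_key_mgmt({"psk", "sae"}))  # prints: sae
```

The point of the sketch is just that the choice can be made automatically from the AP's advertised capabilities instead of being edited into the profile by hand.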
Also, I do have a mental note to look at the new — well, new; it's a year or two already since support landed in the supplicant — Device Provisioning Protocol for IoT devices. Well, we do WPS push-button now; I don't think anyone uses that anymore. I haven't really looked into the device provisioning, but I think it serves a similar purpose, right? To provision a Wi-Fi device that doesn't have a keyboard to enter a password on. I don't know if anyone has more insight into this. For me, I don't really have a lot of details about DPP, the Device Provisioning Protocol, but I know that it has always required some kind of third-party application in order to translate the information to the clients that want to connect. That's all I know about how it works, really. I know there are some methods where IoT devices get their configuration encoded, I think, in the length of some packets that are sent as unencrypted broadcasts or something. So basically, if you want to configure your light bulb, you have an app, and it sends some kind of packets in some broadcast way, unencrypted, that encode something indirectly. I forget the details. Yeah, I'm not sure if any of this is official. I have no idea what the market situation is right now. The cheaper devices probably still do it the old way: they just run an access point themselves and you connect the rest of your network to it, right? To the bulb. I haven't seen it; my bulbs don't have an access point in them, thankfully. And I don't know if there are devices in the wild that actually use DPP. I guess running an access point is already the more advanced stuff, and some devices cannot even do that and still need to be configured somehow. I think currently we don't support it. I mean, Wi-Fi was not much of a priority for us at Red Hat, compared to other things.
So it didn't get as much attention as it maybe should. I must say it works well enough for me. One feature I always thought would be really nice for enterprise Wi-Fi is some certificate pinning, where we would remember that we saw a certificate and then trust it based on that. Where probably the most work happened with Wi-Fi — well, I think we did something to support WPA3, so that was a bit of work — but where a lot of work happened was actually for IWD, and that was not really done by us. There are people who work for Intel and who work on IWD; they contributed the code for it. I read that people are using IWD with NetworkManager and it seems to work for them, I don't know. I think another nice thing would be seamless roaming from Wi-Fi to wired networks, at home or in corporate networks. So, when are we switching to IWD in Fedora? Oh, we're not. No? It seems to me that, at least as far as I can tell, wpa_supplicant has the healthier maintenance — it's a more active project — and the code base is probably of higher quality. The feature range is better. I mean, wpa_supplicant seems to be better in about every aspect to me. It baffles me why anyone would consider IWD. Why did IWD, what? Now I'm confused. I thought the reason IWD was written was because wpa_supplicant was bad, wasn't it? But it was not. Well, that's what I heard too, but I can't see how. Their argument would be that it's an old project, I mean. But I think wpa_supplicant is where most of the development happens, and the feature range is much larger there. I can't see a compelling reason why we would switch. Well, I can agree that wpa_supplicant is not that actively maintained. I would wish that more people who really know what they're doing about Wi-Fi would improve things, because Wi-Fi doesn't work that well for me with wpa_supplicant, I must admit, right? But we also use wpa_supplicant for MACsec and for 802.1X authentication.
So we wouldn't just completely drop it; we already need wpa_supplicant, so it's not very clear that it would be a win to use IWD for Wi-Fi. Yes. Sorry, the integration between NetworkManager and wpa_supplicant — I mean, they have compatible models: NetworkManager stores the configuration and configures wpa_supplicant. With IWD there is overlap, because IWD also wants to store the configuration, and so does NetworkManager. So it's not clear how well they integrate together. Well, that's maybe the best point, yes. And also, IWD wants to do a lot of things that NetworkManager already does, which is fine if you don't use NetworkManager, but together they clash; the supplicant is just a tool that does the lower layers for NetworkManager. Yeah, there's a question in the chat by Jan about Wi-Fi Direct support, and I think NetworkManager does support it somehow. And yeah, about the upstream health of the project, I would say that IWD is basically an Intel-only project; it seems like a single-vendor situation there, whereas wpa_supplicant has a very wide range of contributors and users as well. It's used in Android; it's used in Windows, at least for the wired case — I'm not sure about wireless. So that seems to be a project that's truly collaborative. Yeah, but I think nobody among us uses IWD here, so if there's something great about it, we wouldn't notice. We would like to hear if it solves some problems that the supplicant has. I mean, I don't know anything; people keep asking me why we don't use IWD and I don't have an answer, so that's why I asked. Yeah, the real question here is, why would we? So, yeah. Jan was mentioning Wi-Fi Direct peer-to-peer in the chat, and yes, I forgot about it. Of course, Benjamin Berg contributed all the code for that. I actually don't use it because I don't have such hardware, but it probably works. Yes, of course.
And one notable user of this is GNOME Network Displays, which allows you to share your screen to a TV, for example via Miracast. It works very well. So Miracast, is it supported by a Fire TV stick or something? Or is it only available in some TVs? Let me interrupt you just for a second. The time for the discussion is up, but because we have half an hour before the next session, you can continue here for a while, or you can go to the work adventure and continue there; it's up to you, really. It's more fun to continue where you started. Oh, okay then. So I will let you know like 10 minutes before the beginning of the next session. Cool, thank you, Pavel. Yeah. Thank you. Thanks. I wanted to say anyway, even if there is no time here to discuss something, you can always reach us on the IRC channel and on the mailing list, right? So if you have a question and you thought there was no time, or not the opportunity, or we missed it, then please reach out. That's it. And I think we have also been planning forever to have some kind of public video meeting every once in a while. Maybe we just start this year: we did this now for January, and for February we will be announcing something on the mailing list and have some non-conference-related meetup. I didn't know that was planned, but we talked about it half a year ago and it's a good idea. Yeah, I thought you told me that it was planned. Oh, no, I like it. It's a good idea. Yes, we should have it maybe every two months or so. NetworkManager is a really nice... Jan mentions that they're using it in their embedded network devices very happily, and I want to just echo that statement. NetworkManager is surprisingly awesome for embedded stuff, and I don't think people give it enough credit for how well it works, even in low-resource environments, or even in fairly static environments like servers. It does a really good job.
Wow, thank you. It's good to hear. I think in Fedora some people are also working on using it on the PinePhone. I think it's good. I've noticed the shift in embedded. Actually, my phone runs Ubuntu Touch with oFono on there for now. And I've noticed the shift that today's generation of Linux phones — the Purism Librem 5 or the PinePhone — tend to prefer NetworkManager with ModemManager, once ModemManager got support for voice calls. And I'm happy about this shift. I need to upgrade my phone. Plasma Mobile in Fedora, as it is being brought in, is being brought in to use NetworkManager and ModemManager. We are not bringing in the oFono stack, and we are not dealing with ConnMan, because all that crap sucks. Well, that puts IWD into the same bunch. So, in a previous life I actually worked in embedded development, especially with cellular-enabled devices, so I've actually worked with both oFono and ModemManager, and I vastly prefer ModemManager. Hello, Alexander. I remember you. I've also been subscribed to the NetworkManager and ModemManager mailing lists for years now because of my previous job, so I followed the progress of that well, and I'm really, really happy to see how much advancement has come in supporting cellular stuff in that stack. NetworkManager has two backends, well, to talk to ModemManager and to oFono. There is no problem with the oFono backend; it's just, I never saw a bug report, I never saw anybody using it, which... That's because oFono is buggy all by itself. Oh yeah, I once ran it and noticed it being broken to the point it would just crash the moment you tried to use it, and it had been broken for two years straight. Oh no, wait, that was the settings plugin.
We did have an oFono settings plugin where you just delegate the connection to oFono, because ConnMan, oFono and IWD all like to store configuration themselves, as opposed to being stateless. Which made it pretty obvious that nobody was using it: a crash bug that always happened went unnoticed for years. I'm not sure about the modem backend; it could be in a similar situation. Alexander, you're still muted. We can't hear you. Yeah, you're muted, can't hear you. It might have selected the wrong microphone. Yes, hold on. There we go. Now we can hear you. I was going to say that most of the phones that have been using oFono for modem management have been including their own patches on top of the generic oFono. So it has never been stock upstream oFono; it's always been oFono plus these tens of patches that they didn't bother to try upstreaming, because upstreaming to oFono is probably a pretty different kind of animal. So, yeah. I'm really happy about the change of lots of different phones using ModemManager, and I'm actually giving a talk about that at FOSDEM next week. Which I don't have to attend; I already recorded it. I mean, I think it's good, you know? Well, if it's going to be scheduled absurdly early in the morning, I'd be happy to see the recording, where I don't have to do that. In the afternoon, like 2 p.m. European time? Oh, well, then I might actually be able to watch it, because that's like morning-ish here. This event starts at 3 a.m. my time; I'm not going to be awake at 3 a.m. And this particular meetup starts when I make breakfast, so I had to make breakfast a little early and then chill to be here. I'm wondering if we could drop the oFono modem backend, but UBports, what used to be Ubuntu Phone, might still be using it, and they still crank out new releases. They did include it not long ago. I mean, it was a question. Can you hear me? There's just noise coming out.
Your audio is like Thomas's at the beginning. Are you using Firefox? If you're using Firefox, switch to a Chromium browser; that will work better. I am very sorry. Anyway, with the oFono code, yes, I guess we would like to drop it, but it's very hard to know whether anybody is actually using it and would miss it. So if you know anybody using it, we would like to hear it. All I know is people trying to get away from it. There is UBports, but I'm not sure if Alexander meant to say that's no longer the case; I'm very curious what he was going to say. Just write in the chat, Alexander, if you have a response about it. Thomas, can you hear me now? Yes. Yeah, I just said that they added it not long ago, and that I don't think it's a good idea to remove it. I mean, I would ask them what they're going to do with it before just removing it. If it's not working, it's not going to work anyway. We haven't changed the API, and the integration with ModemManager has been quite simple for the last 10 years. So I don't think there is any big change related to modem management in NetworkManager that would lead us to say, okay, we can remove it. I mean, I don't think it makes sense to remove it unless it's unmaintained, which I don't think it is; quite the opposite. A couple of years ago, somebody did send a fix for the build there. So if that happens again, it's fine to keep it around. So I guess if there's nothing else, then we can also conclude the meetup. Thank you. Thank you, everybody, for joining us. It has been a real pleasure to see the actual faces instead of just the names. For sure we will continue at DevConf in the summer, and maybe in the meantime we will publish something, like just a video meeting every now and then. Keep it up, I'm demanding this.
Yeah, you should do regular community meeting things, because you guys do releases quite often with a lot of new features and stuff, and it'd be great to have an avenue for people to give feedback, talk about whatever, or bring things up. Right now it feels a little disconnected aside from here, and I think having more of these kinds of things would be great for NetworkManager. It may also help with changing the perception of NetworkManager; if you read r/linux, it's "a broken pile of trash that doesn't ever actually do anything, and I'm gonna use my old ifcfg scripts until the end of days because NetworkManager is not good" or whatever. So yeah, the idea of community meetings, having an avenue for people to see the people that are working on it and have them regularly talk about what's going on — I think that would be really great. Perhaps it's just broken on Arch Linux; somebody tried to install it there but couldn't, so I have no idea if it works there. Yeah, you're right. I think that's a good point, that we should do better community work, in the sense that the mailing list should be more alive, the IRC channel should be more alive, and we should discuss more on those channels. That is especially an invitation to everybody: if you have anything you would like to bring up, send an email and don't think, well, nobody cares. No, actually it's good when we discuss more in public. Yeah, speaking of IRC stuff, there's this whole larger trend of Matrix and stuff.
Are you guys gonna set up a Matrix room? If you're interested in having one, I could also help set up the Matrix room and have it plumbed through to the IRC one, so that people can have a nice Matrix experience, because Fedora is doing the whole Matrix thing, and openSUSE, CentOS, and lots of other free software projects like PipeWire are making that move over to Matrix as well. Sounds interesting. I was still waiting for Fedora's to go public to try it. Yep, that already happened a while ago. Go to chat.fedoraproject.org and sign up with your FAS account, and you'll automatically get a Matrix ID in the Fedora IM namespace, and you can join Fedora Project Matrix rooms as well as rooms on other Matrix servers. Oh, nice. Does the IRC bridging work — like, can I keep using an IRC client just as usual and set up some sort of bridging there? So yeah, if you want to live on the IRC side: if a Matrix room is set up to be plumbed through — which is the technical term for bridging — to an IRC room, then the two endpoints are connected and fully federated with each other. People who are on the Matrix side will show up as IRC users on the IRC side, people who are on IRC will show up as Matrix users on the Matrix side, and so on. That can all be set up; all of the current Fedora rooms — meeting rooms, community chat rooms, all those things — are already set up. The NM room on Libera.Chat right now is a portal room: it's dynamically generated and restricted to IRC-specific rules. If you have an actual Matrix room and you plumb it through to an IRC room, you get two extra advantages. The first is that the room name identifier is static and discoverable, because it'll show up in directory searches and things like that, and it can be used with the matrix.to URL thing, so people can click through to it from a web browser and get that experience.
The second thing is that, when it is set up that way, the rich information stuff gets seamlessly translated back and forth between IRC and Matrix. For example, if somebody posts an image, it turns into a reference URL on the IRC side; if you post too long a text, it'll turn into a pastebin link or whatever — all those things work correctly. And people who don't necessarily have an IRC registration, but have a Matrix ID — which requires registration to have — can still get into the room, because you can configure it to allow MXIDs through without allowing unregistered IRC nicks through. That's actually how all the Fedora rooms are set up now, and that can only really be done if you've got plumbing between a native Matrix room and an IRC room on the other end, because then they get static references and all kinds of fun stuff that makes the IRC channel rules and such work correctly. Till, if you wanna talk to Kevin Fenzi, he can actually tell you a lot about how to set that kind of stuff up, or Nick Bebout (nb) — he will also be able to tell you; those two have been doing a lot for the Fedora rooms. I set up the native Matrix rooms for the RPM ecosystem, and if you'd like me to help you set up the ones for NetworkManager, feel free to let me know and we can work that out. Yeah, thank you very much, Neal, for the offer. We will get in touch. Hey then, bye-bye, and maybe see you in the Matrix next time. Bye. Bye everybody, take care. Bye. Yeah, bye.